Sample records for complex problem due

  1. SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES

    EPA Science Inventory

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and complicated ...

  2. A SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES

    EPA Science Inventory

    Evaluation and analysis of multiple objectives are very important in designing environmentally benign processes. They require a systematic procedure for solving multi-objective decision-making problems due to the complex nature of the problems and the need for complex assessment....

  3. Factors Affecting Police Officers' Acceptance of GIS Technologies: A Study of the Turkish National Police

    ERIC Educational Resources Information Center

    Cakar, Bekir

    2011-01-01

    The situations and problems that police officers face are more complex in today's society, due in part to the increase of technology and growing complexity of globalization. Accordingly, to solve these problems and deal with the complexities, law enforcement organizations develop and apply new techniques and methods such as geographic information…

  4. GRADIENT: Graph Analytic Approach for Discovering Irregular Events, Nascent and Temporal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie

    2015-03-31

    Finding a time-ordered signature within large graphs is a computationally complex problem due to the combinatorial explosion of potential patterns. GRADIENT is designed to search and understand that problem space.

  5. GRADIENT: Graph Analytic Approach for Discovering Irregular Events, Nascent and Temporal

    ScienceCinema

    Hogan, Emilie

    2018-01-16

    Finding a time-ordered signature within large graphs is a computationally complex problem due to the combinatorial explosion of potential patterns. GRADIENT is designed to search and understand that problem space.

  6. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
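
    The paper's GA-VNS and VNS-SA hybrids are too involved for a snippet, but the flavor of its fuzzy earliest-due-date (EDD) heuristics can be sketched. Below is a minimal, hypothetical Python sketch: triangular fuzzy due dates (a, b, c) are defuzzified by their centroid, jobs are sequenced by ascending centroid, and then packed greedily into capacity-limited batches. The job data, the machine capacity of 6, and centroid defuzzification are illustrative assumptions, not the paper's exact scheme.

```python
def centroid(fuzzy_due):
    """Defuzzify a triangular fuzzy due date (a, b, c) by its centroid."""
    a, b, c = fuzzy_due
    return (a + b + c) / 3.0

# Hypothetical jobs: (id, size, triangular fuzzy due date).
jobs = [("J1", 4, (8, 10, 14)), ("J2", 2, (3, 5, 6)), ("J3", 5, (9, 12, 16))]
capacity = 6  # assumed batch-processing machine capacity

# Fuzzy EDD: sequence by ascending defuzzified due date, then fill batches.
batches, load = [[]], 0
for job_id, size, due in sorted(jobs, key=lambda j: centroid(j[2])):
    if load + size > capacity:
        batches.append([])
        load = 0
    batches[-1].append(job_id)
    load += size

print(batches)  # [['J2', 'J1'], ['J3']]
```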

  7. Students' conceptual performance on synthesis physics problems with varying mathematical complexity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-06-01

    A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N = 179) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.

  8. Analogy as a strategy for supporting complex problem solving under uncertainty.

    PubMed

    Chan, Joel; Paletz, Susannah B F; Schunn, Christian D

    2012-11-01

    Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.

  9. DUII control system performance measures for Oregon counties 1991-2001

    DOT National Transportation Integrated Search

    2002-06-01

    Driving Under the Influence of Intoxicants (DUII) is a complex social problem that has origins in both internal and external system factors. Due to its complexity, Oregon communities and involved agencies must concentrate on addressing the negative r...

  10. Application of higher-order cepstral techniques in problems of fetal heart signal extraction

    NASA Astrophysics Data System (ADS)

    Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.

    1996-10-01

    Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping, or otherwise, and noise-contaminated ECG complexes of mothers and fetuses obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats measured from a reference point located on the mother complex after transformation to cepstra domains are first obtained, and this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that need rectification in this paper. These are (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order based high-resolution differential cepstrum technique results in recovery of the delay of the order of 120 milliseconds.
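
    For readers unfamiliar with the machinery, the second-order complex cepstrum the abstract builds on is simply the inverse FFT of the complex log-spectrum, and its weak point is exactly the phase-unwrapping step named in problem (1). A minimal sketch using an assumed toy signal in which a "fetal" impulse trails a "maternal" one by 60 samples (120 ms at an assumed 500 Hz rate):

```python
import numpy as np

def complex_cepstrum(x):
    """Inverse FFT of the complex log spectrum; np.unwrap is the
    error-prone phase-unwrapping step the abstract refers to."""
    X = np.fft.fft(x)
    log_mag = np.log(np.abs(X) + 1e-12)  # small floor avoids log(0)
    phase = np.unwrap(np.angle(X))
    return np.real(np.fft.ifft(log_mag + 1j * phase))

x = np.zeros(1024)
x[0] = 1.0    # "maternal" complex
x[60] = 0.4   # weaker "fetal" complex, 60 samples (120 ms at 500 Hz) later
c = complex_cepstrum(x)
print(np.argmax(np.abs(c[1:])) + 1)  # -> 60: the delay appears as a cepstral peak
```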

  11. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, which is a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.

  12. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
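
    The core SROM trick can be shown in a few lines: approximate a random input by a handful of fixed samples whose probabilities are optimized to match target moments, after which every expectation in the optimization loop becomes a cheap deterministic weighted sum. A stripped-down sketch (not the authors' code; the sample grid and the matched moments are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

theta = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # assumed SROM sample locations

def moment_error(p):
    # Match the first two moments of a standard normal input: E = 0, E^2 = 1.
    return (p @ theta) ** 2 + (p @ theta**2 - 1.0) ** 2

res = minimize(moment_error, np.full(5, 0.2),          # start from uniform weights
               bounds=[(0.0, 1.0)] * 5,
               constraints=({"type": "eq",              # probabilities sum to 1
                             "fun": lambda p: p.sum() - 1.0},))
p = res.x
# Downstream, any E[f(theta)] the topology optimizer needs is just
# sum_k p_k * f(theta_k) -- a deterministic quantity.
print(p.round(3), (p @ theta).round(3), (p @ theta**2).round(3))
```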

  13. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ... the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems ... equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors

  14. A Correlational Study Assessing the Relationships among Information Technology Project Complexity, Project Complication, and Project Success

    ERIC Educational Resources Information Center

    Williamson, David J.

    2011-01-01

    The specific problem addressed in this study was the low success rate of information technology (IT) projects in the U.S. Due to the abstract nature and inherent complexity of software development, IT projects are among the most complex projects encountered. Most existing schools of project management theory are based on the rational systems…

  15. How students process equations in solving quantitative synthesis problems? Role of mathematical complexity in students' mathematical performance

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-12-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics major students from a second year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Philip LaRoche

    At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but the topic of the book is systems. Kline first establishes certain limits that are characteristic of our conscious minds. Kline then establishes a complexity measure for systems and uses that complexity measure to develop a hierarchy of systems. Kline then argues that our minds, due to their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular, 'inert, naturally-occurring' objects and processes, such as what is the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed due to 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems be 'physics-like.' Physics deals with simple systems, not complex ones, using Kline's complexity measure. The models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.

  17. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (Eos).
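
    As an illustration of the technique named here, a generic tabu search over permutations (swap neighborhood, fixed-length tabu list, aspiration on the global best) can be written compactly. This is a textbook sketch, not the SOLSTICE scheduler; the toy tardiness cost and the parameter values are assumptions.

```python
import random
from collections import deque

def tabu_search(cost, n, n_iter=500, tenure=15):
    """Minimize cost(perm) over permutations of range(n) with swap moves."""
    current = list(range(n))
    random.shuffle(current)
    best, best_cost = current[:], cost(current)
    tabu = deque(maxlen=tenure)            # recently used swap positions
    for _ in range(n_iter):
        move, move_cost = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                current[i], current[j] = current[j], current[i]
                c = cost(current)
                current[i], current[j] = current[j], current[i]
                # Aspiration: a tabu move is allowed if it beats the best.
                if ((i, j) not in tabu or c < best_cost) and c < move_cost:
                    move, move_cost = (i, j), c
        if move is None:
            break                           # every move tabu and non-improving
        i, j = move
        current[i], current[j] = current[j], current[i]
        tabu.append((i, j))
        if move_cost < best_cost:
            best, best_cost = current[:], move_cost
    return best, best_cost

# Toy use: order 8 observation slots; slot t+1 completes job perm[t].
due = [3, 1, 7, 2, 8, 5, 6, 4]
tardiness = lambda perm: sum(max(0, t + 1 - due[j]) for t, j in enumerate(perm))
print(tabu_search(tardiness, 8))            # reaches total tardiness 0
```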

  18. Adapting Scale for Children: A Practical Model for Researchers

    ERIC Educational Resources Information Center

    Aydin, Selami; Harputlu, Leyla; Çelik, Seyda Savran; Ustuk, Özgehan; Güzel, Serhat; Genç, Deniz

    2016-01-01

    Measurement of children's behaviors in an educational and research context is a problematic and complex area. It is also evident that adapting scales to measure children's behaviors in an educational and research context is a complex process due to several reasons. First, cultural elements constitute a considerable problem. Second, it is difficult…

  19. Complex Word Reading in Dutch Deaf Children and Adults

    ERIC Educational Resources Information Center

    van Hoogmoed, Anne H.; Knoors, Harry; Schreuder, Robert; Verhoeven, Ludo

    2013-01-01

    Children who are deaf are often delayed in reading comprehension. This delay could be due to problems in morphological processing during word reading. In this study, we investigated whether 6th grade deaf children and adults are delayed in comparison to their hearing peers in reading complex derivational words and compounds compared to…

  20. Manufacturing DTaP-based combination vaccines: industrial challenges around essential public health tools.

    PubMed

    Vidor, Emmanuel; Soubeyrand, Benoit

    2016-12-01

    The manufacture of DTP-backboned combination vaccines is complex, and vaccine quality is evaluated by both batch composition and conformance of manufacturing history. Since their first availability, both the manufacturing regulations for DTP combination vaccines and the demand for them have evolved significantly. This has resulted in a constant need to modify manufacturing and quality control processes. Areas covered: Regulations that govern the manufacture of complex vaccines can be inconsistent between countries and need to be aligned with the regulatory requirements that apply in all countries of distribution. Changes in product mix and quantities can lead to uncertainty in vaccine supply maintenance. These problems are discussed in the context of the importance of these products as essential public health tools. Expert commentary: Increasing demand for complex vaccines globally has led to problems in supply due to intrinsically complex manufacturing and regulatory procedures. Vaccine manufacturers are fully engaged in the resolution of these challenges, but currently changes in demand ideally need to be anticipated approximately 3 years in advance due to long production cycle times.

  1. Transfer of Algebraic and Graphical Thinking between Mathematics and Chemistry

    ERIC Educational Resources Information Center

    Potgieter, Marietjie; Harding, Ansie; Engelbrecht, Johann

    2008-01-01

    Students in undergraduate chemistry courses find, as a rule, topics with a strong mathematical basis difficult to master. In this study we investigate whether such mathematically related problems are due to deficiencies in their mathematics foundation or due to the complexity introduced by transfer of mathematics to a new scientific domain. In the…

  2. Recurrent mycobacterial osteomyelitis. Report of a case due to Mycobacterium avium-intracellulare-scrofulaceum complex and BCG vaccination.

    PubMed

    Solheim, L F; Kjelsberg, F

    1982-01-01

    A 28-year-old man who suffered from recurrent mycobacterial osteomyelitis over several years is reported. At eight years of age he had a Mycobacterium scrofulaceum infection in his right calcaneus. A serious infection with multiple foci of osteomyelitis occurred after BCG vaccination at the age of 14 years, and 11 years later multifocal lesions of osteomyelitis due to Mycobacterium avium-intracellulare-scrofulaceum complex appeared. The special clinical problems due to the relative or complete resistance of these organisms to antituberculous drugs are emphasized. The mainstays of treatment are surgical revision and drainage with prolonged and intensive multiple drug therapy.

  3. Designs for Operationalizing Collaborative Problem Solving for Automated Assessment

    ERIC Educational Resources Information Center

    Scoular, Claire; Care, Esther; Hesse, Friedrich W.

    2017-01-01

    Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…

  4. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

    Recently, research on logistics has caught more and more attention. One of the important issues in logistics systems is to find optimal delivery routes with the least cost for product delivery. Numerous models have been developed for that reason. However, due to the diversity and complexity of practical problems, the existing models are usually not very satisfactory for finding solutions efficiently and conveniently. In this paper, we treat a real-world logistics case with a company named ABC Co. Ltd. in Kitakyushu, Japan. Firstly, based on the nature of this conveyance routing problem, as an extension of the transportation problem (TP) and the fixed charge transportation problem (fcTP), we formulate the problem as a minimum cost flow (MCF) model. Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, in order to check the effectiveness of the proposed method, the acquired results are compared with those obtained from two methods/software packages, LINDO and CPLEX.
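
    To make the priority-based encoding concrete, the sketch below decodes a chromosome of node priorities into a delivery path: starting from the source, it repeatedly moves to the unvisited neighbor with the highest priority. This is a hypothetical minimal version of such decoders, not the authors' two-stage method; the toy network is an assumption. A GA would then evolve the priority vectors with ordinary crossover and mutation, scoring each chromosome by the cost of its decoded path.

```python
import random

def decode_path(priorities, adj, source, sink):
    """Decode a priority vector into a source-to-sink path."""
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None                       # dead end: infeasible chromosome
        node = max(candidates, key=lambda v: priorities[v])
        path.append(node)
        visited.add(node)
    return path

# Toy network: node 0 is the depot, node 4 the customer.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
chromosome = [random.random() for _ in range(5)]  # one priority per node
print(decode_path(chromosome, adj, 0, 4))
```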

  5. How are things adding up? Neural differences between arithmetic operations are due to general problem solving strategies.

    PubMed

    Tschentscher, Nadja; Hauk, Olaf

    2014-05-15

    A number of previous studies have interpreted differences in brain activation between arithmetic operation types (e.g. addition and multiplication) as evidence in favor of distinct cortical representations, processes or neural systems. It is still not clear how differences in general task complexity contribute to these neural differences. Here, we used a mental arithmetic paradigm to disentangle brain areas related to general problem solving from those involved in operation type specific processes (addition versus multiplication). We orthogonally varied operation type and complexity. Importantly, complexity was defined not only based on surface criteria (for example number size), but also on the basis of individual participants' strategy ratings, which were validated in a detailed behavioral analysis. We replicated previously reported operation type effects in our analyses based on surface criteria. However, these effects vanished when controlling for individual strategies. Instead, procedural strategies contrasted with memory retrieval reliably activated fronto-parietal and motor regions, while retrieval strategies activated parietal cortices. This challenges views that operation types rely on partially different neural systems, and suggests that previously reported differences between operation types may have emerged due to invalid measures of complexity. We conclude that mental arithmetic is a powerful paradigm to study brain networks of abstract problem solving, as long as individual participants' strategies are taken into account. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation method widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows a lot of analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  7. Next gen perception and cognition: augmenting perception and enhancing cognition through mobile technologies

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.

    2015-03-01

    In current times, mobile technologies are ubiquitous and the complexity of problems is continuously increasing. In the context of the advancement of engineering, we explore in this paper possible reasons that could cause a saturation in technology evolution, namely the ability of problem solving based on previous results and the ability of expressing solutions in a more efficient way, concluding that 'thinking outside of the brain', as in solving engineering problems that are expressed in a virtual medium due to their complexity, would benefit from mobile technology augmentation. This could be the necessary evolutionary step that would provide the efficiency required to solve new complex problems (addressing the 'running out of time' issue) and remove the communication-of-results barrier (addressing the human 'perception/expression imbalance' issue). Some consequences are discussed, as in this context artificial intelligence becomes an automation tool aid instead of a necessary next evolutionary step. The paper concludes that research in modeling as a problem-solving aid and data visualization as a perception aid, augmented with mobile technologies, could be the path to an evolutionary step in advancing engineering.

  8. Analysis of Change in the Wind Speed Ratio according to Apartment Layout and Solutions

    PubMed Central

    Hyung, Won-gil; Kim, Young-Moon; You, Ki-Pyo

    2014-01-01

    Apartment complexes in various forms are built in downtown areas. The arrangement of an apartment complex has great influence on the wind flow inside it. There are issues of residents' walking due to gust occurrence within apartment complexes, problems with pollutant emission due to airflow congestion, and heat island and cool island phenomena in apartment complexes. Currently, the forms of internal arrangements of apartment complexes are divided into the flat type and the tower type. In the present study, a wind tunnel experiment and computational fluid dynamics (CFD) simulation were performed with respect to internal wind flows in different apartment arrangement forms. Findings of the wind tunnel experiment showed that the internal form and arrangement of an apartment complex had significant influence on its internal airflow. The wind velocity of the buildings increased by 80% at maximum due to the proximity effects between the buildings. The CFD simulation for relaxing such wind flows indicated that the wind velocity reduced by 40% or more at maximum when the paths between the lateral sides of the buildings were extended. PMID:24688430

  9. Analysis of change in the wind speed ratio according to apartment layout and solutions.

    PubMed

    Hyung, Won-gil; Kim, Young-Moon; You, Ki-Pyo

    2014-01-01

    Apartment complexes in various forms are built in downtown areas. The arrangement of an apartment complex has great influence on the wind flow inside it. There are issues of residents' walking due to gust occurrence within apartment complexes, problems with pollutant emission due to airflow congestion, and heat island and cool island phenomena in apartment complexes. Currently, the forms of internal arrangements of apartment complexes are divided into the flat type and the tower type. In the present study, a wind tunnel experiment and computational fluid dynamics (CFD) simulation were performed with respect to internal wind flows in different apartment arrangement forms. Findings of the wind tunnel experiment showed that the internal form and arrangement of an apartment complex had significant influence on its internal airflow. The wind velocity of the buildings increased by 80% at maximum due to the proximity effects between the buildings. The CFD simulation for relaxing such wind flows indicated that the wind velocity reduced by 40% or more at maximum when the paths between the lateral sides of the buildings were extended.

  10. Systemic Engagement: Universities as Partners in Systemic Approaches to Community Change

    ERIC Educational Resources Information Center

    McNall, Miles A.; Barnes-Najor, Jessica V.; Brown, Robert E.; Doberneck, Diane M.; Fitzgerald, Hiram E.

    2015-01-01

    The most pressing social problems facing humanity in the 21st century are what systems theorist Russell Ackoff referred to as "messes"--complex dynamic systems of problems that interact and reinforce each other over time. In this article, the authors argue that the lack of progress in managing messes is in part due to the predominance of…

  11. Multi-dimensional tunnelling and complex momentum

    NASA Technical Reports Server (NTRS)

    Bowcock, Peter; Gregory, Ruth

    1991-01-01

    The problem of modeling tunneling phenomena in more than one dimension is examined. It is found that existing techniques are inadequate in a wide class of situations, due to their inability to deal with concurrent classical motion. The generalization of these methods to allow for complex momenta is shown, and improved techniques are demonstrated with a selection of illustrative examples. Possible applications are presented.

  12. Robonaut 2 and You: Specifying and Executing Complex Operations

    NASA Technical Reports Server (NTRS)

    Baker, William; Kingston, Zachary; Moll, Mark; Badger, Julia; Kavraki, Lydia

    2017-01-01

    Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.

  13. A Developmental Study of the Relationship between Reaction-Time and Problem-Solving Efficiency. Final Report.

    ERIC Educational Resources Information Center

    Friedman, Stanley R.

    Many studies have indicated the presence of a slump or inversion in the problem-solving efficiency of children at the fourth-grade level. It has been suggested that this may be due to the interfering effect of the formation of complex hypotheses by the children. Since a tendency to respond rapidly would presumably inhibit the formation of complex…

  14. Early stage response problem for post-disaster incidents

    NASA Astrophysics Data System (ADS)

    Kim, Sungwoo; Shin, Youngchul; Lee, Gyu M.; Moon, Ilkyeong

    2018-07-01

    Research on evacuation plans for reducing damages and casualties has been conducted to advise defenders against threats. However, despite the attention given to the research in the past, emergency response management, designed to neutralize hazards, has been undermined since planners frequently fail to apprehend the complexities and contexts of the emergency situation. Therefore, this study considers a response problem with unique characteristics for the duration of the emergency. An early stage response problem is identified to find the optimal routing and scheduling plan for responders to prevent further hazards. Due to the complexity of the proposed mathematical model, two algorithms are developed. Data from a high-rise building, called Central City in Seoul, Korea, are used to evaluate the algorithms. Results show that the proposed algorithms can procure near-optimal solutions within a reasonable time.

  15. Boosting-Based Optimization as a Generic Framework for Novelty and Fraud Detection in Complex Strategies

    NASA Astrophysics Data System (ADS)

    Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria

    2008-11-01

    Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare patterns detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection and risk management in financial and insurance industry. Although many interdisciplinary approaches for rare event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. Even more challenging and much less formalized problem is novelty detection in complex strategies and models where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider trading or other illegal transactions mixed with valid operations and distributed over long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from the existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.

  16. Mitochondrial disease associated with complex I (NADH-CoQ oxidoreductase) deficiency.

    PubMed

    Scheffler, Immo E

    2015-05-01

    Mitochondrial diseases due to a reduced capacity for oxidative phosphorylation were first identified more than 20 years ago, and their incidence is now recognized to be quite significant. In a large proportion of cases the problem can be traced to a complex I (NADH-CoQ oxidoreductase) deficiency (Phenotype MIM #252010). Because the complex consists of 44 subunits, there are many potential targets for pathogenic mutations, both on the nuclear and mitochondrial genomes. Surprisingly, however, almost half of the complex I deficiencies are due to defects in as yet unidentified genes that encode proteins other than the structural proteins of the complex. This review attempts to summarize what we know about the molecular basis of complex I deficiencies: mutations in the known structural genes, and mutations in an increasing number of genes encoding "assembly factors", that is, proteins required for the biogenesis of a functional complex I that are not found in the final complex I. More such genes must be identified before definitive genetic counselling can be applied in all cases of affected families.

  17. Deficits in comprehending wh-questions in children with hearing loss - the contribution of phonological short-term memory and syntactic complexity.

    PubMed

    Penke, Martina; Wimmer, Eva

    2018-01-01

    The aim of the study is to investigate whether German children with hearing loss (HL) display persisting problems in comprehending complex sentences and to find out whether these problems can be linked to limitations in phonological short-term memory (PSTM). A who-question comprehension test (picture pointing) and a nonword repetition (NWR) task were conducted with 21 German children with bilateral sensorineural HL (ages 3-4) and with 19 age-matched normal hearing (NH) children. Follow-up data (ages 6-8) are reported for 10 of the children with HL. The data reveal that the comprehension of who-questions as well as PSTM was significantly more impaired in children with HL than in children with NH. For both groups of participants, there were no correlations between question comprehension scores and performance in the NWR test. Syntactic complexity (subject vs. object question) affected question comprehension in children with HL; however, these problems were overcome at school age. In conclusion, the data indicate that a hearing loss affects the comprehension of complex sentences. The observed problems did not persist, however, and were therefore unlikely to be caused by a genuine syntactic deficit. For the tested wh-questions, there is no indication that the syntactic comprehension problems of children with HL are due to limitations in PSTM.

  18. Detection of Oil in Water Column, Final Report: Detection Prototype Tests

    DTIC Science & Technology

    2014-07-01

    first phase of the project involved initial development and testing of three technologies to address the detection problem. This second phase ... important oceanic phenomena such as density stratification and naturally occurring particulate matter, which will affect the performance of sensors in the ... spills of submerged oil is far more complex due to the problems

  19. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling, as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper, the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.

  20. Distributed mixed-integer fuzzy hierarchical programming for municipal solid waste management. Part I: System identification and methodology development.

    PubMed

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei

    2017-03-01

    Due to the existence of complexities of heterogeneities, hierarchy, discreteness, and interactions in municipal solid waste management (MSWM) systems such as Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damages in the following decades. Meanwhile, existing studies, especially ones focusing on MSWM in Beijing, could hardly reflect these complexities in system simulations and provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized as 235 grids to reflect spatial heterogeneity. A DMIFHP model which is a nonlinear programming problem is constructed to parameterize the Beijing MSWM system. To enable scientific solving of it, a solution algorithm is proposed based on coupling of fuzzy programming and mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations will be discussed in another companion paper due to length limitation.

  1. Mesoscale modeling: solving complex flows in biology and biotechnology.

    PubMed

    Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander

    2013-07-01

    Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The problem of complex eigensystems in the semianalytical solution for advancement of time in solute transport simulations: a new method using real arithmetic

    USGS Publications Warehouse

    Umari, Amjad M.J.; Gorelick, Steven M.

    1986-01-01

    In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
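
    The same difficulty, and the spirit of the fix, can be demonstrated with standard tools: the real Schur decomposition represents each complex-conjugate eigenpair of a real nonsymmetric matrix as a 2x2 real block, so the whole computation stays in real arithmetic. A brief sketch (an analogous modern illustration, not the authors' 1986 algorithm; the matrix is an assumed toy example):

```python
import numpy as np
from scipy.linalg import schur

# Nonsymmetric (advection-like) coefficient matrix; two of its three
# eigenvalues form a complex-conjugate pair.
A = np.array([[-2.0, 1.5, 0.0],
              [-1.5, -2.0, 1.0],
              [0.0, 0.5, -1.0]])

T, Z = schur(A, output="real")   # A = Z @ T @ Z.T with T, Z entirely real
assert np.allclose(A, Z @ T @ Z.T)
print(np.round(T, 3))            # quasi-triangular: a 2x2 block holds the pair
```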

  3. Typification and taxonomic status re-evaluation of 15 taxon names within the species complex Cymbella affinis/tumidula/turgidula (Cymbellaceae, Bacillariophyta)

    PubMed Central

    da Silva, Weliton José; Jahn, Regine; Ludwig, Thelma Alvim Veiga; Hinz, Friedel; Menezes, Mariângela

    2015-01-01

    Specimens belonging to the Cymbella affinis/Cymbella tumidula/Cymbella turgidula species complex have many taxonomic problems, due to their high morphological variability and lack of type designations. Fifteen taxon names of this complex, distributed in five species, were re-evaluated concerning their taxonomic status, and lectotypified based on original material. In addition to light microscopy, some material was analyzed by electron microscopy. Four new combinations are proposed in order to reposition infraspecific taxa. PMID:26312038

  4. Developing a new stochastic competitive model regarding inventory and price

    NASA Astrophysics Data System (ADS)

    Rashid, Reza; Bozorgi-Amiri, Ali; Seyedhoseini, S. M.

    2015-09-01

    Within the competition in today's business environment, the design of supply chains becomes more complex than before. This paper deals with the retailer's location problem when customers choose their vendors and inventory costs are considered for retailers. In a competitive location problem, the price and location of facilities affect the demands of customers; consequently, simultaneous optimization of the location and inventory system is needed. To prepare a realistic model, demand and lead time have been assumed to be stochastic parameters, and queuing theory has been used to develop a comprehensive mathematical model. Due to the complexity of the problem, a branch and bound algorithm has been developed, and its performance has been validated in several numerical examples, which indicated the effectiveness of the algorithm. Also, a real case has been prepared to demonstrate the performance of the model in the real world.

  5. Acquisition and performance of a problem-solving skill.

    NASA Technical Reports Server (NTRS)

    Morgan, B. B., Jr.; Alluisi, E. A.

    1971-01-01

    The acquisition of skill in the performance of a three-phase code transformation task (3P-COTRAN) was studied with 20 subjects who solved 27 3P-COTRAN problems during each of 8 successive sessions. The purpose of the study was to determine the changes in the 3P-COTRAN factor structure resulting from practice, the distribution of practice-related gains in performance over the nine measures of the five 3P-COTRAN factors, and the effects of transformation complexity on the 3P-COTRAN performance of subjects. A significant performance gain due to practice was observed, with improvements in speed continuing even when accuracy reached asymptotic levels. Transformation complexity showed no effect on early performances, but the 3- and 4-element transformations were solved more quickly than the 5-element transformation in the problem-solving Phase III of later skilled performances.

  6. Artificial intelligence approaches to astronomical observation scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Miller, Glenn

    1988-01-01

    Automated scheduling will play an increasing role in future ground- and space-based observatory operations. Due to the complexity of the problem, artificial intelligence technology currently offers the greatest potential for the development of scheduling tools with sufficient power and flexibility to handle realistic scheduling situations. Summarized here are the main features of the observatory scheduling problem, how artificial intelligence (AI) techniques can be applied, and recent progress in AI scheduling for Hubble Space Telescope.

  7. Learning Object-Level and Meta-Level Knowledge in Expert Systems.

    DTIC Science & Technology

    1985-11-01

    usually a misdiagnosed one). 1.2.2. Efficiency Consideration. Learning becomes a complicated issue in a complex domain like medicine where there may ... misdiagnosed cases are often due to missing rules. Therefore, we would rather view this problem as a learning problem. A strategy called "retrospective ... inspection after learning" is described in Chapter 5. With this strategy, rules that can make the misdiagnosed case diagnosed correctly are first found; then

  8. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go game. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously in various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  9. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go game. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously in various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
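
    Since the MDTS source is available at the URL above, the sketch below shows only the generic idea it builds on: a UCB1-guided Monte Carlo tree search that assigns one lattice site at a time (0 = Si, 1 = Ge) and scores completed structures with a black-box function. Everything here, including the toy score, is an assumption for illustration, not the library's API.

```python
import math, random

def mcts_design(n_sites, score, n_iter=3000, c=1.4):
    """Generic UCB1 Monte Carlo tree search over binary site assignments."""
    stats = {(): [0, 0.0]}            # partial assignment -> [visits, reward]

    def ucb(parent, child):
        n_c, w_c = stats.get(child, (0, 0.0))
        if n_c == 0:
            return float("inf")       # always try unvisited children first
        n_p = stats[parent][0]
        return w_c / n_c + c * math.sqrt(math.log(n_p) / n_c)

    best, best_score = None, -math.inf
    for _ in range(n_iter):
        node = ()
        while len(node) < n_sites:    # selection, with one-step expansion
            child = max((node + (e,) for e in (0, 1)),
                        key=lambda ch: ucb(node, ch))
            if child not in stats:
                stats[child] = [0, 0.0]
                node = child
                break
            node = child
        # Rollout: finish the structure randomly, then query the black box.
        leaf = node + tuple(random.randint(0, 1)
                            for _ in range(n_sites - len(node)))
        r = score(leaf)
        if r > best_score:
            best, best_score = leaf, r
        for d in range(len(node) + 1):    # backpropagate along the path
            stats[node[:d]][0] += 1
            stats[node[:d]][1] += r
    return best, best_score

# Toy black box: reward structures whose Ge fraction is near 50%.
best, val = mcts_design(16, lambda s: -abs(sum(s) / len(s) - 0.5))
print(best, val)
```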

  10. The anatomical problem posed by brain complexity and size: a potential solution.

    PubMed

    DeFelipe, Javier

    2015-01-01

    Over the years the field of neuroanatomy has evolved considerably but unraveling the extraordinary structural and functional complexity of the brain seems to be an unattainable goal, partly due to the fact that it is only possible to obtain an imprecise connection matrix of the brain. The reasons why reaching such a goal appears almost impossible to date is discussed here, together with suggestions of how we could overcome this anatomical problem by establishing new methodologies to study the brain and by promoting interdisciplinary collaboration. Generating a realistic computational model seems to be the solution rather than attempting to fully reconstruct the whole brain or a particular brain region.

  11. Bridge Condition Assessment Using D Numbers

    PubMed Central

    Hu, Yong

    2014-01-01

    Bridge condition assessment is a complex problem influenced by many factors, and an uncertain environment increases its complexity further. Due to the uncertainty in the process of assessment, one of the key problems is the representation of assessment results. Though many methods exist that can deal with uncertain information, they all have deficiencies to some degree. In this paper, a new representation of uncertain information, called D numbers, is presented. It extends the Dempster-Shafer theory. By using D numbers, a new method is developed for bridge condition assessment. Compared to existing methods, the proposed method is simpler and more effective. An illustrative case is given to show the effectiveness of the new method. PMID:24696639

  12. Viagra for Women: Why Doesn't It Exist?

    MedlinePlus

    ... women. A daily pill, Addyi may boost sex drive in women with low sexual desire and who find the experience distressing. Potentially ... don't notice an improvement in your sex drive after eight weeks. Female sexual response is complex. Sexual problems may be due ...

  13. Using mobile probes to inform and measure the effectiveness of traffic control strategies on urban networks.

    DOT National Transportation Integrated Search

    2015-07-01

    Urban traffic congestion is a problem that plagues many cities in the United States. Testing strategies to alleviate this : congestion is especially challenging due to the difficulty of modeling complex urban traffic networks. However, recent work ha...

  14. Protection against hostile algorithms in UNIX software

    NASA Astrophysics Data System (ADS)

    Radatti, Peter V.

    1996-03-01

    Protection against hostile algorithms contained in Unix software is a growing concern without easy answers. Traditional methods used against similar attacks in other operating system environments such as MS-DOS or Macintosh are insufficient in the more complex environment provided by Unix. Additionally, Unix provides a special and significant problem in this regard due to its open and heterogeneous nature. These problems are expected to become both more common and pronounced as 32 bit multiprocess network operating systems become popular. Therefore, the problems experienced today are a good indicator of the problems and the solutions that will be experienced in the future, no matter which operating system becomes predominate.

  15. Data based identification and prediction of nonlinear and complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    The problem of reconstructing nonlinear and complex dynamical systems from measured data or time series is central to many scientific disciplines including physical, biological, computer, and social sciences, as well as engineering and economics. The classic approach to phase-space reconstruction through the methodology of delay-coordinate embedding has been practiced for more than three decades, but the paradigm is effective mostly for low-dimensional dynamical systems. Often, the methodology yields only a topological correspondence of the original system. There are situations in various fields of science and engineering where the systems of interest are complex and high dimensional with many interacting components. A complex system typically exhibits a rich variety of collective dynamics, and it is of great interest to be able to detect, classify, understand, predict, and control the dynamics using data that are becoming increasingly accessible due to the advances of modern information technology. To accomplish these goals, especially prediction and control, an accurate reconstruction of the original system is required. Nonlinear and complex systems identification aims at inferring, from data, the mathematical equations that govern the dynamical evolution and the complex interaction patterns, or topology, among the various components of the system. With successful reconstruction of the system equations and the connecting topology, it may be possible to address challenging and significant problems such as identification of causal relations among the interacting components and detection of hidden nodes. The "inverse" problem thus presents a grand challenge, requiring new paradigms beyond the traditional delay-coordinate embedding methodology. The past fifteen years have witnessed rapid development of contemporary complex graph theory with broad applications in interdisciplinary science and engineering. The combination of graph, information, and nonlinear dynamical systems theories with tools from statistical physics, optimization, engineering control, applied mathematics, and scientific computing enables the development of a number of paradigms to address the problem of nonlinear and complex systems reconstruction. In this Review, we describe the recent advances in this forefront and rapidly evolving field, with a focus on compressive sensing based methods. In particular, compressive sensing is a paradigm developed in recent years in applied mathematics, electrical engineering, and nonlinear physics to reconstruct sparse signals using only limited data. It has broad applications ranging from image compression/reconstruction to the analysis of large-scale sensor networks, and it has become a powerful technique to obtain high-fidelity signals for applications where sufficient observations are not available. We will describe in detail how compressive sensing can be exploited to address a diverse array of problems in data based reconstruction of nonlinear and complex networked systems. The problems include identification of chaotic systems and prediction of catastrophic bifurcations, forecasting future attractors of time-varying nonlinear systems, reconstruction of complex networks with oscillatory and evolutionary game dynamics, detection of hidden nodes, identification of chaotic elements in neuronal networks, reconstruction of complex geospatial networks and nodal positioning, and reconstruction of complex spreading networks with binary data.. 
A number of alternative methods, such as those based on system response to external driving, synchronization, and noise-induced dynamical correlation, will also be discussed. Due to the high relevance of network reconstruction to biological sciences, a special section is devoted to a brief survey of the current methods to infer biological networks. Finally, a number of open problems including control and controllability of complex nonlinear dynamical networks are discussed. The methods outlined in this Review are principled on various concepts in complexity science and engineering such as phase transitions, bifurcations, stabilities, and robustness. The methodologies have the potential to significantly improve our ability to understand a variety of complex dynamical systems ranging from gene regulatory systems to social networks toward the ultimate goal of controlling such systems.
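
    The computational core of the compressive sensing methods surveyed here is sparse recovery from underdetermined linear measurements. As a minimal sketch, assuming a generic l1-regularized least-squares formulation solved by iterative soft-thresholding (ISTA) — not the Review's specific reconstruction pipeline, and with all problem sizes illustrative:

        import numpy as np

        def ista(Phi, y, lam=0.1, iters=500):
            # Solve  min_a 0.5*||Phi a - y||^2 + lam*||a||_1  by iterative
            # soft-thresholding; the l1 penalty promotes the sparsity that
            # compressive sensing exploits.
            L = np.linalg.norm(Phi, 2) ** 2      # Lipschitz constant of the gradient
            a = np.zeros(Phi.shape[1])
            for _ in range(iters):
                z = a - Phi.T @ (Phi @ a - y) / L                       # gradient step
                a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
            return a

        # Toy demo: recover a 5-sparse vector from 40 measurements of 120 unknowns.
        rng = np.random.default_rng(1)
        Phi = rng.standard_normal((40, 120))
        a_true = np.zeros(120)
        a_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
        a_hat = ista(Phi, Phi @ a_true, lam=0.05)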

  16. Using mobile probes to inform and measure the effectiveness of macroscopic traffic control strategies on urban networks.

    DOT National Transportation Integrated Search

    2015-06-01

    Urban traffic congestion is a problem that plagues many cities in the United States. Testing strategies to alleviate this congestion is especially challenging due to the difficulty of modeling complex urban traffic networks. However, recent work ha...

  17. Active Control of Generalized Complex Modal Structures in a Stochastic Environment

    DTIC Science & Technology

    1992-05-15

    began with the design of a baseline controller. The system of interest was a MIMO, heavily damped structure with complex modes, and the control objective...feed-through term in our system that was due to the use of accelerometers as sensors. This provided an acceptable baseline solution to our problem...to which we could compare our ideas for improvement. One area in which the baseline design was deficient was robust stability to unstructured

  18. A review on recent contribution of meshfree methods to structure and fracture mechanics applications.

    PubMed

    Daxini, S D; Prajapati, J M

    2014-01-01

    Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods, such as FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods such as EFG and MLPG to various types of structure mechanics and fracture mechanics applications, including bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single and mixed mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshless methods are computationally expensive compared to conventional mesh-based methods. Some improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.

  19. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
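
    A minimal sketch of the kind of genetic-algorithm loop described above, with a hypothetical quadratic cost standing in for the calibrated GPS-X/ASM1 simulation (the real objective would be the simulated cleaning and energy cost); all bounds and parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def plant_cost(x):
            # Hypothetical stand-in for the calibrated simulation model; a real
            # run would evaluate cleaning and energy costs via GPS-X/ASM1.
            filtration, relaxation = x
            return (filtration - 9.0) ** 2 + (relaxation - 1.5) ** 2

        def genetic_algorithm(cost, bounds, pop_size=40, generations=60, sigma=0.1):
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, (pop_size, len(lo)))
            for _ in range(generations):
                fitness = np.array([cost(ind) for ind in pop])
                parents = pop[np.argsort(fitness)][: pop_size // 2]  # truncation selection
                pairs = rng.integers(0, len(parents), (pop_size, 2))
                mask = rng.random((pop_size, len(lo))) < 0.5         # uniform crossover
                children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
                children += rng.normal(0.0, sigma * (hi - lo), children.shape)  # mutation
                pop = np.clip(children, lo, hi)
            fitness = np.array([cost(ind) for ind in pop])
            return pop[np.argmin(fitness)]

        # Usage: optimize filtration and relaxation times (minutes, illustrative ranges).
        best = genetic_algorithm(plant_cost, bounds=[(1.0, 15.0), (0.2, 5.0)])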

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Thomas W.; Quach, Tu-Thach; Detry, Richard Joseph

    Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space as engineers. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.

  1. Perspective: Quantum mechanical methods in biochemistry and biophysics.

    PubMed

    Cui, Qiang

    2016-10-14

    In this perspective article, I discuss several research topics relevant to quantum mechanical (QM) methods in biophysical and biochemical applications. Due to the immense complexity of biological problems, the key is to develop methods that are able to strike the proper balance of computational efficiency and accuracy for the problem of interest. Therefore, in addition to the development of novel ab initio and density functional theory based QM methods for the study of reactive events that involve complex motifs such as transition metal clusters in metalloenzymes, it is equally important to develop inexpensive QM methods and advanced classical or quantal force fields to describe different physicochemical properties of biomolecules and their behaviors in complex environments. Maintaining a solid connection of these more approximate methods with rigorous QM methods is essential to their transferability and robustness. Comparison to diverse experimental observables helps validate computational models and mechanistic hypotheses as well as driving further development of computational methodologies.

  2. Ending School Avoidance

    ERIC Educational Resources Information Center

    Casoli-Reardon, Michele; Rappaport, Nancy; Kulick, Deborah; Reinfeld, Sarah

    2012-01-01

    School truancy--defined by a student's refusal to attend part or all of the school day, along with a defined number of unexcused absences--is an increasingly frustrating and complex problem for teachers and school administrators. Although statistics on the prevalence of truancy in the United States do not exist due to lack of uniformity among…

  3. Irrational Beliefs and Abuse in University Students' Romantic Relations

    ERIC Educational Resources Information Center

    Kaygusuz, Canani

    2013-01-01

    Problem Statement: The complex nature of romantic relationships, in general, makes the continuation of these relationships a challenge. This situation is even more problematic in traditional societies, as social norms for these relations are more strict and more disciplinarian. University students want to be in romantic relationships due to their…

  4. Unidimensional Interpretations for Multidimensional Test Items

    ERIC Educational Resources Information Center

    Kahraman, Nilufer

    2013-01-01

    This article considers potential problems that can arise in estimating a unidimensional item response theory (IRT) model when some test items are multidimensional (i.e., show a complex factorial structure). More specifically, this study examines (1) the consequences of model misfit on IRT item parameter estimates due to unintended minor item-level…

  5. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.
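
    For orientation, the dense interaction that the fast multipole method compresses is the discretized single-layer potential. The following is a sketch of the naive O(NM) evaluation for the 3D Laplace kernel, assuming simple point sources and targets disjoint from them — not the paper's NURBS-based quadrature:

        import numpy as np

        def single_layer_potential(targets, sources, weights):
            # phi(x) = sum_j w_j / (4*pi*|x - y_j|), the 3D Laplace single-layer
            # kernel; this dense O(N*M) sum is exactly what the interpolation-based
            # fast multipole method compresses to near-linear complexity.
            diff = targets[:, None, :] - sources[None, :, :]
            r = np.linalg.norm(diff, axis=2)
            return (weights[None, :] / (4.0 * np.pi * r)).sum(axis=1)

        # Usage: potentials at 100 targets due to 500 weighted sources, with the
        # source cloud offset so that no target coincides with a source.
        rng = np.random.default_rng(0)
        phi = single_layer_potential(rng.random((100, 3)),
                                     rng.random((500, 3)) + 2.0,
                                     rng.random(500))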

  6. A cornerstone of healthy aging: do we need to rethink the concept of adherence in the elderly?

    PubMed

    Giardini, Anna; Maffoni, Marina; Kardas, Przemyslaw; Costa, Elisio

    2018-01-01

    Worldwide, the population is aging, and this trend will increase in the future due to medical, technological and scientific advancements. Caring for the elderly is highly demanding and challenging for the health care system due to their frequently chronic conditions, multimorbidity, and the consequent complex management of polypharmacy. Nonadherence to medications and to medical plans is a well-recognized public health problem and a very urgent issue in this population. For this reason, some considerations for identifying a new shared approach to the integrated care of older people are described. The concept of adherence should be considered a complex and continuous process in which family, caregivers, and patients' beliefs come into play. Moreover, a new culture of adherence should contemplate the complexity of multimorbidity, as well as the necessity to renegotiate the medication regimen on the basis of each patient's needs.

  7. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of the test specimen and the inspection environments, the availability of theoretical simulation models is extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems. Theoretical models generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, such solutions are generally unobtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  8. Comparison of PASCAL and FORTRAN for solving problems in the physical sciences

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1981-01-01

    The paper compares PASCAL and FORTRAN for problem solving in the physical sciences, prompted by requests NASA has received to make PASCAL available on the Numerical Aerodynamic Simulator (scheduled to be operational in 1986). PASCAL's disadvantages include the lack of scientific utility procedures equivalent to the IBM scientific subroutine package or the IMSL package, which are available in FORTRAN. Its advantages include well-organized code that is easy to read and maintain, range checking to prevent errors, and a broad selection of data types. It is concluded that FORTRAN may be the better language, although ADA (patterned after PASCAL) may surpass FORTRAN due to its ability to add complex and vector math and to specify the precision and range of variables.

  9. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In a general case, a container ship serves many different ports on each voyage. A stowage plan for a container ship made at one port must take account of its influence on subsequent ports, so the complexity of the stowage planning problem increases due to its multi-port nature. This problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a “packing problem”: ship-bays on board the vessel are regarded as bins, the number of slots in each bay is taken as the bin capacity, and groups of containers with common characteristics (homogeneous container groups) are treated as the items to be packed. At this stage, there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Secondly, the containers assigned to each bay at the first stage are allocated to specific slots, with objective functions that minimize the metacentric height, heel, and overstows. A tabu search heuristic algorithm is used to solve the sub-problems. The main focus of this paper is on the first sub-problem. A case study confirms the feasibility of the model and algorithm.
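
    The first-stage packing can be illustrated with a classical bin-packing heuristic. The sketch below uses first-fit decreasing and ignores the overstow objective, so it is only a simplified stand-in for the paper's two-objective model:

        def first_fit_decreasing(groups, bay_capacity):
            # groups: (label, container count) per homogeneous container group;
            # each bay is a bin with a fixed number of slots. Overstows, which
            # the paper's first-stage objective also minimizes, are ignored here.
            bays = []                      # each entry: [remaining slots, labels]
            for label, size in sorted(groups, key=lambda g: -g[1]):
                for bay in bays:
                    if bay[0] >= size:     # first bay with enough free slots
                        bay[0] -= size
                        bay[1].append(label)
                        break
                else:
                    bays.append([bay_capacity - size, [label]])
            return bays

        # Usage: four container groups packed into 20-slot bays.
        print(first_fit_decreasing([("A", 12), ("B", 9), ("C", 7), ("D", 4)], 20))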

  10. Measurement of Cruelty in Children: The Cruelty to Animals Inventory

    ERIC Educational Resources Information Center

    Dadds, Mark R.; Whiting, Clare; Bunn, Paul; Fraser, Jennifer A.; Charlson, Juliana H.; Pirola-Merlo, Andrew

    2004-01-01

    Cruelty to animals may be a particularly pernicious aspect of problematic child development. Progress in understanding the development of the problem is limited due to the complex nature of cruelty as a construct, and limitations with current assessment measures. The Children and Animals Inventory (CAI) was developed as a brief self- and…

  11. 75 FR 3745 - NIH Consensus Development Conference on Vaginal Birth After Cesarean: New Insights; Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... clinicians believed that all of her future pregnancies required delivery by cesarean as well. However, in... placental problems in future pregnancies. Other important considerations that may influence decisionmaking... the pregnancy is relative to her due date, and the size and position of her baby. Given the complexity...

  12. Mixed Methods Research: What Are the Key Issues to Consider?

    ERIC Educational Resources Information Center

    Ghosh, Rajashi

    2016-01-01

    Mixed methods research (MMR) is increasingly becoming a popular methodological approach in several fields due to the promise it holds for comprehensive understanding of complex problems being researched. However, researchers interested in MMR often lack reference to a guide that can explain the key issues pertaining to the paradigm wars…

  13. Semi-Automatic Methods of Knowledge Enhancement

    DTIC Science & Technology

    1988-12-05

    Response was patchy. Apparently awed by the complexity of the problem, only 3 GMs responded and all asked for no public use to be made of their...by the SERC. Thanks are due to the Turing Institute and the Edinburgh University AI department for resources and facilities. We would also like to thank

  14. An Integer Programming-Based Generalized Vehicle Routing Approach for Printed Circuit Board Assembly Optimization

    ERIC Educational Resources Information Center

    Seth, Anupam

    2009-01-01

    Production planning and scheduling for printed circuit, board assembly has so far defied standard operations research approaches due to the size and complexity of the underlying problems, resulting in unexploited automation flexibility. In this thesis, the increasingly popular collect-and-place machine configuration is studied and the assembly…

  15. An active role for machine learning in drug development

    PubMed Central

    Murphy, Robert F.

    2014-01-01

    Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249

  16. Issues Related to Cleaning Complex Geometry Surfaces with ODC-Free Solvents

    NASA Technical Reports Server (NTRS)

    Bradford, Blake F.; Wurth, Laura A.; Nayate, Pramod D.; McCool, Alex (Technical Monitor)

    2001-01-01

    Implementing ozone depleting chemicals (ODC)-free solvents into full-scale reusable solid rocket motor cleaning operations has presented problems due to the low vapor pressures of the solvents. Because of slow evaporation, solvent retention is a problem on porous substrates or on surfaces with irregular geometry, such as threaded boltholes, leak check ports, and nozzle backfill joints. The new solvents are being evaluated to replace 1,1,1-trichloroethane, which readily evaporates from these surfaces. Selection of the solvents to be evaluated on full-scale hardware was made based on results of subscale tests performed with flat surface coupons, which did not manifest the problem. Test efforts have been undertaken to address concerns with the slow-evaporating solvents. These concerns include effects on materials due to long-term exposure to solvent, potential migration from bolthole threads to seal surfaces, and effects on bolt loading due to solvent retention in threads. Tests performed to date have verified that retained solvent does not affect materials or hardware performance. Process modifications have also been developed to assist drying, and these can be implemented if additional drying becomes necessary.

  17. Surface similarity-based molecular query-retrieval

    PubMed Central

    Singh, Rahul

    2007-01-01

    Background Discerning the similarity between molecules is a challenging problem in drug discovery as well as in molecular biology. The importance of this problem is due to the fact that the biochemical characteristics of a molecule are closely related to its structure. Therefore molecular similarity is a key notion in investigations targeting exploration of molecular structural space, query-retrieval in molecular databases, and structure-activity modelling. Determining molecular similarity is related to the choice of molecular representation. Currently, representations with high descriptive power and physical relevance like 3D surface-based descriptors are available. Information from such representations is both surface-based and volumetric. However, most techniques for determining molecular similarity tend to focus on idealized 2D graph-based descriptors due to the complexity that accompanies reasoning with more elaborate representations. Results This paper addresses the problem of determining similarity when molecules are described using complex surface-based representations. It proposes an intrinsic, spherical representation that systematically maps points on a molecular surface to points on a standard coordinate system (a sphere). Molecular surface properties such as shape, field strengths, and effects due to field super-positioning can then be captured as distributions on the surface of the sphere. Surface-based molecular similarity is subsequently determined by computing the similarity of the surface-property distributions using a novel formulation of histogram-intersection. The similarity formulation is not only sensitive to the 3D distribution of the surface properties, but is also highly efficient to compute. Conclusion The proposed method obviates the computationally expensive step of molecular pose-optimisation, can incorporate conformational variations, and facilitates highly efficient determination of similarity by directly comparing molecular surfaces and surface-based properties. Retrieval performance, applications in structure-activity modeling of complex biological properties, and comparisons with existing research and commercial methods demonstrate the validity and effectiveness of the approach. PMID:17634096
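
    In its classical form (the paper describes a novel formulation that is not reproduced here), histogram intersection over normalized surface-property distributions reduces to a sum of bin-wise minima:

        import numpy as np

        def histogram_intersection(h1, h2):
            # Classical histogram intersection of two normalized distributions;
            # 1.0 means identical distributions, 0.0 means disjoint support.
            h1 = np.asarray(h1, dtype=float)
            h2 = np.asarray(h2, dtype=float)
            return float(np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum())

        # Usage: two surface-property histograms binned over the sphere.
        print(histogram_intersection([3, 5, 2, 0, 1], [2, 6, 1, 1, 1]))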

  18. P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems

    NASA Technical Reports Server (NTRS)

    Kang, Kab S.

    2002-01-01

    The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to complex multimaterial geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P(sub 1) nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning

  19. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to the wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
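
    A hedged sketch of the prediction step only: mean-field SI dynamics over an adjacency matrix, assuming the infection probability beta has already been estimated (the paper's iterative estimator for beta from the snapshot is not reproduced here):

        import numpy as np

        def predict_coverage(A, p0, beta, steps):
            # Mean-field SI spreading: p[i] is the probability that node i is
            # infected. A is the 0/1 adjacency matrix; beta the estimated
            # per-contact infection probability.
            p = np.asarray(p0, dtype=float).copy()
            for _ in range(steps):
                # probability node i escapes infection from every neighbor
                escape = np.prod(1.0 - beta * A * p[None, :], axis=1)
                p = p + (1.0 - p) * (1.0 - escape)
            return p.sum()             # expected spreading coverage

        # Usage: predict coverage on a 5-node ring seeded at node 0.
        A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
        print(predict_coverage(A, [1, 0, 0, 0, 0], beta=0.3, steps=10))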

  20. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  1. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    PubMed

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a complexity class similar to that of Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to various versions of Boson Sampling that share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  2. Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.

    PubMed

    Conzelmann, Holger; Gilles, Ernst-Dieter

    2008-01-01

    Mathematical models of biological processes become more and more important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to basic principles of dynamic modeling and highlight both the problems and the opportunities of dynamic modeling in biology. The main focus will be on modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multi-protein signaling complexes. Even for a small number of interacting proteins, the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.

  3. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents modified particle swarm optimization (PSO)-based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. It validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.

  4. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
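
    A sketch of the geometric primitive involved: the Ollivier-Ricci curvature of a graph edge, computed from the Wasserstein-1 distance between lazy random-walk measures at its endpoints. The paper's mapping from curvature to a treewidth estimate is not reproduced; the networkx/scipy usage below is illustrative:

        import networkx as nx
        import numpy as np
        from scipy.optimize import linprog

        def ollivier_ricci(G, x, y, alpha=0.5):
            # kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), where m_v keeps mass
            # alpha at v and spreads the rest uniformly over its neighbors.
            def measure(v):
                nbrs = list(G.neighbors(v))
                m = {v: alpha}
                for u in nbrs:
                    m[u] = m.get(u, 0.0) + (1.0 - alpha) / len(nbrs)
                return m

            mx, my = measure(x), measure(y)
            src, dst = list(mx), list(my)
            cost = np.array([[nx.shortest_path_length(G, s, t) for t in dst]
                             for s in src])
            n, m = len(src), len(dst)
            # W1 as a transport LP: minimize <cost, P> subject to marginals.
            A_eq, b_eq = [], []
            for i in range(n):
                row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
                A_eq.append(row); b_eq.append(mx[src[i]])
            for j in range(m):
                col = np.zeros(n * m); col[j::m] = 1.0
                A_eq.append(col); b_eq.append(my[dst[j]])
            res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                          b_eq=np.array(b_eq), method="highs")
            return 1.0 - res.fun / nx.shortest_path_length(G, x, y)

        # Usage: curvature of one edge of a 6-cycle.
        print(ollivier_ricci(nx.cycle_graph(6), 0, 1))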

  5. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  6. Using Networks to Visualize and Analyze Process Data for Educational Assessment

    ERIC Educational Resources Information Center

    Zhu, Mengxiao; Shu, Zhan; von Davier, Alina A.

    2016-01-01

    New technology enables interactive and adaptive scenario-based tasks (SBTs) to be adopted in educational measurement. At the same time, it is a challenging problem to build appropriate psychometric models to analyze data collected from these tasks, due to the complexity of the data. This study focuses on process data collected from SBTs. We…

  7. Teaching as Regulation and Dealing with Complexity

    ERIC Educational Resources Information Center

    Boshuizen, H. P. A.

    2016-01-01

    At an abstract level, teaching a class can be perceived as one big regulation problem. For an optimal result, teachers must continuously (re)align their goals and sub-goals, and need to get timely and valid information on how they are doing in reaching these goals. This discussion describes the specific difficulties due to the time characteristics…

  8. Collective Impact Approach: A "Tool" for Managing Complex Problems and Business Clusters Sustainability

    ERIC Educational Resources Information Center

    De Chiara, Alessandra

    2017-01-01

    Environmental pollution occurring in industrial districts represents a serious issue not only for local communities but also for those industrial productions that draw from the territory the source of their competitiveness. Due to its ability to take into account the needs of different stakeholders, the collective impact approach has the potential…

  9. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
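
    A minimal sketch of the sampling loop this enables: random-walk Metropolis in which the forward response comes from a cheap trained surrogate rather than the full-waveform solver, with the quantified modeling error folded into the noise level. The linear "surrogate" below is a placeholder, not the paper's trained network; all names and sizes are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis(surrogate, d_obs, sigma, m0, steps=5000, step_size=0.1):
            # sigma should combine observational noise with the probabilistically
            # quantified surrogate modeling error, as the paper prescribes.
            def log_like(m):
                r = surrogate(m) - d_obs
                return -0.5 * float(np.sum((r / sigma) ** 2))

            m = np.asarray(m0, dtype=float)
            ll = log_like(m)
            chain = []
            for _ in range(steps):
                proposal = m + step_size * rng.standard_normal(m.shape)
                ll_prop = log_like(proposal)
                if np.log(rng.random()) < ll_prop - ll:   # Metropolis acceptance
                    m, ll = proposal, ll_prop
                chain.append(m.copy())
            return np.array(chain)

        # Usage with a placeholder "network": a fixed random linear map standing
        # in for the trained traveltime surrogate.
        W = rng.standard_normal((20, 10))
        surrogate = lambda m: W @ m
        m_true = rng.standard_normal(10)
        chain = metropolis(surrogate, surrogate(m_true), sigma=0.05, m0=np.zeros(10))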

  10. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.

  11. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-05-01

    Grade estimation is a quite important and money- and time-consuming stage in a mine project, and it is considered a challenge for geologists and mining engineers due to the structural complexities in mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. As an example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as architecture and local minima, can also arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called the "Coactive Neuro-Fuzzy Inference System" (CANFIS), and it combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) - as a well-known technique to solve complex optimization problems - is also employed to optimize the network parameters, including the learning rate, the momentum of the network, and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with this new method (CANFIS-GA) is also carried out through a case study in the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS-GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is, therefore, suggested for grade estimation in similar problems.

  12. A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation

    PubMed Central

    Tahmasebi, Pejman; Hezarkhani, Ardeshir

    2012-01-01

    Grade estimation is a quite important and money- and time-consuming stage in a mine project, and it is considered a challenge for geologists and mining engineers due to the structural complexities in mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. As an example, one major problem in FL is the difficulty of constructing the membership functions (MFs). Other problems, such as architecture and local minima, can also arise in ANN design. Therefore, a new methodology is presented in this paper for grade estimation. This method, which is based on ANN and FL, is called the “Coactive Neuro-Fuzzy Inference System” (CANFIS), and it combines the two approaches. The combination of these two artificial intelligence approaches is achieved via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA) – as a well-known technique to solve complex optimization problems – is also employed to optimize the network parameters, including the learning rate, the momentum of the network, and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with this new method (CANFIS–GA) is also carried out through a case study in the Sungun copper deposit, located in East-Azerbaijan, Iran. The results show that CANFIS–GA could be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation and is, therefore, suggested for grade estimation in similar problems. PMID:25540468

  13. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology.

    PubMed

    Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu

    2017-06-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).

  14. Privacy preserving processing of genomic data: A survey.

    PubMed

    Akgün, Mete; Bayrak, A Osman; Ozer, Bugra; Sağıroğlu, M Şamil

    2015-08-01

    Recently, the rapid advance in genome sequencing technology has led to the production of huge amounts of sensitive genomic data. However, the increasing number of genetic tests poses a serious privacy challenge, as genomic data is the ultimate source of identity for humans. Lately, privacy threats and possible solutions regarding undesired access to genomic data have been discussed; however, it is challenging to apply the proposed solutions to real-life problems due to the complex nature of security definitions. In this review, we have categorized pre-existing problems and corresponding solutions in a more understandable and convenient way. Additionally, we have also included the open privacy problems that come with each genomic data processing procedure. We believe our classification of genome-associated privacy problems will pave the way for linking real-life problems with previously proposed methods.

  15. Asynchronous State Estimation for Discrete-Time Switched Complex Networks With Communication Constraints.

    PubMed

    Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li

    2018-05-01

    This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Also, the event-based communication, signal quantization, and random packet dropout problems are studied due to limited communication resources. With the help of switched system theory and by resorting to a stochastic system analysis method, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.

  16. Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.

    PubMed

    Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee

    2014-10-01

    Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumption of the function to be optimized and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
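
    A minimal PSO loop of the kind described above (a generic sketch, not the authors' design-optimization code; all parameter values are illustrative):

        import numpy as np

        def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # Each particle is pulled toward its own best position (c1) and the
            # swarm's best position (c2), with inertia w damping the velocity.
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, float(pbest_val.min())

        # Usage: minimize a simple quadratic; converges toward the origin.
        print(pso(lambda z: float(np.sum(z ** 2)), dim=4))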

  17. REVIEWS OF TOPICAL PROBLEMS: Axisymmetric stationary flows in compact astrophysical objects

    NASA Astrophysics Data System (ADS)

    Beskin, Vasilii S.

    1997-07-01

    A review is presented of the analytical results available for a large class of axisymmetric stationary flows in the vicinity of compact astrophysical objects. The determination of the two-dimensional structure of the poloidal magnetic field (hydrodynamic flow field) faces severe difficulties, due to the complexity of the trans-field equation for stationary axisymmetric flows. However, an approach exists which enables direct problems to be solved even within the balance law framework. This possibility arises when an exact solution to the equation is available and flows close to it are investigated. As a result, with the use of simple model problems, the basic features of supersonic flows past real compact objects are determined.

  18. Squeezing the Efimov effect

    NASA Astrophysics Data System (ADS)

    Sandoval, J. H.; Bellotti, F. F.; Yamashita, M. T.; Frederico, T.; Fedorov, D. V.; Jensen, A. S.; Zinner, N. T.

    2018-03-01

    The quantum mechanical three-body problem is a source of continuing interest due to its complexity and not least due to the presence of fascinating solvable cases. The prime example is the Efimov effect where infinitely many bound states of identical bosons can arise at the threshold where the two-body problem has zero binding energy. An important aspect of the Efimov effect is the effect of spatial dimensionality; it has been observed in three dimensional systems, yet it is believed to be impossible in two dimensions. Using modern experimental techniques, it is possible to engineer trap geometry and thus address the intricate nature of quantum few-body physics as function of dimensionality. Here we present a framework for studying the three-body problem as one (continuously) changes the dimensionality of the system all the way from three, through two, and down to a single dimension. This is done by considering the Efimov favorable case of a mass-imbalanced system and with an external confinement provided by a typical experimental case with a (deformed) harmonic trap.

  19. Complex networks for data-driven medicine: the case of Class III dentoskeletal disharmony

    NASA Astrophysics Data System (ADS)

    Scala, A.; Auconi, P.; Scazzocchio, M.; Caldarelli, G.; McNamara, JA; Franchi, L.

    2014-11-01

    In the last decade, the availability of innovative algorithms derived from complexity theory has inspired the development of highly detailed models in various fields, including physics, biology, ecology, economy, and medicine. Due to the availability of novel and ever more sophisticated diagnostic procedures, all biomedical disciplines face the problem of using the increasing amount of information concerning each patient to improve diagnosis and prevention. In particular, in the discipline of orthodontics the current diagnostic approach based on clinical and radiographic data is problematic due to the complexity of craniofacial features and to the numerous interacting co-dependent skeletal and dentoalveolar components. In this study, we demonstrate the capability of computational methods such as network analysis and module detection to extract organizing principles in 70 patients with excessive mandibular skeletal protrusion with underbite, a condition known in orthodontics as Class III malocclusion. Our results could possibly constitute a template framework for organising the increasing amount of medical data available for patients’ diagnosis.

  20. [One-stage reconstruction of zygomatico-orbital complex with the use of implants of different origin].

    PubMed

    Menabde, G T; Gvenetadze, Z V; Atskvereli, L Sh

    2009-03-01

    Reconstruction of the zygomatico-orbital complex remains one of the most troublesome and topical problems in both persistent posttraumatic deformations and fresh traumas of this region. The present work analyzes our own experience of the surgical treatment of patients suffering from posttraumatic deformations and defects of the zygomatico-orbital complex. The work was based on the results of the examination and treatment of 33 patients who underwent operations between 2003 and 2008. Of the 33 patients, 21 were operated on for fresh traumas of the zygomatico-orbital region and 12 for persistent posttraumatic deformations. In 19 of the 33 clinical cases, reconstruction of the zygomatico-orbital complex was performed with the use of implants: perforated titanium plates in 11 cases, bone cement (Surgical Simplex P) in 6 cases, and a combination of titanium plates with bone cement in 2 cases. The results of our investigations have shown that one-stage reconstruction of the zygomatico-orbital complex with the use of titanium plates and bone cement eliminates functional and cosmetic disorders. It is suggested that the use of the elaborated complex approaches in the treatment of posttraumatic deformations and fresh traumas of the zygomatico-orbital region is reasonable and acceptable.

  1. Critical Success Factors (CSFs) for Implementation of Enterprise Resource Planning (ERP) Systems in Various Industries, Including Institutions of Higher Education (IHEs)

    ERIC Educational Resources Information Center

    Debrosse-Bruno, Marie Michael

    2017-01-01

    Enterprise Resource Planning (ERP) systems present a management problem for various industries including institutions of higher education (IHEs) because they are costly to acquire, challenging to implement, and often fail to meet anticipated expectations. ERP systems are highly complex due to the nature of the operations they support. This…

  2. System Thinking Scales and Learning Environment of Family Planning Field Workers in East Java, Indonesia

    ERIC Educational Resources Information Center

    Listyawardani, Dwi; Hariastuti, Iswari

    2016-01-01

    Systems thinking is needed due to the growing complexity of the problems family planning field workers face in a constantly changing external environment. Systems thinking ability cannot be separated from efforts to develop learning for the workers, whether at the individual, group, or organization level. The design of the study…

  3. The Role of Search in University Productivity: Inside, outside, and Interdisciplinary Dimensions. NBER Working Paper No. 15489

    ERIC Educational Resources Information Center

    Adams, James D.; Clemmons, J. Roger

    2009-01-01

    Due to improving information technology, the growing complexity of research problems, and policies designed to foster interdisciplinary research, the practice of science in the United States has undergone significant structural change. Using a sample of 110 top U.S. universities observed during the late 20th century we find that knowledge flows,…

  4. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  5. Assessing problem-solving skills in construction education with the virtual construction simulator

    NASA Astrophysics Data System (ADS)

    Castronovo, Fadi

    The ability to solve complex problems is an essential skill that a construction and project manager must possess when entering the architectural, engineering, and construction industry. Such ability requires a mixture of problem-solving skills, ranging from lower to higher order thinking skills, composed of cognitive and metacognitive processes. These skills include the ability to develop and evaluate construction plans and manage the execution of such plans. However, in a typical construction program, introducing students to such complex problems can be a challenge, and most commonly the learner is presented with only part of a complex problem. To address this challenge, the traditional methodology of delivering design, engineering, and construction instruction has been going through a technological revolution due to the rise of computer-based technology. For example, in construction classrooms, as in other disciplines, simulations and educational games are being utilized to support the development of problem-solving skills. Previous engineering education research has illustrated the high potential that simulations and educational games have for engaging lower and higher order thinking skills, and their capacity to support the development of problem-solving skills. This research presents evidence supporting the theory that educational simulation games can help with the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems. The educational simulation game employed in this study is the Virtual Construction Simulator (VCS). The VCS is a game developed to provide students with an engaging learning activity that simulates the planning and managing phases of a construction project. Assessment of the third iteration of the game, VCS3, has shown pedagogical value in promoting students' motivation and a basic understanding of construction concepts. To further evaluate the benefits for problem-solving skills, a new version, VCS4, was developed with new building modules and an assessment framework. The design and development of the VCS4 leveraged research in educational psychology, multimedia learning, human-computer interaction, and Building Information Modeling. In this dissertation the researcher aimed to evaluate the pedagogical value of the VCS4 in fostering problem-solving skills. To answer the research questions, a crossover repeated-measures quasi-experiment was designed to assess the educational gains that the VCS can provide to construction education. A group of 34 students attending a fourth-year construction course at a university in the United States was chosen to participate in the experiment. The three learning modules of the VCS were used, which challenged the students to plan and manage the construction process of a wooden pavilion, the steel erection of a dormitory, and the concrete placement of the same dormitory. Based on the results, the researcher was able to provide evidence supporting the hypothesis that the chosen sample of construction students gained and retained the problem-solving skills necessary to solve complex construction simulation problems, regardless of the sequence in which the modules were played. In conclusion, the presented results provide evidence supporting the theory that educational simulation games can help the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems.

  6. Fusing terrain and goals: agent control in urban environments

    NASA Astrophysics Data System (ADS)

    Kaptan, Varol; Gelenbe, Erol

    2006-04-01

    The changing face of contemporary military conflicts has forced a major shift of focus in tactical planning and evaluation from the classical Cold War battlefield to an asymmetric guerrilla-type warfare in densely populated urban areas. The new arena of conflict presents unique operational difficulties due to factors like complex mobility restrictions and the necessity to preserve civilian lives and infrastructure. In this paper we present a novel method for autonomous agent control in an urban environment. Our approach is based on fusing terrain information and agent goals for the purpose of transforming the problem of navigation in a complex environment with many obstacles into the easier problem of navigation in a virtual obstacle-free space. The main advantage of our approach is its ability to act as an adapter layer for a number of efficient agent control techniques which normally show poor performance when applied to an environment with many complex obstacles. Because of the very low computational and space complexity at runtime, our method is also particularly well suited for simulation or control of a huge number of agents (military as well as civilian) in a complex urban environment where traditional path-planning may be too expensive or where a just-in-time decision with hard real-time constraints is required.
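
    A rough sketch of the kind of lightweight controller such an adapter layer targets is shown below: a classic potential-field step with attraction to the goal and short-range repulsion from obstacles. This is a generic textbook technique (which degrades in cluttered environments, motivating the paper's virtual obstacle-free transform), not the terrain-goal fusion method itself; all gains and positions are illustrative.

```python
import numpy as np

# Generic potential-field navigation step (illustrative, not the paper's method):
# pull toward the goal, push away from obstacles within range r0.
def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, r0=2.0):
    force = k_att * (goal - pos)                       # attraction to the goal
    for ob in obstacles:
        d = pos - ob
        dist = np.linalg.norm(d)
        if 1e-9 < dist < r0:                           # repulsion when close
            force += k_rep * (1/dist - 1/r0) * d / dist**3
    return pos + 0.05 * force / max(np.linalg.norm(force), 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.5]), np.array([7.5, 8.0])]   # hypothetical layout
for _ in range(400):
    pos = potential_step(pos, goal, obstacles)
print(pos)   # ends near the goal after skirting the obstacles
```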

  7. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  8. Algorithms for elasto-plastic-creep postbuckling

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1984-01-01

    This paper considers the development of an improved constrained time-stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high-temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.

  9. Tackling Complex Emergency Response Solutions Evaluation Problems in Sustainable Development by Fuzzy Group Decision Making Approaches with Considering Decision Hesitancy and Prioritization among Assessing Criteria.

    PubMed

    Qi, Xiao-Wen; Zhang, Jun-Ling; Zhao, Shu-Ping; Liang, Chang-Yong

    2017-10-02

    In order to be prepared against potential balance-breaking risks affecting economic development, more and more countries have recognized emergency response solutions evaluation (ERSE) as an indispensable activity in their governance of sustainable development. Due to the complexity of practical ERSE problems, traditional multiple criteria group decision making (MCGDM) approaches to ERSE face the simultaneous challenges of decision hesitancy and prioritization relations among assessing criteria. Therefore, targeting the special type of ERSE problems that exhibit these two characteristics, we investigate effective MCGDM approaches that employ the interval-valued dual hesitant fuzzy set (IVDHFS) to comprehensively depict decision hesitancy. To exploit the decision information embedded in prioritization relations among criteria, we first define a fuzzy entropy measure for IVDHFS so that the decision models derived from it can avoid the potential information distortion of models based on classic IVDHFS distance measures with subjective supplementing mechanisms; further, based on the defined entropy measure, we develop two fundamental prioritized operators for IVDHFS by extending Yager's prioritized operators. On the strength of the above methods, we then construct two hesitant fuzzy MCGDM approaches to tackle complex scenarios with or without known weights for decision makers, respectively. Finally, case studies are conducted to show the effectiveness and practicality of the proposed approaches.
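
    For readers unfamiliar with the underlying aggregation, a minimal scalar sketch of Yager's prioritized averaging operator is given below; the paper extends such operators to interval-valued dual hesitant fuzzy information, which is omitted here, and the criterion scores are hypothetical.

```python
# Scalar Yager prioritized average: each criterion's weight depends on the
# satisfaction of all higher-priority criteria (scores here are hypothetical).
def prioritized_average(scores):
    """scores: satisfactions in [0, 1], ordered from highest priority down."""
    T = [1.0]
    for s in scores[:-1]:
        T.append(T[-1] * s)           # T_i = product of higher-priority scores
    weights = [t / sum(T) for t in T]
    return sum(w * s for w, s in zip(weights, scores))

# a poor score on the top-priority criterion suppresses everything below it
print(prioritized_average([0.9, 0.6, 0.8]))   # ~0.77
print(prioritized_average([0.2, 0.9, 0.9]))   # ~0.39
```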

  10. Tackling Complex Emergency Response Solutions Evaluation Problems in Sustainable Development by Fuzzy Group Decision Making Approaches with Considering Decision Hesitancy and Prioritization among Assessing Criteria

    PubMed Central

    Qi, Xiao-Wen; Zhang, Jun-Ling; Zhao, Shu-Ping; Liang, Chang-Yong

    2017-01-01

    In order to be prepared against potential balance-breaking risks affecting economic development, more and more countries have recognized emergency response solutions evaluation (ERSE) as an indispensable activity in their governance of sustainable development. Due to the complexity of practical ERSE problems, traditional multiple criteria group decision making (MCGDM) approaches to ERSE face the simultaneous challenges of decision hesitancy and prioritization relations among assessing criteria. Therefore, targeting the special type of ERSE problems that exhibit these two characteristics, we investigate effective MCGDM approaches that employ the interval-valued dual hesitant fuzzy set (IVDHFS) to comprehensively depict decision hesitancy. To exploit the decision information embedded in prioritization relations among criteria, we first define a fuzzy entropy measure for IVDHFS so that the decision models derived from it can avoid the potential information distortion of models based on classic IVDHFS distance measures with subjective supplementing mechanisms; further, based on the defined entropy measure, we develop two fundamental prioritized operators for IVDHFS by extending Yager’s prioritized operators. On the strength of the above methods, we then construct two hesitant fuzzy MCGDM approaches to tackle complex scenarios with or without known weights for decision makers, respectively. Finally, case studies are conducted to show the effectiveness and practicality of the proposed approaches. PMID:28974045

  11. Fluorophore Metal-Organic Complexes: High-Throughput Optical Screening for Aprotic Electrochemical Systems.

    PubMed

    Park, Sung Hyeon; Choi, Chang Hyuck; Lee, Seung Yong; Woo, Seong Ihl

    2017-02-13

    Combinatorial optical screening of aprotic electrocatalysts has not yet been achieved, primarily due to H+-associated mechanisms of fluorophore modulation. We have overcome this problem by using fluorophore metal-organic complexes. In particular, eosin Y and quinine can be coordinated with various metallic cations (e.g., Li+, Na+, Mg2+, Zn2+, and Al3+) in aprotic solvents, triggering changes in their fluorescent properties. These interactions have been used in a reliable screening method to determine oxygen reduction/evolution reaction activities of 100 Mn-based binary catalysts for the aprotic Li-air battery.

  12. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
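
    The flavor of the first model can be sketched with the usual two-parameter form attributed to Hockney: transfer time t(n) = t0 + n / r_inf, where t0 is startup latency and r_inf the asymptotic rate. The parameter values below are illustrative, not taken from the paper.

```python
# Two-parameter Hockney-style communication model (illustrative parameters).
def transfer_time(n_bytes, t0=20e-6, r_inf=1e9):
    """Time to move an n-byte message: startup latency plus streaming time."""
    return t0 + n_bytes / r_inf

def effective_rate(n_bytes, t0=20e-6, r_inf=1e9):
    """Achieved bandwidth; approaches r_inf only for large messages."""
    return n_bytes / transfer_time(n_bytes, t0, r_inf)

n_half = 20e-6 * 1e9                   # message size reaching half the peak rate
print(effective_rate(n_half) / 1e9)    # -> 0.5 (fraction of peak)
```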

  13. Complex space monofilar approximation of diffraction currents on a conducting half plane

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    A simple approximation of the diffraction surface currents on a conducting half plane, due to an incoming plane wave, is obtained with a line current (monofilar) in complex space. Compared to an approximating current at the edge, the diffraction pattern is seen to improve by an order of magnitude in accuracy for a minimal increase in computational effort. Thus, the inconvenient Fresnel integral functions can be avoided for quick calculations of diffracted fields, and the accuracy is good in directions other than along the half plane. The method can be applied to general problems involving planar metal edges.

  14. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  15. Modelling DC responses of 3D complex fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  16. Modelling DC responses of 3D complex fracture networks

    DOE PAGES

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    2018-03-01

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  17. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction in computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.

  18. Children's environment and health in Latin America: the Ecuadorian case.

    PubMed

    Harari, Raul; Harari, Homero

    2006-09-01

    Environmental health problems of children in Latin America and Ecuador are complex due to the close relationship that exists between social and environmental factors. Extended poverty and basic problems, such as the lack of drinking water and sanitation, are common. Infectious diseases are the greatest cause of morbidity and mortality among children. Development in industry and the introduction of chemical substances in agriculture add new risks, including pesticide use, heavy metal exposure, and air pollution. Major problems can be divided into (a) lack of basic infrastructure, (b) poor living conditions, (c) specific environmental problems, and (d) child labor. Reproductive health disorders are frequent in developing countries like Ecuador. Work on children's environmental health should consider new approaches, creative methodologies, and the search for independent predictors to separate environmental from social problems. Only with knowledge of the specific contribution of each factor is it possible to develop a strategy for prevention.

  19. Lost in the crowd? Using eye-tracking to investigate the effect of complexity on attribute non-attendance in discrete choice experiments.

    PubMed

    Spinks, Jean; Mortimer, Duncan

    2016-02-03

    The provision of additional information is often assumed to improve consumption decisions, allowing consumers to more accurately weigh the costs and benefits of alternatives. However, increasing the complexity of decision problems may prompt changes in information processing. This is particularly relevant for experimental methods such as discrete choice experiments (DCEs), where the researcher can manipulate the complexity of the decision problem. The primary aims of this study are (i) to test whether consumers actually process additional information in an already complex decision problem, and (ii) to consider the implications of any such 'complexity-driven' changes in information processing for the design and analysis of DCEs. A DCE is used to simulate a complex decision problem; here, the choice between complementary and conventional medicine for different health conditions. Eye-tracking technology is used to capture the number of times and the duration that a participant looks at any part of a computer screen during completion of DCE choice sets. From this we can analyse what has become known in the DCE literature as 'attribute non-attendance' (ANA). Using data from 32 participants, we model the likelihood of ANA as a function of choice set complexity and respondent characteristics using fixed and random effects models to account for repeated choice set completion. We also model whether participants are consistent with regard to which characteristics (attributes) they consider across choice sets. We find that complexity is the strongest predictor of ANA when other possible influences, such as time pressure, ordering effects, survey-specific effects and socio-demographic variables (including proxies for prior experience with the decision problem), are considered. We also find that most participants do not apply a consistent information processing strategy across choice sets. Eye-tracking technology shows promise as a way of obtaining additional information from consumer research, improving DCE design, and informing the design of policy measures. With regard to DCE design, results from the present study suggest that eye-tracking data can identify the point at which adding complexity (and realism) to DCE choice scenarios becomes self-defeating due to unacceptable increases in ANA. Eye-tracking data therefore have clear application in the construction of guidelines for DCE design and during piloting of DCE choice scenarios. With regard to the design of policy measures, such as labelling requirements for CAM and conventional medicines, the provision of additional information has the potential to make difficult decisions even harder and may not have the desired effect on decision-making.

  20. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  1. Complex Langevin simulation of QCD at finite density and low temperature using the deformation technique

    NASA Astrophysics Data System (ADS)

    Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji

    2018-03-01

    We study QCD at finite density and low temperature by using the complex Langevin method. We employ gauge cooling to control the unitarity norm and introduce a deformation parameter in the Dirac operator to avoid the singular-drift problem. The reliability of the obtained results is judged by the probability distribution of the magnitude of the drift term. By making extrapolations with respect to the deformation parameter using only the reliable results, we obtain results for the original system. We perform simulations on a 4³ × 8 lattice and show that our method works well even in the region where the reweighting method fails due to the severe sign problem. As a result we observe a delayed onset of the baryon number density as compared with the phase-quenched model, which is a clear sign of the Silver Blaze phenomenon.
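
    The essence of the method can be seen in a toy model: for a complex Gaussian action S = σz²/2 the Langevin drift is −σz, the variable is complexified while the noise stays real, and the drift magnitude can be monitored as a crude reliability check, loosely mirroring the criterion described above. This sketch assumes nothing about QCD; σ, the step size, and the run length are illustrative.

```python
import numpy as np

# Toy complex Langevin run for S = sigma * z^2 / 2; <z^2> should converge
# to 1/sigma for Re(sigma) > 0. All parameters are illustrative.
rng = np.random.default_rng(0)
sigma = 1.0 + 1.0j
dt, nsteps, ntherm = 1e-3, 200_000, 20_000

z = 0.0 + 0.0j
samples, max_drift = [], 0.0
for step in range(nsteps):
    drift = -sigma * z                                   # -dS/dz
    z = z + drift * dt + np.sqrt(2 * dt) * rng.normal()  # real noise
    if step >= ntherm:
        samples.append(z * z)
        max_drift = max(max_drift, abs(drift))           # reliability monitor

print("CL estimate of <z^2>:", np.mean(samples))   # ~ 0.5 - 0.5j
print("exact 1/sigma:       ", 1 / sigma)
print("largest |drift| seen:", max_drift)
```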

  2. Technology-Based Assessments for 21st Century Skills: Theoretical and Practical Implications from Modern Research. Current Perspectives on Cognition, Learning and Instruction

    ERIC Educational Resources Information Center

    Mayrath, Michael C., Ed.; Clarke-Midura, Jody, Ed.; Robinson, Daniel H., Ed.; Schraw, Gregory, Ed.

    2012-01-01

    Creative problem solving, collaboration, and technology fluency are core skills requisite of any nation's workforce that strives to be competitive in the 21st Century. Teaching these types of skills is an economic imperative, and assessment is a fundamental component of any pedagogical program. Yet, measurement of these skills is complex due to…

  3. Application of the Convergence Technique to Basic Studies of the Reading Process. Final Report.

    ERIC Educational Resources Information Center

    Gephart, William J.

    This study covers a program of research on problems in the area of reading undertaken and supported by the U. S. Office of Education. Due to the effectiveness of the Convergence Technique in the planning and management of complex programs of bio-medical research, this project is undertaken to develop plans for the application of this technique in…

  4. Pedagogical Conditions of Formation of Professional Competence of Future Music Teachers on the Basis of an Interdisciplinary Approach

    ERIC Educational Resources Information Center

    Gromova, Chulpan R.; Saitova, Lira R.

    2016-01-01

    The relevance of the research problem is due to the need for music teachers with a high level of professional competence and for the determination of the content and principles of an interdisciplinary approach to its formation. The aim of the article lies in the development and testing of a complex of pedagogical conditions for the formation of professional…

  5. Epistemic Evaluation of the Training and Managerial Competence Development Process

    ERIC Educational Resources Information Center

    Machado, Evelio F.; Zambrano, Marcos T.; Montes de Oca, Nancy

    2015-01-01

    This article presents the problem of defining the concept of "competence", since it is an integral and complex term that has been applied in many domains as well as in a more general sense in everyday life. However, a competence can no doubt only be tested and evaluated in practice, and it is a person who becomes competent in a…

  6. Computational problems and signal processing in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard

    1991-01-01

    The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10 exp 11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.

  7. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A genetic algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through the incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature, variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality of and confidence in the final solution obtained, and can be applied to other complex systems such as the fitting of other spectroscopic or kinetics data.
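
    As a schematic of the general idea, the sketch below runs a GA whose penalty weight adapts to the population's feasibility ratio. It is a generic adaptive-penalty stand-in, not the Nanakorn scheme or the VTVH MCD fit itself; the objective and constraint are hypothetical.

```python
import numpy as np

# Generic GA with an adaptive penalty: tighten the penalty when too much of
# the population is infeasible, relax it otherwise (illustrative scheme only).
rng = np.random.default_rng(1)

def objective(x):                       # minimize a Rosenbrock-type function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def violation(x):                       # toy constraint: x0 + x1 <= 1
    return max(0.0, x[0] + x[1] - 1.0)

def penalized(pop, lam):
    return np.array([objective(x) + lam * violation(x) for x in pop])

pop, lam = rng.uniform(-2, 2, size=(60, 2)), 1.0
for gen in range(300):
    f = penalized(pop, lam)
    # binary tournament selection (lower penalized fitness wins)
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover plus Gaussian mutation
    partners = parents[rng.permutation(len(parents))]
    alpha = rng.uniform(size=(len(parents), 1))
    pop = alpha * parents + (1 - alpha) * partners \
          + rng.normal(0.0, 0.05, parents.shape)
    # adapt the penalty weight from the population's feasibility ratio
    infeasible = np.mean([violation(x) > 0 for x in pop])
    lam = lam * 1.2 if infeasible > 0.5 else max(lam / 1.2, 1e-3)

best = pop[np.argmin(penalized(pop, lam))]
print(best, objective(best), violation(best))
```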

  8. Automatically Detect and Track Multiple Fish Swimming in Shallow Water with Frequent Occlusion

    PubMed Central

    Qian, Zhi-Ming; Cheng, Xi En; Chen, Yan Qiu

    2014-01-01

    Due to its universality, swarm behavior in nature attracts much attention from scientists in many fields. Fish schools are examples of biological communities that demonstrate swarm behavior. The detection and tracking of fish in a school are of great significance for quantitative research on swarm behavior. However, unlike other biological communities, fish schools pose three problems for detection and tracking: variable appearances, complex motion, and frequent occlusion. To solve these problems, we propose an effective method of fish detection and tracking. In this method, first, the fish head region is positioned through extremum detection and ellipse fitting; second, Kalman filtering and feature matching are used to track the target in complex motion; finally, according to the feature information obtained by the detection and tracking, the tracking problems caused by frequent occlusion are processed through trajectory linking. We apply this method to track swimming fish schools of different densities. The experimental results show that the proposed method is both accurate and reliable. PMID:25207811
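
    A minimal constant-velocity Kalman filter of the kind usable for the tracking stage is sketched below; the matrices and noise levels are illustrative, and z = None stands in for a frame where the head detection is lost to occlusion, in which case the filter simply coasts on its prediction.

```python
import numpy as np

# Constant-velocity Kalman filter for one track; state is (x, y, vx, vy).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (illustrative)
R = 1.0 * np.eye(2)             # measurement noise (illustrative)

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    if z is not None:                              # update when detected
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 1.0]), None, np.array([3.0, 3.2])]:  # None = occluded
    x, P = kalman_step(x, P, z)
print(x[:2])    # estimated head position after three frames
```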

  9. Immersed boundary methods for simulating fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Yang, Xiaolei

    2014-02-01

    Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.

  10. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structure analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from the optimization-based algorithms to the heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with a higher accuracy and a lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism in the wake of foraging process of Physarum, which is a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network and the positive feedback of solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to estimate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.
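
    One of the baseline heuristics named above, Markov clustering, is compact enough to sketch; the Physarum-inspired framework is described as enhancing algorithms like this rather than replacing them. The expansion and inflation parameters below are the common defaults, chosen here purely for illustration.

```python
import numpy as np

# Bare-bones Markov clustering (MCL): alternate expansion (random-walk flow)
# and inflation (strengthening strong flows) on a column-stochastic matrix.
def mcl(adj, inflation=2.0, iters=50):
    M = adj + np.eye(len(adj))          # add self-loops
    M = M / M.sum(axis=0)               # column-stochastic transition matrix
    for _ in range(iters):
        M = M @ M                       # expansion
        M = M ** inflation              # inflation (elementwise power)
        M = M / M.sum(axis=0)
    clusters = {tuple(np.nonzero(row > 1e-6)[0]) for row in M}
    return [c for c in clusters if c]   # attractor rows define clusters

# two triangles joined by a single bridge edge (2-3)
adj = np.zeros((6, 6))
for a, b in [(0,1), (0,2), (1,2), (3,4), (3,5), (4,5), (2,3)]:
    adj[a, b] = adj[b, a] = 1.0
print(mcl(adj))    # expected: [(0, 1, 2), (3, 4, 5)] in some order
```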

  11. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology

    PubMed Central

    Cankorur-Cetinkaya, Ayca; Dias, Joao M. L.; Kludas, Jana; Slater, Nigel K. H.; Rousu, Juho; Dikicioglu, Duygu

    2017-01-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a genetic algorithm to discover the subspace containing the optimal combination of parameters, and symbolic regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257). PMID:28635591

  12. Complex Langevin dynamics and zeroes of the fermion determinant

    NASA Astrophysics Data System (ADS)

    Aarts, Gert; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu

    2017-05-01

    QCD at nonzero baryon chemical potential suffers from the sign problem, due to the complex quark determinant. Complex Langevin dynamics can provide a solution, provided certain conditions are met. One of these conditions, holomorphicity of the Langevin drift, is absent in QCD since zeroes of the determinant result in a meromorphic drift. We first derive how poles in the drift affect the formal justification of the approach and then explore the various possibilities in simple models. The lessons from these are subsequently applied to both heavy dense QCD and full QCD, and we find that the results obtained show a consistent picture. We conclude that with careful monitoring, the method can be justified a posteriori, even in the presence of meromorphicity.

  13. Fluctuating residual limb volume accommodated with an adjustable, modular socket design: A novel case report.

    PubMed

    Mitton, Kay; Kulkarni, Jai; Dunn, Kenneth William; Ung, Anthony Hoang

    2017-10-01

    This novel case report describes the problems of prescribing a prosthetic socket in a left transfemoral amputee secondary to chronic patellofemoral instability compounded by complex regional pain syndrome. Case Description and Methods: Following the amputation, complex regional pain syndrome symptoms recurred in the residual limb, presenting mainly with oedema. Due to extreme daily volume fluctuations of the residual limb, a conventional, laminated thermoplastic socket fitting was not feasible. Findings and Outcomes: An adjustable, modular socket design was trialled. The residual limb volume fluctuations were accommodated within the socket. Amputee rehabilitation could be continued, and the rehabilitation goals were achieved. The patient was able to wear the prosthesis for 8 h daily and to walk unaided indoors and outdoors. An adjustable, modular socket design accommodated the daily residual limb volume fluctuations and provided a successful outcome in this case. It demonstrates the complexities of socket fitting and design with volume fluctuations. Clinical relevance Ongoing complex regional pain syndrome symptoms within the residual limb can lead to fitting difficulties in a conventional, laminated thermoplastic socket due to volume fluctuations. An adjustable, modular socket design can accommodate this and provide a successful outcome.

  14. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
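
    A minimal sketch of the POD side of such a framework is shown below: assemble snapshots, take an SVD, keep the modes capturing most of the energy, and project. SOD additionally uses velocity (time-derivative) information and solves a generalized eigenproblem, which is omitted here; the snapshot data are synthetic.

```python
import numpy as np

# POD via SVD of a snapshot matrix whose columns are states at successive times.
t = np.linspace(0, 10, 200)
x = np.linspace(0, 1, 500)
X = (np.outer(np.sin(np.pi * x), np.sin(t))                   # dominant mode
     + 0.3 * np.outer(np.sin(3 * np.pi * x), np.cos(5 * t)))  # weak mode

U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1     # modes for 99.9% energy
Phi = U[:, :r]                                  # reduced-order basis
X_r = Phi @ (Phi.T @ X)                         # rank-r reconstruction
print(r, np.linalg.norm(X - X_r) / np.linalg.norm(X))   # 2 modes, tiny error
```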

  15. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    NASA Astrophysics Data System (ADS)

    McCaskill, John

    There can be large spatial and temporal separations of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for the probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that only seem improbable due to the issue of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects in complex systems. This chapter highlights those problem issues and offers insights into the usefulness of agent-based modeling (ABM) in dealing with non-linear effects in complex policy-making environments.

  16. Use of dispersion modelling for Environmental Impact Assessment of biological air pollution from composting: Progress, problems and prospects.

    PubMed

    Douglas, P; Hayes, E T; Williams, W B; Tyrrel, S F; Kinnersley, R P; Walsh, K; O'Driscoll, M; Longhurst, P J; Pollard, S J T; Drew, G H

    2017-12-01

    With the increase in composting as a sustainable waste management option, biological air pollution (bioaerosols) from composting facilities has become a cause of increasing concern due to its potential health impacts. Estimating community exposure to bioaerosols is problematic due to limitations in current monitoring methods. Atmospheric dispersion modelling can be used to estimate exposure concentrations; however, several issues arise from the lack of appropriate bioaerosol data to use as inputs into models, and from the complexity of the emission sources at composting facilities. This paper analyses current progress in using dispersion models for bioaerosols, examines the remaining problems and provides recommendations for future prospects in this area. A key finding is the urgent need for guidance for model users to ensure consistent bioaerosol modelling practices. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.

  18. Lattice Boltzmann modeling of transport phenomena in fuel cells and flow batteries

    NASA Astrophysics Data System (ADS)

    Xu, Ao; Shyy, Wei; Zhao, Tianshou

    2017-06-01

    Fuel cells and flow batteries are promising technologies to address climate change and air pollution problems. An understanding of the complex multiscale and multiphysics transport phenomena occurring in these electrochemical systems requires powerful numerical tools. Over the past decades, the lattice Boltzmann (LB) method has attracted broad interest in the computational fluid dynamics and the numerical heat transfer communities, primarily due to its kinetic nature making it appropriate for modeling complex multiphase transport phenomena. More importantly, the LB method fits well with parallel computing due to its locality feature, which is required for large-scale engineering applications. In this article, we review the LB method for gas-liquid two-phase flows, coupled fluid flow and mass transport in porous media, and particulate flows. Examples of applications are provided in fuel cells and flow batteries. Further developments of the LB method are also outlined.
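
    The kinetic, local nature of the method is visible even in a minimal D2Q9 BGK sketch: a purely local collision step followed by nearest-neighbor streaming, here applied to the viscous decay of a shear wave on a periodic box. The parameters are illustrative and unrelated to the fuel-cell and flow-battery applications reviewed in the article.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann loop (illustrative parameters).
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                   # relaxation time, nu = (tau-0.5)/3

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

nx = ny = 64
rho = np.ones((nx, ny))
uy = np.zeros((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny) * np.ones((nx, ny))

f = equilibrium(rho, ux, uy)
for step in range(200):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision (local)
    for i in range(9):                                  # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print(ux.max())   # shear-wave amplitude decays as exp(-nu * k^2 * t)
```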

  19. Issues with RNA-seq analysis in non-model organisms: A salmonid example.

    PubMed

    Sundaram, Arvind; Tengs, Torstein; Grimholt, Unni

    2017-10-01

    High throughput sequencing (HTS) is useful for many purposes, as exemplified by the other topics included in this special issue. The purpose of this paper is to look into the unique challenges of using this technology in non-model organisms, where resources such as genomes, functional genome annotations or genome complexity provide obstacles not met in model organisms. To describe these challenges, we narrow our scope to RNA sequencing used to study differential gene expression in response to pathogen challenge. As a demonstration species we chose Atlantic salmon, which has a sequenced genome with poor annotation and an added complexity due to many duplicated genes. We find that our RNA-seq analysis pipeline distinguishes between duplicates despite high sequence identity. However, annotation issues cause problems in linking differentially expressed genes to pathways. Also, comparing results between approaches and species is complicated due to the lack of standardized annotation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Influence of the electron-cation interaction on electron mobility in dye-sensitized ZnO and TiO2 nanocrystals: a study using ultrafast terahertz spectroscopy.

    PubMed

    Nemec, H; Rochford, J; Taratula, O; Galoppini, E; Kuzel, P; Polívka, T; Yartsev, A; Sundström, V

    2010-05-14

    Charge transport and recombination in nanostructured semiconductors are poorly understood key processes in dye-sensitized solar cells. We have employed time-resolved spectroscopies in the terahertz and visible spectral regions supplemented with Monte Carlo simulations to obtain unique information on these processes. Our results show that charge transport in the active solar cell material can be very different from that in nonsensitized semiconductors, due to strong electrostatic interaction between injected electrons and dye cations at the surface of the semiconductor nanoparticle. For ZnO, this leads to formation of an electron-cation complex which causes fast charge recombination and dramatically decreases the electron mobility even after the dissociation of the complex. Sensitized TiO2 does not suffer from this problem due to its high permittivity efficiently screening the charges.

  1. Supersonic projectile models for asynchronous shooter localization

    NASA Astrophysics Data System (ADS)

    Kozick, Richard J.; Whipps, Gene T.; Ash, Joshua N.

    2011-06-01

    In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This so-called MB-SW approach is desirable because time synchronization is not required between the sensors; however, it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this paper we study efficient solutions for the localization problem and identify deceleration models that trade off localization accuracy and computational complexity. We also develop a statistical analysis that includes bias due to mismatch between the true and assumed deceleration models and covariance due to additive noise.
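
    One simple parameterization of the velocity profile, constant deceleration per unit path length, gives the bullet's travel time in closed form, which is the kind of quantity the MB-SW timing model needs. This is an assumed illustrative model, not necessarily one of the deceleration models evaluated in the paper; v0 and a are hypothetical.

```python
import numpy as np

# Velocity profile v(s) = v0 - a*s along the trajectory; travel time follows
# from dt = ds / v(s):  t(s) = ln(v0 / (v0 - a*s)) / a. Illustrative only.
def bullet_time(s, v0=800.0, a=5.0):
    """Time for the bullet to cover arc length s (requires a*s < v0)."""
    return np.log(v0 / (v0 - a * s)) / a

# An MB-SW time difference at a sensor would combine such a travel time (to
# the shockwave detach point) with acoustic propagation from the detach
# point and from the muzzle, respectively.
print(bullet_time(100.0))   # ~0.196 s over the first 100 m
```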

  2. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
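
    The space-time tradeoff referred to above can be illustrated with a Feynman path-sum amplitude evaluator: memory grows only with circuit depth (the recursion stack) instead of exponentially with the number of qubits, at the cost of time exponential in the number of branching gates. The toy below is restricted to single-qubit gates on basis states and is a hypothetical sketch, not the authors' design.

```python
import numpy as np

# Path-sum amplitude: <out| U_k ... U_1 |in> computed by recursing backward
# through the gate list, summing over intermediate basis states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def amplitude(gates, in_state, out_state, k=None):
    """gates: list of (2x2 matrix, target qubit); states are basis-state ints."""
    if k is None:
        k = len(gates)
    if k == 0:
        return 1.0 if in_state == out_state else 0.0
    mat, target = gates[k - 1]
    bit = (out_state >> target) & 1
    total = 0.0
    for prev_bit in (0, 1):           # sum over intermediate basis states
        coeff = mat[bit, prev_bit]
        if coeff != 0.0:
            prev_state = (out_state & ~(1 << target)) | (prev_bit << target)
            total += coeff * amplitude(gates, in_state, prev_state, k - 1)
    return total

gates = [(H, 0), (H, 0)]              # two Hadamards = identity
print(amplitude(gates, 0, 0))         # ~1.0
print(amplitude(gates, 0, 1))         # ~0.0
```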

  3. Detection of expression quantitative trait Loci in complex mouse crosses: impact and alleviation of data quality and complex population substructure.

    PubMed

    Iancu, Ovidiu D; Darakjian, Priscila; Kawane, Sunita; Bottomly, Daniel; Hitzemann, Robert; McWeeney, Shannon

    2012-01-01

    Complex Mus musculus crosses, e.g., heterogeneous stock (HS), provide increased resolution for quantitative trait loci detection. However, increased genetic complexity challenges detection methods, with discordant results due to low data quality or complex genetic architecture. We quantified the impact of these factors across three mouse crosses and two different detection methods, identifying procedures that greatly improve detection quality. Importantly, HS populations have complex genetic architectures not fully captured by the whole genome kinship matrix, calling for incorporating chromosome-specific relatedness information. We analyze three increasingly complex crosses, using gene expression levels as quantitative traits. The three crosses were an F(2) intercross, a HS formed by crossing four inbred strains (HS4), and a HS (HS-CC) derived from the eight lines found in the collaborative cross. Brain (striatum) gene expression and genotype data were obtained using the Illumina platform. We found large disparities between methods, with concordance varying as genetic complexity increased; this problem was more acute for probes with distant regulatory elements (trans). A suite of data filtering steps resulted in substantial increases in reproducibility. Genetic relatedness between samples generated an overabundance of detected eQTLs; an adjustment procedure that includes the kinship matrix attenuates this problem. However, we find that relatedness between individuals is not evenly distributed across the genome; information from distinct chromosomes results in relatedness structure different from the whole genome kinship matrix. Shared polymorphisms from distinct chromosomes collectively affect expression levels, confounding eQTL detection. We suggest that considering chromosome-specific relatedness can result in improved eQTL detection.

  4. Wireless sensing and vibration control with increased redundancy and robustness design.

    PubMed

    Li, Peng; Li, Luyu; Song, Gangbing; Yu, Yan

    2014-11-01

    Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.

  5. Carbohydrate-Based Host-Guest Complexation of Hydrophobic Antibiotics for the Enhancement of Antibacterial Activity.

    PubMed

    Jeong, Daham; Joo, Sang-Woo; Shinde, Vijay Vilas; Cho, Eunae; Jung, Seunho

    2017-08-08

    Host-guest complexation with various hydrophobic drugs has been used to enhance the solubility, permeability, and stability of guest drugs. Physical changes in hydrophobic drugs by complexation have been related to corresponding increases in the bioavailability of these drugs. Carbohydrates, including various derivatives of cyclodextrins, cyclosophoraoses, and some linear oligosaccharides, are generally used as host complexation agents in drug delivery systems. Many antibiotics with low bioavailability have some limitations to their clinical use due to their intrinsically poor aqueous solubility. Bioavailability enhancement is therefore an important step to achieve the desired concentration of antibiotics in the treatment of bacterial infections. Antibiotics encapsulated in a complexation-based drug delivery system will display improved antibacterial activity making it possible to reduce dosages and overcome the serious global problem of antibiotic resistance. Here, we review the present research trends in carbohydrate-based host-guest complexation of various hydrophobic antibiotics as an efficient delivery system to improve solubility, permeability, stability, and controlled release.

  6. [The potential financial impact of oral health problems in the families of preschool children].

    PubMed

    Ribeiro, Gustavo Leite; Gomes, Monalisa Cesarino; de Lima, Kenio Costa; Martins, Carolina Castro; Paiva, Saul Martins; Granville-Garcia, Ana Flávia

    2016-04-01

    The aim of the study was to evaluate the perception of parents/caregivers regarding the financial impact of oral health problems on the families of preschool children. A preschool-based, cross-sectional study was conducted with 834 preschool children in Campina Grande, Brazil. Parents/caregivers answered the Early Childhood Oral Health Impact Scale. "Financial impact" was the dependent variable. Questionnaires addressing socio-demographic variables, history of toothache and health perceptions were administered. Clinical exams were performed by three previously calibrated dentists (Kappa: 0.85-0.90). Descriptive statistics were performed, followed by logistic regression for complex samples (α = 5%). The frequency of financial impact due to oral health problems in preschool children was 7.7%. The following variables were significantly associated with financial impact: parental perception of the child's oral health as poor, the interaction between history of toothache and absence of dental caries, and the interaction between history of toothache and presence of dental caries. It is concluded that parents/caregivers often reported experiencing a financial impact due to seeking treatment late, mainly because of the presence of toothache and complications of the clinical condition.

  7. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of the existing models due to a smaller number of constraints and variables. Also, uncertain shipments are studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail to work due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as the second objective together with the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e., mutation and crossover. Besides validating the proposed models, computational results confirm a better performance of the modified NSGA-II versus the traditional one.

  8. Complex Problem Solving: What It Is and What It Is Not

    PubMed Central

    Dörner, Dietrich; Funke, Joachim

    2017-01-01

    Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242

  9. Recent progress in heteronuclear long-range NMR of complex carbohydrates: 3D H2BC and clean HMBC.

    PubMed

    Meier, Sebastian; Petersen, Bent O; Duus, Jens Ø; Sørensen, Ole W

    2009-11-02

    The new NMR experiments 3D H2BC and clean HMBC are explored for challenging applications to a complex carbohydrate at natural abundance of (13)C. The 3D H2BC experiment is crucial for sequential assignment as it yields heteronuclear one- and two-bond together with COSY correlations for the (1)H spins, all in a single spectrum with good resolution and non-informative diagonal-type peaks suppressed. Clean HMBC is a remedy for the ubiquitous problem of strong coupling induced one-bond correlation artifacts in HMBC spectra of carbohydrates. Both experiments work well for one of the largest carbohydrates whose structure has been determined by NMR, not least due to the enhanced resolution offered by the third dimension in 3D H2BC and the improved spectral quality due to artifact suppression in clean HMBC. Hence these new experiments set the scene to take advantage of the sensitivity boost achieved by the latest generation of cold probes for NMR structure determination of even larger and more complex carbohydrates in solution.

  10. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, as it has been demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
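
    A small dense sketch of the two ingredients named in the abstract follows: variable splitting x = u − v with u, v ≥ 0 makes the L1 term linear (hence differentiable), and gradient projection then solves the resulting bound-constrained quadratic problem. The operator A and data here are random stand-ins for the ECG forward model, not real measurements.

```python
import numpy as np

# min 0.5*||Ax - b||^2 + lam*||x||_1 via splitting x = u - v (u, v >= 0),
# so that ||x||_1 = sum(u) + sum(v), then projected gradient descent.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))                 # stand-in forward operator
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -0.5, 2.0]         # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=40)
lam = 0.1

u = np.zeros(100)
v = np.zeros(100)
step = 0.5 / np.linalg.norm(A, 2)**2           # safe step for the split problem
for _ in range(3000):
    r = A @ (u - v) - b
    u = np.maximum(0.0, u - step * (A.T @ r + lam))    # project onto u >= 0
    v = np.maximum(0.0, v - step * (-A.T @ r + lam))   # project onto v >= 0

x = u - v
print(np.nonzero(np.abs(x) > 0.1)[0])   # should recover the support [5 30 77]
```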

  11. Control and instanton trajectories for random transitions in turbulent flows

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    Many turbulent systems exhibit random switches between qualitatively different attractors. The transition between these bistable states is often an extremely rare event that cannot be computed through DNS due to complexity limitations. We present results for the calculation of instanton trajectories (a control problem) between non-equilibrium stationary states (attractors) in the 2D stochastic Navier-Stokes equations. By representing the transition probability between two states using a path integral formulation, we can compute the most probable trajectory (instanton) joining two non-equilibrium stationary states. Technically, this is equivalent to the minimization of an action, which can be related to a fluid mechanics control problem.

  12. Adaptation Problems of the Post Industrial Heritage on the Example of Selected Objects of Bydgoszcz

    NASA Astrophysics Data System (ADS)

    Pszczółkowski, Michał

    2016-09-01

    Post-industrial architecture was until recently regarded as devoid of value and importance due to obsolescence, but this perception has changed markedly in recent years. Old factories are becoming full-fledged cultural heritage, as evidenced by the inclusion of buildings and complexes of this type in the register of monuments and their protection by conservators. More and more often, therefore, degraded brownfield sites are revitalized, and conversion works are carried out as part of these efforts. Specific issues and problems related to the adaptation of industrial facilities are discussed in the article on the basis of selected examples completed in recent years in Bydgoszcz.

  13. Discrete Surface Evolution and Mesh Deformation for Aircraft Icing Applications

    NASA Technical Reports Server (NTRS)

    Thompson, David; Tong, Xiaoling; Arnoldus, Qiuhan; Collins, Eric; McLaurin, David; Luke, Edward; Bidwell, Colin S.

    2013-01-01

    Robust, automated mesh generation for problems with deforming geometries, such as ice accreting on aerodynamic surfaces, remains a challenging problem. Here we describe a technique to deform a discrete surface as it evolves due to the accretion of ice. The surface evolution algorithm is based on a smoothed, face-offsetting approach. We also describe a fast algebraic technique to propagate the computed surface deformations into the surrounding volume mesh while maintaining geometric mesh quality. Preliminary results presented here demonstrate the efficacy of the approach for a sphere with a prescribed accretion rate, a rime ice accretion, and a more complex glaze ice accretion.

  14. The design of multiplayer online video game systems

    NASA Astrophysics Data System (ADS)

    Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    The distributed Multiplayer Online Game (MOG) system is complex since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture and then evaluate it. Furthermore, we propose a clustered-server architecture to provide a scalable solution together with a region-oriented allocation strategy. Two key issues, i.e. interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.

  15. Schedule Risks Due to Delays in Advanced Technology Development

    NASA Technical Reports Server (NTRS)

    Reeves, John D. Jr.; Kayat, Kamal A.; Lim, Evan

    2008-01-01

    This paper discusses a methodology and modeling capability that probabilistically evaluates the likelihood and impacts of delays in advanced technology development prior to the start of design, development, test, and evaluation (DDT&E) of complex space systems. The challenges of understanding and modeling advanced technology development considerations are first outlined, followed by a discussion of the problem in the context of lunar surface architecture analysis. The current and planned methodologies to address the problem are then presented along with sample analyses and results. The methodology discussed herein provides decision-makers a thorough understanding of the schedule impacts resulting from the inclusion of various enabling advanced technology assumptions within system design.

  16. Towards practical multiscale approach for analysis of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Moyeda, Arturo; Fish, Jacob

    2017-12-01

    We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.

  17. Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-02-01

    This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.

  18. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    NASA Astrophysics Data System (ADS)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be formulated exactly as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its exact solution requires significant computational resources (the problem is NP-complete) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and exact solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which justifies using it for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
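    For intuition, a bare-bones version of such a greedy selection loop is sketched below; this is our illustration, not the authors' algorithm. Each candidate sensor s is reduced to the set coverage[s] of grid cells it would cover, a precomputed input that in the real tool would come from the probabilistic propagation models, and sensors are picked one at a time by largest marginal gain.

      # Greedy sensor placement sketch: pick the candidate that adds the most
      # newly covered grid cells until the budget is exhausted.
      def greedy_placement(coverage, budget):
          chosen, covered = [], set()
          candidates = set(coverage)
          for _ in range(budget):
              best = max(candidates,
                         key=lambda s: len(coverage[s] - covered),
                         default=None)
              if best is None or not (coverage[best] - covered):
                  break                      # no candidate adds any coverage
              chosen.append(best)
              covered |= coverage[best]
              candidates.discard(best)
          return chosen, covered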

  19. Cyber-Physical Trade-Offs in Distributed Detection Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.

    2010-01-01

    We consider a network of sensors that measure the scalar intensity due to the background, or a source combined with the background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. We show that, at increased communication and computation cost, a more complex fusion algorithm based on the raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios, and a minimum packing number for the state space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable to two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.

  20. Computation of eigenpairs of Ax = lambda Bx for vibrations of spinning deformable bodies

    NASA Technical Reports Server (NTRS)

    Utku, S.; Clemente, J. L. M.

    1984-01-01

    It is shown that, when linear theory is used, the general eigenvalue problem related to the free vibrations of spinning deformable bodies is of the type Ax = lambda Bx, where A is Hermitian and B is real positive definite. Since the order n of the matrices may be large, and A and B are banded or block banded, the economics of the numerical solution make it desirable to obtain only those eigenvalues which fall within the frequency band of interest of the problem. The paper extends the well-known method of bisections and iteration from R^n to n-dimensional complex spaces, i.e., to C^n, so that it can be applied to the present problem.
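    With current numerical libraries, the same band-limited computation can be expressed directly. The sketch below is an illustration, not the paper's algorithm: it uses SciPy's generalized Hermitian eigensolver, whose subset_by_value option returns only the eigenpairs whose eigenvalues fall in a chosen band, for a randomly generated Hermitian A and positive-definite B.

      # Solve A x = lambda B x for eigenvalues restricted to a band of interest.
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      n = 200
      M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      A = (M + M.conj().T) / 2                 # Hermitian A
      C = rng.standard_normal((n, n))
      B = C @ C.T + n * np.eye(n)              # real, symmetric positive definite B

      # Only eigenpairs with eigenvalues in the band (0.0, 5.0]:
      w, V = eigh(A, B, subset_by_value=(0.0, 5.0))
      print(w.shape, V.shape)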

  1. Coupling reconstruction and motion estimation for dynamic MRI through optical flow constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ningning; O'Connor, Daniel; Gu, Wenbo; Ruan, Dan; Basarab, Adrian; Sheng, Ke

    2018-03-01

    This paper addresses the problem of dynamic magnetic resonance image (DMRI) reconstruction and motion estimation jointly. Because of the inherent anatomical movements in DMRI acquisition, reconstruction of DMRI using motion estimation/compensation (ME/MC) has been explored under the compressed sensing (CS) scheme. In this paper, by embedding the intensity based optical flow (OF) constraint into the traditional CS scheme, we are able to couple the DMRI reconstruction and motion vector estimation. Moreover, the OF constraint is employed in a specific coarse resolution scale in order to reduce the computational complexity. The resulting optimization problem is then solved using a primal-dual algorithm due to its efficiency when dealing with nondifferentiable problems. Experiments on highly accelerated dynamic cardiac MRI with multiple receiver coils validate the performance of the proposed algorithm.

  2. A Method for Counting Moving People in Video Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and of an epsilon-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
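    The regression stage of such a feature-to-count approach is straightforward to prototype. The sketch below is our illustration with invented feature vectors, not the authors' pipeline: it fits an epsilon-SVR from scikit-learn to map per-frame scene features to people counts.

      # Feature-to-count regression sketch (the SURF feature extraction that
      # would produce X is assumed done elsewhere; shapes are illustrative).
      import numpy as np
      from sklearn.svm import SVR

      # X: one row per frame of scene features (e.g. number of moving keypoints,
      # foreground area, perspective-weighted sums); y: ground-truth counts.
      X = np.random.rand(300, 3)
      y = np.random.randint(0, 40, size=300)

      model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
      model.fit(X, y)
      estimated_count = model.predict(X[:1])[0]
      print(round(estimated_count))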

  3. Lambda Red-mediated mutagenesis and efficient large scale affinity purification of the Escherichia coli NADH:ubiquinone oxidoreductase (complex I).

    PubMed

    Pohl, Thomas; Uhlmann, Mareike; Kaufenstein, Miriam; Friedrich, Thorsten

    2007-09-18

    The proton-pumping NADH:ubiquinone oxidoreductase, the respiratory complex I, couples the transfer of electrons from NADH to ubiquinone with the translocation of protons across the membrane. The Escherichia coli complex I consists of 13 different subunits named NuoA-N (from NADH:ubiquinone oxidoreductase), that are coded by the genes of the nuo-operon. Genetic manipulation of the operon is difficult due to its enormous size. The enzymatic activity of variants is obscured by an alternative NADH dehydrogenase, and purification of the variants is hampered by their instability. To overcome these problems the entire E. coli nuo-operon was cloned and placed under control of the l-arabinose inducible promoter ParaBAD. The exposed N-terminus of subunit NuoF was chosen for engineering the complex with a hexahistidine-tag by lambda-Red-mediated recombineering. Overproduction of the complex from this construct in a strain which is devoid of any membrane-bound NADH dehydrogenase led to the assembly of a catalytically active complex causing the entire NADH oxidase activity of the cytoplasmic membranes. After solubilization with dodecyl maltoside the engineered complex binds to a Ni2+-iminodiacetic acid matrix allowing the purification of approximately 11 mg of complex I from 25 g of cells. The preparation is pure and monodisperse and comprises all known subunits and cofactors. It contains more lipids than earlier preparations due to the gentle and fast purification procedure. After reconstitution in proteoliposomes it couples the electron transfer with proton translocation in an inhibitor sensitive manner, thus meeting all prerequisites for structural and functional studies.

  4. Multiphysics analysis of liquid metal annular linear induction pumps: A project overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maidana, Carlos Omar; Nieminen, Juha E.

    Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact, and they can be used in regular electric power production, for naval and space propulsion systems, or in fission surface power systems for planetary exploration. The coupling between the electromagnetic and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance, and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. The determination of the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem where techniques for global optimization should be used, magnetohydrodynamic instabilities understood, or quantified, and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.

  5. Multiphysics analysis of liquid metal annular linear induction pumps: A project overview

    DOE PAGES

    Maidana, Carlos Omar; Nieminen, Juha E.

    2016-03-14

    Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact, and they can be used in regular electric power production, for naval and space propulsion systems, or in fission surface power systems for planetary exploration. The coupling between the electromagnetic and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance, and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. The determination of the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem where techniques for global optimization should be used, magnetohydrodynamic instabilities understood, or quantified, and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.

  6. Complexity, information loss, and model building: from neuro- to cognitive dynamics

    NASA Astrophysics Data System (ADS)

    Arecchi, F. Tito

    2007-06-01

    A scientific problem described within a given code is mapped onto a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system, making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding to the system new degrees of freedom (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes' theorem. Sequential application of Bayes' rule is nothing else than the Darwinian strategy in evolutionary biology. The sequence is a steepest-ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is, however, a fuzzy concept, insofar as this number changes due to the interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
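    The sequential Bayes updating invoked here is easy to make concrete. In the toy sketch below, with hypotheses and likelihoods invented for illustration, each observation re-weights the a-priori probabilities into a-posteriori ones, and the steepest-ascent character shows up as one hypothesis progressively dominating.

      # Sequential application of Bayes' rule over a stream of observations.
      import numpy as np

      priors = np.array([0.5, 0.3, 0.2])          # three competing hypotheses
      likelihood = np.array([[0.2, 0.6, 0.7],     # P(observation | hypothesis)
                             [0.8, 0.4, 0.3]])    # rows: two possible observations

      posterior = priors.copy()
      for obs in [0, 0, 1, 0, 0]:                 # observation stream
          posterior = posterior * likelihood[obs]
          posterior /= posterior.sum()            # Bayes' rule, normalized
          print(posterior.round(3))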

  7. Genome-wide detection of intervals of genetic heterogeneity associated with complex traits

    PubMed Central

    Llinares-López, Felipe; Grimm, Dominik G.; Bodenham, Dean A.; Gieraths, Udo; Sugiyama, Mahito; Rowan, Beth; Borgwardt, Karsten

    2015-01-01

    Motivation: Genetic heterogeneity, the fact that several sequence variants give rise to the same phenotype, is a phenomenon that is of the utmost interest in the analysis of complex phenotypes. Current approaches for finding regions in the genome that exhibit genetic heterogeneity suffer from at least one of two shortcomings: (i) they require the definition of an exact interval in the genome that is to be tested for genetic heterogeneity, potentially missing intervals of high relevance, or (ii) they suffer from an enormous multiple hypothesis testing problem due to the large number of potential candidate intervals being tested, which results in either many false positives or a lack of power to detect true intervals. Results: Here, we present an approach that overcomes both problems: it allows one to automatically find all contiguous sequences of single nucleotide polymorphisms in the genome that are jointly associated with the phenotype. It also solves both the inherent computational efficiency problem and the statistical problem of multiple hypothesis testing, which are both caused by the huge number of candidate intervals. We demonstrate on Arabidopsis thaliana genome-wide association study data that our approach can discover regions that exhibit genetic heterogeneity and would be missed by single-locus mapping. Conclusions: Our novel approach can contribute to the genome-wide discovery of intervals that are involved in the genetic heterogeneity underlying complex phenotypes. Availability and implementation: The code can be obtained at: http://www.bsse.ethz.ch/mlcb/research/bioinformatics-and-computational-biology/sis.html. Contact: felipe.llinares@bsse.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072488

  8. Development and application of incrementally complex tools for wind turbine aerodynamics

    NASA Astrophysics Data System (ADS)

    Gundling, Christopher H.

    Advances and availability of computational resources have made wind farm design using simulation tools a reality. Wind farms are battling two issues, affecting the cost of energy, that will make or break many future investments in wind energy. The most significant issue is the power reduction of downstream turbines operating in the wake of upstream turbines. The loss of energy from wind turbine wakes is difficult to predict and the underestimation of energy losses due to wakes has been a common problem throughout the industry. The second issue is a shorter lifetime of blades and past failures of gearboxes due to increased fluctuations in the unsteady loading of waked turbines. The overall goal of this research is to address these problems by developing a platform for a multi-fidelity wind turbine aerodynamic performance and wake prediction tool. Full-scale experiments in the field have dramatically helped researchers understand the unique issues inside a large wind farm, but experimental methods can only be used to a limited extent due to the cost of such field studies and the size of wind farms. The uncertainty of the inflow is another inherent drawback of field experiments. Therefore, computational fluid dynamics (CFD) predictions, strategically validated using carefully performed wind farm field campaigns, are becoming a more standard design practice. The developed CFD models include a blade element model (BEM) code with a free-vortex wake, an actuator disk or line based method with large eddy simulations (LES) and a fully resolved rotor based method with detached eddy simulations (DES) and adaptive mesh refinement (AMR). To create more realistic simulations, performance of a one-way coupling between different mesoscale atmospheric boundary layer (ABL) models and the three microscale CFD solvers is tested. These methods are validated using data from incrementally complex test cases that include the NREL Phase VI wind tunnel test, the Sexbierum wind farm and the Lillgrund offshore wind farm. By cross-comparing the lowest complexity free-vortex method with the higher complexity methods, a fast and accurate simulation tool has been generated that can perform wind farm simulations in a few hours.

  9. Spectral-element simulations of wave propagation in complex exploration-industry models: Mesh generation and forward simulations

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Luo, Y.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic-wave propagation in exploration-industry settings has seen major research and development efforts for decades, yet large-scale applications have often been limited to 2D or 3D finite-difference, (visco- )acoustic wave propagation due to computational limitations. We explore the possibility of including all relevant physical signatures in the wavefield using the spectral- element method (SPECFEM3D, SPECFEM2D), thereby accounting for acoustic, (visco-)elastic, poroelastic, anisotropic wave propagation in meshes which honor all crucial discontinuities. Mesh design is the crux of the problem, and we use CUBIT (Sandia Laboratories) to generate unstructured quadrilateral 2D and hexahedral 3D meshes for these complex background models. While general hexahedral mesh generation is an unresolved problem, we are able to accommodate most of the relevant settings (e.g., layer-cake models, salt bodies, overthrusting faults, and strong topography) with respectively tailored workflows. 2D simulations show localized, characteristic wave effects due to these features that shall be helpful in designing survey acquisition geometries in a relatively economic fashion. We address some of the fundamental issues this comprehensive modeling approach faces regarding its feasibility: Assessing geological structures in terms of the necessity to honor the major structural units, appropriate velocity model interpolation, quality control of the resultant mesh, and computational cost for realistic settings up to frequencies of 40 Hz. The solution to this forward problem forms the basis for subsequent 2D and 3D adjoint tomography within this context, which is the subject of a companion paper.

  10. Exploring the Role of Intrinsic Nodal Activation on the Spread of Influence in Complex Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Visweswara Sathanur, Arun; Halappanavar, Mahantesh; Shi, Yi

    In many complex networked systems such as online social networks, at any given time, activity originates at certain nodes and subsequently spreads on the network through influence. To model the spread of influence in such a scenario, we consider the problem of identifying influential entities in a complex network when nodal activation can happen through two different mechanisms. The first mode of activation is due to mechanisms intrinsic to the node. The second mechanism is through the influence of connected neighbors. In this work, we present a simple probabilistic formulation that models such self-evolving systems, where information diffusion occurs primarily because of the intrinsic activity of users and the spread of activity occurs due to influence. We provide an algorithm to mine for the influential seeds in such a scenario by modifying the well-known influence maximization framework with the independent cascade diffusion model. We provide small motivating examples to give an intuitive understanding of the effect of including the intrinsic activation mechanism. We sketch a proof of the submodularity of the influence function under the new formulation and demonstrate the same with larger graphs. We then show, by means of additional experiments on a real-world Twitter dataset, how the formulation can be applied to real-world social media datasets. Finally, we derive a centrality metric that takes both mechanisms of activation into account and provides an accurate as well as computationally efficient alternative approach to the problem of identifying influencers under intrinsic activation.
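    A simplified rendering of the two ingredients named above, independent-cascade diffusion extended with intrinsic activation plus greedy seed mining, is sketched below. This is our illustration under invented parameters (p_edge, p_intrinsic), not the authors' code; spread is estimated by Monte Carlo simulation.

      # Independent cascade with intrinsic activation, plus greedy seed mining.
      import random

      def simulate_spread(graph, seeds, p_edge=0.1, p_intrinsic=0.01):
          # Besides the seeds, each node may self-activate intrinsically.
          active = set(seeds) | {v for v in graph if random.random() < p_intrinsic}
          frontier = list(active)
          while frontier:
              nxt = []
              for u in frontier:
                  for v in graph[u]:
                      if v not in active and random.random() < p_edge:
                          active.add(v)
                          nxt.append(v)
              frontier = nxt
          return len(active)

      def greedy_seeds(graph, k, runs=200):
          # Standard greedy loop: add the node with largest estimated marginal gain.
          seeds = []
          for _ in range(k):
              gains = {v: sum(simulate_spread(graph, seeds + [v]) for _ in range(runs))
                       for v in graph if v not in seeds}
              seeds.append(max(gains, key=gains.get))
          return seeds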

  11. Supply network configuration—A benchmarking problem

    NASA Astrophysics Data System (ADS)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  12. [Respiratory manifestations in aspergillosis].

    PubMed

    Regimbaud, M

    1986-01-01

    Aspergillus is a genus of cosmopolitan fungi with a selective pulmonary tropism. Their pathogenic role is due either to spreading in pre-existing pulmonary cavities or to their allergizing capacity. Cavitary sequelae of tuberculosis and suppuration, particularly frequent and important in tropical environments, are elective sites for Aspergillus colonization. Surgical treatment is at present the only efficient one. Allergic manifestations pose a more complex therapeutic problem, since exclusion of the allergen is difficult to achieve in a tropical environment.

  13. Secure Fingerprint Identification of High Accuracy

    DTIC Science & Technology

    2014-01-01

    (secure) solution of complexity O(n^3) based on Gaussian elimination. When it is applied to biometrics X and Y with m_X and m_Y minutiae, respectively... collections of biometric data in use today include, for example, fingerprint, face, and iris images collected by the US Department of Homeland Security... work we focus on fingerprint data due to the popularity and good accuracy of this type of biometry. We formulate the problem of private, or secure, finger...

  14. Parallel-Computing Architecture for JWST Wavefront-Sensing Algorithms

    DTIC Science & Technology

    2011-09-01

    results due to the increasing cost and complexity of each test. 2. ALGORITHM OVERVIEW Phase retrieval is an image-based wavefront-sensing... broadband illumination problems we have found that hand-tuning the right matrix sizes can account for a speedup of 86x. This comes from hand-picking... "Wavefront Sensing and Control". Proceedings of SPIE (2007) vol. 6687 (08). [5] Greenhouse, M. A., Drury, M. P., Dunn, J. L., Glazer, S. D., Greville, E...

  15. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable: conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.

  16. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in a preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved similarly.
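    For reference, the O(log N) baseline mentioned above is the classical binary-search test over the triangle fan of a convex polygon. The sketch below implements that baseline, not the paper's O(1) space-subdivision method, which adds a preprocessing stage on top of this kind of primitive.

      # O(log N) point-in-convex-polygon test via binary search on the fan at poly[0].
      def cross(o, a, b):
          """z-component of (a - o) x (b - o)."""
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def point_in_convex_polygon(pt, poly):
          """poly: list of vertices in counter-clockwise order."""
          n = len(poly)
          # Outside the angular fan spanned at poly[0]?
          if cross(poly[0], poly[1], pt) < 0 or cross(poly[0], poly[n - 1], pt) > 0:
              return False
          lo, hi = 1, n - 1                     # binary search for the wedge
          while hi - lo > 1:
              mid = (lo + hi) // 2
              if cross(poly[0], poly[mid], pt) >= 0:
                  lo = mid
              else:
                  hi = mid
          # Inside iff pt is on the inner side of edge (poly[lo], poly[lo+1]).
          return cross(poly[lo], poly[lo + 1], pt) >= 0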

  17. Alignment of RNA molecules: Binding energy and statistical properties of random sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valba, O. V., E-mail: valbaolga@gmail.com; Nechaev, S. K., E-mail: sergei.nechaev@gmail.com; Tamm, M. V., E-mail: thumm.m@gmail.com

    2012-02-15

    A new statistical approach to the problem of pairwise alignment of RNA sequences is proposed. The problem is analyzed for a pair of interacting polymers forming RNA-like hierarchical cloverleaf structures. An alignment is characterized by the numbers of matches, mismatches, and gaps. A weight function is assigned to each alignment; this function is interpreted as a free energy taking into account both direct monomer-monomer interactions and a combinatorial contribution due to the formation of various cloverleaf secondary structures. The binding free energy is determined for a pair of RNA molecules. Statistical properties are discussed, including fluctuations of the binding energy between a pair of RNA molecules and the loop length distribution in a complex. Based on an analysis of the free energy per nucleotide pair in complexes of random RNAs as a function of the number of nucleotide types c, a hypothesis is put forward about the exclusivity of the alphabet c = 4 used by nature.

  18. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  19. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves, and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, along with a solution methodology to compute an input schedule that yields the minimum total time violation of the due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem. This algorithm runs in no more than O(T·E) time. This dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the problems tested were more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  20. Optical systems engineering - A tutorial

    NASA Technical Reports Server (NTRS)

    Wyman, C. L.

    1979-01-01

    The paper examines the use of the systems engineering approach in the design of optical systems, noting that such an approach, which involves integrated interdisciplinary development of systems, is particularly appropriate for optics. It is shown that the high-precision character of optics leads to complex and subtle effects on optical system performance resulting from structural, thermal, dynamical, control system, and manufacturing and assembly considerations. Attention is given to communication problems that often occur among users and optical engineers due to the unique factors of optical systems. It is concluded that it is essential that the optics community provide leadership to resolve these communication problems and fully formalize the field of optical systems engineering.

  1. THE `IN' AND THE `OUT': An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Medina-Martins, P. R.; Rocha, L.

    It is claimed that a great deal of the problems which mechanist approaches to the emulation/modelling of the human mind presently face are due to a host of canons so 'readily' accepted and familiar that some of their underlying processes have not yet been the object of intensive research. The essay proposes a (possible) solution for some of these problems by introducing the tenets of a new paradigm which, based on a reappraisal of the concepts of purposiveness and teleology, lays emphasis on the evolutionary aspects (biological and mental) of living beings. A complex neuro-fuzzy system which works as the supporting realization of this paradigm is briefly described in the essay.

  2. Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement

    ERIC Educational Resources Information Center

    Zheng, Robert; Cook, Anne

    2012-01-01

    The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…

  3. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, a set of capabilities is provided through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely differ from traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimizing policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of new systems, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results on optimal nonlinear dynamic optimization problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.

  4. Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals

    PubMed Central

    Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.

    2016-01-01

    Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis, and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise-corrupted EEG signals have lower information content and, therefore, reduced complexity compared with their noise-free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion. PMID:26738116
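    A compact rendering of the MSE computation is sketched below; this is our illustration of the standard Costa et al. procedure, with the usual assumptions m = 2 and r = 0.15 times the standard deviation of the original signal. The signal is coarse-grained at each scale and sample entropy is computed on each coarse-grained series; lower values across scales would flag the reduced complexity the authors associate with artifact-corrupted segments.

      # Multiscale entropy sketch: coarse-graining + sample entropy.
      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          """Sample entropy SampEn(m, r) of a 1-D signal."""
          x = np.asarray(x, dtype=float)
          def count_matches(mm):
              templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              count = 0
              for i in range(len(templates)):
                  # Chebyshev distance between template i and all later templates.
                  dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  count += np.sum(dist <= r)
              return count
          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def multiscale_entropy(x, max_scale=10):
          x = np.asarray(x, dtype=float)
          r = 0.15 * x.std()        # r fixed from the original series (Costa et al.)
          mse = []
          for tau in range(1, max_scale + 1):
              n = len(x) // tau
              coarse = x[:n * tau].reshape(n, tau).mean(axis=1)  # coarse-graining
              mse.append(sample_entropy(coarse, m=2, r=r))
          return mse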

  5. Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is considered. The problem is more complex due to the multiple business criteria that must be satisfied. To solve the problem more efficiently and to obtain a set of non-dominated solutions, a meta-heuristic non-dominated sorting genetic algorithm is presented. In addition, task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase the diversity of the population and to perform an intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems obtained from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of schedules obtained by the algorithm.
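    The two NSGA-II ingredients the abstract leans on, fast non-dominated sorting and crowding distance, can be sketched compactly. The code below is a generic illustration for minimized objective pairs such as (makespan, tardiness), not the authors' implementation.

      # Fast non-dominated sorting and crowding distance (minimization).
      def dominates(a, b):
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def non_dominated_fronts(objs):
          """Return a list of fronts, each a list of indices into objs."""
          n = len(objs)
          dominated_by = [set() for _ in range(n)]
          dom_count = [0] * n
          for i in range(n):
              for j in range(i + 1, n):
                  if dominates(objs[i], objs[j]):
                      dominated_by[i].add(j); dom_count[j] += 1
                  elif dominates(objs[j], objs[i]):
                      dominated_by[j].add(i); dom_count[i] += 1
          fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
          while current:
              fronts.append(current)
              nxt = []
              for i in current:
                  for j in dominated_by[i]:
                      dom_count[j] -= 1
                      if dom_count[j] == 0:
                          nxt.append(j)
              current = nxt
          return fronts

      def crowding_distance(objs, front):
          """Larger distance = less crowded = preferred in tournament selection."""
          dist = {i: 0.0 for i in front}
          for k in range(len(objs[0])):
              ordered = sorted(front, key=lambda i: objs[i][k])
              lo, hi = objs[ordered[0]][k], objs[ordered[-1]][k]
              dist[ordered[0]] = dist[ordered[-1]] = float("inf")
              if hi > lo:
                  for a, b, c in zip(ordered, ordered[1:], ordered[2:]):
                      dist[b] += (objs[c][k] - objs[a][k]) / (hi - lo)
          return dist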

  6. Engineering with uncertainty: monitoring air bag performance.

    PubMed

    Wetmore, Jameson M

    2008-06-01

    Modern engineering is complicated by an enormous number of uncertainties. Engineers know a great deal about the material world and how it works. But due to the inherent limits of testing and the complexities of the world outside the lab, engineers will never be able to fully predict how their creations will behave. One way the uncertainties of engineering can be dealt with is by actively monitoring technologies once they have left the development and production stage. This article uses an episode in the history of automobile air bags as an example of engineers who had the foresight and initiative to carefully track the technology on the road to discover problems as early as possible. Not only can monitoring help engineers identify problems that surface in the field, it can also assist them in their efforts to mobilize resources to resolve problems.

  7. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    NASA Astrophysics Data System (ADS)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The accuracy of the method has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  8. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides load balance constraints and the generators' rated power limits, several other practicalities, such as the limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling problem is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  9. Dynamically Reconfigurable Approach to Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalie M.; Lewis, Robert Michael

    2003-01-01

    The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formations in a research environment, but also the flexible use of formulations in a production design environment.

  10. Localization and Poincaré catastrophe in the problem of a photon scattering on a pair of Rayleigh particles

    NASA Astrophysics Data System (ADS)

    Maksimenko, V. V.; Zagaynov, V. A.; Agranovski, I. E.

    2013-11-01

    It is shown that the complexities in the problem of elastic scattering of a photon on a pair of Rayleigh particles (two small metallic spheres) are similar to the complexities of the classic three-body problem in celestial mechanics. In the latter problem, as is well known, the phase trajectory of a system becomes a nonanalytical function of its variables. In our problem, the trajectory of a virtual photon at some frequency could be considered as the well-known Antoine set (Antoine's necklace), or a chain with interlaced sections having zero topological dimension and fractal structure. Such a virtual "zero-dimensional" photon could be localized between the particles of the pair. The topology suppresses the photon's exit to the real world with dimension equal to or greater than unity. The physical reason for this type of photon localization is related to the "mechanical rigidity" of the interlaced sections of the photon trajectory due to a singularity of energy density along these sections. Within the approximations used in this paper, the effect is possible if the frequency of the incident radiation is equal to double the frequency of the dipole surface plasmon in an isolated particle, which is the only characteristic frequency in the problem. This condition, and the transformation of the photon trajectory to the zero-dimensional Antoine set, is reminiscent of some of the simplest variants of Poincaré's catastrophe in the dynamics of some nonintegrable systems. The influence of the localization on elastic light scattering by the pair is investigated.

  11. From problem solving to problem definition: scrutinizing the complex nature of clinical practice.

    PubMed

    Cristancho, Sayra; Lingard, Lorelei; Regehr, Glenn

    2017-02-01

    In medical education, we have tended to present problems as being singular, stable, and solvable. Problem solving has, therefore, drawn much of medical education researchers' attention. This focus has been important but it is limited in terms of preparing clinicians to deal with the complexity of the 21st century healthcare system in which they will provide team-based care for patients with complex medical illness. In this paper, we use the Soft Systems Engineering principles to introduce the idea that in complex, team-based situations, problems usually involve divergent views and evolve with multiple solution iterations. As such we need to shift the conversation from (1) problem solving to problem definition, and (2) from a problem definition derived exclusively at the level of the individual to a definition derived at the level of the situation in which the problem is manifested. Embracing such a focus on problem definition will enable us to advocate for novel educational practices that will equip trainees to effectively manage the problems they will encounter in complex, team-based healthcare.

  12. Quantum communication complexity using the quantum Zeno effect

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Anwer, Hammad; Hameedi, Alley; Bourennane, Mohamed

    2015-07-01

    The quantum Zeno effect (QZE) is the phenomenon in which the unitary evolution of a quantum state is suppressed, e.g., due to frequent measurements. Here, we investigate the use of the QZE in a class of communication complexity problems (CCPs). Quantum entanglement is known to solve certain CCPs beyond classical constraints. However, recent developments have yielded CCPs for which superclassical results can be obtained using only communication of a single d-level quantum state (qudit) as a resource. In the class of CCPs considered here, we show quantum reduction of complexity in three ways: using (i) entanglement and the QZE, (ii) a single qudit and the QZE, and (iii) a single qudit. We have performed proof-of-concept experimental demonstrations of a three-party CCP protocol based on single-qubit communication with and without the QZE.

  13. Study of Nanocomposites of Amino Acids and Organic Polyethers by Means of Mass Spectrometry and Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Zobnina, V. G.; Kosevich, M. V.; Chagovets, V. V.; Boryak, O. A.

    The problem of elucidating the structure of nanomaterials based on combinations of proteins and polyether polymers is addressed at the monomeric level of single amino acids and oligomers of the polyethers PEG-400 and OEG-5. The efficiency of a combined approach involving experimental electrospray mass spectrometry and computer modeling by molecular dynamics simulation is demonstrated. It is shown that the polyether oligomers form stable complexes with the amino acids valine, proline, histidine, and glutamic and aspartic acids. Molecular dynamics simulation has shown that stabilization of the amino acid-polyether complexes is achieved through winding of the polymeric chain around the charged groups of the amino acids. The structural motifs revealed for complexes of single amino acids with polyethers may also be realized in the structures of protein-polyether nanoparticles currently being designed for drug delivery.

  14. Physics, stability, and dynamics of supply networks

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Lämmer, Stefan; Seidel, Thomas; Šeba, Pétr; Płatkowski, Tadeusz

    2004-12-01

    We show how to treat supply networks as physical transport problems governed by balance equations and equations for the adaptation of production speeds. Although the nonlinear behavior is different, the linearized set of coupled differential equations is formally related to those of mechanical or electrical oscillator networks. Supply networks possess interesting features due to their complex topology and directed links. We derive analytical conditions for absolute and convective instabilities. The empirically observed “bullwhip effect” in supply chains is explained as a form of convective instability based on resonance effects. Moreover, it is generalized to arbitrary supply networks. Their related eigenvalues are usually complex, depending on the network structure (even without loops). Therefore, their generic behavior is characterized by damped or growing oscillations. We also show that regular distribution networks possess two negative eigenvalues only, but perturbations generate a spectrum of complex eigenvalues.
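
    The linear-stability reasoning in this abstract can be illustrated with a toy computation. The following sketch (an illustrative network and adaptation parameters, not the authors' equations) builds a linearized Jacobian for a random directed supply network and counts oscillatory and unstable modes from its complex eigenvalues.

```python
# A minimal sketch (not the authors' model): linearize a toy supply network
# around equilibrium and inspect the Jacobian's eigenvalues. Eigenvalues with
# nonzero imaginary parts indicate oscillatory modes; positive real parts
# indicate growing oscillations (instability).
import numpy as np

rng = np.random.default_rng(0)
n = 6                                          # number of supplier nodes
A = (rng.random((n, n)) < 0.3).astype(float)   # random directed delivery links
np.fill_diagonal(A, 0.0)

alpha, beta = 1.0, 0.5        # hypothetical production-adaptation parameters
# Block Jacobian for (inventory, production-rate) deviations:
#   d(inventory)/dt = (A - I) @ rate
#   d(rate)/dt      = -alpha * inventory - beta * rate
I = np.eye(n)
J = np.block([[np.zeros((n, n)), A - I],
              [-alpha * I,       -beta * I]])

eig = np.linalg.eigvals(J)
print("oscillatory modes:", np.sum(np.abs(eig.imag) > 1e-9))
print("unstable modes:   ", np.sum(eig.real > 1e-9))
```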

  15. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current deep learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  16. Flow simulations about steady-complex and unsteady moving configurations using structured-overlapped and unstructured grids

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1995-01-01

    The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamics (CFD) methodologies based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated: namely, the structured-overlapped and the unstructured grid schemes. Both methods use a cell-centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles of attack for the pitching airfoil. From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU-efficient than the unstructured approach.

  17. A tight upper bound for quadratic knapsack problems in grid-based wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Quan, Ning; Kim, Harrison M.

    2018-03-01

    The 0-1 quadratic knapsack problem (QKP) in wind farm layout optimization models possible turbine locations as nodes, and power loss due to wake effects between pairs of turbines as edges in a complete graph. The goal is to select up to a certain number of turbine locations such that the sum of selected node and edge coefficients is maximized. Finding the optimal solution to the QKP is difficult in general, but it is possible to obtain a tight upper bound on the QKP's optimal value, which facilitates the use of heuristics to solve QKPs by giving a good estimate of the optimality gap of any feasible solution. This article applies an upper bound method that is especially well suited to QKPs in wind farm layout optimization, due to certain features of the formulation that reduce the computational complexity of calculating the upper bound. The usefulness of the upper bound was demonstrated by assessing the performance of the greedy algorithm for solving QKPs in wind farm layout optimization. The results show that the greedy algorithm produces good solutions, within 4% of the optimal value, for the small to medium sized problems considered in this article.
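
    As an illustration of the greedy heuristic the article evaluates, here is a minimal sketch on hypothetical data (node coefficients as expected power, negative edge coefficients as wake losses); the article's upper-bound computation is not reproduced.

```python
# A minimal greedy sketch for the 0-1 quadratic knapsack problem described
# above (hypothetical data, not the article's exact formulation): repeatedly
# add the location whose marginal gain -- node coefficient plus edge
# coefficients to already-selected locations -- is largest, up to k picks.
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 5                       # candidate locations, turbines to place
c = rng.random(n)                  # node coefficients (e.g., expected power)
Q = -rng.random((n, n)) * 0.1      # edge coefficients (negative: wake losses)
Q = (Q + Q.T) / 2
np.fill_diagonal(Q, 0.0)

selected = []
for _ in range(k):
    best, best_gain = None, -np.inf
    for j in range(n):
        if j in selected:
            continue
        gain = c[j] + sum(Q[j, i] for i in selected)   # marginal gain
        if gain > best_gain:
            best, best_gain = j, gain
    selected.append(best)

value = sum(c[j] for j in selected) + sum(
    Q[i, j] for i in selected for j in selected if i < j)
print("selected:", sorted(selected), "objective:", round(value, 3))
```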

  18. Random Matrix Approach to Quantum Adiabatic Evolution Algorithms

    NASA Technical Reports Server (NTRS)

    Boulatov, Alexei; Smelyanskiy, Vadier N.

    2004-01-01

    We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided-crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian unitary RMT ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, the driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.

  19. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates: what would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g., fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in the complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.

  20. Information needs related to extension service and community outreach.

    PubMed

    Bottcher, Robert W

    2003-06-01

    Air quality affects everyone. Some people are affected by air quality impacts, regulations, and technological developments in several ways. Stakeholders include the medical community, ecologists, government regulators, industries, technology providers, academic professionals, concerned citizens, the news media, and elected officials. Each of these groups may perceive problems and opportunities differently, but all need access to information as it is developed. The diversity and complexity of air quality problems contribute to the challenges faced by extension and outreach professionals who must communicate with stakeholders having diverse backgrounds. Gases, particulates, biological aerosols, pathogens, and odors all require expensive and relatively complex technology to measure and control. Economic constraints affect the ability of regulators and others to measure air quality, and industry and others to control it. To address these challenges, while communicating air quality research results and concepts to stakeholders, three areas of information needs are evident. (1) A basic understanding of the fundamental concepts regarding air pollutants and their measurement and control is needed by all stakeholders; the Extension Specialist, to be effective, must help people move some distance up the learning curve. (2) Each problem or set of problems must be reasonably well defined since comprehensive solution of all problems simultaneously may not be feasible; for instance, the solution of an odor problem associated with animal production may not address atmospheric effects due to ammonia emissions. (3) The integrity of the communication process must be preserved by avoiding prejudice and protectionism; although stakeholders may seek to modify information to enhance their interests, extension and outreach professionals must be willing to present unwelcome information or admit to a lack of information. A solid grounding in fundamental concepts, careful and fair problem definition, and a resolute commitment to integrity and credibility will enable effective communication of air quality information to and among diverse stakeholders.

  1. Automatic Modulation Classification Based on Deep Learning for Unmanned Aerial Vehicles.

    PubMed

    Zhang, Duona; Ding, Wenrui; Zhang, Baochang; Xie, Chunyu; Li, Hongguang; Liu, Chunhui; Han, Jungong

    2018-03-20

    Deep learning has recently attracted much attention due to its excellent performance in processing audio, image, and video data. However, few studies are devoted to the field of automatic modulation classification (AMC). It is one of the most well-known research topics in communication signal recognition and remains challenging for traditional methods due to complex disturbance from other sources. This paper proposes a heterogeneous deep model fusion (HDMF) method to solve the problem in a unified framework. The contributions include the following: (1) a convolutional neural network (CNN) and long short-term memory (LSTM) are combined in two different ways without prior knowledge involved; (2) a large database, including eleven types of single-carrier modulation signals with various noises as well as a fading channel, is collected with various signal-to-noise ratios (SNRs) based on a real geographical environment; and (3) experimental results demonstrate that HDMF is very capable of coping with the AMC problem, and achieves much better performance when compared with the independent network.
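
    The abstract does not spell out the HDMF architecture; the following is a hedged sketch of one plausible way to fuse a CNN and an LSTM for modulation classification on I/Q samples (all layer names and sizes are illustrative assumptions, not the paper's network).

```python
# A minimal sketch of one way to fuse a CNN and an LSTM for modulation
# classification (illustrative layer sizes, not the paper's exact HDMF).
# Input: I/Q samples as a (batch, 2, T) tensor; output: class logits.
import torch
import torch.nn as nn

class CnnLstmFusion(nn.Module):
    def __init__(self, n_classes=11, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, 2, T)
        feats = self.cnn(x)              # (batch, 32, T) local features
        feats = feats.transpose(1, 2)    # (batch, T, 32) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1, :])  # classify from the last time step

logits = CnnLstmFusion()(torch.randn(4, 2, 128))
print(logits.shape)                      # torch.Size([4, 11])
```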

  2. Automatic Modulation Classification Based on Deep Learning for Unmanned Aerial Vehicles

    PubMed Central

    Ding, Wenrui; Zhang, Baochang; Xie, Chunyu; Li, Hongguang; Liu, Chunhui; Han, Jungong

    2018-01-01

    Deep learning has recently attracted much attention due to its excellent performance in processing audio, image, and video data. However, few studies are devoted to the field of automatic modulation classification (AMC). It is one of the most well-known research topics in communication signal recognition and remains challenging for traditional methods due to complex disturbance from other sources. This paper proposes a heterogeneous deep model fusion (HDMF) method to solve the problem in a unified framework. The contributions include the following: (1) a convolutional neural network (CNN) and long short-term memory (LSTM) are combined in two different ways without prior knowledge involved; (2) a large database, including eleven types of single-carrier modulation signals with various noises as well as a fading channel, is collected with various signal-to-noise ratios (SNRs) based on a real geographical environment; and (3) experimental results demonstrate that HDMF is very capable of coping with the AMC problem, and achieves much better performance when compared with the independent network. PMID:29558434

  3. Challenge of biomechanics.

    PubMed

    Volokh, K Y

    2013-06-01

    The application of mechanics to biology--biomechanics--poses great challenges due to the intricacy of living things. Their dynamism, along with the complexity of their mechanical response (which in itself involves complex chemical, electrical, and thermal phenomena), makes it very difficult to correlate empirical data with theoretical models. This difficulty elevates the importance of useful biomechanical theories compared to other fields of engineering. Despite the inherent imperfections of all theories, a well formulated theory is crucial in any field of science because it is the basis for interpreting observations. This is all the more vital, for instance, when diagnosing symptoms or planning treatment for a disease. The notion of interpreting empirical data without theory is unscientific and unsound. This paper attempts to fortify the importance of biomechanics and invigorate research efforts among those engineers and mechanicians who are not yet involved in the field. The aim here, however, is not to give an overview of biomechanics. Instead, three unsolved problems are formulated to challenge the readers. At the micro-scale, the problem of the structural organization and integrity of the living cell is presented. At the meso-scale, the enigma of fingerprint formation is discussed. At the macro-scale, the problem of predicting aneurysm rupture is reviewed. The aim is to attract the attention of engineers and mechanicians to problems in biomechanics which, in the author's opinion, will dominate the development of engineering and mechanics in the forthcoming years.

  4. Investigation of the complex electroviscous effects on electrolyte (single and multiphase) flow in porous media.

    NASA Astrophysics Data System (ADS)

    Bolet, A. J. S.; Linga, G.; Mathiesen, J.

    2017-12-01

    Surface charge is an important control parameter for wall-bounded flow of electrolyte solutions. The electroviscous effect has been studied theoretically in model geometries such as infinite capillaries. However, in more complex geometries, quantification of the electroviscous effect is a non-trivial task due to strong non-linearities of the underlying equations. In general, one has to rely on numerical methods. Here we present numerical studies of the full three-dimensional steady-state Stokes-Poisson-Nernst-Planck problem in order to model electrolyte transport in artificial porous samples. The simulations are performed using the finite element method. From the simulations, we quantify how the electroviscous effect changes the overall flow permeability in complex three-dimensional porous media. The porous media we consider are mostly generated artificially by connecting randomly dispersed cylindrical pores. Furthermore, we present results for electrically driven two-phase immiscible flow in two dimensions. These simulations are performed by augmenting the above equations with a phase field model to handle and track the interaction between the two fluids (using parameters corresponding to oil-water interfaces, where the oil is non-polar). In particular, we consider the electro-osmotic effect on imbibition due to charged walls and the electrolyte solution.

  5. Maximizing photovoltaic power generation of a space-dart configured satellite

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young; Cutler, James W.; Mancewicz, Joe; Ridley, Aaron J.

    2015-06-01

    Many small satellites are power constrained due to their minimal solar panel area and the eclipse environment of low-Earth orbit. As with larger satellites, these small satellites, including CubeSats, use deployable power arrays to increase power production. This presents a design opportunity to develop various objective functions related to energy management and methods for optimizing these functions over a satellite design. A novel power generation model was created, and a simulation system was developed to evaluate various objective functions describing energy management for complex satellite designs. The model uses a spacecraft-body-fixed spherical coordinate system to analyze the complex geometry of a satellite's self-induced shadowing with computation provided by the Open Graphics Library. As an example design problem, a CubeSat configured as a space-dart with four deployable panels is optimized. Due to the fast computation speed of the solution, an exhaustive search over the design space is used to find the solar panel deployment angles which maximize total power generation. Simulation results are presented for a variety of orbit scenarios. The method is extendable to a variety of complex satellite geometries and power generation systems.
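
    A minimal sketch of the exhaustive-search idea follows, with a toy one-parameter power model standing in for the paper's OpenGL-based self-shadowing computation (the power function and its coefficients are assumptions, not the paper's model).

```python
# A minimal sketch: sweep deployment angles on a grid and keep the angle
# maximizing orbit-averaged power. The power model below is a hypothetical
# stand-in: projected deployed-panel area grows with the angle, while a
# self-shadowing term eats into it at large angles.
import math

def orbit_average_power(theta_deg):
    s = math.sin(math.radians(theta_deg))
    return 1.0 + 4.0 * s - 2.0 * s ** 4   # body panel + deployed - shadowing

best = max(range(0, 181), key=orbit_average_power)   # 1-degree grid search
print("best deployment angle:", best, "deg,",
      "power:", round(orbit_average_power(best), 3))
```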

  6. On the interrelation of multiplication and division in secondary school children.

    PubMed

    Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph

    2013-01-01

    Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed the performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division, while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types.

  7. Dielectric Anisotropy and Elastic Constants Near the Nematic-Smectic A Transition

    NASA Astrophysics Data System (ADS)

    Visco, Angelo; Mahmood, Rizwan; Zapien, Donald

    The present work examines the behavior of the dielectric anisotropy and the elastic constants associated with the deformation of liquid crystal molecules under the influence of an AC electric field, measured by an Automatic Liquid Crystal Tester (ALCT). The systems investigated are various concentrations of the liquid crystals 5CB (4-cyano-4'-pentylbiphenyl) and 8CB (4-octyl-4'-cyanobiphenyl), studied as a function of temperature. These studies are important due to the complexity of the coupling between the orientational (nematic) and positional (smectic A) order parameters, which can drive this transition to be either continuous or discontinuous. Theoretically, the NA transition is weakly first order due to nematic director fluctuations in the smectic A phase. This is similar to the normal-to-superconductor transition. Thus, there exists a triple point similar to that in He3/He4 mixtures. Moreover, despite more than four decades of intense work, our understanding of this complex and interesting problem remains incomplete. Funding for the project was provided by Slippery Rock University (2015-2016).

  8. Complex band structures of transition metal dichalcogenide monolayers with spin-orbit coupling effects

    NASA Astrophysics Data System (ADS)

    Szczęśniak, Dominik; Ennaoui, Ahmed; Ahzi, Saïd

    2016-09-01

    Recently, the transition metal dichalcogenides have attracted renewed attention due to the potential use of their low-dimensional forms in both nano- and opto-electronics. In such applications, the electronic and transport properties of monolayer transition metal dichalcogenides play a pivotal role. The present paper provides new insight into these essential properties by studying the complex band structures of popular transition metal dichalcogenide monolayers (MX2, where M = Mo, W; X = S, Se, Te) while including spin-orbit coupling effects. The symmetry-based tight-binding calculations conducted here show that the analytical continuation from the real band structures to the complex momentum space leads to nonlinear generalized eigenvalue problems. Herein, an efficient method for solving this class of nonlinear problems is presented, yielding a complete set of physically relevant eigenvalues. Solutions obtained by this method are characterized and classified into propagating and evanescent states, where the latter states manifest not only monotonic but also oscillatory decay character. It is observed that some of the oscillatory evanescent states create characteristic complex loops at the direct band gap of MX2 monolayers, where electrons can tunnel directly between the band gap edges. To describe these tunneling currents, the decay behavior of electronic states in the forbidden energy region is elucidated and their importance within the ballistic transport regime is briefly discussed.
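
    The paper's own solution method is not reproduced here, but for one common class of nonlinear eigenvalue problems, the quadratic eigenvalue problem, a standard approach is companion linearization to a generalized linear problem of twice the size. A minimal sketch with random matrices:

```python
# A minimal sketch (not the paper's method): a quadratic eigenvalue problem
# (lam^2 A + lam B + C) x = 0 is linearized into a generalized linear problem
# of size 2n and solved with a dense solver; the residual is checked via the
# smallest singular value of the quadratic matrix polynomial.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: [[0, I], [-C, -B]] z = lam [[I, 0], [0, A]] z,
# with z = [x, lam*x].
I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[Z, I], [-C, -B]])
M = np.block([[I, Z], [Z, A]])

lam = eig(L, M, right=False)          # generally 2n complex eigenvalues
s = np.linalg.svd(lam[0] ** 2 * A + lam[0] * B + C, compute_uv=False)
print("smallest singular value at one eigenvalue:", s[-1])   # ~ 0
```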

  9. A longitudinal study of higher-order thinking skills: working memory and fluid reasoning in childhood enhance complex problem solving in adolescence.

    PubMed

    Greiff, Samuel; Wüstenberg, Sascha; Goetz, Thomas; Vainikainen, Mari-Pauliina; Hautamäki, Jarkko; Bornstein, Marc H

    2015-01-01

    Scientists have studied the development of the human mind for decades and have accumulated an impressive number of empirical studies that have provided ample support for the notion that early cognitive performance during infancy and childhood is an important predictor of later cognitive performance during adulthood. As children move from childhood into adolescence, their mental development increasingly involves higher-order cognitive skills that are crucial for successful planning, decision-making, and problem solving skills. However, few studies have employed higher-order thinking skills such as complex problem solving (CPS) as developmental outcomes in adolescents. To fill this gap, we tested a longitudinal developmental model in a sample of 2,021 Finnish sixth grade students (M = 12.41 years, SD = 0.52; 1,041 female, 978 male, 2 missing sex). We assessed working memory (WM) and fluid reasoning (FR) at age 12 as predictors of two CPS dimensions: knowledge acquisition and knowledge application. We further assessed students' CPS performance 3 years later as a developmental outcome (N = 1696; M = 15.22 years, SD = 0.43; 867 female, 829 male). Missing data partly occurred due to dropout and technical problems during the first days of testing and varied across indicators and time with a mean of 27.2%. Results revealed that FR was a strong predictor of both CPS dimensions, whereas WM exhibited only a small influence on one of the two CPS dimensions. These results provide strong support for the view that CPS involves FR and, to a lesser extent, WM in childhood and from there evolves into an increasingly complex structure of higher-order cognitive skills in adolescence.

  10. A longitudinal study of higher-order thinking skills: working memory and fluid reasoning in childhood enhance complex problem solving in adolescence

    PubMed Central

    Greiff, Samuel; Wüstenberg, Sascha; Goetz, Thomas; Vainikainen, Mari-Pauliina; Hautamäki, Jarkko; Bornstein, Marc H.

    2015-01-01

    Scientists have studied the development of the human mind for decades and have accumulated an impressive number of empirical studies that have provided ample support for the notion that early cognitive performance during infancy and childhood is an important predictor of later cognitive performance during adulthood. As children move from childhood into adolescence, their mental development increasingly involves higher-order cognitive skills that are crucial for successful planning, decision-making, and problem solving skills. However, few studies have employed higher-order thinking skills such as complex problem solving (CPS) as developmental outcomes in adolescents. To fill this gap, we tested a longitudinal developmental model in a sample of 2,021 Finnish sixth grade students (M = 12.41 years, SD = 0.52; 1,041 female, 978 male, 2 missing sex). We assessed working memory (WM) and fluid reasoning (FR) at age 12 as predictors of two CPS dimensions: knowledge acquisition and knowledge application. We further assessed students’ CPS performance 3 years later as a developmental outcome (N = 1696; M = 15.22 years, SD = 0.43; 867 female, 829 male). Missing data partly occurred due to dropout and technical problems during the first days of testing and varied across indicators and time with a mean of 27.2%. Results revealed that FR was a strong predictor of both CPS dimensions, whereas WM exhibited only a small influence on one of the two CPS dimensions. These results provide strong support for the view that CPS involves FR and, to a lesser extent, WM in childhood and from there evolves into an increasingly complex structure of higher-order cognitive skills in adolescence. PMID:26283992

  11. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
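
    SPIGH itself is not reproduced here; as a flavor of the greedy pursuit family it builds on, the following is a minimal sketch of orthogonal matching pursuit recovering a sparse source vector from linear measurements (toy forward matrix and dimensions are assumptions).

```python
# A minimal sketch of a greedy pursuit algorithm (orthogonal matching
# pursuit, a simpler relative of subspace pursuit, not the SPIGH algorithm)
# recovering a sparse x from y = G x, with G playing the role of a
# lead-field/forward matrix.
import numpy as np

def omp(G, y, k):
    """Greedily pick the k columns of G most correlated with the residual."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(G.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(G[:, support], y, rcond=None)
        residual = y - G[:, support] @ coef          # re-fit on the support
    x = np.zeros(G.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, k = 40, 120, 3                    # sensors, sources, active sources
G = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 50, 100]] = [2.0, -1.5, 1.0]
x_hat = omp(G, G @ x_true, k)
print("recovered support:", np.nonzero(x_hat)[0])    # expect [5 50 100]
```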

  12. [Patient-related complexity in nursing care - Collective case studies in the acute care hospital].

    PubMed

    Gurtner, Caroline; Spirig, Rebecca; Staudacher, Diana; Huber, Evelyn

    2018-06-04

    Patient-related complexity of nursing care is defined by the three characteristics "instability", "uncertainty", and "variability". Complexity has increased in recent years due to reduced hospital lengths of stay and a growing number of patients with chronic and multiple diseases. We investigated the phenomenon of patient-related complexity from the point of view of nurses and clinical nurse specialists in an acute care hospital. In the context of a collective case study design, nurses and clinical nurse specialists assessed the complexity of nursing situations with a questionnaire. Subsequently, we interviewed nurses and clinical nurse specialists about their evaluation of patient-related complexity. In a within-case analysis, we summarized data inductively to create case narratives. By means of a cross-case analysis, we compared the cases with regard to deductively derived characteristics. The four cases showed, by way of example, that the degree of complexity depends on the controllability and predictability of clinical problems. Additionally, complexity increases or decreases according to patients' individual resources. Complex patient situations demand professional expertise, experience, communicative competencies, and the ability to reflect. Beginner nurses would benefit from support and advice from experienced nurses to develop these skills.

  13. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  14. Investigating the Effect of Complexity Factors in Stoichiometry Problems Using Logistic Regression and Eye Tracking

    ERIC Educational Resources Information Center

    Tang, Hui; Kirk, John; Pienta, Norbert J.

    2014-01-01

    This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…

  15. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  16. Multigrid Methods for Aerodynamic Problems in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1995-01-01

    Work has been directed at the development of efficient multigrid methods for the solution of aerodynamic problems involving complex geometries, including the development of computational methods for the solution of both inviscid and viscous transonic flow problems. The emphasis is on problems of complex, three-dimensional geometry. The methods developed are based upon finite-volume approximations to both the Euler and the Reynolds-Averaged Navier-Stokes equations. The methods are developed for use on multi-block grids using diagonalized implicit multigrid methods to achieve computational efficiency. The work is focused upon aerodynamic problems involving complex geometries, including advanced engine inlets.

  17. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have inevitably failed due to problem size, computational complexity, and time. However, the authors have been successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and then an iterative-improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method is thus found best and shows excellent results.
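
    A minimal sketch of the hybrid idea, binary PSO followed by an iterative-improvement bit-flip local search, on a generic binary cost function (the cost function and all parameters below are illustrative stand-ins, not the paper's CMIMLLS model):

```python
# Binary PSO with the standard sigmoid update rule, followed by a bit-flip
# local search on the global best (toy cost function, illustrative only).
import numpy as np

rng = np.random.default_rng(4)
n_bits, n_particles, iters = 12, 20, 60
w, c1, c2 = 0.7, 1.5, 1.5                     # standard PSO coefficients

def cost(x):                                   # hypothetical stand-in for an
    return abs(int(x.sum()) - 6) + int(x[0] ^ x[-1])   # ordering/holding cost

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

X = rng.integers(0, 2, (n_particles, n_bits))
V = rng.standard_normal((n_particles, n_bits))
pbest, pcost = X.copy(), np.array([cost(x) for x in X])
g = pbest[pcost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = (rng.random(X.shape) < sigmoid(V)).astype(int)   # binary update
    for i, x in enumerate(X):
        if cost(x) < pcost[i]:
            pbest[i], pcost[i] = x.copy(), cost(x)
    g = pbest[pcost.argmin()].copy()

improved = True                    # iterative improvement: flip single bits
while improved:
    improved = False
    for j in range(n_bits):
        trial = g.copy()
        trial[j] ^= 1
        if cost(trial) < cost(g):
            g, improved = trial, True
print("best solution:", g, "cost:", cost(g))
```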

  18. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, a minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem in itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
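
    Two building blocks mentioned above can be sketched concretely: the permutation flow-shop makespan via the standard recurrence C[m] = max(C[m], C[m-1]) + p[j][m], and the regret of a fixed permutation under one processing-time scenario drawn from the intervals (the paper's heuristics and lower bound are not shown).

```python
# A minimal sketch: flow-shop makespan plus the regret of one permutation
# under a single sampled scenario (true max regret maximizes over scenarios).
import itertools
import numpy as np

def makespan(p, order):
    """p[j, m]: time of job j on machine m; order: a job permutation."""
    n_mach = p.shape[1]
    C = np.zeros(n_mach)
    for j in order:
        for m in range(n_mach):
            C[m] = max(C[m], C[m - 1] if m else 0.0) + p[j, m]
    return C[-1]

rng = np.random.default_rng(5)
lo = rng.uniform(1, 5, (4, 3))            # interval lower bounds (4 jobs)
hi = lo + rng.uniform(0, 3, lo.shape)     # interval upper bounds
scenario = rng.uniform(lo, hi)            # one realization of the times

best = min(itertools.permutations(range(4)),
           key=lambda o: makespan(scenario, o))
order = (0, 1, 2, 3)
regret = makespan(scenario, order) - makespan(scenario, best)
print("regret of the fixed order under this scenario:", round(regret, 3))
```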

  19. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    AFRL-OSR-VA-TR-2015-0202 (grant FA9550-12-1-). The project studies team functioning as teams solve complex problems and proposes the means to improve the performance of teams under changing or adversarial conditions.

  20. The Retrospective Iterated Analysis Scheme for Nonlinear Chaotic Dynamics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2002-01-01

    Atmospheric data assimilation is the name scientists give to the techniques of blending atmospheric observations with atmospheric model results to obtain an accurate idea of what the atmosphere looks like at any given time. Because two pieces of information are used, observations and model results, the outcome of the data assimilation procedure should be better than what one would get by using either of these two pieces of information alone. There are a number of different mathematical techniques that fall under the data assimilation jargon. In theory, most of these techniques accomplish about the same thing. In practice, however, slight differences in the approaches amount to faster algorithms in some cases, more economical algorithms in other cases, and even better overall results in yet other cases, because of practical uncertainties not accounted for by theory. Therefore, the key is to find the most adequate data assimilation procedure for the problem at hand. In our Data Assimilation group we have been doing extensive research to try to find just such a data assimilation procedure. One promising possibility is what we call the retrospective iterated analysis (RIA) scheme. This procedure has recently been implemented and studied in the context of a very large data assimilation system built to help predict and study weather and climate. Although the results from that study suggest that the RIA scheme produces quite reasonable results, a complete evaluation of the scheme is very difficult due to the complexity of that problem. The present work steps back a little and studies the behavior of the RIA scheme in the context of a small problem. The problem is small enough to allow a full assessment of the quality of the RIA scheme, but it still has some of the complexity found in nature, namely its chaotic-type behavior. We find that the RIA scheme performs very well for this small but still complex problem, a result that supports the findings of our earlier studies.

  1. Increase in the efficiency of a high-speed ramjet on hydrocarbon fuel at the flying vehicle acceleration up to M = 6+

    NASA Astrophysics Data System (ADS)

    Abashev, V. M.; Korabelnikov, A. V.; Kuranov, A. L.; Tretyakov, P. K.

    2017-10-01

    In analyzing the work process in a ramjet, a comprehensive consideration of the ensemble of problems whose solution determines the engine efficiency appears reasonable. The main problems are ensuring a high completeness of fuel combustion with minimal hydraulic losses, the reliability of cooling of high-heat areas using the cooling resource of the fuel, and ensuring the strength of the engine duct elements under the non-uniform heat loads due to fuel combustion in complex gas-dynamic flow structures. The fundamental techniques and approaches to the solution of the above-noted problems are considered in the present report, and their novelty and advantages in comparison with conventional techniques are substantiated. In particular, a technique is proposed for arranging an intense (pre-detonation) combustion regime to ensure a high completeness of fuel combustion and minimal hydraulic losses at a smooth deceleration of a supersonic flow down to the sound velocity, using pulsed-periodic gas-dynamic flow control. A technique is also proposed for cooling the high-heat areas that employs the cooling resource of the hydrocarbon fuel, including the chemical transformation (conversion) of kerosene using nano-catalysts. An analysis has shown that the highly heated structure will operate in the elastic-plastic domain of the behavior of the structural materials, which is directly connected to the engine's operating life. Problems therefore arise in the design of the ramjet shells with allowance for deformations. The deformations also significantly influence the work process in the combustor and, naturally, the heat transfer process and the performance of the catalysts (through the action of plastic and elastic deformations of the restrained shells). The report presents some results illustrating the identified problems. A conclusion is drawn about the necessity of formulating a complex investigation, with realization both in model experiments and in computational and theoretical studies.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision, or on hierarchical data structures, e.g., BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to actually speed things up. In the case of a convex polygon in E², a simple point-in-polygon test is of O(N) complexity, and the optimal algorithm is of O(log N) computational complexity. In the E³ case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g., line-convex polygon intersection or line clipping, can be solved similarly.
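
    The O(1) preprocessing-based test of the paper is not reproduced here, but the O(log N) convex-polygon test it improves on is easy to sketch: binary search over the fan of triangles rooted at vertex 0.

```python
# A minimal sketch of the classic O(log N) point-in-convex-polygon test.
def cross(o, a, b):
    """2D cross product of vectors OA and OB (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_polygon(poly, p):
    """poly: CCW-ordered vertices of a convex polygon; p: query point."""
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False                  # outside the fan around vertex 0
    lo, hi = 1, n - 1                 # find the wedge [lo, lo+1] containing p
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # CCW convex polygon
print(in_convex_polygon(square, (2, 2)),    # True
      in_convex_polygon(square, (5, 1)))    # False
```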

  3. Size does Matter

    NASA Astrophysics Data System (ADS)

    Vespignani, Alessandro

    From schools of fish and flocks of birds to digital networks and self-organizing biopolymers, our understanding of spontaneously emergent phenomena, self-organization, and critical behavior is in large part due to complex systems science. The complex systems approach is indeed a very powerful conceptual framework for shedding light on the link between the microscopic dynamical evolution of the basic elements of a system and the emergence of macroscopic phenomena, often providing evidence for mathematical principles that go beyond the particulars of the individual system and thus hinting at general modeling principles. By killing the myth of the ant queen and shifting the focus to the dynamical interactions among the elements of a system, complex systems science has ushered in the conceptual understanding of many phenomena at the core of major scientific and social challenges, such as the emergence of consensus, social opinion dynamics, conflict and cooperation, and contagion phenomena. For many years, though, these complex systems approaches to real-world problems often suffered from being oversimplified and not grounded in actual data...

  4. Complex biomarker discovery in neuroimaging data: Finding a needle in a haystack

    PubMed Central

    Atluri, Gowtham; Padmanabhan, Kanchana; Fang, Gang; Steinbach, Michael; Petrella, Jeffrey R.; Lim, Kelvin; MacDonald, Angus; Samatova, Nagiza F.; Doraiswamy, P. Murali; Kumar, Vipin

    2013-01-01

    Neuropsychiatric disorders such as schizophrenia, bipolar disorder and Alzheimer's disease are major public health problems. However, despite decades of research, we currently have no validated prognostic or diagnostic tests that can be applied at an individual patient level. Many neuropsychiatric diseases are due to a combination of alterations that occur in a human brain rather than the result of localized lesions. While there is hope that newer imaging technologies such as functional and anatomic connectivity MRI or molecular imaging may offer breakthroughs, the single biomarkers that are discovered using these datasets are limited by their inability to capture the heterogeneity and complexity of most multifactorial brain disorders. Recently, complex biomarkers have been explored to address this limitation using neuroimaging data. In this manuscript we consider the nature of complex biomarkers being investigated in the recent literature and present techniques to find such biomarkers that have been developed in related areas of data mining, statistics, machine learning and bioinformatics. PMID:24179856

  5. Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin]

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.

    1981-01-01

    Computer-assisted interpretation of small-scale aerial imagery was found to be a cost-effective and accurate method of mapping complex vegetation patterns when high-resolution information is desired. This type of technique is suited to problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of producing output in a computer-compatible form that can be transformed into any georeference system of interest.

  6. High-performance liquid chromatography analysis of plant saponins: An update 2005-2010

    PubMed Central

    Negi, Jagmohan S.; Singh, Pramod; Pant, Geeta Joshi Nee; Rawat, M. S. M.

    2011-01-01

    Saponins are widely distributed in the plant kingdom. In view of their wide range of biological activities and their occurrence as complex mixtures, saponins have been purified and separated by high-performance liquid chromatography using reverse-phase columns at low wavelengths. Saponins are mostly not detected by ultraviolet detectors due to their lack of chromophores. Electrospray ionization mass spectrometry, diode array detection, evaporative light scattering detection, and charged aerosol detection have been used to overcome the detection problem of saponins. PMID:22303089

  7. Reduced Gravity Gas and Liquid Flows: Simple Data for Complex Problems

    NASA Technical Reports Server (NTRS)

    McQuillen, John; Motil, Brian

    2001-01-01

    While there have been many studies of two-phase flow through straight cylindrical tubes, a new group of studies has recently emerged that examines two-phase flow through non-straight, non-cylindrical geometries, including expansions, contractions, tees, packed beds, and cyclonic separation devices. Although these studies are still, relatively speaking, in their infancy, they have provided valuable information regarding the importance of the flow momentum and the existence of liquid dryout due to sharp corners in microgravity.

  8. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the 'forward' problem). This problem can be formulated more generally as a problem of 'integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem, either through an analytical description of the combined probability function or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark based on multiple sources of geo-information. One type of information being too informative (and hence conflicting) leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples one can also analyze the performance of the sampling methods used to solve the data integration problem to indicate the existence of biased information. This can be used actively to avoid biases in the available information and, subsequently, in the final uncertainty evaluation.
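
    A minimal sketch of the sampling route described above: combine two sources of information as a product of densities and sample the result with a basic Metropolis random walk (one-dimensional toy densities, not the authors' case studies).

```python
# Toy probabilistic data integration: two independent Gaussian information
# sources about a parameter m are combined as a product of densities (sum of
# log densities) and sampled with a Metropolis random walk.
import numpy as np

rng = np.random.default_rng(6)

def log_geology(m):                 # e.g., a geological prior: N(2, 1)
    return -0.5 * ((m - 2.0) / 1.0) ** 2

def log_geophysics(m):              # e.g., a geophysical data fit: N(3, 0.5)
    return -0.5 * ((m - 3.0) / 0.5) ** 2

def log_post(m):                    # combined information
    return log_geology(m) + log_geophysics(m)

m, samples = 0.0, []
for _ in range(20000):
    prop = m + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(m):
        m = prop                    # accept the proposed move
    samples.append(m)
print("posterior mean ~", round(float(np.mean(samples[2000:])), 2))  # ~2.8
```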

  9. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of small satellites is challenging since they are constrained in mass, volume, and power. To mitigate these constraints, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique called the "Attitude Sphere" is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from single-discipline optimization to multidisciplinary optimization, which includes the design and operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, controllers based on Model Predictive Control, which are effective for constraint handling, were developed and implemented for the ADCS operations. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness of the methodology, its capability to enhance CADRE's capabilities, and its applicability to a variety of missions.

  10. Exploring biological interaction networks with tailored weighted quasi-bicliques

    PubMed Central

    2012-01-01

    Background: Biological networks provide fundamental insights into the functional characterization of genes and their products, the characterization of DNA-protein interactions, the identification of regulatory mechanisms, and other biological tasks. Due to the experimental and biological complexity, their computational exploitation faces many algorithmic challenges. Results: We introduce novel weighted quasi-biclique problems to identify functional modules in biological networks represented by bipartite graphs. In contrast to previous quasi-biclique problems, we include biological interaction levels by using edge-weighted quasi-bicliques. While we prove that our problems are NP-hard, we also describe IP formulations to compute exact solutions for moderately sized networks. Conclusions: We verify the effectiveness of our IP solutions using both simulation and empirical data. The simulation shows high quasi-biclique recall rates, and the empirical data corroborate the ability of our weighted quasi-bicliques to extract features and recover missing interactions from biological networks. PMID:22759421

  11. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
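
    The record gives no algorithmic detail; the following minimal sketch (hypothetical cost model and distributions, not the paper's) illustrates the general idea of quantile-based scenario analysis: sample stochastic scenarios and rank candidate decisions by a cost quantile rather than the expected cost.

```python
import numpy as np

rng = np.random.default_rng(42)

def cost(capacity, demand, supply):
    """Hypothetical supply-chain cost: shortage penalty plus capacity cost."""
    delivered = np.minimum(capacity, supply)
    shortage = np.maximum(demand - delivered, 0.0)
    return 50.0 * shortage + 3.0 * capacity

# Sample stochastic scenarios for demand and supply (assumed distributions).
n_scenarios = 10_000
demand = rng.normal(100.0, 15.0, n_scenarios)
supply = rng.normal(110.0, 20.0, n_scenarios)

# Score each candidate decision by the 0.9-quantile of its scenario costs,
# a robust alternative to ranking by the expected cost alone.
candidates = np.arange(80, 161, 5)
q90 = [np.quantile(cost(c, demand, supply), 0.9) for c in candidates]
best = candidates[int(np.argmin(q90))]
print(f"capacity minimising the 90th-percentile cost: {best}")
```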

  12. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
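
    As a rough illustration of the fuzzy-set idea described above, the sketch below (the membership shape and all numbers are hypothetical, not the paper's knowledge representation) grades due-date satisfaction continuously and scores a tentative schedule by its weakest constraint, so that conflicting constraints can be relaxed by degree rather than simply violated.

```python
def fuzzy_due_date_satisfaction(completion, due, grace):
    """Trapezoidal membership: 1.0 if on time, decaying linearly to 0.0
    once the job finishes more than `grace` time units late."""
    if completion <= due:
        return 1.0
    if completion >= due + grace:
        return 0.0
    return 1.0 - (completion - due) / grace

def schedule_quality(satisfactions):
    """Relax conflicting constraints by preferring the schedule whose
    weakest (minimum) constraint satisfaction is highest."""
    return min(satisfactions)

tentative = [fuzzy_due_date_satisfaction(c, d, g)
             for c, d, g in [(10, 12, 5), (14, 12, 5), (20, 12, 5)]]
print(schedule_quality(tentative))  # 0.0 -> the last job is too late
```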

  13. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.

    PubMed

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means the proposed model reduces computational complexity.

  14. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  15. Numerical Analysis on Seepage in the deep overburden CFRD

    NASA Astrophysics Data System (ADS)

    Zeyu, GUO; Junrui, CHAI; Yuan, QIN

    2017-12-01

    The construction of hydraulic structures on deep overburden poses many problems because of the complex foundation structure and poor geological conditions, and seepage failure is one of the main ones. Combining the seepage control system of a concrete face rockfill dam (CFRD) with measures in the deep overburden can effectively control seepage during and after construction. Widely used anti-seepage measures include horizontal blankets, cutoff walls, and curtain grouting, but their methods, techniques, and effectiveness still raise many questions and need further study. For these reasons, a three-dimensional numerical analysis of the seepage field, based on a practical engineering case, is conducted to study the seepage prevention effect of different methods, which is of significance to the development of dam technology and of hydropower resources in China.

  16. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, multi-facility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  17. Fiber tracking of brain white matter based on graph theory.

    PubMed

    Lu, Meng

    2015-01-01

    Brain white matter tractography is reconstructed from diffusion-weighted magnetic resonance images. Due to the complex structure of brain white matter fiber bundles, fiber crossing and fiber branching are abundant in the human brain, and regular methods based on diffusion tensor imaging (DTI) cannot accurately handle these configurations, which are among the biggest problems of brain tractography. Therefore, this paper presents a novel brain white matter tractography method based on graph theory, in which fiber tracking between two voxels is transformed into locating the shortest path in a graph. Besides, the presented method uses Q-ball imaging (QBI) as the source data instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings within one voxel using the orientation distribution function (ODF). Experiments showed that the presented method can accurately handle the problem of brain white matter fiber crossing and branching, and reconstruct brain tractography in both phantom data and real brain data.
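
    The paper's exact edge weighting is not given in the record; assuming edge weights derived from ODF agreement between neighbouring voxels, the core graph step reduces to a standard shortest-path search, sketched here with Dijkstra's algorithm on a toy graph.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a voxel adjacency graph; `graph[u]` maps
    each neighbour v to an edge weight (in tractography, a cost that could
    be derived from ODF agreement). Assumes `goal` is reachable."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the fibre path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

toy = {"A": {"B": 0.2, "C": 0.9}, "B": {"C": 0.1}, "C": {}}
print(shortest_path(toy, "A", "C"))  # ['A', 'B', 'C']
```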

  18. Optimal matching for prostate brachytherapy seed localization with dimension reduction.

    PubMed

    Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L

    2009-01-01

    In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with a detection rate of ≥ 97.6% and a reconstruction error of ≤ 1.2 mm. This is considered to be clinically excellent performance.
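
    The paper's dimension-reduction step is not reproduced here; for illustration only, the underlying correspondence step can be sketched as a linear assignment problem solved with the Hungarian method as implemented in SciPy, on a hypothetical toy cost matrix that ignores hidden seeds.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: cost[i, j] is the reprojection error of pairing the
# seed detected in image 1 (row i) with the seed in image 2 (column j).
cost = np.array([[0.1, 2.0, 3.5],
                 [2.2, 0.3, 1.9],
                 [3.1, 1.8, 0.2]])

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
print(list(zip(rows, cols)), cost[rows, cols].sum())
```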

  19. Leaky GFD problems

    NASA Astrophysics Data System (ADS)

    Chumakova, Lyubov; Rzeznik, Andrew; Rosales, Rodolfo R.

    2017-11-01

    In many dispersive/conservative wave problems, waves carry energy outside of the domain of interest and never return. Inside the domain of interest, this wave leakage acts as an effective dissipation mechanism, causing solutions to decay. In classical geophysical fluid dynamics problems this scenario occurs in the troposphere, if one assumes a homogeneous stratosphere. In this talk we present several classic GFD problems, where we seek the solution in the troposphere alone. Assuming that upward propagating waves that reach the stratosphere never return, we demonstrate how classic baroclinic modes become leaky, with characteristic decay time-scales that can be calculated. We also show how damping due to wave leakage changes the classic baroclinic instability problem in the presence of shear. This presentation is a part of a joint project. The mathematical approach used here relies on extending the classical concept of group velocity to leaky waves with complex wavenumber and frequency, which will be presented at this meeting by A. Rzeznik in the talk ``Group Velocity for Leaky Waves''. This research is funded by the Royal Soc. of Edinburgh, Scottish Government, and NSF.

  20. AnchorDock for Blind Flexible Docking of Peptides to Proteins.

    PubMed

    Slutzki, Michal; Ben-Shimon, Avraham; Niv, Masha Y

    2017-01-01

    Due to increasing interest in peptides as signaling modulators and drug candidates, several methods for peptide docking to their target proteins are under active development. The "blind" docking problem, where the peptide-binding site on the protein surface is unknown, presents one of the current challenges in the field. The AnchorDock protocol was developed by Ben-Shimon and Niv to address this challenge. This protocol narrows the docking search to the most relevant parts of the conformational space. This is achieved by pre-folding the free peptide and by computationally detecting anchoring spots on the surface of the unbound protein. Multiple flexible simulated annealing molecular dynamics (SAMD) simulations are subsequently carried out, starting from pre-folded peptide conformations, constrained to the various precomputed anchoring spots. Here, AnchorDock is demonstrated using two known protein-peptide complexes. A PDZ-peptide complex provides a relatively easy case due to the relatively small size of the protein and a typical peptide conformation and binding region; a more challenging example is a complex between the USP7 N-terminus and a p53-derived peptide, where the protein is larger, and the peptide conformation and binding site are generally assumed to be unknown. AnchorDock returned native-like solutions ranked first and third for the PDZ and USP7 complexes, respectively. We describe the procedure step by step and discuss possible modifications where applicable.

  1. Identifying and characterizing key nodes among communities based on electrical-circuit networks.

    PubMed

    Zhu, Fenghui; Wang, Wenxu; Di, Zengru; Fan, Ying

    2014-01-01

    Complex networks with community structures are ubiquitous in the real world. Despite many approaches developed for detecting communities, we continue to lack tools for identifying overlapping and bridging nodes that play crucial roles in the interactions and communications among communities in complex networks. Here we develop an algorithm based on local flow conservation to effectively and efficiently identify and distinguish the two types of nodes. Our method is applicable in both undirected and directed networks without a priori knowledge of the community structure. Our method bypasses the extremely challenging problem of partitioning communities in the presence of overlapping nodes that may belong to multiple communities. Because overlapping and bridging nodes are of paramount importance in maintaining the function of many social and biological networks, our tools open new avenues towards understanding and controlling real complex networks with communities and their key nodes.

  2. Computational intelligence in bioinformatics: SNP/haplotype data in genetic association study for common diseases.

    PubMed

    Kelemen, Arpad; Vasilakos, Athanasios V; Liang, Yulan

    2009-09-01

    Comprehensive evaluation of common genetic variations through association of single-nucleotide polymorphism (SNP) structure with common complex diseases on the genome-wide scale is currently a hot area in human genome research due to the recent development of the Human Genome Project and the HapMap Project. Computational science, which includes computational intelligence (CI), has recently become the third method of scientific enquiry besides theory and experimentation. There has been fast-growing interest in developing and applying CI to disease mapping using SNP and haplotype data. Recent studies have demonstrated the promise and importance of CI for genomic association studies of common complex diseases using SNP/haplotype data, especially for tackling challenges such as gene-gene and gene-environment interactions and the notorious "curse of dimensionality" problem. This review provides coverage of recent developments in CI approaches for genetic association studies of complex diseases with SNP/haplotype data.

  3. Supramolecular complexation for environmental control.

    PubMed

    Albelda, M Teresa; Frías, Juan C; García-España, Enrique; Schneider, Hans-Jörg

    2012-05-21

    Supramolecular complexes offer a new and efficient way to monitor and remove many substances emanating from technical processes, fertilization, plant and animal protection, or e.g. chemotherapy. Such pollutants range from toxic or radioactive metal ions and anions, through chemical side products, herbicides, and pesticides, to drugs including steroids, and also include degradation products from natural sources. The applications usually involve fast and reversible complex formation, due to prevailing non-covalent interactions. This is of importance for sensing as well as for separation techniques, where the often expensive host compounds can then be reused almost indefinitely. Immobilization of host compounds, e.g. on exchange resins or on membranes, and their implementation in smart new materials hold particular promise. The review illustrates how the design of suitable host compounds, in combination with modern sensing and separation methods, can contribute to solving some of the biggest problems facing chemistry, which arise from the ever-increasing pollution of the environment.

  4. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.
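
    For illustration, a calibration of this kind can be sketched with nonlinear least squares. The sketch below fits the standard JWL principal-isentrope form (the JWLB extension adds further terms not reproduced here) to synthetic pressure-volume data; all parameter values are hypothetical, not a validated explosive calibration.

```python
import numpy as np
from scipy.optimize import least_squares

def jwl_isentrope(V, A, B, C, R1, R2, omega):
    """Standard JWL principal-isentrope pressure as a function of
    relative volume V (JWLB adds further exponential terms)."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** -(1.0 + omega)

V = np.linspace(0.5, 7.0, 60)
p_true = jwl_isentrope(V, 800.0, 15.0, 1.0, 4.5, 1.2, 0.3)  # synthetic "data"
noise = 0.01 * np.random.default_rng(0).standard_normal(V.size)
p_obs = p_true * (1.0 + noise)

# Calibrate all six parameters simultaneously by nonlinear optimization.
residual = lambda x: jwl_isentrope(V, *x) - p_obs
fit = least_squares(residual, x0=[500.0, 10.0, 0.5, 4.0, 1.0, 0.35])
print(fit.x)
```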

  5. Experimental econophysics: Complexity, self-organization, and emergent properties

    NASA Astrophysics Data System (ADS)

    Huang, J. P.

    2015-03-01

    Experimental econophysics is concerned with statistical physics of humans in the laboratory, and it is based on controlled human experiments developed by physicists to study some problems related to economics or finance. It relies on controlled human experiments in the laboratory together with agent-based modeling (for computer simulations and/or analytical theory), with an attempt to reveal the general cause-effect relationship between specific conditions and emergent properties of real economic/financial markets (a kind of complex adaptive systems). Here I review the latest progress in the field, namely, stylized facts, herd behavior, contrarian behavior, spontaneous cooperation, partial information, and risk management. Also, I highlight the connections between such progress and other topics of traditional statistical physics. The main theme of the review is to show diverse emergent properties of the laboratory markets, originating from self-organization due to the nonlinear interactions among heterogeneous humans or agents (complexity).

  6. Generalization of Water Pricing Model in Agriculture and Domestic Groundwater for Water Sustainability and Conservation

    NASA Astrophysics Data System (ADS)

    Hek, Tan Kim; Fadzli Ramli, Mohammad; Iryanto; Rohana Goh, Siti; Zaki, Mohd Faiz M.

    2018-03-01

    Water requirements have greatly increased due to population growth, expanding agricultural areas, and industrial development, causing high water demand. A complex problem facing the country is that water pricing is not designed optimally: water is a staple of human need, yet current pricing cannot guarantee effective maintenance and distribution of water. Cheap water pricing has led to increased water use and unmanageable water resources. Therefore, more optimal water pricing, as an effective instrument of water policy, is needed to ensure water resource conservation and sustainability. This paper presents a review of the problems, issues, and mathematical modelling of water pricing for agricultural and domestic groundwater, aimed at water sustainability and conservation.

  7. Distributed computation: the new wave of synthetic biology devices.

    PubMed

    Macía, Javier; Posas, Francesc; Solé, Ricard V

    2012-06-01

    Synthetic biology (SB) offers a unique opportunity for designing complex molecular circuits able to perform predefined functions. But the goal of achieving a flexible toolbox of reusable molecular components has been shown to be limited due to circuit unpredictability, incompatible parts or random fluctuations. Many of these problems arise from the challenges posed by engineering the molecular circuitry: multiple wires are usually difficult to implement reliably within one cell and the resulting systems cannot be reused in other modules. These problems are solved by means of a nonstandard approach to single cell devices, using cell consortia and allowing the output signal to be distributed among different cell types, which can be combined in multiple, reusable and scalable ways. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Systems Engineering Awareness

    NASA Technical Reports Server (NTRS)

    Lucero, John

    2016-01-01

    The presentation will provide an overview of the fundamentals and principles of Systems Engineering (SE). This includes understanding the processes that are used to assist the engineer in a successful design, build, and implementation of solutions. The context of this presentation will be to describe the involvement of SE throughout the life-cycle of a project from cradle to grave. Due to the ever growing number of complex technical problems facing our world, a Systems Engineering approach is desirable for many reasons. The interdisciplinary technical structure of current systems, and technical processes representing System Design, Technical Management, and Product Realization, are instrumental in the development and integration of new technologies into mainstream applications. This tutorial will demonstrate the application of SE tools to these types of problems.

  9. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.

  10. Sprocket- Chain Simulation: Modelling and Simulation of a Multi Physics problem by sequentially coupling MotionSolve and nanoFluidX

    NASA Astrophysics Data System (ADS)

    Jayanthi, Aditya; Coker, Christopher

    2016-11-01

    In the last decade, CFD simulations have transitioned from being used mainly to validate final designs to driving mainstream product development. However, there are still niche application areas, like oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on expensive experimental methods. In this paper a unique example of a sprocket-chain simulation is presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in applications with complex geometry, which pose severe challenges to classical finite-volume CFD methods due to complex moving geometries, moving meshes, and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations so the method can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX is presented. This abstract replaces DFD16-2016-000045.

  11. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, measures used in previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network they originate from. This means the pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduction in the network's dimensionality.
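
    The paper's information-theoretic complexity measure is not reproduced in the record; the baseline it is compared against, Magnitude Based Pruning, is simple enough to sketch (the weights and pruning fraction below are hypothetical).

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Magnitude Based Pruning: zero out roughly the `fraction` of
    connections with the smallest absolute weight (ties at the threshold
    may prune slightly more)."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W = np.random.default_rng(1).normal(size=(4, 4))
print(magnitude_prune(W, 0.5))
```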

  12. Mechanochemical Preparation of Stable Sub-100 nm γ-Cyclodextrin:Buckminsterfullerene (C60) Nanoparticles by Electrostatic or Steric Stabilization.

    PubMed

    Van Guyse, Joachim F R; de la Rosa, Victor R; Hoogenboom, Richard

    2018-02-21

    Buckminsterfullerene (C60)'s main hurdle to entering the field of biomedicine is its low bioavailability, which results from its extremely low water solubility. A well-known approach to increase the water solubility of C60 is complexation with γ-cyclodextrins. However, the formed complexes are not stable in time as they rapidly aggregate and eventually precipitate due to attractive intermolecular forces, a common problem in inclusion complexes of cyclodextrins. In this study we attempt to overcome the attractive intermolecular forces between the complexes by designing custom γ-cyclodextrin (γCD)-based supramolecular hosts for C60 that inhibit the aggregation found in native γCD-C60 complexes. The approach entails the introduction of either repulsive electrostatic forces or increased steric hindrance to prevent aggregation, thus enhancing the biomedical application potential of C60. These modifications have led to new sub-100 nm nanostructures that show long-term stability in solution. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. [Multiple colonic anastomoses in the surgical treatment of short bowel syndrome. A new technique].

    PubMed

    Robledo-Ogazón, Felipe; Becerril-Martínez, Guillermo; Hernández-Saldaña, Víctor; Zavala-Aznar, Marí Luisa; Bojalil-Durán, Luis

    2008-01-01

    Some surgical pathologies eventually require intestinal resection. This may lead to an extended procedure, such as one leaving only 30 cm of proximal jejunum and the left and sigmoid colon. One of the most important consequences of this type of resection is "intestinal failure", or short bowel syndrome. This complex syndrome leads to various metabolic, fluid, and acid/base imbalances, as well as nutritional and immunological challenges, along with the problems accompanying an abdomen subjected to many surgical procedures, and carries high mortality. Many surgical techniques have been developed to improve the quality of life of patients. We designed a non-transplant surgical approach and performed the procedure on two patients with postoperative short bowel syndrome with <40 cm of proximal jejunum and left colon. There are a variety of non-transplant surgical procedures that, due to their complex technique or high mortality rate, have not resolved this important problem. However, the technique we present in this work can be performed by a large number of surgeons. The procedure has a low morbidity and mortality rate and offers the opportunity for better control of metabolic and acid/base balance, intestinal transit, and proper nutrition. We consider that this technique offers a new alternative for the complex management required by patients with short bowel syndrome and facilitates their long-term nutritional control.

  14. The electronic and transport properties of monolayer transition metal dichalcogenides: a complex band structure analysis

    NASA Astrophysics Data System (ADS)

    Szczesniak, Dominik

    Recently, monolayer transition metal dichalcogenides have attracted much attention due to their potential use in both nano- and opto-electronics. In such applications, the electronic and transport properties of group-VIB transition metal dichalcogenides (MX2, where M = Mo, W; X = S, Se, Te) are particularly important. Herein, new insight into these properties is presented by studying the complex band structures (CBS's) of MX2 monolayers while accounting for spin-orbit coupling effects. By using the symmetry-based tight-binding model, a nonlinear generalized eigenvalue problem for CBS's is obtained. An efficient method for solving such a class of problems is presented and gives a complete set of physically relevant solutions. Next, these solutions are characterized and classified into propagating and evanescent states, where the latter states present not only monotonic but also oscillatory decay character. It is observed that some of the oscillatory evanescent states create characteristic complex loops at the direct band gaps, which describe the tunneling currents in the MX2 materials. The importance of CBS's and tunneling currents is demonstrated by the analysis of the quantum transport across MX2 monolayers within phase field matching theory. The present work has been prepared within the Qatar Energy and Environment Research Institute (QEERI) grand challenge ATHLOC project (Project No. QEERI-GC-3008).

  15. Reliable low precision simulations in land surface models

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
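
    The failure mode and the fix described above are easy to reproduce in a few lines. The sketch below (a toy state and increment, not the soil-diffusion model itself) shows a small increment being rounded to zero in single precision, and the high/low precision split that recovers the correct drift.

```python
import numpy as np

# Failure mode: a tiny increment added to a large single-precision state.
state32 = np.float32(300.0)
increment = np.float32(1e-5)
for _ in range(100_000):
    state32 += increment       # each add rounds back to 300.0 in float32
print(state32)                 # 300.0 -- the slowly varying process is frozen

# The fix: keep the small, fast-accumulating component in higher precision
# and combine it with the low-precision bulk state only for output.
bulk32 = np.float32(300.0)
accumulator = np.float64(0.0)
for _ in range(100_000):
    accumulator += 1e-5        # float64: no catastrophic rounding
print(bulk32 + np.float32(accumulator))  # ~301.0, the correct drift
```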

  16. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  17. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert

    PubMed Central

    Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.

    2010-01-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases. PMID:20354726

  18. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert.

    PubMed

    Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M

    2010-11-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases.

  19. Preparing new nurses with complexity science and problem-based learning.

    PubMed

    Hodges, Helen F

    2011-01-01

    Successful nurses function effectively with adaptability, improvability, and interconnectedness, and can see emerging and unpredictable complex problems. Preparing new nurses for complexity requires a significant change in prevalent but dated nursing education models for rising graduates. The science of complexity, coupled with problem-based learning and peer review, contributes a feasible framework for a constructivist learning environment in which to examine real-time systems data; explore uncertainty, inherent patterns, and ambiguity; and develop skills for unstructured problem solving. This article describes a pilot study of a problem-based learning strategy guided by principles of complexity science in a community clinical nursing course. Thirty-five senior nursing students participated during a 3-year period. Assessments included peer review, a final project paper, reflection, and a satisfaction survey. Results included higher-than-expected levels of student satisfaction, increased breadth of analysis of complex data, acknowledgment of communities as complex adaptive systems, and overall higher-level thinking skills than in previous years. 2011, SLACK Incorporated.

  20. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  1. Addressing Complex Challenges through Adaptive Leadership: A Promising Approach to Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Nelson, Tenneisha; Squires, Vicki

    2017-01-01

    Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…

  2. Mechanical Clogging Processes in Unconsolidated Porous Media Near Pumping Wells

    NASA Astrophysics Data System (ADS)

    de Zwart, B.; Schotting, R.; Hassanizadeh, M.

    2003-12-01

    In the Netherlands, water supply companies produce more than one billion cubic meters of drinking water every year. About 2500 water wells are used to pump groundwater from aquifers in the Dutch subsurface. More than 50% of these wells will encounter a number of technical problems during their lifetime. The main problem is the decrease in capacity due to well clogging. Clogging shows up after a number of years of operation and results in extra, expensive cleaning operations and in early replacement of the pumping wells. This problem has been acknowledged by other industries, for example the metal, petroleum, and beer industries, and by underground storage projects. Well clogging is the result of a number of interacting mechanisms, creating a complex problem in the subsurface. In most clogging cases mechanical mechanisms are involved, and a large number of studies have been performed to comprehend these processes. Investigations of mechanical processes focus on the transport of small particles through pores and the deposition of particles due to physical or physical-chemical processes. After a period of deposition the particles plug the pores and decrease the permeability of the medium. Particle deposition in porous media is usually modelled using filtration theory, but this theory is not sufficient to capture the dynamics of clogging: the porous medium is continuously altered by deposition and mobilization, so the capture characteristics and deposition rates change in time. A new formula is derived to describe the (re)mobilization of particles and to allow changing deposition rates. This approach incorporates detachment and reattachment of deposited particles. This work also includes a derivation of filtration theory in radial coordinates. A comparison between the radial filtration theory and the new formula will be shown.

  3. Effect of space structures against development of transport infrastructure in Banda Aceh by using the concept of transit oriented development

    NASA Astrophysics Data System (ADS)

    Noer, Fadhly; Matondang, A. Rahim; Sirojuzilam, Saleh, Sofyan M.

    2017-11-01

    The shifting of urban development in Banda Aceh has moved the city's service centers, changing the city's spatial pattern and spatial structure and producing urban sprawl. This sprawl leads to congestion on the arterial roads, as seen from the 6% annual increase in the number of vehicles. Another issue arising from urban sprawl is poorly organized settlement due to uncontrolled use of space, which causes grouping along socioeconomic strata and adds to the complexity of population mobility problems. Against this background, the problems are addressed with the concept of Transit Oriented Development (TOD), a concept of transportation development integrated with spatial planning. This research derives a model of transportation infrastructure development under the TOD concept that can handle the transportation problems in Banda Aceh arising from the change in spatial structure, and examines whether the TOD concept can be used for an area with population in the medium-density range. The resulting equations are: Space Structure = 0.520 + 0.206X3 + 0.264X6 + 0.100X7 and Transportation Infrastructure Development = -1.457 + 0.652X1 + 0.388X5 + 0.235X6 + 0.222X7 + 0.327X8. Path analysis shows that the variables node ratio, network connectivity, travel frequency, travel destination, travel cost, and travel time have a smaller direct effect on transportation infrastructure development, but a greater indirect effect through the spatial structure, as seen from the spatial structure - transportation infrastructure development path scheme.

  4. Linked shoulder replacement: current design problems and a new design proposal.

    PubMed

    Mohammed, Ali Abdullah; Frostick, Simon Peter

    2016-04-01

    Totally constrained shoulder replacement with linked components is one of the surgical options in post-tumor resection shoulder reconstruction or in complex shoulder revision operations. In this paper, we intend to shed light on such an implant design, which provides a linked constrained connection between the humeral head and the glenoid, and to present some immediate postoperative complications, implant improvements to decrease the chances of mechanical post-insertion failure, and a new design proposal. In our center, we use the linked prosthesis in complex revision situations; however, there have been some complications, which can be attributed mainly to the engineering of the implant design and hence are potentially avoidable with a different design that covers those mechanical issues. Two such complications are described in this paper. Early revisions after linked shoulder replacement implantation were needed on two occasions due to implant disconnection: one was due to dislodgement from the native glenoid, and the second was due to disengagement of the ringlet that secures the linkage mechanism between the humeral head and the implanted glenoid shell. There is a need for a more stable design construct to avoid the reported complications that needed early revision surgery. The new design proposed is an attempt to help provide a better and more stable implant to decrease the chances of revision in those complex situations where the patient has already had many major operations, and working to increase the durability of the implant is crucial.

  5. Synchronization with propagation - The functional differential equations

    NASA Astrophysics Data System (ADS)

    Rǎsvan, Vladimir

    2016-06-01

    The structure represented by one or several oscillators coupled to a one-dimensional transmission environment (e.g. a vibrating string in the mechanical case or a lossless transmission line in the electrical case) has turned out to be attractive for research in the field of complex structures and/or complex behavior. This is due to the fact that such a structure represents a generalization of various interconnection modes with lumped parameters for the oscillators. On the other hand, lossless and distortionless propagation along transmission lines has generated considerable research in electrical, thermal, hydro, and control engineering, leading to the association of functional differential equations with the basic initial boundary value problems. The present research is performed at the crossroads of the aforementioned directions. We associate with the starting models some functional differential equations - in most cases of neutral type - and make use of the general theorems for existence and stability of forced oscillations for functional differential equations. The challenges introduced by the analyzed problems for the general theory are emphasized, together with the implications of the results for various applications.

  6. Exploring recruitment issues in stroke research: a qualitative study of nurse researchers' experiences.

    PubMed

    Boxall, Leigh; Hemsley, Anthony; White, Nicola

    2016-05-01

    To explore the practice of experienced stroke nurse researchers to understand the issues they face in recruiting participants. Participant recruitment is one of the greatest challenges in conducting clinical research, with many trials failing due to recruitment problems. Stroke research is a particularly difficult area in which to recruit; however various strategies can improve participation. Analysis revealed three main types of problems for recruiting participants to stroke research: those related to patients, those related to the nurse researcher, and those related to the study itself. Impairments affecting capacity to consent, the acute recruitment time frame of most stroke trials, paternalism by nurse researchers, and low public awareness were especially pertinent. The disabling nature of a stroke, which often includes functional and cognitive impairments, and the acute stage of illness at which patients are appropriate for many trials, make recruiting patients particularly complex and challenging. An awareness of the issues surrounding the recruitment of stroke patients may help researchers in designing and conducting trials. Future work is needed to address the complexities of obtaining informed consent when patient capacity is compromised.

  7. The prediction of crystal structure by merging knowledge methods with first principles quantum mechanics

    NASA Astrophysics Data System (ADS)

    Ceder, Gerbrand

    2007-03-01

    The prediction of structure is a key problem in computational materials science that forms the platform on which rational materials design can be performed. Finding structure by traditional optimization methods on quantum mechanical energy models is not possible due to the complexity and high dimensionality of the coordinate space. An unusual, but efficient solution to this problem can be obtained by merging ideas from heuristic and ab initio methods: in the same way that scientists build empirical rules by observation of experimental trends, we have developed machine learning approaches that extract knowledge from a large set of experimental information and a database of over 15,000 first principles computations, and used these to rapidly direct accurate quantum mechanical techniques to the lowest energy crystal structure of a material. Knowledge is captured in a Bayesian probability network that relates the probability of finding a particular crystal structure at a given composition to structure and energy information at other compositions. We show that this approach is highly efficient in finding the ground states of binary metallic alloys and can be easily generalized to more complex systems.

  8. Explicit parametric solutions of lattice structures with proper generalized decomposition (PGD) - Applications to the design of 3D-printed architectured materials

    NASA Astrophysics Data System (ADS)

    Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro

    2018-01-01

    Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.

  9. An adaptive reconstruction for Lagrangian, direct-forcing, immersed-boundary methods

    NASA Astrophysics Data System (ADS)

    Posa, Antonio; Vanella, Marcos; Balaras, Elias

    2017-12-01

    Lagrangian, direct-forcing, immersed boundary (IB) methods have been receiving increased attention due to their robustness in complex fluid-structure interaction problems. They are very sensitive, however, to the selection of the Lagrangian grid, which is typically used to define a solid or flexible body immersed in a fluid flow. In the present work we propose a cost-efficient solution to this problem without compromising accuracy. Central to our approach is the use of isoparametric mapping to bridge the relative resolution requirements of Lagrangian IB and Eulerian grids. With this approach, the density of surface Lagrangian markers, which is essential to properly enforce boundary conditions, is adapted dynamically based on the characteristics of the underlying Eulerian grid. The markers are not stored and the Lagrangian data structure is not modified. The proposed scheme is implemented in the framework of a moving least squares reconstruction formulation, but it can be adapted to any Lagrangian, direct-forcing formulation. The accuracy and robustness of the approach are demonstrated in a variety of test cases of increasing complexity.

  10. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    PubMed

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve it exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and to adapt to changing problems. In the AEA, a problem is first decomposed into its constituent sub-components, which then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA, which groups stations according to journey flows, and the Adaptive AEA, which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
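
    The AEA's community-based clustering is not detailed in the record; as a hypothetical stand-in, one can group stations by their journey-flow profiles with plain k-means (scikit-learn), which already yields the kind of sub-components into which the algorithm decomposes the problem. The flow matrix below is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: row i holds station i's outbound journey
# counts to every other station (its "flow profile").
rng = np.random.default_rng(7)
flows = rng.poisson(lam=5.0, size=(30, 30)).astype(float)

# Group stations with similar flow profiles; each cluster can then be
# rebalanced separately, decomposing the redistribution problem.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flows)
print(labels)
```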

  11. On the interrelation of multiplication and division in secondary school children

    PubMed Central

    Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph

    2013-01-01

    Multiplication and division are conceptually inversely related: each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed the performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division, while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types. PMID:24133476
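
    For readers unfamiliar with the statistical control used above, a partial correlation can be computed by regressing the covariates out of both variables and correlating the residuals. Here is a minimal sketch on synthetic data; all effect sizes are hypothetical, not the study's results.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing the covariate
    columns (plus an intercept) out of both variables."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
iq = rng.normal(100, 15, 200)                        # hypothetical covariate
mult = 0.5 * iq + rng.normal(0, 5, 200)              # multiplication score
div = 0.5 * iq + 0.3 * mult + rng.normal(0, 5, 200)  # division score
print(np.corrcoef(mult, div)[0, 1], partial_corr(mult, div, iq))
```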

  12. An investigation of reasoning by analogy in schizophrenia and autism spectrum disorder

    PubMed Central

    Krawczyk, Daniel C.; Kandalaft, Michelle R.; Didehbani, Nyaz; Allen, Tandra T.; McClelland, M. Michelle; Tamminga, Carol A.; Chapman, Sandra B.

    2014-01-01

    Relational reasoning ability relies on both cognitive and social factors. We compared analogical reasoning performance in healthy controls (HC) to performance in individuals with Autism Spectrum Disorder (ASD) and individuals with schizophrenia (SZ). The experimental task required participants to find correspondences between drawings of scenes. Participants were asked to infer which item within one scene best matched a relational item within the second scene. We varied relational complexity, presence of distraction, and type of objects in the analogies (living or non-living items). We hypothesized that the cognitive differences present in SZ would reduce relational inferences relative to ASD and HC. We also hypothesized that both SZ and ASD would show lower performance on living-item problems relative to HC due to lower social function scores. Overall accuracy was higher for HC relative to SZ, consistent with prior research. Across groups, higher relational complexity reduced analogical responding, as did the presence of non-living items. Separate group analyses revealed that the ASD group was less accurate at making relational inferences in problems that involved mainly non-living items and when distractors were present. The SZ group showed differences in problem type similar to the ASD group. Additionally, we found significant correlations between social cognitive ability and analogical reasoning, particularly for the SZ group. These results indicate that differences in cognitive and social abilities impact the ability to infer analogical correspondences, along with the number of relational elements and the types of objects present in the problems. PMID:25191240

  13. The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems

    ERIC Educational Resources Information Center

    Andrews, Paul W.; Thomson, J. Anderson, Jr.

    2009-01-01

    Depression is the primary emotional condition for which help is sought. Depressed people often report persistent rumination, which involves analysis, and complex social problems in their lives. Analysis is often a useful approach for solving complex problems, but it requires slow, sustained processing, so disruption would interfere with problem…

  14. Labile Low-Molecular-Mass Metal Complexes in Mitochondria: Trials and Tribulations of a Burgeoning Field.

    PubMed

    Lindahl, Paul A; Moore, Michael J

    2016-08-02

    Iron, copper, zinc, manganese, cobalt, and molybdenum play important roles in mitochondrial biochemistry, serving to help catalyze reactions in numerous metalloenzymes. These metals are also found in labile "pools" within mitochondria. Although the composition and cellular function of these pools are largely unknown, they are thought to be composed of nonproteinaceous low-molecular-mass (LMM) metal complexes. Many problems must be solved before these pools can be fully defined, especially problems stemming from the lability of such complexes. This lability arises from inherently weak coordinate bonds between ligands and metals. This is an advantage for catalysis and trafficking, but it makes characterization difficult. The most popular strategy for investigating such pools is to detect them using chelator probes with fluorescent properties that change upon metal coordination. Characterization is limited because of the inevitable destruction of the complexes during their detection. Moreover, probes likely react with more than one type of metal complex, confusing analyses. An alternative approach is to use liquid chromatography (LC) coupled with inductively coupled plasma mass spectrometry (ICP-MS). With help from a previous lab member, the authors recently developed an LC-ICP-MS approach to analyze LMM extracts from yeast and mammalian mitochondria. They detected several metal complexes, including Fe580, Fe1100, Fe1500, Cu5000, Zn1200, Zn1500, Mn1100, Mn2000, Co1200, Co1500, and Mo780 (numbers refer to approximate masses in daltons). Many of these may be used to metalate apo-metalloproteins as they fold inside the organelle. The LC-based approach also has challenges, e.g., in distinguishing artifactual metal complexes from endogenous ones, because cells must be disrupted to form extracts before they are passed through chromatography columns prior to analysis. Ultimately, both approaches will be needed to characterize these intriguing complexes and to elucidate their roles in mitochondrial biochemistry.

  15. Two Studies of Complex Nonlinear Systems: Engineered Granular Crystals and Coarse-Graining Optimization Problems

    NASA Astrophysics Data System (ADS)

    Pozharskiy, Dmitry

    In recent years a nonlinear acoustic metamaterial, the granular crystal, has gained prominence due to its high accessibility, both experimentally and computationally. The observation of a wide range of dynamical phenomena in the system, due to its inherent nonlinearities, has suggested its importance in many engineering applications related to wave propagation. In the first part of this dissertation, we explore the nonlinear dynamics of damped-driven granular crystals. In one case, we consider a highly nonlinear setting, also known as a sonic vacuum, and derive a nonlinear analogue of a linear spectrum, corresponding to resonant periodic propagation and antiresonances. Experimental studies confirm the computational findings and the assimilation of experimental data into a numerical model is demonstrated. In the second case, global bifurcations in a precompressed granular crystal are examined, and their involvement in the appearance of chaotic dynamics is demonstrated. Both results highlight the importance of exploring the nonlinear dynamics, to gain insight into how a granular crystal responds to different external excitations. In the second part, we borrow established ideas from coarse-graining of dynamical systems, and extend them to optimization problems. We combine manifold learning algorithms, such as Diffusion Maps, with stochastic optimization methods, such as Simulated Annealing, and show that we can retrieve an ensemble of a few important parameters that should be explored in detail. This framework can lead to acceleration of convergence when dealing with complex, high-dimensional optimization, and could potentially be applied to design engineered granular crystals.
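    The coarse-graining idea pairs a manifold-learning step with a stochastic optimizer. As a point of reference for the latter, here is a generic simulated-annealing loop; the objective function, step size, and cooling schedule are illustrative placeholders, not the dissertation's configuration:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f by accepting uphill moves with a temperature-dependent probability."""
    x, fx, t = x0[:], f(x0), t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Metropolis criterion: always accept improvements, sometimes accept worse moves.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy multi-well objective (Rastrigin-like), standing in for a real design problem.
f = lambda v: sum(vi**2 - 10 * math.cos(2 * math.pi * vi) + 10 for vi in v)
print(simulated_annealing(f, [3.0] * 5))
```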

  16. Onset of cavity deformation upon subsonic motion of a projectile in a fluid complex plasma.

    PubMed

    Zhukhovitskii, D I; Ivlev, A V; Fortov, V E; Morfill, G E

    2013-06-01

    We study the deformation of a cavity around a large projectile moving with subsonic velocity in a cloud of small dust particles. To solve this problem, we employ the Navier-Stokes equation for a compressible fluid with due regard for friction between dust particles and atoms of neutral gas. The solution shows that, due to friction, the pressure of a dust cloud at the surface of a cavity around the projectile can become negative, which entails the emergence of a considerable asymmetry of the cavity, i.e., cavity deformation. The corresponding threshold velocity is calculated and found to decrease with increasing cavity size. Measurement of this velocity makes it possible to estimate the static pressure inside the dust cloud.

  17. The prevention and management of infections due to multidrug resistant organisms in haematology patients

    PubMed Central

    Trubiano, Jason A; Worth, Leon J; Thursky, Karin A; Slavin, Monica A

    2015-01-01

    Infections due to resistant and multidrug resistant (MDR) organisms in haematology patients and haematopoietic stem cell transplant recipients are an increasingly complex problem of global concern. We outline the burden of illness and epidemiology of resistant organisms such as gram-negative pathogens, vancomycin-resistant Enterococcus faecium (VRE), and Clostridium difficile in haematology cohorts. Intervention strategies aimed at reducing the impact of these organisms are reviewed: infection prevention programmes, screening and fluoroquinolone prophylaxis. The role of newer therapies (e.g. linezolid, daptomycin and tigecycline) for treatment of resistant and MDR organisms in haematology populations is evaluated, in addition to the mobilization of older agents (e.g. colistin, pristinamycin and fosfomycin) and the potential benefit of combination regimens. PMID:24341410

  18. Impaired reasoning and problem-solving in individuals with language impairment due to aphasia or language delay

    PubMed Central

    Baldo, Juliana V.; Paulraj, Selvi R.; Curran, Brian C.; Dronkers, Nina F.

    2015-01-01

    The precise nature of the relationship between language and thought is an intriguing and challenging area of inquiry for scientists across many disciplines. In the realm of neuropsychology, research has investigated the inter-dependence of language and thought by testing individuals with compromised language abilities and observing whether performance in other cognitive domains is diminished. One group of such individuals is patients with aphasia who have an impairment in speech and language arising from a brain injury, such as a stroke. Our previous research has shown that the degree of language impairment in these individuals is strongly associated with the degree of impairment on complex reasoning tasks, such as the Wisconsin Card Sorting Task (WCST) and Raven’s Matrices. In the current study, we present new data from a large group of individuals with aphasia that show a dissociation in performance between putatively non-verbal tasks on the Wechsler Adult Intelligence Scale (WAIS) that require differing degrees of reasoning (Picture Completion vs. Picture Arrangement tasks). We also present an update and replication of our previous findings with the WCST showing that individuals with the most profound core language deficits (i.e., impaired comprehension and disordered language output) are particularly impaired on problem-solving tasks. In the second part of the paper, we present findings from a neurologically intact individual known as “Chelsea” who was not exposed to language due to an unaddressed hearing loss that was present since birth. At the age of 32, she was fitted with hearing aids and exposed to spoken and signed language for the first time, but she was only able to acquire a limited language capacity. Chelsea was tested on a series of standardized neuropsychological measures, including reasoning and problem-solving tasks. She was able to perform well on a number of visuospatial tasks but was disproportionately impaired on tasks that required reasoning, such as Raven’s Matrices and the WAIS Picture Arrangement task. Together, these findings suggest that language supports complex reasoning, possibly due to the facilitative role of verbal working memory and inner speech in higher mental processes. PMID:26578991

  19. Understanding Wicked Problems: A Key to Advancing Environmental Health Promotion

    ERIC Educational Resources Information Center

    Kreuter, Marshall W.; De Rosa, Christopher; Howze, Elizabeth H.; Baldwin, Grant T.

    2004-01-01

    Complex environmental health problems--like air and water pollution, hazardous waste sites, and lead poisoning--are in reality a constellation of linked problems embedded in the fabric of the communities in which they occur. These kinds of complex problems have been characterized by some as "wicked problems" wherein stakeholders may have…

  20. Fouling resistance prediction using artificial neural network nonlinear auto-regressive with exogenous input model based on operating conditions and fluid properties correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyanto, Totok R.

    Fouling in a heat exchanger in a Crude Preheat Train (CPT) refinery is an unsolved problem that reduces plant efficiency and increases fuel consumption and CO2 emissions. The fouling resistance behavior is very complex. It is difficult to develop a model using first-principle equations to predict the fouling resistance due to different operating conditions and different crude blends. In this paper, an Artificial Neural Network (ANN) MultiLayer Perceptron (MLP) with an input structure using the Nonlinear Auto-Regressive with eXogenous input (NARX) form is utilized to build the fouling resistance model of a shell and tube heat exchanger (STHX). The input data of the model are flow rates and temperatures of the streams of the heat exchanger, physical properties of the product, and crude blend data. This model serves as a predicting tool to optimize operating conditions and preventive maintenance of the STHX. The results show that the model can capture the complexity of fouling characteristics in the heat exchanger due to thermodynamic conditions and variations in crude oil properties (blends). The Root Mean Square Error (RMSE) values obtained during the training and validation phases confirm that the model captures the nonlinearity and complexity of the STHX fouling resistance.
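    To make the NARX input structure concrete, the following sketch trains a small MLP on lagged outputs and lagged exogenous inputs. It is a hypothetical stand-in (synthetic data, scikit-learn instead of the authors' toolchain), not the paper's model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
T = 500
u = rng.normal(size=(T, 3))   # exogenous inputs: e.g. flow rates, temperatures
r = np.zeros(T)               # fouling resistance (synthetic autoregressive process)
for t in range(1, T):
    r[t] = 0.9 * r[t - 1] + 0.05 * u[t - 1].sum() + 0.01 * rng.normal()

lags = 3
# NARX regressors: past outputs r[t-1..t-lags] plus past exogenous inputs u[t-1..t-lags].
X = np.column_stack(
    [r[lags - k - 1 : T - k - 1] for k in range(lags)]
    + [u[lags - k - 1 : T - k - 1] for k in range(lags)]
)
y = r[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(np.mean((pred - y[400:]) ** 2))
print(f"validation RMSE: {rmse:.4f}")
```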

  1. Crash energy management on the base of Movable cellular automata method

    NASA Astrophysics Data System (ADS)

    Psakhie, Serguei; Dmitriev, Andrei; Shilko, Evgueni; Tatarintsev, Evgueni; Korostelev, Serguei

    2001-06-01

    One of the main problems of materials science is increasing a structure's viability under dynamic loading. In general, a solution is to manage the transformation of the loading energy into the destruction of the least important parts and details of the structure. It should be noted that a similar problem exists in materials science itself, since the majority of modern materials are heterogeneous and have a complex internal structure. To optimize this structure for work under dynamic loading, it is necessary to take into account the redistribution of elastic energy, including phase transformations, generation and accumulation of micro-damages, etc. Because real experiments on destroying complex objects are expensive, and obtaining detailed information is often associated with essential difficulties, computer modeling methods are used to solve such problems. As a rule, these are methods of continuum mechanics. Although essential achievements have been obtained on the basis of these methods, the continuum approach has several limitations, connected first of all with the description of damage generation, crack formation and development, and mass mixing effects. These problems may be solved on the basis of the Movable Cellular Automata (MCA) method, which has been used successfully for modeling fracture of different materials and structures. In the paper, the behavior and peculiarities of failure of complex structures and materials under dynamic loading are studied on the basis of computer modeling. The results show that sometimes even small changes of the internal structure lead to a significant increase in the viability of complex structures and materials. This is due to changes in the elastic energy flux during dynamic loading. The effect may be explained by the fact that elastic energy fluxes define the current stress concentration. Because the areas of inclusions are subject to the largest displacements, and due to the lower Young's modulus of the inclusions, the loading pulses are transferred towards the other parts of the sample. This leads to "blurring" of the stress concentrators and conservation of the wholeness of the structure. In turn, this leads to an essential raising of the threshold value of "injected" energy, i.e., the energy absorbed by the structure before loss of its carrying capacity. Practically, elastic energy "circulates" in the structure until a stress concentrator appears whose power is sufficient for forming macro-cracks. The results demonstrate the possibility of managing the fracture process under dynamic loading and raising the viability of structures and heterogeneous materials by changing their internal structure and geometry, e.g., by introducing specific inclusions.

  2. Respiratory tract infections in the immunocompromised.

    PubMed

    Godbole, Gauri; Gant, Vanya

    2013-05-01

    Pulmonary infections are particularly common in the immunosuppressed host. This review discusses emerging threats, newer modalities of diagnostic tests and emerging treatment options, and also highlights the increasing problem of antimicrobial resistance. Nosocomial pneumonia is increasingly due to multidrug-resistant Gram-negative organisms in immunosuppressed patients. Viral pneumonias remain a very significant threat, present atypically and carry a high mortality. Aspergillosis remains the most common fungal infection, and infections due to Mucorales are increasing. Multidrug-resistant tuberculosis is on the increase throughout the world. Mixed infections are common and early bronchoscopy with appropriate microbiological tests, including molecular diagnostics, optimise management and reduce mortality. Pulmonary infection remains the most frequent infectious complication in the immunocompromised host. These complex infections are often mixed, have atypical presentations and can be due to multidrug-resistant organisms. Multidisciplinary involvement in specialist centres with appropriate diagnostics, treatment and infection control improves outcome. There is a desperate need for new antimicrobial agents active against Gram-negative pathogens.

  3. Early Warning of Food Security Crises in Urban Areas: The Case of Harare, Zimbabwe, 2007

    NASA Technical Reports Server (NTRS)

    Brown, Molly E.; Funk, Christopher C.

    2008-01-01

    In 2007, the citizens of Harare, Zimbabwe began experiencing an intense food security crisis. The crisis, due to a complex mix of poor government policies, high inflation rates and production decline due to drought, resulted in a massive increase in the number of food insecure people in Harare. The international humanitarian aid response to this crisis was largely successful due to the early agreement among donors and humanitarian aid officials as to the size and nature of the problem. Remote sensing enabled an early and decisive movement of resources greatly assisting the delivery of food aid in a timely manner. Remote sensing data gave a clear and compelling assessment of significant crop production shortfalls, and provided donors of humanitarian assistance a single number around which they could come to agreement. This use of remote sensing data typifies how remote sensing may be used in early warning systems in Africa.

  4. Preparing for Complexity and Wicked Problems through Transformational Learning Approaches

    ERIC Educational Resources Information Center

    Yukawa, Joyce

    2015-01-01

    As the information environment becomes increasingly complex and challenging, Library and Information Studies (LIS) education is called upon to nurture innovative leaders capable of managing complex situations and "wicked problems." While disciplinary expertise remains essential, higher levels of mental complexity and adaptive…

  5. Complexity in Nature and Society: Complexity Management in the Age of Globalization

    NASA Astrophysics Data System (ADS)

    Mainzer, Klaus

    The theory of nonlinear complex systems has become a proven problem-solving approach in the natural sciences from cosmic and quantum systems to cellular organisms and the brain. Even in modern engineering science self-organizing systems are developed to manage complex networks and processes. It is now recognized that many of our ecological, social, economic, and political problems are also of a global, complex, and nonlinear nature. What are the laws of sociodynamics? Is there a socio-engineering of nonlinear problem solving? What can we learn from nonlinear dynamics for complexity management in social, economic, financial and political systems? Is self-organization an acceptable strategy to handle the challenges of complexity in firms, institutions and other organizations? It is a main thesis of the talk that nature and society are basically governed by nonlinear and complex information dynamics. How computational is sociodynamics? What can we hope for social, economic and political problem solving in the age of globalization?

  6. An effective hybrid self-adapting differential evolution algorithm for the joint replenishment and location-inventory problem in a three-level supply chain.

    PubMed

    Wang, Lin; Qu, Hui; Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration of different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have been proven effective on similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of the cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for large-scale problems.
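    For orientation, a compact implementation of the classic DE/rand/1/bin scheme that HSDE builds on is sketched below; the self-adapting parameter control and the JR-LIP encoding of the paper are omitted, and the sphere objective is a toy stand-in:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin minimizer over box-constrained variables."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene taken from the mutant.
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:   # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# Toy stand-in for the JR-LIP cost surface: the sphere function.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 4)
print(best_x, best_f)
```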

  7. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration of different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have been proven effective on similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of the cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for large-scale problems. PMID:24453822

  8. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes, and we propose two heuristic algorithms for the regenerator placement problem. The performance of these algorithms in terms of the required number of regenerators and computational complexity is evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. The traffic engineering methods are compared based on the number of additional demands due to traffic uncertainty that can be carried. The regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  9. Word problems: a review of linguistic and numerical factors contributing to their difficulty

    PubMed Central

    Daroczy, Gabriella; Wolska, Magdalena; Meurers, Walt Detmar; Nuerk, Hans-Christoph

    2015-01-01

    Word problems (WPs) belong to the most difficult and complex problem types that pupils encounter during their elementary-level mathematical development. In the classroom setting, they are often viewed as merely arithmetic tasks; however, recent research shows that a number of linguistic verbal components not directly related to arithmetic contribute greatly to their difficulty. In this review, we will distinguish three components of WP difficulty: (i) the linguistic complexity of the problem text itself, (ii) the numerical complexity of the arithmetic problem, and (iii) the relation between the linguistic and numerical complexity of a problem. We will discuss the impact of each of these factors on WP difficulty and motivate the need for a high degree of control in stimuli design for experiments that manipulate WP difficulty for a given age group. PMID:25883575

  10. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  11. Behavioral pattern identification for structural health monitoring in complex systems

    NASA Astrophysics Data System (ADS)

    Gupta, Shalabh

    Estimation of structural damage and quantification of structural integrity are critical for safe and reliable operation of human-engineered complex systems, such as electromechanical, thermofluid, and petrochemical systems. Damage due to fatigue crack is one of the most commonly encountered sources of structural degradation in mechanical systems. Early detection of fatigue damage is essential because the resulting structural degradation could potentially cause catastrophic failures, leading to loss of expensive equipment and human life. Therefore, for reliable operation and enhanced availability, it is necessary to develop capabilities for prognosis and estimation of impending failures, such as the onset of wide-spread fatigue crack damage in mechanical structures. This dissertation presents information-based online sensing of fatigue damage using the analytical tools of symbolic time series analysis (STSA). Anomaly detection using STSA is a pattern recognition method that has been recently developed based upon a fixed-structure, fixed-order Markov chain. The analysis procedure is built upon the principles of Symbolic Dynamics, Information Theory and Statistical Pattern Recognition. The dissertation demonstrates real-time fatigue damage monitoring based on time series data of ultrasonic signals. Statistical pattern changes are measured using STSA to monitor the evolution of fatigue damage. Real-time anomaly detection is presented as a solution to the forward (analysis) problem and the inverse (synthesis) problem. (1) the forward problem - The primary objective of the forward problem is identification of the statistical changes in the time series data of ultrasonic signals due to gradual evolution of fatigue damage. (2) the inverse problem - The objective of the inverse problem is to infer the anomalies from the observed time series data in real time based on the statistical information generated during the forward problem. A computer-controlled special-purpose fatigue test apparatus, equipped with multiple sensing devices (e.g., ultrasonics and optical microscope) for damage analysis, has been used to experimentally validate the STSA method for early detection of anomalous behavior. The sensor information is integrated with a software module consisting of the STSA algorithm for real-time monitoring of fatigue damage. Experiments have been conducted under different loading conditions on specimens constructed from the ductile aluminium alloy 7075-T6. The dissertation has also investigated the application of the STSA method for early detection of anomalies in other engineering disciplines. Two primary applications include combustion instability in a generic thermal pulse combustor model and whirling phenomenon in a typical misaligned shaft.
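    The STSA pipeline the dissertation describes (partition a signal into symbols, estimate the state probability vector of a fixed-order Markov model, and track its drift from a nominal reference) can be sketched in a few lines. Everything below is an illustrative simplification, including the depth-1 state machine and the Euclidean anomaly measure:

```python
import numpy as np

def symbolize(signal, n_symbols=8):
    """Partition the signal range into equal-width bins and map samples to symbols."""
    edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)[1:-1]
    return np.digitize(signal, edges)

def state_probabilities(symbols, n_symbols=8):
    """Probability vector of a fixed-order (here depth-1) symbolic state machine."""
    counts = np.bincount(symbols, minlength=n_symbols).astype(float)
    return counts / counts.sum()

def anomaly_measure(p_nominal, p_current):
    """Euclidean distance between state probability vectors (one common choice)."""
    return float(np.linalg.norm(p_current - p_nominal))

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 5000)
nominal = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
# Simulated damage: slowly growing distortion of the ultrasonic signal.
damaged = np.sin(2 * np.pi * t) * (1 + 0.002 * t) + 0.1 * rng.normal(size=t.size)

p0 = state_probabilities(symbolize(nominal))
p1 = state_probabilities(symbolize(damaged))
print(anomaly_measure(p0, p1))
```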

  12. How Cognitive Style and Problem Complexity Affect Preservice Agricultural Education Teachers' Abilities to Solve Problems in Agricultural Mechanics

    ERIC Educational Resources Information Center

    Blackburn, J. Joey; Robinson, J. Shane; Lamm, Alexa J.

    2014-01-01

    The purpose of this experimental study was to determine the effects of cognitive style and problem complexity on Oklahoma State University preservice agriculture teachers' (N = 56) ability to solve problems in small gasoline engines. Time to solution was operationalized as problem solving ability. Kirton's Adaption-Innovation Inventory was…

  13. Multirate control with incomplete information over Profibus-DP network

    NASA Astrophysics Data System (ADS)

    Salt, J.; Casanova, V.; Cuenca, A.; Pizá, R.

    2014-07-01

    When a process field bus-decentralized peripherals (Profibus-DP) network is used in an industrial environment, deterministic behaviour is usually claimed. However, due to concerns such as bandwidth limitations, lack of synchronisation among different clocks, and the existence of time-varying delays, a more complex problem must be faced. This problem implies the transmission of irregular and even random sequences of incomplete information. The main consequence of this issue is the appearance of different sampling periods at different network devices. In this paper, this aspect is examined by means of a detailed Profibus-DP timescale study. In addition, in order to deal with the different periods, a delay-dependent dual-rate proportional-integral-derivative controller is introduced. Stability for the proposed control system is analysed in terms of linear matrix inequalities.

  14. Design loads and uncertainties for the transverse strength of ships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pittaluga, A.

    1995-12-31

    Rational design of ship structures is becoming a reality, and a reliability based approach for the longitudinal strength assessment of ship hulls is close to implementation. Transverse strength of ships is a step behind, mainly due to the complexity of the collapse modes associated with transverse strength. Nevertheless, some investigations are being made and the importance of an acceptable stochastic model for the environmental demand on the transverse structures is widely recognized. In the paper, the problem of the determination of the sea loads on a transverse section of a ship is discussed. The problem of extrapolating the calculated results, which are relevant to the submerged portion of the hull, to areas which are only occasionally wet in extreme conditions is also addressed.

  15. [Research progress on hydrological scaling].

    PubMed

    Liu, Jianmei; Pei, Tiefan

    2003-12-01

    With the development of hydrology and the extending effect of mankind on the environment, the scale issue has become a great challenge to many hydrologists due to the stochasticity and complexity of hydrological phenomena and natural catchments. More and more attention has been given to scaling issues, i.e., inferring large-scale (or small-scale) hydrological characteristics from certain known catchments, but the problem has not yet been solved successfully. The first part of this paper introduces some concepts related to hydrological scale, scale issues and scaling. The key problem is the spatial heterogeneity of catchments and the temporal and spatial variability of hydrological fluxes. Three approaches to scaling are put forward in the third part: distributed modeling, fractal theory and statistical self-similarity analyses. Existing problems and future research directions are proposed in the last part.

  16. A Genetic Algorithm Tool (splicer) for Complex Scheduling Problems and the Space Station Freedom Resupply Problem

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Valenzuela-Rendon, Manuel

    1993-01-01

    The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
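    A toy version of the encoding-plus-fitness idea is sketched below: a bitstring marks which candidate flights carry propellant, and the fitness rewards feasible schedules that use few flights with small gaps. The encoding, constants, and fitness function are hypothetical, not those of the Splicer tool:

```python
import random

random.seed(42)
FLIGHTS, DEMAND = 12, 7   # candidate shuttle flights, deliveries needed

def fitness(schedule):
    """Toy objective: meet demand with as few flights as possible, spaced evenly."""
    chosen = [i for i, bit in enumerate(schedule) if bit]
    if len(chosen) < DEMAND:
        return -1000   # infeasible: not enough deliveries
    gaps = [b - a for a, b in zip(chosen, chosen[1:])]
    return -len(chosen) - max(gaps, default=0)   # prefer few flights, small max gap

def evolve(pop_size=40, gens=100, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(FLIGHTS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, FLIGHTS)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```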

  17. Energy dissipation in a friction-controlled slide of a body excited by random motions of the foundation

    NASA Astrophysics Data System (ADS)

    Berezin, Sergey; Zayats, Oleg

    2018-01-01

    We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit, manner. Particularly, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish that we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.

  18. On the Complexity of Delaying an Adversary’s Project

    DTIC Science & Technology

    2005-01-01

    We develop interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard.

  19. Solving the Inverse-Square Problem with Complex Variables

    ERIC Educational Resources Information Center

    Gauthier, N.

    2005-01-01

    The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
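    The complex-variable formulation the abstract alludes to can be reconstructed from standard mechanics (this follows textbook treatments, not the paper itself): writing the position as z(t) = x(t) + iy(t),

```latex
% Inverse-square central force in complex coordinates:
\begin{equation}
  m\ddot{z} \;=\; -\,\frac{k\,z}{|z|^{3}} .
\end{equation}
% The angular momentum L = m Im(conj(z) dz/dt) is conserved, since
% d/dt Im(conj(z) dz/dt) = Im(conj(z) d^2z/dt^2) and conj(z) z = |z|^2 is real:
\begin{equation}
  L = m\,\operatorname{Im}\!\bigl(\bar{z}\,\dot{z}\bigr), \qquad
  \dot{L} = m\,\operatorname{Im}\!\bigl(\bar{z}\,\ddot{z}\bigr)
          = \operatorname{Im}\!\Bigl(-\frac{k}{|z|}\Bigr) = 0 .
\end{equation}
```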

  20. Fundamental mechanisms that influence the estimate of heat transfer to gas turbine blades

    NASA Technical Reports Server (NTRS)

    Graham, R. W.

    1979-01-01

    Estimates of the heat transfer from the gas to stationary (vanes) or rotating blades poses a major uncertainty due to the complexity of the heat transfer processes. The gas flow through these blade rows is three dimensional with complex secondary viscous flow patterns that interact with the endwalls and blade surfaces. In addition, upstream disturbances, stagnation flow, curvature effects, and flow acceleration complicate the thermal transport mechanisms in the boundary layers. Some of these fundamental heat transfer effects are discussed. The chief purpose of the discussion is to acquaint those in the heat transfer community, not directly involved in gas turbines, of the seriousness of the problem and to recommend some basic research that would improve the capability for predicting gas-side heat transfer on turbine blades and vanes.

  1. Towards Improved Finite Element Modelling of the Interaction of Elastic Waves with Complex Defect Geometries

    NASA Astrophysics Data System (ADS)

    Rajagopal, P.; Drozdz, M.; Lowe, M. J. S.

    2009-03-01

    A solution to the problem of improving the finite element (FE) modeling of elastic wave-defect interaction is sought by reconsidering the conventional opinion on meshing strategy. The standard approach using uniform square elements imposes severe limitations in representing complex defect outlines, although this is thought to improve when the mesh is made finer. Free meshing algorithms, now widely available in commercial packages, can cope well with difficult features, but they are thought to cause scattering from the irregular mesh itself. This paper examines whether the benefits offered by free meshing in representing defects better outweigh the inaccuracies due to mesh scattering. For the standard mesh, we consider whether mesh refinement leads to improved results and whether a practical strategy can be constructed.

  2. Dynamic Modeling as a Cognitive Regulation Scaffold for Developing Complex Problem-Solving Skills in an Educational Massively Multiplayer Online Game Environment

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Ge, Xun; Ifenthaler, Dirk; Law, Victor

    2011-01-01

    Following a design-based research framework, this article reports two empirical studies with an educational MMOG, called "McLarin's Adventures," on facilitating 9th-grade students' complex problem-solving skill acquisition in interdisciplinary STEM education. The article discusses the nature of complex and ill-structured problem solving…

  3. You Need to Know: There Is a Causal Relationship between Structural Knowledge and Control Performance in Complex Problem Solving Tasks

    ERIC Educational Resources Information Center

    Goode, Natassia; Beckmann, Jens F.

    2010-01-01

    This study investigates the relationships between structural knowledge, control performance and fluid intelligence in a complex problem solving (CPS) task. 75 participants received either complete, partial or no information regarding the underlying structure of a complex problem solving task, and controlled the task to reach specific goals.…

  4. Pre-Service Teachers' Free and Structured Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silber, Steven; Cai, Jinfa

    2017-01-01

    This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…

  5. Hydrogen transfer reduction of polyketones catalyzed by iridium complexes: a novel route towards more biocompatible materials.

    PubMed

    Milani, Barbara; Crotti, Corrado; Farnetti, Erica

    2008-09-14

    Transfer hydrogenation from 2-propanol to CO/4-methylstyrene and CO/styrene polyketones was catalyzed by [Ir(diene)(N-N)X] (N-N = nitrogen chelating ligand; X = halogen) in the presence of a basic cocatalyst. The reactions were performed using dioxane as cosolvent, in order to overcome problems due to low polyketone solubility. The polyalcohols were obtained in yields up to 95%, the conversions being markedly dependent on the nature of the ligands coordinated to iridium as well as on the experimental conditions.

  6. An Analysis of Chronic Personnel Shortages in the B-52 Radar Navigator Career Field

    DTIC Science & Technology

    1987-03-01

    Weapon System Trainer - The new simulators for the B-52, located on some of the B-52 bases. Due to the complexity of the simulators, they have a small ... navigators crosstraining to these are lost to the B-52 career field. 21 ASTRA Every year a small number of radar navigators are chosen to attend one year at ... this case, though, it turned up a small problem initially. The separation rates were obtained from Headquarters SAC (10), but did not include the number

  7. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  8. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background, in areas such as transportation systems, communication networks, pipeline design and VLSI, etc. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L, and we seek a vertex on L such that the length of the tree is minimal. From the definition and the complexity of the Steiner tree problem, we know that the complexity of this restricted problem is also NP-complete. In part one, we considered the restricted Steiner tree problem with two fixed vertices. Naturally, we now consider the restricted Steiner tree problem with three fixed vertices, and we again use the geometric method to solve the problem.
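    A crude numerical illustration of the restricted problem: slide a candidate vertex along the line L and minimize the length of the resulting minimum spanning tree (an upper bound on the Steiner tree length, since no extra Steiner points are introduced). The coordinates are made up, and this heuristic is not the geometric method of the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

fixed = np.array([[0.0, 2.0], [3.0, 1.5], [1.5, 3.0]])  # vertices above the line y = 0

def tree_length(x):
    """MST length over the fixed vertices plus a candidate vertex (x, 0) on L."""
    pts = np.vstack([fixed, [x, 0.0]])
    dist = squareform(pdist(pts))
    return minimum_spanning_tree(dist).sum()

res = minimize_scalar(tree_length, bounds=(-10, 10), method="bounded")
print(f"best x on L: {res.x:.3f}, tree length: {res.fun:.3f}")
```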

  9. Living with faecal incontinence: trying to control the daily life that is out of control.

    PubMed

    Olsson, Frida; Berterö, Carina

    2015-01-01

    To identify and describe the lived experience of persons living with faecal incontinence and show how it affects daily life. Faecal incontinence is a relatively common condition, with a prevalence ranging from 3-24%, not differing between men and women. There is an under-reporting due to patients' reluctance to talk about their symptoms and consult healthcare professionals about their problems, which means that problems related to faecal incontinence are often underestimated. Living with faecal incontinence affects the quality of life negatively and has a negative impact on family situations, social interaction, etc. A qualitative interpretative study based on interviews. In-depth interviews were conducted with five informants, all women, living with faecal incontinence. The interviews were transcribed verbatim and analysed using interpretive phenomenological analysis. The analysis identified four themes: self-affirmation, guilt and shame, limitations in life and personal approach. The themes differ from each other, but are related and have similarities. The results show different aspects of living with faecal incontinence and how they affected daily life. Living with faecal incontinence is a complex problem affecting everyday life in a number of different ways. It is a highly distressing and socially incapacitating problem. Living with faecal incontinence is about trying to control the daily life which is out of control. Living with faecal incontinence cannot be generalised as individuals experience the situation in unique ways. By gaining insight into the experience of living with faecal incontinence, healthcare professionals can deepen their understanding of this complex problem and thereby better address it and provide more individually based care. © 2014 John Wiley & Sons Ltd.

  10. A network flow model for load balancing in circuit-switched multicomputers

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1990-01-01

    In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
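    The correspondence between minimum-cost flows and contention-free routings can be illustrated with an off-the-shelf solver. The sketch below uses NetworkX on a made-up four-node fragment; unit edge capacities encode the requirement that each link carries at most one message:

```python
import networkx as nx

# Toy fragment of a mesh: negative demand marks a source (excess load),
# positive demand marks a sink (deficit load), following NetworkX's convention.
G = nx.DiGraph()
G.add_node("A", demand=-2)   # source: two excess tasks
G.add_node("B", demand=0)
G.add_node("C", demand=0)
G.add_node("D", demand=2)    # sink: two-task deficit
# Capacity 1 per link: a link may carry at most one message without contention.
for u, v in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]:
    G.add_edge(u, v, capacity=1, weight=1)

flow = nx.min_cost_flow(G)   # a min-cost flow corresponds to contention-free routing
print(flow)                  # one task routed via B, the other via C
```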

  11. Validation of a finite element method framework for cardiac mechanics applications

    NASA Astrophysics Data System (ADS)

    Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.

    2017-11-01

    Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability, and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, especially, entails several difficulties, notably due to the potential instability arising from the nonlinearities inherent to the large-deformation framework. Furthermore, the verification of the obtained simulations is a difficult task because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists in deforming a bar, whereas the second problem concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained by using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups and the proposed solutions were indistinguishable. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.

  12. Factors of Problem-Solving Competency in a Virtual Chemistry Environment: The Role of Metacognitive Knowledge about Strategies

    ERIC Educational Resources Information Center

    Scherer, Ronny; Tiemann, Rudiger

    2012-01-01

    The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper,…

  13. Determining the Effects of Cognitive Style, Problem Complexity, and Hypothesis Generation on the Problem Solving Ability of School-Based Agricultural Education Students

    ERIC Educational Resources Information Center

    Blackburn, J. Joey; Robinson, J. Shane

    2016-01-01

    The purpose of this experimental study was to assess the effects of cognitive style, problem complexity, and hypothesis generation on the problem solving ability of school-based agricultural education students. Problem solving ability was defined as time to solution. Kirton's Adaption-Innovation Inventory was employed to assess students' cognitive…

  14. Multichromosomal median and halving problems under different genomic distances

    PubMed Central

    Tannier, Eric; Zheng, Chunfang; Sankoff, David

    2009-01-01

    Background Genome median and genome halving are combinatorial optimization problems that aim at reconstructing ancestral genomes as well as the evolutionary events leading from the ancestor to extant species. Exploring complexity issues is a first step towards devising efficient algorithms. The complexity of the median problem for unichromosomal genomes (permutations) has been settled for both the breakpoint distance and the reversal distance. Although the multichromosomal case has often been assumed to be a simple generalization of the unichromosomal case, it is also a relaxation so that complexity in this context does not follow from existing results, and is open for all distances. Results We settle here the complexity of several genome median and halving problems, including a surprising polynomial result for the breakpoint median and guided halving problems in genomes with circular and linear chromosomes, showing that the multichromosomal problem is actually easier than the unichromosomal problem. Still other variants of these problems are NP-complete, including the DCJ double distance problem, previously mentioned as an open question. We list the remaining open problems. Conclusion This theoretical study clears up a wide swathe of the algorithmical study of genome rearrangements with multiple multichromosomal genomes. PMID:19386099
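    As a small concrete reference point, the breakpoint distance for the unichromosomal, unsigned case counts adjacencies of one permutation that are absent from the other; the multichromosomal and signed variants the paper actually studies generalize this. A sketch:

```python
def breakpoint_distance(p, q):
    """Number of adjacencies of p (including virtual endpoints) absent from q."""
    def adjacencies(perm):
        ext = [0] + list(perm) + [len(perm) + 1]   # add virtual end caps 0 and n+1
        return {frozenset(pair) for pair in zip(ext, ext[1:])}
    return len(adjacencies(p) - adjacencies(q))

print(breakpoint_distance([3, 1, 2, 4], [1, 2, 3, 4]))  # -> 3
```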

  15. Submarine harbor navigation using image data

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kramer, Kathleen A.

    2017-01-01

    The process of ingress and egress of a United States Navy submarine is a human-intensive process that takes numerous individuals to monitor locations and for hazards. Sailors pass vocal information to bridge where it is processed manually. There is interest in using video imaging of the periscope view to more automatically provide navigation within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, due to the fact that the unobservable space is not the null space. When using a Kalman filter estimator to perform the tracking, significant errors arise which could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks. This paper addresses estimation unobservability and the minimal set of requirements that are needed to address it in this complex but real-world problem. Three major issues are addressed: the knowledge of navigation beacons/landmarks' locations, the minimal number of these beacons needed to maintain the course, and update rates of the angles of the landmarks as the periscope rotates and landmarks become obscured due to blockage and weather. The goal is to address the problem of navigation to and from the docks, while maintaining the traversing of the harbor channel based on maritime rules relying solely on the image-based data. The minimal number of beacons will be considered. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration is considered. The analysis will be based on a simple straight-line channel harbor entry to the dock, similar to a submarine entering the submarine port in San Diego.
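    A minimal bearing-only extended Kalman filter illustrates why landmark geometry matters: with known beacon positions and a constant-velocity model, two well-separated bearings per step keep the position observable. All values below (beacon locations, noise levels, motion model) are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps, sigma = 1.0, 60, np.radians(0.5)
beacons = np.array([[0.0, 500.0], [800.0, 400.0]])   # known landmark positions

F = np.eye(4); F[0, 2] = F[1, 3] = dt                # constant-velocity state transition
Q = 0.01 * np.eye(4)                                 # process noise covariance
R = sigma**2                                         # bearing measurement variance

truth = np.array([100.0, 0.0, 5.0, 1.0])             # true [x, y, vx, vy]
x = np.array([80.0, 20.0, 4.0, 0.0])                 # initial estimate
P = np.diag([100.0, 100.0, 4.0, 4.0])

def wrap(a):                                         # keep angles in (-pi, pi]
    return (a + np.pi) % (2 * np.pi) - np.pi

for _ in range(steps):
    truth = F @ truth
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    for lx, ly in beacons:                           # update with each bearing
        dx, dy = lx - x[0], ly - x[1]
        r2 = dx**2 + dy**2
        z = np.arctan2(ly - truth[1], lx - truth[0]) + rng.normal(0, sigma)
        H = np.array([[dy / r2, -dx / r2, 0.0, 0.0]])  # Jacobian of atan2 measurement
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * wrap(z - np.arctan2(dy, dx))).ravel()
        P = (np.eye(4) - K @ H) @ P

print("position error:", np.linalg.norm(x[:2] - truth[:2]))
```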

  16. Student Cognitive Difficulties and Mental Model Development of Complex Earth and Environmental Systems

    NASA Astrophysics Data System (ADS)

    Sell, K.; Herbert, B.; Schielack, J.

    2004-05-01

    Students organize scientific knowledge and reason about environmental issues through manipulation of mental models. The nature of the environmental sciences, which are focused on the study of complex, dynamic systems, may present cognitive difficulties to students in their development of authentic, accurate mental models of environmental systems. The inquiry project seeks to develop and assess the coupling of information technology (IT)-based learning with physical models in order to foster rich mental model development of environmental systems in geoscience undergraduate students. The manipulation of multiple representations, the development and testing of conceptual models based on available evidence, and exposure to authentic, complex and ill-constrained problems were the components of investigation utilized to reach the learning goals. Upper-level undergraduate students enrolled in an environmental geology course at Texas A&M University participated in this research which served as a pilot study. Data based on rubric evaluations interpreted by principal component analyses suggest students' understanding of the nature of scientific inquiry is limited and the ability to cross scales and link systems proved problematic. Results categorized into content knowledge and cognition processes where reasoning, critical thinking and cognitive load were driving factors behind difficulties in student learning. Student mental model development revealed multiple misconceptions and lacked complexity and completeness to represent the studied systems. Further, the positive learning impacts of the implemented modules favored the physical model over the IT-based learning projects, likely due to cognitive load issues. This study illustrates the need to better understand student difficulties in solving complex problems when using IT, where the appropriate scaffolding can then be implemented to enhance student learning of the earth system sciences.

  17. On designing for quality

    NASA Technical Reports Server (NTRS)

    Vajingortin, L. D.; Roisman, W. P.

    1991-01-01

    The problem of ensuring the required quality of products and/or technological processes often becomes more difficult due to the fact that there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex, vital articles. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique for assigning tolerances to primary factors with regard to economic, technological, and other criteria, the technique being based on the solution of the main problem; and well-reasoned assignment of nominal values for the primary factors which serve as the basis for creating tolerances. Each of the above-listed tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem, dealing with quality assurance in a mathematically formalized aspect, is called the multiple inverse problem.

  18. Development of Six Sigma methodology for CNC milling process improvements

    NASA Astrophysics Data System (ADS)

    Ismail, M. N.; Rose, A. N. M.; Mohammed, N. Z.; Rashid, M. F. F. Ab

    2017-10-01

    Quality and productivity play an important role in any organization, especially in manufacturing sectors, where they help gain the profit that leads to the success of a company. This paper reports a work improvement project at Kolej Kemahiran Tinggi MARA Kuantan. It involves problem identification in the production of the "Khufi" product and proposes an effective framework to improve the current situation. Based on observation and data collection on the work-in-progress (WIP) product, the major problem identified relates to the function of the product: the parts cannot be assembled properly because the product dimensions are out of specification. Six Sigma has been used as a methodology to study and improve on the problems identified. Six Sigma is a highly statistical and data-driven approach to solving complex business problems. It uses a methodical five-phase approach, define, measure, analyze, improve and control (DMAIC), to help understand the process and the variables that affect it so that the processes can be optimized. Finally, the root cause of and solution for the "Khufi" production problem have been identified and implemented, and the product then successfully met its fitting specification.

  19. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing the complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required; currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, examine the inherent problems associated with these data, and understand the complexity that arises due to the integration of these data.

  20. Atmospheric and Scene Complexity Effects on Surface Bidirectional Reflectance

    NASA Technical Reports Server (NTRS)

    Diner, D. J. (Principal Investigator); Martonchik, J. V.; Sythe, W. D.; Hessom, C.

    1985-01-01

    Among the tools used in passive remote sensing of Earth resources in the visible and near-infrared spectral regions are measurements of spectral signature and bidirectional reflectance functions (BDRFs). Determination of surface properties using these observables is complicated by a number of factors, including: (1) mixing of surface components, such as soil and vegetation, (2) multiple reflections of radiation due to complex geometry, such as in crop canopies, and (3) atmospheric effects. In order to bridge the diversity in these different approaches, there is a need for a fundamental physical understanding of the influence of the various effects and a quantitative measure of their relative importance. In particular, we consider scene complexity effects using the example of reflection by vegetative surfaces. The interaction of sunlight with a crop canopy and interpretation of the spectral and angular dependence of the emergent radiation is basically a multidimensional radiative transfer problem. The complex canopy geometry, underlying soil cover, and presence of diffuse as well as collimated illumination will modify the reflectance characteristics of the canopy relative to those of the individual elements.

  1. Midbond basis functions for weakly bound complexes

    NASA Astrophysics Data System (ADS)

    Shaw, Robert A.; Hill, J. Grant

    2018-06-01

    Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high-accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.

  2. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Over the years, nurse scheduling has been a noticeable problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. Current undesirable work schedules are partly responsible for those working conditions. Basically, there is a lack of complementary requirements between the head nurse's liability and the nurses' needs. In particular, given the issue of strong nurse preferences, the challenge of nurse scheduling lies in stimulating tolerant behavior between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying nurses' diverse requests and upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in a nurse scheduling problem (NSP). The restrictions of the EA are discussed and enhancements of the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation, as well as flexibility in the search, corresponding to the employment of exploration and exploitation principles.
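
    To make the constraint hierarchy concrete, here is a minimal sketch (not the paper's exact operators): a roster matrix, a penalty-based fitness with separate weights for hard, semi-hard and soft constraints, and a coverage-preserving swap mutation of the kind a specialized operator might use. All thresholds and weights are illustrative assumptions.

      # Roster: nurses x days matrix (0 = off, 1 = morning, 2 = evening, 3 = night).
      # Penalty weights for hard / semi-hard / soft constraints are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      N_NURSES, N_DAYS = 10, 7
      W_HARD, W_SEMI, W_SOFT = 1000.0, 10.0, 1.0

      def fitness(roster, preferences):
          hard = np.sum(np.sum(roster > 0, axis=0) < 3)   # <3 nurses on duty per day
          semi = np.sum(np.sum(roster > 0, axis=1) > 5)   # nurse works >5 days a week
          soft = np.sum((roster > 0) & ~preferences)      # assigned against preference
          return -(W_HARD * hard + W_SEMI * semi + W_SOFT * soft)

      def swap_mutation(roster):
          child = roster.copy()
          day = rng.integers(N_DAYS)
          a, b = rng.choice(N_NURSES, size=2, replace=False)
          child[[a, b], day] = child[[b, a], day]         # daily coverage unchanged
          return child

      roster = rng.integers(0, 4, size=(N_NURSES, N_DAYS))
      prefs = rng.random((N_NURSES, N_DAYS)) > 0.3
      print(fitness(roster, prefs), fitness(swap_mutation(roster), prefs))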

  3. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
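
    The complex-step derivative approximation at the heart of this scheme is easy to demonstrate on a scalar function; the sketch below contrasts it with a forward difference, whose accuracy is capped by subtractive cancellation. The test function is a standard illustration, not taken from the paper.

      # Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h. No difference of
      # nearly equal numbers is taken, so h can be made arbitrarily small
      # (here 1e-200) without round-off, unlike the forward-difference quotient.
      import numpy as np

      def f(x):
          return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

      x0, h = 1.5, 1e-200
      complex_step = np.imag(f(x0 + 1j * h)) / h
      forward_diff = (f(x0 + 1e-8) - f(x0)) / 1e-8   # limited by cancellation
      print(complex_step, forward_diff)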

  4. Data-driven non-linear elasticity: constitutive manifold construction and problem discretization

    NASA Astrophysics Data System (ADS)

    Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco

    2017-11-01

    The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, complexity continues to increase due to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second consists of models that scientists have extracted from collected data, either natural or synthetic. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations, employing the universally accepted balance laws while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of data required, some of it inaccessible with today's testing facilities. This difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.

  5. Discontinuous Galerkin method for multicomponent chemically reacting flows and combustion

    NASA Astrophysics Data System (ADS)

    Lv, Yu; Ihme, Matthias

    2014-08-01

    This paper presents the development of a discontinuous Galerkin (DG) method for application to chemically reacting flows in subsonic and supersonic regimes under the consideration of variable thermo-viscous-diffusive transport properties, detailed and stiff reaction chemistry, and shock capturing. A hybrid-flux formulation is developed for treatment of the convective fluxes, combining a conservative Riemann-solver and an extended double-flux scheme. A computationally efficient splitting scheme is proposed, in which advection and diffusion operators are solved in the weak form, and the chemically stiff substep is advanced in the strong form using a time-implicit scheme. The discretization of the viscous-diffusive transport terms follows the second form of Bassi and Rebay, and the WENO-based limiter due to Zhong and Shu is extended to multicomponent systems. Boundary conditions are developed for subsonic and supersonic flow conditions, and the algorithm is coupled to thermochemical libraries to account for detailed reaction chemistry and complex transport. The resulting DG method is applied to a series of test cases of increasing physico-chemical complexity. Beginning with one- and two-dimensional multispecies advection and shock-fluid interaction problems, computational efficiency, convergence, and conservation properties are demonstrated. This study is followed by considering a series of detonation and supersonic combustion problems to investigate the convergence-rate and the shock-capturing capability in the presence of one- and multistep reaction chemistry. The DG algorithm is then applied to diffusion-controlled deflagration problems. By examining convergence properties for polynomial order and spatial resolution, and comparing these with second-order finite-volume solutions, it is shown that optimal convergence is achieved and that polynomial refinement provides advantages in better resolving the localized flame structure and complex flow-field features associated with multidimensional and hydrodynamic/thermo-diffusive instabilities in deflagration and detonation systems. Comparisons with standard third- and fifth-order WENO schemes are presented to illustrate the benefit of the DG scheme for application to detonation and multispecies flow/shock-interaction problems.

  6. Liquid-gas phase transitions and C K symmetry in quantum field theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiromichi; Ogilvie, Michael C.; Pangeni, Kamal

    A general field-theoretic framework for the treatment of liquid-gas phase transitions is developed. Starting from a fundamental four-dimensional field theory at nonzero temperature and density, an effective three-dimensional field theory is derived. The effective field theory has a sign problem at finite density. Although finite density explicitly breaks charge conjugation C, there remains a symmetry under C K, where K is complex conjugation. Here, we consider four models: relativistic fermions, nonrelativistic fermions, static fermions and classical particles. The interactions are via an attractive potential due to scalar field exchange and a repulsive potential due to massive vector exchange. The field-theoretic representation of the partition function is closely related to the equivalence of the sine-Gordon field theory with a classical gas. The thermodynamic behavior is extracted from C K-symmetric complex saddle points of the effective field theory at tree level. In the cases of nonrelativistic fermions and classical particles, we find complex saddle point solutions but no first-order transitions, and neither model has a ground state at tree level. The relativistic and static fermions show a liquid-gas transition at tree level in the effective field theory. The liquid-gas transition, when it occurs, manifests as a first-order line at low temperature and high density, terminated by a critical end point. The mass matrix controlling the behavior of correlation functions is obtained from fluctuations around the saddle points. Due to the C K symmetry of the models, the eigenvalues of the mass matrix are not always real but can be complex. This then leads to the existence of disorder lines, which mark the boundaries where the eigenvalues go from purely real to complex. The regions where the mass matrix eigenvalues are complex are associated with the critical line. In the case of static fermions, a powerful duality between particles and holes allows for the analytic determination of both the critical line and the disorder lines. Depending on the values of the parameters, either zero, one, or two disorder lines are found. Our numerical results for relativistic fermions give a very similar picture.

  7. Liquid-gas phase transitions and C K symmetry in quantum field theories

    DOE PAGES

    Nishimura, Hiromichi; Ogilvie, Michael C.; Pangeni, Kamal

    2017-04-04

    A general field-theoretic framework for the treatment of liquid-gas phase transitions is developed. Starting from a fundamental four-dimensional field theory at nonzero temperature and density, an effective three-dimensional field theory is derived. The effective field theory has a sign problem at finite density. Although finite density explicitly breaks charge conjugation C, there remains a symmetry under C K, where K is complex conjugation. Here, we consider four models: relativistic fermions, nonrelativistic fermions, static fermions and classical particles. The interactions are via an attractive potential due to scalar field exchange and a repulsive potential due to massive vector exchange. The field-theoretic representation of the partition function is closely related to the equivalence of the sine-Gordon field theory with a classical gas. The thermodynamic behavior is extracted from C K-symmetric complex saddle points of the effective field theory at tree level. In the cases of nonrelativistic fermions and classical particles, we find complex saddle point solutions but no first-order transitions, and neither model has a ground state at tree level. The relativistic and static fermions show a liquid-gas transition at tree level in the effective field theory. The liquid-gas transition, when it occurs, manifests as a first-order line at low temperature and high density, terminated by a critical end point. The mass matrix controlling the behavior of correlation functions is obtained from fluctuations around the saddle points. Due to the C K symmetry of the models, the eigenvalues of the mass matrix are not always real but can be complex. This then leads to the existence of disorder lines, which mark the boundaries where the eigenvalues go from purely real to complex. The regions where the mass matrix eigenvalues are complex are associated with the critical line. In the case of static fermions, a powerful duality between particles and holes allows for the analytic determination of both the critical line and the disorder lines. Depending on the values of the parameters, either zero, one, or two disorder lines are found. Our numerical results for relativistic fermions give a very similar picture.

  8. Prospects of molybdenum and rhenium octahedral cluster complexes as X-ray contrast agents.

    PubMed

    Krasilnikova, Anna A; Shestopalov, Michael A; Brylev, Konstantin A; Kirilova, Irina A; Khripko, Olga P; Zubareva, Kristina E; Khripko, Yuri I; Podorognaya, Valentina T; Shestopalova, Lidiya V; Fedorov, Vladimir E; Mironov, Yuri V

    2015-03-01

    The investigation of new X-ray contrast media for radiography has been an important field of science since the discovery of X-rays in 1895. Despite the wide diversity of available X-ray contrast media, toxicity, especially nephrotoxicity, is still a big problem to be solved. Octahedral metal-cluster complexes of the general formula [{M6Q8}L6] can be considered quite promising candidates for the role of new radiocontrast media due to the high local concentration of heavy elements, the high tunability of the ligand environment and low toxicity. To exemplify this, X-ray computed tomography experiments were carried out for the first time on some octahedral cluster complexes of molybdenum and rhenium. Based on the obtained data, it was proposed to investigate the toxicological properties of the cluster complex Na2H8[{Re6Se8}(P(CH2CH2CONH2)(CH2CH2COO)2)6]. The observed low cytotoxic and acute toxic effects, along with rapid renal excretion of the cluster complex, evidence its promise as an X-ray contrast medium for radiography. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Complexity of GPs' explanations about mental health problems: development, reliability, and validity of a measure

    PubMed Central

    Cape, John; Morris, Elena; Burd, Mary; Buszewicz, Marta

    2008-01-01

    Background How GPs understand mental health problems determines their treatment choices; however, measures describing GPs' thinking about such problems are not currently available. Aim To develop a measure of the complexity of GP explanations of common mental health problems and to pilot its reliability and validity. Design of study A qualitative development of the measure, followed by inter-rater reliability and validation pilot studies. Setting General practices in North London. Method Vignettes of simulated consultations with patients with mental health problems were videotaped, and an anchored measure of complexity of psychosocial explanation in response to these vignettes was developed. Six GPs, four psychologists, and two lay people viewed the vignettes. Their responses were rated for complexity, both using the anchored measure and independently by two experts in primary care mental health. In a second reliability and revalidation study, responses of 50 GPs to two vignettes were rated for complexity. The GPs also completed a questionnaire to determine their interest and training in mental health, and they completed the Depression Attitudes Questionnaire. Results Inter-rater reliability of the measure of complexity of explanation in both pilot studies was satisfactory (intraclass correlation coefficient = 0.78 and 0.72). The measure correlated with expert opinion as to what constitutes a complex explanation, and the responses of psychologists, GPs, and lay people differed in measured complexity. GPs with higher complexity scores had greater interest, more training in mental health, and more positive attitudes to depression. Conclusion Results suggest that the complexity of GPs' psychosocial explanations about common mental health problems can be reliably and validly assessed by this new standardised measure. PMID:18505616

  10. Clinical Problem Analysis (CPA): A Systematic Approach To Teaching Complex Medical Problem Solving.

    ERIC Educational Resources Information Center

    Custers, Eugene J. F. M.; Robbe, Peter F. De Vries; Stuyt, Paul M. J.

    2000-01-01

    Discusses clinical problem analysis (CPA) in medical education, an approach to solving complex clinical problems. Outlines the five step CPA model and examines the value of CPA's content-independent (methodical) approach. Argues that teaching students to use CPA will enable them to avoid common diagnostic reasoning errors and pitfalls. Compares…

  11. Predicting Development of Mathematical Word Problem Solving Across the Intermediate Grades

    PubMed Central

    Tolar, Tammy D.; Fuchs, Lynn; Cirino, Paul T.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.

    2012-01-01

    This study addressed predictors of the development of word problem solving (WPS) across the intermediate grades. At the beginning of 3rd grade, 4 cohorts of students (N = 261) were measured on computation, language, nonverbal reasoning skills, and attentive behavior, and were assessed 4 times from the beginning of 3rd through the end of 5th grade on 2 measures of WPS at low and high levels of complexity. Language skills were related to initial performance at both levels of complexity and did not predict growth at either level. Computational skills had an effect on initial performance in low- but not high-complexity problems and did not predict growth at either level of complexity. Attentive behavior did not predict initial performance but did predict growth in low-complexity problems, whereas it predicted initial performance but not growth for high-complexity problems. Nonverbal reasoning predicted initial performance and growth for low-complexity WPS, but only growth for high-complexity WPS. This evidence suggests that although mathematical structure is fixed, different cognitive resources may act as limiting factors in WPS development when the WPS context is varied. PMID:23325985

  12. Conceptual and procedural knowledge community college students use when solving a complex science problem

    NASA Astrophysics Data System (ADS)

    Steen-Eibensteiner, Janice Lee

    2006-07-01

    A strong science knowledge base and problem-solving skills have always been highly valued for employment in the science industry. Skills currently needed for employment include being able to problem solve (Overtoom, 2000). Academia also recognizes the need to effectively teach students to apply problem-solving skills in clinical settings. This thesis investigates how students solve complex science problems in an academic setting in order to inform the development of problem-solving skills for the workplace. Students' use of problem-solving skills, in the form of learned concepts and procedural knowledge, was studied as students completed a problem that might come up in real life. Students were taking a community college sophomore biology course, Human Anatomy & Physiology II. The problem topic was negative feedback inhibition of the thyroid and parathyroid glands. The research questions answered were: (1) How well do community college students use a complex of conceptual knowledge when solving a complex science problem? (2) What conceptual knowledge are community college students using correctly, incorrectly, or not using when solving a complex science problem? (3) What problem-solving procedural knowledge are community college students using successfully, unsuccessfully, or not using when solving a complex science problem? From the whole class, the high academic level participants performed at a mean of 72% correct on chapter test questions, a low average to fair grade of C-. The middle and low academic participants both failed (F) the test questions (37% and 30%, respectively); 29% (9/31) of the students showed only a fair performance while 71% (22/31) failed. In the subset sample of two students each from the high, middle, and low academic levels selected from the whole class, 35% (8/23) of the concepts were used effectively, 22% (5/23) marginally, and 43% (10/23) poorly. Only one concept was used incorrectly by 3 of 6 students and identified as a misconception. One of 21 (5%) problem-solving pathway characteristics was used effectively, 7 (33%) marginally, and 13 (62%) poorly. Very few (0 to 4) problem-solving pathway characteristics were used unsuccessfully; most were simply not used.

  13. A novel robust speed controller scheme for PMBLDC motor.

    PubMed

    Thirusakthimurugan, P; Dananjayan, P

    2007-10-01

    The design of speed and position controllers for permanent magnet brushless DC motor (PMBLDC) drives remains an open problem in the field of motor drives. Precise speed control of a PMBLDC motor is complex due to the nonlinear coupling between winding currents and rotor speed. In addition, the nonlinearity present in the developed torque due to magnetic saturation of the rotor further complicates this issue. This paper presents a novel control scheme for the conventional PMBLDC motor drive, which aims at improving robustness by complete decoupling of the design besides minimizing the mutual influence among the speed and current control loops. An interesting feature of this robust control scheme is its suitability for both static and dynamic aspects. The effectiveness of the proposed robust speed control scheme is verified through simulations.

  14. Meshfree and efficient modeling of swimming cells

    NASA Astrophysics Data System (ADS)

    Gallagher, Meurig T.; Smith, David J.

    2018-05-01

    Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
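
    The mesh-free core of such computations is the evaluation of velocities induced by regularized point forces. The sketch below implements the standard 3D regularized-Stokeslet kernel (Cortez-type blob) as a minimal illustration; the regularization parameter, viscosity and force values are assumptions, and the nearest-neighbor discretization of the paper is not reproduced here.

      # Mesh-free Stokes flow from regularized point forces, standard 3D kernel:
      #   8*pi*mu*u(x) = sum_k [ f_k (r^2 + 2 eps^2) + (f_k . r) r ] / (r^2 + eps^2)^{3/2}
      # with r = x - x_k. Epsilon, viscosity and forces are illustrative.
      import numpy as np

      def velocity(eval_pts, force_pts, forces, eps=0.05, mu=1.0):
          u = np.zeros_like(eval_pts)
          for xk, fk in zip(force_pts, forces):
              r = eval_pts - xk                      # (M, 3) separations
              r2 = np.sum(r * r, axis=1)
              denom = (r2 + eps ** 2) ** 1.5
              fdotr = r @ fk
              u += (fk * (r2 + 2 * eps ** 2)[:, None]
                    + fdotr[:, None] * r) / denom[:, None]
          return u / (8 * np.pi * mu)

      pts = np.array([[0.2, 0.0, 0.0], [0.0, 0.3, 0.0]])
      u = velocity(pts, force_pts=np.array([[0.0, 0.0, 0.0]]),
                   forces=np.array([[0.0, 0.0, 1.0]]))
      print(u)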

  15. [The first and foremost tasks of the medical service].

    PubMed

    Chizh, I M

    1997-07-01

    Now, in connection with the common situation in the Russian Federation, the reinforcement of the army and fleet with healthy personnel, and the scarcity and poor quality of the call-up quota, are the main problems of the Armed Forces at the state level. A uniform, comprehensive program of medico-social support of citizens during preparation for military service is necessary. The present situation is made difficult by many infectious diseases, so the role and place of the military medical service grows. In recent years the structure of the quota served by military doctors, and a number of other parameters, have changed greatly, which requires revision of some priorities. The problem of reinforcing the Armed Forces with medical service officers remains topical; its solution requires full-bodied admission to the military medical faculty, as well as admission of officers under contract and the calling up of reserve officers. The main lessons learned by the medical service during combat actions in the Republic of Chechnya are also formulated in the article.

  16. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
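
    A hypothetical data-structure sketch of the described hierarchy follows: cell nodes grouped under a Group Master, Group Masters coordinated by a Partition Master, and satellite nodes paired with earth backups so stored knowledge survives a node failure. All class and instance names are invented for illustration.

      # Hypothetical sketch of the VLAN hierarchy: cells under a Group Master
      # (GM), GMs under a Partition Master (PM), knowledge replicated to a
      # backup earth node to minimize information loss when a node fails.
      from dataclasses import dataclass, field

      @dataclass
      class Node:
          name: str
          is_satellite: bool = False
          backup: "Node | None" = None        # earth backup for satellite nodes
          knowledge: dict = field(default_factory=dict)

          def store(self, key, value):
              self.knowledge[key] = value
              if self.backup is not None:     # hierarchical replication
                  self.backup.knowledge[key] = value

      @dataclass
      class Group:
          master: Node                         # GM
          cells: list

      @dataclass
      class Partition:
          master: Node                         # PM coordinating group masters
          groups: list

      sat = Node("SAT-1", is_satellite=True, backup=Node("EARTH-1"))
      part = Partition(master=Node("PM-A"),
                       groups=[Group(master=sat, cells=[Node("c1")])])
      sat.store("track", {"target": 42})
      print(sat.backup.knowledge)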

  17. Common diagnoses and treatments in professional voice users.

    PubMed

    Franco, Ramon A; Andrus, Jennifer G

    2007-10-01

    Common problems among all patients seen by the laryngologist are also common among professional voice users. These include laryngopharyngeal reflux, muscle tension dysphonia, fibrovascular vocal fold lesions (eg, nodules and polyps), cysts, vocal fold scarring, changes in vocal fold mobility, and age-related changes. Microvascular lesions and their associated sequelae of vocal fold hemorrhage and laryngitis due to voice overuse are more common among professional voice users. Much more common among professional voice users is the negative impact that voice problems have on their ability to work, on their overall sense of well-being, and sometimes on their very sense of self. This article reviews the diagnosis and treatment options for these and other problems among professional voice users, describing the relevant roles of medical treatment, voice therapy, and surgery. The common scenario of multiple concomitant entities contributing to a symptom complex is underscored. Emphasis is placed on gaining insight into the "whole" patient so that individualized management plans can be developed. Videos of select diagnoses accompany this content online.

  18. Exploiting Lipid Permutation Symmetry to Compute Membrane Remodeling Free Energies.

    PubMed

    Bubnis, Greg; Risselada, Herre Jelger; Grubmüller, Helmut

    2016-10-28

    A complete physical description of membrane remodeling processes, such as fusion or fission, requires knowledge of the underlying free energy landscapes, particularly in barrier regions involving collective shape changes, topological transitions, and high curvature, where Canham-Helfrich (CH) continuum descriptions may fail. To calculate these free energies using atomistic simulations, one must address not only the sampling problem due to high free energy barriers, but also an orthogonal sampling problem of combinatorial complexity stemming from the permutation symmetry of identical lipids. Here, we solve the combinatorial problem with a permutation reduction scheme to map a structural ensemble into a compact, nondegenerate subregion of configuration space, thereby permitting straightforward free energy calculations via umbrella sampling. We applied this approach, using a coarse-grained lipid model, to test the CH description of bending and found sharp increases in the bending modulus for curvature radii below 10 nm. These deviations suggest that an anharmonic bending term may be required for CH models to give quantitative energetics of highly curved states.
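
    One plausible reading of such a permutation reduction is to relabel each snapshot by optimally matching the identical lipids to fixed reference slots, collapsing all N! permutation images of a configuration onto one representative. The sketch below does this with the Hungarian algorithm; it is an illustration of the idea under that assumption, not the authors' exact construction.

      # Illustrative permutation reduction for identical particles: relabel each
      # snapshot by optimal assignment to reference positions, so all permutation
      # images of a configuration map to the same canonical representative.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def canonicalize(positions, reference):
          # cost[i, j] = squared distance between lipid i and reference slot j
          cost = np.sum((positions[:, None, :] - reference[None, :, :]) ** 2, axis=-1)
          row, col = linear_sum_assignment(cost)
          order = np.empty(len(positions), dtype=int)
          order[col] = row                  # lipid assigned to slot j comes j-th
          return positions[order]

      rng = np.random.default_rng(1)
      ref = rng.random((5, 3))
      snap = ref + 0.01 * rng.standard_normal((5, 3))
      shuffled = snap[rng.permutation(5)]
      # Both orderings collapse to the same canonical configuration:
      print(np.allclose(canonicalize(snap, ref), canonicalize(shuffled, ref)))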

  19. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    NASA Astrophysics Data System (ADS)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic amplifier circuits and systems enables one to understand electrical processes more deeply, to grasp the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to the cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: a contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (the mental state of the student) that stirs the desire to solve the problem and to ask the question "why?"; a hypothesis is made; searches for solutions are carried out; an answer is sought. Due to the complexity of the architectural schemes, modern methods of computer analysis and synthesis are considered in the work. Examples are given of students engineering analog circuits with improved performance, within the framework of students' scientific and research work, based on standard software and software developed at the Department of Microelectronics MEPhI.

  20. Cognitive process modelling of controllers in en route air traffic control.

    PubMed

    Inoue, Satoru; Furuta, Kazuo; Nakata, Keiichi; Kanno, Taro; Aoyama, Hisae; Brown, Mark

    2012-01-01

    In recent years, various efforts have been made in air traffic control (ATC) to maintain traffic safety and efficiency in the face of increasing air traffic demands. ATC is a complex process that depends to a large degree on human capabilities, and so understanding how controllers carry out their tasks is an important issue in the design and development of ATC systems. In particular, the human factor is considered to be a serious problem in ATC safety and has been identified as a causal factor in both major and minor incidents. There is, therefore, a need to analyse the mechanisms by which errors occur due to complex factors and to develop systems that can deal with these errors. From the cognitive process perspective, it is essential that system developers have an understanding of the more complex working processes that involve the cooperative work of multiple controllers. Distributed cognition is a methodological framework for analysing cognitive processes that span multiple actors mediated by technology. In this research, we attempt to analyse and model interactions that take place in en route ATC systems based on distributed cognition. We examine the functional problems in an ATC system from a human factors perspective, and conclude by identifying certain measures by which to address these problems. The research focuses on the analysis of air traffic controllers' tasks for en route ATC and on modelling controllers' cognitive processes, through an experimental study designed to gain a better understanding of those processes. We conducted ethnographic observations and then analysed the data to develop a model of controllers' cognitive processes. This analysis revealed that strategic routines are applicable to decision making.

  1. Towards communication-efficient quantum oblivious key distribution

    NASA Astrophysics Data System (ADS)

    Panduranga Rao, M. V.; Jakobi, M.

    2013-01-01

    Symmetrically private information retrieval, a fundamental problem in the field of secure multiparty computation, is defined as follows: A database D of N bits held by Bob is queried by a user Alice who is interested in the bit Db in such a way that (1) Alice learns Db and only Db and (2) Bob does not learn anything about Alice's choice b. While solutions to this problem in the classical domain rely largely on unproven computational complexity theoretic assumptions, it is also known that perfect solutions that guarantee both database and user privacy are impossible in the quantum domain. Jakobi et al. [Phys. Rev. A 83, 022301 (2011)] proposed a protocol for oblivious transfer using well-known quantum key distribution (QKD) techniques to establish an oblivious key to solve this problem. Their solution provided a good degree of database and user privacy (using physical principles like the impossibility of perfectly distinguishing nonorthogonal quantum states and the impossibility of superluminal communication) while being loss-resistant and implementable with commercial QKD devices (due to the use of the Scarani-Acin-Ribordy-Gisin 2004 protocol). However, their quantum oblivious key distribution (QOKD) protocol requires a communication complexity of O(N log N). Since modern databases can be extremely large, it is important to reduce this communication as much as possible. In this paper, we first suggest a modification of their protocol wherein the number of qubits that need to be exchanged is reduced to O(N). A subsequent generalization reduces the quantum communication complexity even further, in such a way that only a few hundred qubits need to be transferred even for very large databases.

  2. Multi-period project portfolio selection under risk considerations and stochastic income

    NASA Astrophysics Data System (ADS)

    Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood

    2018-02-01

    This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on a genetic algorithm (GA), which is a prominent method for solving this type of problem. The GA is enhanced by a new solution representation and well-selected operators. It is also hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of some prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
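
    A skeleton of the hybrid GA-plus-local-search idea, reduced to a one-period toy portfolio for brevity, might look as follows. The model, operators and all numbers are illustrative assumptions rather than the paper's formulation.

      # Hybrid GA + local search (memetic) skeleton on a toy one-period
      # portfolio: pick projects maximizing profit under a budget.
      import numpy as np

      rng = np.random.default_rng(7)
      N = 12
      profit = rng.uniform(1, 10, N)
      cost = rng.uniform(1, 6, N)
      BUDGET = 20.0

      def value(x):                          # penalized objective
          over = max(0.0, cost @ x - BUDGET)
          return profit @ x - 100.0 * over

      def local_search(x):                   # first-improvement bit flips
          improved = True
          while improved:
              improved = False
              for i in range(N):
                  y = x.copy(); y[i] ^= 1
                  if value(y) > value(x):
                      x, improved = y, True
          return x

      pop = rng.integers(0, 2, (30, N))
      for _ in range(50):
          fit = np.array([value(x) for x in pop])
          parents = pop[np.argsort(fit)[-10:]]              # truncation selection
          kids = []
          for _ in range(len(pop)):
              a, b = parents[rng.integers(10, size=2)]
              mask = rng.integers(0, 2, N).astype(bool)     # uniform crossover
              child = np.where(mask, a, b)
              child[rng.integers(N)] ^= 1                   # point mutation
              kids.append(local_search(child))              # memetic step
          pop = np.array(kids)
      print(max(value(x) for x in pop))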

  3. [Evaluation and treatment of sleep problems in children diagnosed with attention deficit hyperactivity disorder: an update of the evidence].

    PubMed

    Chamorro, M; Lara, J P; Insa, I; Espadas, M; Alda-Diez, J A

    2017-05-01

    Attention deficit hyperactivity disorder (ADHD) affects approximately 5% of all children and adolescents, and these patients frequently suffer from sleep problems. The association between sleep disorders and ADHD, however, is multifaceted and complex. The aim of this work is to explore the relationship between sleep disorders and ADHD. Sleep problems in children with ADHD include altered sleep and specific disorders per se, or problems that may be due to comorbid psychiatric disorders or to the stimulants they receive as treatment for their ADHD. Today, an evaluation of the sleep conditions of children with ADHD is recommended before starting pharmacological treatment. The first step in managing their sleep problems is good sleep hygiene and cognitive-behavioural psychotherapy. Another option is to consider modifying the dosage and formulation of the stimulants. Atomoxetine and melatonin are therapeutic alternatives for children with ADHD and more severe sleep problems. Specific treatments exist for respiratory and movement disorders during sleep. It is important to evaluate sleep in children who present symptoms suggestive of ADHD, since problems during sleep can play a causal role in, or exacerbate, the clinical features of ADHD. Correct evaluation and treatment of sleep disorders increase the family's and the child's quality of life and can lessen the severity of the symptoms of ADHD.

  4. Complex Problem Solving in a Workplace Setting.

    ERIC Educational Resources Information Center

    Middleton, Howard

    2002-01-01

    Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)

  5. Managing resource capacity using hybrid simulation

    NASA Astrophysics Data System (ADS)

    Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat

    2014-12-01

    Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in a hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. An attempt to build a holistic model using discrete-event simulation (DES) alone would be too complex, while using system dynamics (SD) alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to get a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources in ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate and ward admission rate.
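
    A toy sketch of the DES/SD coupling idea follows: a discrete-event queue drives arrivals and discharges while a system-dynamics-style stock (bed occupancy) feeds back on the admission rate. All rates and rules are invented for illustration and bear no relation to the paper's calibrated model.

      # Hybrid sketch: event queue (DES) + occupancy stock with feedback (SD).
      import heapq, random

      random.seed(0)
      BEDS, occupancy, t_end = 50, 20, 24.0
      events = [(random.expovariate(4.0), "arrival")]      # (time, kind)

      while events:
          t, kind = heapq.heappop(events)
          if t > t_end:
              break
          if kind == "arrival":
              admit_rate = max(0.1, 1.0 - occupancy / BEDS)   # SD feedback loop
              if random.random() < admit_rate and occupancy < BEDS:
                  occupancy += 1
                  heapq.heappush(events, (t + random.expovariate(1 / 8.0), "discharge"))
              heapq.heappush(events, (t + random.expovariate(4.0), "arrival"))
          else:
              occupancy -= 1

      print("occupancy at end of horizon:", occupancy)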

  6. Experimental phase synchronization detection in non-phase coherent chaotic systems by using the discrete complex wavelet approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Maria Teodora; Follmann, Rosangela; Domingues, Margarete O.; Macau, Elbert E. N.; Kiss, István Z.

    2017-08-01

    Phase synchronization may emerge from mutually interacting non-linear oscillators, even under weak coupling, when phase differences are bounded, while amplitudes remain uncorrelated. However, the detection of this phenomenon can be a challenging problem to tackle. In this work, we apply the Discrete Complex Wavelet Approach (DCWA) for phase assignment, considering signals from coupled chaotic systems and experimental data. The DCWA is based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is a discrete transformation. Due to its multi-scale properties in the context of phase characterization, it is possible to obtain very good results from scalar time series, even with non-phase-coherent chaotic systems without state space reconstruction or pre-processing. The method correctly predicts the phase synchronization for a chemical experiment with three locally coupled, non-phase-coherent chaotic processes. The impact of different time-scales is demonstrated on the synchronization process that outlines the advantages of DCWA for analysis of experimental data.
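
    The synchronization criterion itself is easy to state in code: extract instantaneous phases and test whether the 1:1 phase difference remains bounded. The sketch below substitutes the analytic-signal (Hilbert) phase for the paper's dual-tree complex wavelet phase, purely for brevity; the boundedness test is the same, and the signals are synthetic.

      # Phase-synchronization check: bounded 1:1 phase difference. Uses the
      # Hilbert (analytic-signal) phase as a stand-in for the DT-CWT phase.
      import numpy as np
      from scipy.signal import hilbert

      t = np.linspace(0, 100, 20000)
      x = np.sin(2 * np.pi * 1.00 * t) \
          + 0.1 * np.random.default_rng(0).standard_normal(t.size)
      y = np.sin(2 * np.pi * 1.01 * t + 0.5)    # slightly detuned oscillator

      phi_x = np.unwrap(np.angle(hilbert(x)))
      phi_y = np.unwrap(np.angle(hilbert(y)))
      dphi = phi_x - phi_y
      print("phase-locked:", np.ptp(dphi) < 2 * np.pi)   # bounded difference?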

  7. Insight and analysis problem solving in microbes to machines.

    PubMed

    Clark, Kevin B

    2015-11-01

    A key feature for obtaining solutions to difficult problems, insight is oftentimes vaguely regarded as a special discontinuous intellectual process and/or a cognitive restructuring of problem representation or goal approach. However, this nearly century-old state of the art, devised by the Gestalt tradition to explain the non-analytical or non-trial-and-error, goal-seeking aptitude of primate mentality, tends to neglect problem-solving capabilities of lower animal phyla, Kingdoms other than Animalia, and advancing smart computational technologies built from biological, artificial, and composite media. Attempting to provide an inclusive, precise definition of insight, two major criteria of insight, discontinuous processing and problem restructuring, are here reframed using terminology and statistical mechanical properties of computational complexity classes. Discontinuous processing becomes abrupt state transitions in algorithmic/heuristic outcomes or in types of algorithms/heuristics executed by agents using classical and/or quantum computational models. And problem restructuring becomes combinatorial reorganization of resources, problem-type substitution, and/or exchange of computational models. With insight bounded by computational complexity, humans, ciliated protozoa, and complex technological networks, for example, show insight when restructuring time requirements, combinatorial complexity, and problem type to solve polynomial and nondeterministic polynomial decision problems. Similar effects are expected from other problem types, supporting the idea that insight might be an epiphenomenon of analytical problem solving and consequently a larger information processing framework. Thus, this computational complexity definition of insight improves the power, external and internal validity, and reliability of operational parameters with which to classify, investigate, and produce the phenomenon for computational agents ranging from microbes to man-made devices. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Analyzing the impact of social factors on homelessness: a Fuzzy Cognitive Map approach

    PubMed Central

    2013-01-01

    Background The forces which affect homelessness are complex and often interactive in nature. Social forces such as addictions, family breakdown, and mental illness are compounded by structural forces such as lack of available low-cost housing, poor economic conditions, and insufficient mental health services. Together these factors impact levels of homelessness through their dynamic relations. Historic models, which are static in nature, have been only marginally successful in capturing these relationships. Methods Fuzzy Logic (FL) and fuzzy cognitive maps (FCMs) are particularly suited to the modeling of complex social problems, such as homelessness, due to their inherent ability to model intricate, interactive systems often described in vague conceptual terms and then organize them into a specific, concrete form (i.e., the FCM) which can be readily understood by social scientists and others. Using FL we converted information, taken from recently published, peer-reviewed articles, for a select group of factors related to homelessness and then calculated the strength of influence (weights) for pairs of factors. We then used these weighted relationships in an FCM to test the effects of increasing or decreasing individual or groups of factors. Results of these trials were explainable according to current empirical knowledge related to homelessness. Results Prior graphic maps of homelessness have been of limited use due to the dynamic nature of the concepts related to homelessness. The FCM technique captures greater degrees of dynamism and complexity than static models, allowing relevant concepts to be manipulated and to interact. This, in turn, allows for a much more realistic picture of homelessness. Through network analysis of the FCM we determined that Education exerts the greatest force in the model and hence impacts the dynamism and complexity of a social problem such as homelessness. Conclusions The FCM built to model the complex social system of homelessness reasonably represented reality for the sample scenarios created. This confirmed that the model worked and that a search of peer-reviewed, academic literature is a reasonable foundation upon which to build the model. Further, it was determined that the direction and strengths of relationships between concepts included in this map are a reasonable approximation of their action in reality. However, dynamic models are not without their limitations and must be acknowledged as inherently exploratory. PMID:23971944
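
    The computational core of an FCM is a simple iteration: concept activations are pushed through the signed weight matrix and squashed by a sigmoid until a fixed point is reached, after which scenarios are compared against a baseline. The sketch below uses hypothetical concepts and weights, not the paper's homelessness map.

      # Fuzzy-cognitive-map iteration with hypothetical concepts and weights.
      import numpy as np

      concepts = ["education", "addiction", "housing_access", "homelessness"]
      W = np.array([                   # W[i, j]: influence of concept i on j
          [0.0, -0.4,  0.5, -0.3],
          [0.0,  0.0, -0.2,  0.6],
          [0.0,  0.0,  0.0, -0.7],
          [0.0,  0.3, -0.1,  0.0],
      ])

      def run_fcm(a0, W, steps=50, lam=1.0):
          a = np.array(a0, dtype=float)
          for _ in range(steps):
              a = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))   # sigmoid squashing
          return a

      baseline = run_fcm([0.5, 0.5, 0.5, 0.5], W)
      scenario = run_fcm([0.9, 0.5, 0.5, 0.5], W)            # boost education
      print(dict(zip(concepts, np.round(scenario - baseline, 3))))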

  9. Translating concepts of complexity to the field of ergonomics.

    PubMed

    Walker, Guy H; Stanton, Neville A; Salmon, Paul M; Jenkins, Daniel P; Rafferty, Laura

    2010-10-01

    Since 1958 more than 80 journal papers from the mainstream ergonomics literature have used either the words 'complex' or 'complexity' in their titles. Of those, more than 90% have been published in only the past 20 years. This observation communicates something interesting about the way in which contemporary ergonomics problems are being understood. The study of complexity itself derives from non-linear mathematics but many of its core concepts have found analogies in numerous non-mathematical domains. Set against this cross-disciplinary background, the current paper aims to provide a similar initial mapping to the field of ergonomics. In it, the ergonomics problem space, complexity metrics and powerful concepts such as emergence raise complexity to the status of an important contingency factor in achieving a match between ergonomics problems and ergonomics methods. The concept of relative predictive efficiency is used to illustrate how this match could be achieved in practice. What is clear overall is that a major source of, and solution to, complexity are the humans in systems. Understanding complexity on its own terms offers the potential to leverage disproportionate effects from ergonomics interventions and to tighten up the often loose usage of the term in the titles of ergonomics papers. STATEMENT OF RELEVANCE: This paper reviews and discusses concepts from the study of complexity and maps them to ergonomics problems and methods. It concludes that humans are a major source of and solution to complexity in systems and that complexity is a powerful contingency factor, which should be considered to ensure that ergonomics approaches match the true nature of ergonomics problems.

  10. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
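
    Point (b) can be illustrated with a deliberately tiny example: one time step's water-energy allocation posed as a linear program and solved with an off-the-shelf LP solver. The coefficients below are hypothetical and stand in for the fast network-programming sub-problems the abstract describes.

      # Toy per-time-step allocation LP: split a reservoir release between
      # hydropower and irrigation to maximize benefit. Coefficients hypothetical.
      from scipy.optimize import linprog

      # Decision vars: x = [release_to_turbines, release_to_irrigation] (hm^3)
      c = [-0.8, -0.5]                 # negated benefits (linprog minimizes)
      A_ub = [[1, 1]]                  # total release limited by available storage
      b_ub = [120.0]
      bounds = [(0, 90.0),             # turbine capacity
                (0, 60.0)]             # irrigation demand cap
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      print(res.x, -res.fun)           # optimal releases and total benefit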

  11. Thermodynamics of complex structures formed between single-stranded DNA oligomers and the KH domains of the far upstream element binding protein

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Kaushik; Sinha, Sudipta Kumar; Bandyopadhyay, Sanjoy, E-mail: sanjoy@chem.iitkgp.ernet.in

    The noncovalent interaction between protein and DNA is responsible for regulating the genetic activities in living organisms. The most critical issue in this problem is to understand the underlying driving force for the formation and stability of the complex. To address this issue, we have performed atomistic molecular dynamics simulations of two DNA binding K homology (KH) domains (KH3 and KH4) of the far upstream element binding protein (FBP) complexed with two single-stranded DNA (ss-DNA) oligomers in aqueous media. Attempts have been made to calculate the individual components of the net entropy change for the complexation process by adopting suitable statistical mechanical approaches. Our calculations reveal that translational, rotational, and configurational entropy changes of the protein and the DNA components have unfavourable contributions for this protein-DNA association process, and such entropy loss is compensated by the entropy gained due to the release of hydration layer water molecules. The free energy change corresponding to the association process has also been calculated using the Free Energy Perturbation (FEP) method. The free energy gain associated with the KH4–DNA complex formation has been found to be noticeably higher than that involving the formation of the KH3–DNA complex.
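
    For reference, the Free Energy Perturbation estimate invoked here follows the standard Zwanzig exponential-averaging relation (written in generic notation; the ensemble average is taken over configurations of the reference state A):

      \Delta G_{A \to B} = -\,k_{\mathrm{B}}T \,
          \ln \left\langle \exp\!\left[ -\frac{U_B(\mathbf{r}) - U_A(\mathbf{r})}{k_{\mathrm{B}}T} \right] \right\rangle_{A}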

  12. High field hyperpolarization-EXSY experiment for fast determination of dissociation rates in SABRE complexes

    NASA Astrophysics Data System (ADS)

    Hermkens, Niels K. J.; Feiters, Martin C.; Rutjes, Floris P. J. T.; Wijmenga, Sybren S.; Tessari, Marco

    2017-03-01

    SABRE (Signal Amplification By Reversible Exchange) is a nuclear spin hyperpolarization technique based on the reversible concurrent binding of small molecules and para-hydrogen (p-H2) to an iridium metal complex in solution. At low magnetic field, spontaneous conversion of p-H2 spin order to enhanced longitudinal magnetization of the nuclear spins of the other ligands occurs. Subsequent complex dissociation results in hyperpolarized substrate molecules in solution. The lifetime of this complex plays a crucial role in attained SABRE NMR signal enhancements. Depending on the ligands, vastly different dissociation rates have been previously measured using EXSY or selective inversion experiments. However, both these approaches are generally time-consuming due to the long recycle delays (up to 2 min) necessary to reach thermal equilibrium for the nuclear spins of interest. In the cases of dilute solutions, signal averaging aggravates the problem, further extending the experimental time. Here, a new approach is proposed based on coherent hyperpolarization transfer to substrate protons in asymmetric complexes at high magnetic field. We have previously shown that such asymmetric complexes are important for application of SABRE to dilute substrates. Our results demonstrate that a series of high sensitivity EXSY spectra can be collected in a short experimental time thanks to the NMR signal enhancement and much shorter recycle delay.

  13. Practical Implementation of Semi-Automated As-Built Bim Creation for Complex Indoor Environments

    NASA Astrophysics Data System (ADS)

    Yoon, S.; Jung, J.; Heo, J.

    2015-05-01

    In recent years, for efficient management and operation of existing buildings, the importance of as-built BIM has been emphasized in the AEC/FM domain. However, fully automated as-built BIM creation remains a tough issue since newly-constructed buildings are becoming more complex. To manage this problem, our research group has developed a semi-automated approach focusing on productive 3D as-built BIM creation for complex indoor environments. In order to test its feasibility for a variety of complex indoor environments, we applied the developed approach to model the 'Charlotte stairs' in Lotte World Mall, Korea. The approach includes four main phases: data acquisition, data pre-processing, geometric drawing, and as-built BIM creation. In the data acquisition phase, owing to the complex structure of the site, we moved the scanner location several times to obtain the entire point cloud of the test site. A data pre-processing phase entailing point-cloud registration, noise removal, and coordinate transformation followed. The 3D geometric drawing was created using RANSAC-based plane detection and boundary tracing methods. Finally, in order to create a semantically rich BIM, the geometric drawing was imported into commercial BIM software. The final as-built BIM confirmed the feasibility of the proposed approach in a complex indoor environment.
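
    The geometric-drawing step rests on RANSAC plane detection, which is compact enough to sketch: repeatedly fit a plane to three random points and keep the model with the most inliers. The tolerance, iteration count and synthetic point cloud below are illustrative assumptions.

      # RANSAC plane detection on a point cloud (numpy only).
      import numpy as np

      def ransac_plane(pts, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
          best_inliers, best_model = 0, None
          for _ in range(n_iter):
              p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
              n = np.cross(p1 - p0, p2 - p0)
              norm = np.linalg.norm(n)
              if norm < 1e-12:
                  continue                      # degenerate (collinear) sample
              n = n / norm
              dist = np.abs((pts - p0) @ n)     # point-to-plane distances
              inliers = np.count_nonzero(dist < tol)
              if inliers > best_inliers:
                  best_inliers, best_model = inliers, (n, p0)
          return best_model, best_inliers

      # Noisy horizontal plane plus uniform outliers
      rng = np.random.default_rng(1)
      plane = np.c_[rng.uniform(0, 5, (400, 2)), 0.01 * rng.standard_normal(400)]
      noise = rng.uniform(0, 5, (100, 3))
      model, count = ransac_plane(np.vstack([plane, noise]))
      print("normal:", np.round(model[0], 3), "inliers:", count)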

  14. Stability Analysis of Algebraic Reconstruction for Immersed Boundary Methods with Application in Flow and Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, M.; Battiato, I.

    2017-12-01

    Flow and reactive transport problems in porous media often involve complex geometries with stationary or evolving boundaries due to absorption and dissolution processes. Grid-based methods (e.g. finite volume, finite element, etc.) are a vital tool for studying these problems. Yet, implementing these methods requires one first to answer the question of what type of grid to use. Among the possible answers, Cartesian grids are one of the most attractive options, as they possess simple discretization stencils and are usually straightforward to generate at essentially no computational cost. The Immersed Boundary Method (IBM), a Cartesian-grid-based methodology, maintains most of the useful features of structured grids while exhibiting a high level of resilience in dealing with complex geometries. These features make it increasingly attractive for modeling transport in evolving porous media, as the cost of grid generation is greatly reduced. Yet, stability issues and severe time-step restrictions due to explicit-time implementations, combined with limited studies on the implementation of Neumann (constant flux) and linear and non-linear Robin (e.g. reaction) boundary conditions (BCs), have significantly limited the applicability of IBMs to transport in porous media. We have developed an implicit IBM capable of handling all types of BCs and addressed several numerical issues, including unconditional stability criteria, compactness, and reduction of spurious oscillations near the immersed boundary. We tested the method for several transport and flow scenarios, including dissolution processes in porous media, and demonstrate its capabilities. Successful validation against both experimental and numerical data has been carried out.
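
    A minimal sketch of the core idea, for a 1D diffusion problem with an immersed Dirichlet boundary folded into an implicit (backward Euler) solve; all names and parameters are illustrative and greatly simplified relative to the paper's formulation with Neumann and Robin BCs:

      import numpy as np

      def implicit_ib_dirichlet(n=51, L=1.0, xb=0.7321, ub=1.0, D=1.0, dt=1e-3, steps=200):
          """Backward-Euler heat equation on [0, xb] with an immersed boundary.

          The physical boundary xb falls between grid nodes; the first node
          past xb acts as a ghost node whose equation enforces u(xb) = ub by
          linear reconstruction, keeping the solve fully implicit (no
          explicit forcing term, hence no stiffness-driven dt restriction).
          """
          x = np.linspace(0.0, L, n)
          h = x[1] - x[0]
          ig = int(np.searchsorted(x, xb))   # ghost node: first node beyond xb
          theta = (xb - x[ig - 1]) / h       # fractional position of xb in cell
          r = D * dt / h**2
          u = np.zeros(n)
          for _ in range(steps):
              A = np.zeros((n, n))
              b = u.copy()
              A[0, 0] = 1.0; b[0] = 0.0      # regular Dirichlet at x = 0
              for i in range(1, ig):         # interior nodes, backward Euler
                  A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
              # ghost-node closure: (1 - theta) * u[ig-1] + theta * u[ig] = ub
              A[ig, ig - 1], A[ig, ig] = 1.0 - theta, theta
              b[ig] = ub
              for j in range(ig + 1, n):     # nodes outside the physical domain
                  A[j, j] = 1.0; b[j] = ub
              u = np.linalg.solve(A, b)
          return x, u

    Replacing the ghost-node row with a one-sided flux or reaction balance gives the Neumann and Robin variants; the dense solve here would be a sparse one in any real implementation.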

  15. Schizophrenia, narrative, and neurocognition: The utility of life-stories in understanding social problem-solving skills.

    PubMed

    Moe, Aubrey M; Breitborde, Nicholas J K; Bourassa, Kyle J; Gallagher, Colin J; Shakeel, Mohammed K; Docherty, Nancy M

    2018-06-01

    Schizophrenia researchers have focused on phenomenological aspects of the disorder to better understand its underlying nature. In particular, the development of personal narratives (that is, the complexity with which people form, organize, and articulate their "life stories") has recently been investigated in individuals with schizophrenia. However, less is known about how aspects of narrative relate to indicators of neurocognitive and social functioning. The objective of the present study was to investigate the association of the linguistic complexity of life-story narratives with measures of cognitive and social problem-solving abilities among people with schizophrenia. Thirty-two individuals with a diagnosis of schizophrenia completed a research battery consisting of clinical interviews, a life-story narrative, neurocognitive testing, and a measure assessing multiple aspects of social problem solving. Narrative interviews were assessed for linguistic complexity using computerized technology. The results indicate differential relationships of linguistic complexity and neurocognition to domains of social problem-solving skills. More specifically, although neurocognition predicted how well one could both describe and enact a solution to a social problem, linguistic complexity alone was associated with accurately recognizing that a social problem had occurred. In addition, linguistic complexity appears to be a cognitive factor that is discernible from other broader measures of neurocognition. Linguistic complexity may be more relevant in understanding earlier steps of the social problem-solving process than more traditional, broad measures of cognition, and thus is relevant in conceptualizing treatment targets. These findings also support the relevance of developing narrative-focused psychotherapies.

  16. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load carrying capacity of the parts made from them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them a hard problem to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately, reducing the cost to only the associated computational expenses and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must be the same as that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of a virtual tool are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.

  17. Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

    NASA Astrophysics Data System (ADS)

    Rimland, Jeff; Ballora, Mark

    2014-05-01

    The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
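
    A minimal sketch of the basic parameter-mapping step described above, rescaling data values onto a pitch range and rendering sine tones (illustrative only; in the proposed system a CEP stage would drive the mapping with extracted events, and vocal synthesis would replace the bare oscillator):

      import numpy as np

      def sonify(values, f_lo=220.0, f_hi=880.0, dur=0.25, sr=44_100):
          """Parameter-mapping sonification: one sine tone per data value.

          Values are rescaled linearly onto [f_lo, f_hi] Hz; the returned
          mono waveform can be written to a WAV file for listening.
          """
          v = np.asarray(values, dtype=float)
          span = np.ptp(v) or 1.0                  # guard against constant input
          freqs = f_lo + (v - v.min()) / span * (f_hi - f_lo)
          t = np.arange(int(dur * sr)) / sr
          return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])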

  18. Assessment of changing interdependencies between human electroencephalograms using nonlinear methods

    NASA Astrophysics Data System (ADS)

    Pereda, E.; Rial, R.; Gamundi, A.; González, J.

    2001-01-01

    We investigate the problems that might arise when two recently developed methods for detecting interdependencies between time series using state space embedding are applied to signals of different complexity. With this aim, these methods were used to assess the interdependencies between two electroencephalographic channels from 10 adult human subjects during different vigilance states. The significance and nature of the measured interdependencies were checked by comparing the results of the original data with those of different types of surrogates. We found that even with proper reconstructions of the dynamics of the time series, both methods may give wrong statistical evidence of decreasing interdependencies during deep sleep due to changes in the complexity of each individual channel. The main factor responsible for this result was the use of an insufficient number of neighbors in the calculations. Once this problem was surmounted, both methods showed the existence of a significant relationship between the channels which was mostly of linear type and increased from awake to slow wave sleep. We conclude that the significance of the qualitative results provided for both methods must be carefully tested before drawing any conclusion about the implications of such results.
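
    For reference, both interdependence measures operate on state-space reconstructions of each channel obtained by standard time-delay embedding; the neighborhoods whose sizes proved critical in this study live in these reconstructed spaces:

      % Time-delay reconstruction of a scalar EEG channel x(t) into R^m,
      % with embedding dimension m and delay \tau (Takens-style):
      \begin{equation}
        \mathbf{x}_t = \bigl(x(t),\, x(t+\tau),\, \ldots,\, x(t+(m-1)\tau)\bigr)
        \in \mathbb{R}^m
      \end{equation}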

  19. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In the field of research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies lack, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, although many multi-floor plants have been constructed over the last decade. Therefore, an algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve due to its complex nonlinear constraints, which makes conventional derivative-based MINLP solvers inapplicable. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used as an illustration to verify the efficacy of this approach.
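
    A minimal global-best PSO sketch of the kind of derivative-free optimizer employed (illustrative; the cost function, swarm size, and coefficients are placeholders, and constraint handling via penalty terms inside `cost` is an assumption):

      import numpy as np

      def pso(cost, dim, n=40, iters=200, w=0.7, c1=1.5, c2=1.5,
              bounds=(0.0, 100.0), rng=0):
          """Minimal particle swarm optimizer (global-best variant).

          `cost` maps a position vector (e.g. equipment x/y/floor
          coordinates) to pipeline + pumping cost, with constraint
          violations added as penalties; returns (best_position, best_cost).
          """
          rng = np.random.default_rng(rng)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n, dim))
          v = np.zeros((n, dim))
          pbest, pcost = x.copy(), np.array([cost(p) for p in x])
          g = pbest[pcost.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)            # keep particles in bounds
              c = np.array([cost(p) for p in x])
              better = c < pcost
              pbest[better], pcost[better] = x[better], c[better]
              g = pbest[pcost.argmin()].copy()
          return g, pcost.min()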

  20. Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective.

    PubMed

    Chen, Chen; Tong, Hanghang; Xie, Lei; Ying, Lei; He, Qing

    2017-08-01

    The increasingly connected world has catalyzed the fusion of networks from different domains, which facilitates the emergence of a new network model: multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is their cross-layer dependency, which describes the associations between nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications like system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm, Fascinate, that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node in a timely manner by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.

  1. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  2. APFiLoc: An Infrastructure-Free Indoor Localization Method Fusing Smartphone Inertial Sensors, Landmarks and Map Information

    PubMed Central

    Shang, Jianga; Gu, Fuqiang; Hu, Xuke; Kealy, Allison

    2015-01-01

    The utility and adoption of indoor localization applications have been limited due to the complex nature of the physical environment combined with an increasing requirement for more robust localization performance. Existing solutions to this problem are either too expensive or too dependent on infrastructure such as Wi-Fi access points. To address this problem, we propose APFiLoc—a low-cost, smartphone-based framework for indoor localization. The key idea behind this framework is to obtain landmarks within the environment and to use the augmented particle filter to fuse them with measurements from smartphone sensors and map information. A clustering method based on distance constraints is developed to detect organic landmarks in an unsupervised way, and the least-squares support vector machine is used to classify seed landmarks. A series of real-world experiments was conducted in complex environments including multiple floors, and the results show APFiLoc can achieve 80% accuracy (phone in the hand) and around 70% accuracy (phone in the pocket) with errors of less than 2 m, without the assistance of infrastructure like Wi-Fi access points. PMID:26516858
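
    To make the fusion idea concrete, one predict/update cycle of a 2D indoor-localization particle filter might look as follows (a sketch under stated assumptions: the motion model, noise levels, and landmark likelihood are invented for illustration and are not the APFiLoc formulation):

      import numpy as np

      def pf_step(particles, weights, step_len, heading, landmark=None,
                  sigma=0.5, rng=None):
          """One predict/update cycle of a toy 2D particle filter.

          Predict: dead-reckon each particle with the pedometer step and
          compass heading (plus noise). Update: if a landmark (e.g. a
          detected stair or corner) is observed, reweight particles by
          their distance to it and resample.
          """
          rng = np.random.default_rng(rng)
          n = len(particles)
          h = heading + rng.normal(0.0, 0.1, n)           # heading noise (rad)
          d = step_len * (1.0 + rng.normal(0.0, 0.05, n)) # step-length noise
          particles = particles + np.c_[d * np.cos(h), d * np.sin(h)]
          if landmark is not None:
              dist = np.linalg.norm(particles - landmark, axis=1)
              weights = weights * np.exp(-0.5 * (dist / sigma) ** 2)
              weights /= weights.sum()
              idx = rng.choice(n, n, p=weights)   # multinomial resampling
              particles, weights = particles[idx], np.full(n, 1.0 / n)
          return particles, weights

    Map information enters the same loop by zeroing the weights of particles that cross walls, which is what keeps the filter from drifting between rooms.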

  3. Reducing assembly complexity of microbial genomes with single-molecule sequencing.

    PubMed

    Koren, Sergey; Harhay, Gregory P; Smith, Timothy P L; Bono, James L; Harhay, Dayna M; Mcvey, Scott D; Radune, Diana; Bergman, Nicholas H; Phillippy, Adam M

    2013-01-01

    The short reads output by first- and second-generation DNA sequencing instruments cannot completely reconstruct microbial chromosomes. Therefore, most genomes have been left unfinished due to the significant resources required to manually close gaps in draft assemblies. Third-generation, single-molecule sequencing addresses this problem by greatly increasing sequencing read length, which simplifies the assembly problem. To measure the benefit of single-molecule sequencing on microbial genome assembly, we sequenced and assembled the genomes of six bacteria and analyzed the repeat complexity of 2,267 complete bacteria and archaea. Our results indicate that the majority of known bacterial and archaeal genomes can be assembled without gaps, at finished-grade quality, using a single PacBio RS sequencing library. These single-library assemblies are also more accurate than typical short-read assemblies and hybrid assemblies of short and long reads. Automated assembly of long, single-molecule sequencing data reduces the cost of microbial finishing to $1,000 for most genomes, and future advances in this technology are expected to drive the cost lower. This is expected to increase the number of completed genomes, improve the quality of microbial genome databases, and enable high-fidelity, population-scale studies of pan-genomes and chromosomal organization.

  4. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training

    PubMed Central

    Boonen, Anton J. H.; de Koning, Björn B.; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from an RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth-grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that ask for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers had a low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME. PMID:26925012

  5. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training.

    PubMed

    Boonen, Anton J H; de Koning, Björn B; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from an RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth-grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that ask for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers had a low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME.

  6. The Animal Model of Spinal Cord Injury as an Experimental Pain Model

    PubMed Central

    Nakae, Aya; Nakai, Kunihiro; Yano, Kenji; Hosokawa, Ko; Shibata, Masahiko; Mashimo, Takashi

    2011-01-01

    Pain, which remains largely unresolved, is one of the most crucial problems for spinal cord injury patients. Due to sensory problems as well as motor dysfunctions, spinal cord injury research has proven to be complex and difficult. Furthermore, many types of pain are associated with spinal cord injury, such as neuropathic, visceral, and musculoskeletal pain. Many animal models of spinal cord injury exist to emulate clinical situations, and these could help to determine common mechanisms of pathology. However, results can easily be misunderstood and falsely interpreted. Therefore, it is important to fully understand the symptoms of human spinal cord injury, as well as the various spinal cord injury models and the possible pathologies. The present paper summarizes results from animal models of spinal cord injury, as well as the most effective use of these models. PMID:21436995

  7. Prediction on carbon dioxide emissions based on fuzzy rules

    NASA Astrophysics Data System (ADS)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not able to provide sufficiently good forecasting performance due to problems with the non-linearity, uncertainty, and complexity of the data. Artificial intelligence techniques have been used successfully in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
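
    To make the FIS mechanics concrete, a toy two-rule Sugeno-style predictor is sketched below; the membership functions, rules, consequent values, and 0-100 input scaling are invented for illustration and are not the paper's model (an ANFIS would tune these from data):

      def tri(x, a, b, c):
          """Triangular membership function rising on [a, b], falling on [b, c]."""
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def predict_co2(energy_use, gdp_pc):
          """Toy two-input fuzzy predictor: fire both rules, average consequents."""
          lo_e, hi_e = tri(energy_use, 0, 20, 60), tri(energy_use, 40, 80, 100)
          lo_g, hi_g = tri(gdp_pc, 0, 20, 60), tri(gdp_pc, 40, 80, 100)
          w1 = min(lo_e, lo_g)   # Rule 1: low energy AND low GDP  -> low emissions
          w2 = max(hi_e, hi_g)   # Rule 2: high energy OR high GDP -> high emissions
          return (w1 * 2.0 + w2 * 9.0) / (w1 + w2 + 1e-12)  # weighted consequents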

  8. Heat transfer evaluation in a plasma core reactor

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Smith, T. M.; Stoenescu, M. L.

    1976-01-01

    Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. The two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients, and flow pattern. The results present temperature, heat flux, density, and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in Cartesian and spherical coordinates, are applicable to the most general heat transfer problems.

  9. Application of zinc isotope tracer technology in tracing soil heavy metal pollution

    NASA Astrophysics Data System (ADS)

    Norbu, Namkha; Wang, Shuguang; Xu, Yan; Yang, Jianqiang; Liu, Qiang

    2017-08-01

    In recent years, soil heavy metal pollution has become increasingly serious, especially zinc pollution. Due to the complexity of this problem, in order to prevent and treat soil pollution it is crucial to identify the pollution sources accurately and quickly and to control them. With the development of stable isotope tracer technology, it has become possible to determine the composition of zinc isotopes. Based on the theory of the zinc isotope tracer technique, and drawing on recent domestic and international literature on multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) testing of zinc isotopes, this paper summarizes the latest research results on pollution tracing with zinc isotopes and, in view of the deficiencies and open problems of previous research, offers outlooks on zinc isotope fractionation mechanisms, reference database establishment, and multi-source tracer solutions.
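
    For reference, zinc isotope compositions measured by MC-ICP-MS are conventionally reported in delta notation (in per mil) relative to a reference standard; the widely used JMC-Lyon solution is one example, though the specific standard is an assumption here:

      % Standard delta notation for the zinc isotope composition of a
      % sample relative to a reference standard (value in per mil):
      \begin{equation}
        \delta^{66}\mathrm{Zn} \;=\;
        \left(
          \frac{\left({}^{66}\mathrm{Zn}/{}^{64}\mathrm{Zn}\right)_{\mathrm{sample}}}
               {\left({}^{66}\mathrm{Zn}/{}^{64}\mathrm{Zn}\right)_{\mathrm{standard}}}
          \;-\; 1
        \right) \times 1000
      \end{equation}

    Source apportionment then rests on different pollution sources (smelting, electroplating, natural background) carrying distinguishable delta signatures.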

  10. Establishing a head and neck unit in a developing country.

    PubMed

    Aswani, J; Baidoo, K; Otiti, J

    2012-06-01

    Head and neck cancers pose an especially serious problem in developing countries due to late presentation requiring complex surgical intervention. These countries are faced with many challenges, ranging from insufficient health care staff to problems with peri-operative requirements, diagnostic facilities, chemoradiation services and research funding. These challenges can be addressed through the training of head and neck surgeons and support personnel, the improvement of cancer awareness in local communities, and the establishment of dedicated head and neck institutes which focus on the special needs of head and neck cancer patients. All these changes can best be achieved through collaborative efforts with external partners. The Karl Storz Fellowship in Advanced Head and Neck Cancer, enabling training at the University of Cape Town, South Africa, has served as a springboard towards establishing head and neck services in developing sub-Saharan African countries.

  11. Whole-genome alignment.

    PubMed

    Dewey, Colin N

    2012-01-01

    Whole-genome alignment (WGA) is the prediction of evolutionary relationships at the nucleotide level between two or more genomes. It combines aspects of both colinear sequence alignment and gene orthology prediction, and is typically more challenging to address than either of these tasks due to the size and complexity of whole genomes. Despite the difficulty of this problem, numerous methods have been developed for its solution because WGAs are valuable for genome-wide analyses, such as phylogenetic inference, genome annotation, and function prediction. In this chapter, we discuss the meaning and significance of WGA and present an overview of the methods that address it. We also examine the problem of evaluating whole-genome aligners and offer a set of methodological challenges that need to be tackled in order to make the most effective use of our rapidly growing databases of whole genomes.

  12. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is then determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
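
    A minimal sketch of the offline/online split described above (illustrative; the centers, kernel width, and the mapping from initial states to initial co-states are placeholders for the quantities obtained from solved TPBVPs):

      import numpy as np

      def rbfnn_fit(X, Y, centers, gamma=1.0):
          """Offline stage: fit RBF output weights by least squares.

          X are sampled initial states (N, d); Y the corresponding optimal
          initial co-states (N, m) from solved TPBVPs.
          """
          Phi = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
          W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
          return W

      def rbfnn_eval(x, centers, W, gamma=1.0):
          """Online stage: evaluate the trained network at a new initial state."""
          phi = np.exp(-gamma * ((x - centers) ** 2).sum(-1))
          return phi @ W

    The online evaluation is a handful of exponentials and one matrix-vector product, which is what makes the guidance cheap enough to run on board.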

  13. Level of Satisfaction of Older Persons with Their General Practitioner and Practice: Role of Complexity of Health Problems

    PubMed Central

    Poot, Antonius J.; den Elzen, Wendy P. J.; Blom, Jeanet W.; Gussekloo, Jacobijn

    2014-01-01

    Background Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. Methods and Findings This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% was dissatisfied with their GP care, 59% neutral and 37% satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4–2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1–1.8; p = 0.021). Conclusion In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions. PMID:24710557

  14. Level of satisfaction of older persons with their general practitioner and practice: role of complexity of health problems.

    PubMed

    Poot, Antonius J; den Elzen, Wendy P J; Blom, Jeanet W; Gussekloo, Jacobijn

    2014-01-01

    Satisfaction is widely used to evaluate and direct delivery of medical care; a complicated relationship exists between patient satisfaction, morbidity and age. This study investigates the relationships between complexity of health problems and level of patient satisfaction of older persons with their general practitioner (GP) and practice. This study is embedded in the ISCOPE (Integrated Systematic Care for Older Persons) study. Enlisted patients aged ≥75 years from 59 practices received a written questionnaire to screen for complex health problems (somatic, functional, psychological and social). For 2664 randomly chosen respondents (median age 82 years; 68% female) information was collected on level of satisfaction (satisfied, neutral, dissatisfied) with their GP and general practice, and demographic and clinical characteristics including complexity of health problems. Of all participants, 4% was dissatisfied with their GP care, 59% neutral and 37% satisfied. Between these three categories no differences were observed in age, gender, country of birth or education level. The percentage of participants dissatisfied with their GP care increased from 0.4% in those with 0 problem domains to 8% in those with 4 domains, i.e. having complex health problems (p<0.001). Per additional health domain with problems, the risk of being dissatisfied increased 1.7 times (95% CI 1.4-2.14; p<0.001). This was independent of age, gender, and demographic and clinical parameters (adjusted OR 1.4, 95% CI 1.1-1.8; p = 0.021). In older persons, dissatisfaction with general practice is strongly correlated with rising complexity of health problems, independent of age, demographic and clinical parameters. It remains unclear whether complexity of health problems is a patient characteristic influencing the perception of care, or whether the care is unable to handle the demands of these patients. Prospective studies are needed to investigate the causal associations between care organization, patient characteristics, indicators of quality, and patient perceptions.

  15. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points), in which case the size of the data set can be very large. Therefore, one of the most popular algorithms for approximation, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that takes into account anisotropy of the data set and avoids degeneracy of the regression model.
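
    The key computational trick for product-structured data is that, for a product kernel on a full grid, the Gram matrix factorizes as a Kronecker product, so the large linear solve reduces to two small eigendecompositions. A minimal sketch (assuming a fully observed grid and i.i.d. noise; the missing points and anisotropic regularization treated in the paper need extra machinery):

      import numpy as np

      def kron_gp_weights(K1, K2, Y, noise=1e-6):
          """Solve (K1 kron K2 + noise*I) vec(alpha) = vec(Y) cheaply.

          Uses the matrix form K1 @ alpha @ K2 + noise*alpha = Y: rotate Y
          into the joint eigenbasis, divide elementwise by the eigenvalue
          grid, rotate back. Y is the n1 x n2 array of grid targets.
          """
          w1, V1 = np.linalg.eigh(K1)          # n1 x n1 eigendecomposition
          w2, V2 = np.linalg.eigh(K2)          # n2 x n2 eigendecomposition
          S = np.outer(w1, w2) + noise         # eigenvalues of the big system
          A = V1.T @ Y @ V2                    # targets in the eigenbasis
          return V1 @ (A / S) @ V2.T           # alpha, shaped n1 x n2

    The cost is O(n1^3 + n2^3 + n1*n2*(n1 + n2)) instead of O((n1*n2)^3), which is what makes large factorial designs tractable.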

  16. Discovering Network Structure Beyond Communities

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2011-11-01

    To understand the formation, evolution, and function of complex systems, it is crucial to understand the internal organization of their interaction networks. Partly due to the impossibility of visualizing large complex networks, resolving network structure remains a challenging problem. Here we overcome this difficulty by combining the visual pattern recognition ability of humans with the high processing speed of computers to develop an exploratory method for discovering groups of nodes characterized by common network properties, including but not limited to communities of densely connected nodes. Without any prior information about the nature of the groups, the method simultaneously identifies the number of groups, the group assignment, and the properties that define these groups. The results of applying our method to real networks suggest the possibility that most group structures lurk undiscovered in the fast-growing inventory of social, biological, and technological networks of scientific interest.

  17. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
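
    A minimal sketch of GA-based input selection over bitmasks (illustrative; the fitness function, operators, and rates are placeholders rather than the paper's configuration):

      import numpy as np

      def ga_select_inputs(fitness, n_inputs, pop=30, gens=40, p_mut=0.02, rng=0):
          """Genetic algorithm over bitmasks choosing approximator inputs.

          `fitness(mask)` should train/score a small approximator on the
          masked parameter set and return a value to maximize (e.g.
          validation accuracy minus a size penalty). Binary tournament
          selection, one-point crossover, bit-flip mutation.
          """
          rng = np.random.default_rng(rng)
          P = rng.integers(0, 2, (pop, n_inputs))
          for _ in range(gens):
              f = np.array([fitness(m) for m in P])
              def pick():                          # binary tournament
                  i, j = rng.integers(0, pop, 2)
                  return P[i] if f[i] >= f[j] else P[j]
              children = []
              for _ in range(pop):
                  a, b = pick(), pick()
                  cut = rng.integers(1, n_inputs)  # one-point crossover
                  child = np.r_[a[:cut], b[cut:]]
                  flip = rng.random(n_inputs) < p_mut
                  child[flip] ^= 1                 # bit-flip mutation
                  children.append(child)
              P = np.array(children)
          f = np.array([fitness(m) for m in P])
          return P[f.argmax()]                     # best input mask found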

  18. New Single Piece Blast Hardware design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulrich, Andri; Steinzig, Michael Louis; Aragon, Daniel Adrian

    W, Q and PF engineers and machinists designed and fabricated, on the new Mazak i300, the first Single Piece Blast Hardware (unclassified design shown), reducing fabrication and inspection time by over 50%. The first DU single piece is completed and will be used for Hydro Test 3680. Past hydro tests used a two-piece assembly due to a lack of equipment capable of machining the complex saddle shape in a single piece. The i300 provides turning and milling 5-axis machining on one machine. The milling head on the i300 can machine past 90° relative to the spindle axis, which makes it possible to machine the complex saddle surface on a single piece. Going to a single piece eliminates tolerance problems, such as tilting and eccentricity, that typically occurred when assembling the two pieces together.

  19. Computational physics of the mind

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław

    1996-08-01

    In the nineteenth century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics allows one to simulate complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  20. Injury prevention in Australian Indigenous communities.

    PubMed

    Ivers, Rebecca; Clapham, Kathleen; Senserrick, Teresa; Lyford, Marilyn; Stevenson, Mark

    2008-12-01

    Injury prevention in Indigenous communities in Australia is a continuing national challenge, with Indigenous fatality rates due to injury three times higher than in the general population. Suicide and transport are the leading causes of injury mortality, and assault, transport and falls the primary causes of injury morbidity. Addressing the complex range of injury problems in disadvantaged Indigenous communities requires considerable work in building or enhancing the existing capacity of communities to address local safety issues. Poor data, lack of funding and the absence of targeted programs are some of the issues that impede injury prevention activities. Traditional approaches to injury prevention can be used to highlight key areas of need; however, adaptations are needed in keeping with Indigenous peoples' holistic approach to health, linked to land and community, in order to address the complex spiritual, emotional and social determinants of Indigenous injury.

  1. Implicit Multibody Penalty-Based Distributed Contact.

    PubMed

    Xu, Hongyi; Zhao, Yili; Barbic, Jernej

    2014-09-01

    The penalty method is a simple and popular approach to resolving contact in computer graphics and robotics. Penalty-based contact, however, suffers from stability problems due to the highly variable and unpredictable net stiffness, and this is particularly pronounced in simulations with time-varying, distributed, geometrically complex contact. We employ semi-implicit integration, exact analytical contact gradients, symbolic Gaussian elimination and an SVD solver to simulate stable penalty-based frictional contact with large, time-varying contact areas, involving many rigid objects and articulated rigid objects in complex conforming contact and self-contact. We also derive implicit proportional-derivative control forces for real-time control of articulated structures with loops. We present challenging contact scenarios such as screwing a hex bolt into a hole, bowls stacked in perfectly conforming configurations, and manipulating many objects using actively controlled articulated mechanisms in real time.
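
    For orientation, the basic penalty contact force has the textbook form below (the gains are illustrative); the net stiffness felt by the integrator is the sum of k over all active contact sites, which is why explicit integration becomes unstable for large conforming patches and why the paper integrates these forces semi-implicitly with exact gradients:

      % Textbook penalty contact force at one contact site, with penetration
      % depth d, unit contact normal n, normal relative velocity v_n,
      % stiffness k and damping c (illustrative gains):
      \begin{equation}
        \mathbf{f}_c =
        \begin{cases}
          \bigl(k\,d - c\,v_n\bigr)\,\mathbf{n}, & d > 0,\\[2pt]
          \mathbf{0}, & d \le 0.
        \end{cases}
      \end{equation}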

  2. Understanding the determinants of problem-solving behavior in a complex environment

    NASA Technical Reports Server (NTRS)

    Casner, Stephen A.

    1994-01-01

    It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how the complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates of the probable effects of each environmental feature.

  3. On Complex Water Conflicts: Role of Enabling Conditions for Pragmatic Resolution

    NASA Astrophysics Data System (ADS)

    Islam, S.; Choudhury, E.

    2016-12-01

    Many of our current and emerging water problems are interconnected and cross boundaries, domains, scales, and sectors. These boundary-crossing water problems are neither static nor linear; rather, they are often interconnected nonlinearly with other problems and feedbacks. The solution space for these complex problems - involving interdependent variables, processes, actors, and institutions - cannot be pre-stated. We need to recognize the disconnect among values, interests, and tools, as well as among problems, policies, and politics. Scientific and technological solutions are desired for efficiency and reliability, but they need to be politically feasible and actionable. Governing and managing complex water problems require difficult tradeoffs in exploring and sharing benefits and burdens through carefully crafted negotiation processes. The crafting of such negotiation processes, we argue, constitutes a pragmatic approach to negotiation - one that is based on the identification of enabling conditions, as opposed to mechanistic causal explanations, and rooted in contextual conditions to specify and ensure the principles of equity and sustainability. We will use two case studies to demonstrate the efficacy of the proposed principled pragmatic approach to address complex water problems.

  4. The 3D elliptic restricted three-body problem: periodic orbits which bifurcate from limiting restricted problems. Complex instability

    NASA Astrophysics Data System (ADS)

    Ollé, Mercè; Pacha, Joan R.

    1999-11-01

    In the present work we use certain isolated symmetric periodic orbits found in some limiting Restricted Three-Body Problems to obtain, by numerical continuation, families of symmetric periodic orbits of the more general Spatial Elliptic Restricted Three-Body Problem. In particular, the Planar Isosceles Restricted Three-Body Problem, the Sitnikov problem and the MacMillan problem are considered. A stability study of the periodic orbits of the families obtained, especially focused on detecting transitions to complex instability, is also made.
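
    For reference, the Sitnikov problem mentioned above has a particularly compact equation of motion in standard normalized units (total primary mass 1, relative-orbit semi-major axis 1, eccentricity e, eccentric anomaly E):

      % Motion of the massless body along the z-axis, perpendicular to the
      % plane of two equal primaries whose distance to the barycenter is
      % r(t); periodic orbits of this limit seed the continuation.
      \begin{equation}
        \ddot z = -\frac{z}{\bigl(z^2 + r(t)^2\bigr)^{3/2}},
        \qquad
        r(t) = \tfrac{1}{2}\bigl(1 - e\cos E(t)\bigr)
      \end{equation}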

  5. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  6. Thresholds of Knowledge Development in Complex Problem Solving: A Multiple-Case Study of Advanced Learners' Cognitive Processes

    ERIC Educational Resources Information Center

    Bogard, Treavor; Liu, Min; Chiang, Yueh-hui Vanessa

    2013-01-01

    This multiple-case study examined how advanced learners solved a complex problem, focusing on how their frequency and application of cognitive processes contributed to differences in performance outcomes, and developing a mental model of a problem. Fifteen graduate students with backgrounds related to the problem context participated in the study.…

  7. The Complex Route to Success: Complex Problem-Solving Skills in the Prediction of University Success

    ERIC Educational Resources Information Center

    Stadler, Matthias J.; Becker, Nicolas; Greiff, Samuel; Spinath, Frank M.

    2016-01-01

    Successful completion of a university degree is a complex matter. Based on considerations regarding the demands of acquiring a university degree, the aim of this paper was to investigate the utility of complex problem-solving (CPS) skills in the prediction of objective and subjective university success (SUS). The key finding of this study was that…

  8. Healthcare professionals' agreement on clinical relevance of drug-related problems among elderly patients.

    PubMed

    Bech, Christine Flagstad; Frederiksen, Tine; Villesen, Christine Tilsted; Højsted, Jette; Nielsen, Per Rotbøll; Kjeldsen, Lene Juel; Nørgaard, Lotte Stig; Christrup, Lona Louring

    2018-02-01

    Background Disagreement among healthcare professionals on the clinical relevance of drug-related problems can lead to suboptimal treatment and increased healthcare costs. Elderly patients with chronic non-cancer pain and comorbidity are at increased risk of drug-related problems compared to other patient groups due to complex medication regimes and transitions of care. Objective To investigate the agreement among healthcare professionals on their classification of the clinical relevance of drug-related problems in elderly patients with chronic non-cancer pain and comorbidity. Setting Multidisciplinary Pain Centre, Rigshospitalet, Copenhagen, Denmark. Method A pharmacist performed medication reviews on elderly patients with chronic non-cancer pain and comorbidity, identified their drug-related problems and classified these problems in accordance with an existing categorization system. A five-member clinical panel rated the drug-related problems' clinical relevance in accordance with a five-level rating scale, and their agreement was compared using Fleiss' κ. Main outcome measure Healthcare professionals' agreement on the clinical relevance of drug-related problems, using Fleiss' κ. Results Thirty patients were included in the study. A total of 162 drug-related problems were identified, of which 54% were of lower clinical relevance (level 0-2) and 46% of higher clinical relevance (level 3-4). Only slight agreement (κ = 0.12) was found between the panellists' classifications of clinical relevance using the five-level rating scale. Conclusion The clinical pharmacist identified drug-related problems of lower and higher clinical relevance. Poor overall agreement on the severity of the drug-related problems was found among the panellists.
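
    For reference, Fleiss' κ used here can be computed directly from the item-by-category count matrix; a minimal sketch (hypothetical helper, not the study's code):

      import numpy as np

      def fleiss_kappa(counts):
          """Fleiss' kappa from an (items x categories) count matrix.

          `counts[i, j]` = number of raters assigning item i to category j;
          every row must sum to the same number of raters (5 panellists in
          this study, with the 5 relevance levels as categories).
          """
          counts = np.asarray(counts, dtype=float)
          n = counts.sum(axis=1)[0]                   # raters per item
          p_j = counts.sum(axis=0) / counts.sum()     # category proportions
          P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
          P_bar, P_e = P_i.mean(), (p_j ** 2).sum()   # observed vs. chance
          return (P_bar - P_e) / (1 - P_e)

    Values near 0, as reported here (κ = 0.12), indicate agreement barely above what chance rating would produce.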

  9. Molecular counting by photobleaching in protein complexes with many subunits: best practices and application to the cellulose synthesis complex

    PubMed Central

    Chen, Yalei; Deffenbaugh, Nathan C.; Anderson, Charles T.; Hancock, William O.

    2014-01-01

    The constituents of large, multisubunit protein complexes dictate their functions in cells, but determining their precise molecular makeup in vivo is challenging. One example of such a complex is the cellulose synthesis complex (CSC), which in plants synthesizes cellulose, the most abundant biopolymer on Earth. In growing plant cells, CSCs exist in the plasma membrane as six-lobed rosettes that contain at least three different cellulose synthase (CESA) isoforms, but the number and stoichiometry of CESAs in each CSC are unknown. To begin to address this question, we performed quantitative photobleaching of GFP-tagged AtCESA3-containing particles in living Arabidopsis thaliana cells using variable-angle epifluorescence microscopy and developed a set of information-based step detection procedures to estimate the number of GFP molecules in each particle. The step detection algorithms account for changes in signal variance due to changing numbers of fluorophores, and the subsequent analysis avoids common problems associated with fitting multiple Gaussian functions to binned histogram data. The analysis indicates that at least 10 GFP-AtCESA3 molecules can exist in each particle. These procedures can be applied to photobleaching data for any protein complex with large numbers of fluorescently tagged subunits, providing a new analytical tool with which to probe complex composition and stoichiometry. PMID:25232006
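
    A deliberately crude sketch of variance-aware step detection on an intensity trace (illustrative only; the information-criterion detectors developed in the paper are considerably more careful about noise that changes with the number of active fluorophores):

      import numpy as np

      def count_steps(trace, window=20, z=4.0):
          """Count downward photobleaching steps in a 1D intensity trace.

          Flags positions where the drop between the medians of the
          trailing and leading windows exceeds z local noise sigmas
          (robustly estimated from the trailing window via the MAD).
          """
          trace = np.asarray(trace, dtype=float)
          steps, i = 0, window
          while i < len(trace) - window:
              left = trace[i - window:i]
              right = trace[i:i + window]
              sigma = np.median(np.abs(left - np.median(left))) * 1.4826 + 1e-12
              if np.median(left) - np.median(right) > z * sigma:  # bleach = drop
                  steps += 1
                  i += window        # skip past the detected step
              else:
                  i += 1
          return steps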

  10. Molecular counting by photobleaching in protein complexes with many subunits: best practices and application to the cellulose synthesis complex

    DOE PAGES

    Chen, Yalei; Deffenbaugh, Nathan C.; Anderson, Charles T.; ...

    2014-09-17

    The constituents of large, multisubunit protein complexes dictate their functions in cells, but determining their precise molecular makeup in vivo is challenging. One example of such a complex is the cellulose synthesis complex (CSC), which in plants synthesizes cellulose, the most abundant biopolymer on Earth. In growing plant cells, CSCs exist in the plasma membrane as six-lobed rosettes that contain at least three different cellulose synthase (CESA) isoforms, but the number and stoichiometry of CESAs in each CSC are unknown. To begin to address this question, we performed quantitative photobleaching of GFP-tagged AtCESA3-containing particles in living Arabidopsis thaliana cells using variable-angle epifluorescence microscopy and developed a set of information-based step detection procedures to estimate the number of GFP molecules in each particle. The step detection algorithms account for changes in signal variance due to changing numbers of fluorophores, and the subsequent analysis avoids common problems associated with fitting multiple Gaussian functions to binned histogram data. The analysis indicates that at least 10 GFP-AtCESA3 molecules can exist in each particle. In conclusion, these procedures can be applied to photobleaching data for any protein complex with large numbers of fluorescently tagged subunits, providing a new analytical tool with which to probe complex composition and stoichiometry.

  11. A complexity theory model in science education problem solving: random walks for working memory and mental capacity.

    PubMed

    Stamovlasis, Dimitrios; Tsaparlis, Georgios

    2003-07-01

    The present study examines the role of limited human channel capacity from a science education perspective. A model of science problem solving has previously been validated by applying concepts and tools of complexity theory (the working-memory random-walk method). The method correlated the subjects' rank-order achievement scores in organic-synthesis chemistry problems with the subjects' working memory capacity. In this work, we apply the same nonlinear approach to a different data set, taken from chemical-equilibrium problem solving. In contrast to the organic-synthesis problems, these problems are algorithmic, require numerical calculations, and have a complex logical structure. As a result, these problems cause deviations from the model and affect the pattern observed with the nonlinear method. In addition to Baddeley's working memory capacity, Pascual-Leone's mental (M-) capacity is examined by the same random-walk method. As the complexity of the problem increases, the fractal dimension of the working-memory random walk demonstrates a sudden drop, while the fractal dimension of the M-capacity random walk decreases in a linear fashion. A review of the basic features of the two capacities and their relation is included. The method and findings have consequences for problem solving not only in chemistry and science education, but also in other disciplines.

  12. Approaches to Managing Autoimmune Cytopenias in Novel Immunological Disorders with Genetic Underpinnings Like Autoimmune Lymphoproliferative Syndrome

    PubMed Central

    Rao, V. Koneti

    2015-01-01

    Autoimmune lymphoproliferative syndrome (ALPS) is a rare disorder of apoptosis. It is frequently caused by mutations in the FAS (TNFRSF6) gene. Unlike most of the self-limiting autoimmune cytopenias sporadically seen in childhood, multilineage cytopenias due to ALPS are often refractory, as their inherited genetic defect does not go away. Historically, more ALPS patients have died due to overwhelming sepsis following splenectomy to manage their chronic cytopenias than due to any other cause, including malignancies. Hence, current recommendations underscore the importance of avoiding splenectomy in ALPS through the long-term use of corticosteroid-sparing immunosuppressive agents like mycophenolate mofetil and sirolimus. Paradigms learnt from managing ALPS patients in recent years are highlighted here and can be extrapolated to manage refractory cytopenias in patients with as yet undetermined genetic bases for their ailments. It is also desirable to develop international registries for children with rare and complex immune problems associated with chronic multilineage cytopenias in order to elucidate their natural history and long-term comorbidities due to the disease and its treatments. PMID:26258116

  13. Can Undergraduates Be Transdisciplinary? Promoting Transdisciplinary Engagement through Global Health Problem-Based Learning

    ERIC Educational Resources Information Center

    Hay, M. Cameron

    2017-01-01

    Undergraduate student learning focuses on the development of disciplinary strength in majors and minors so that students gain depth in particular fields, foster individual expertise, and learn problem solving from disciplinary perspectives. However, the complexities of real-world problems do not respect disciplinary boundaries. Complex problems…

  14. The Process of Solving Complex Problems

    ERIC Educational Resources Information Center

    Fischer, Andreas; Greiff, Samuel; Funke, Joachim

    2012-01-01

    This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…

  15. Communities of Practice: A New Approach to Solving Complex Educational Problems

    ERIC Educational Resources Information Center

    Cashman, J.; Linehan, P.; Rosser, M.

    2007-01-01

    Communities of Practice offer state agency personnel a promising approach for engaging stakeholder groups in collaboratively solving complex and, often, persistent problems in special education. Communities of Practice can help state agency personnel drive strategy, solve problems, promote the spread of best practices, develop members'…

  16. 6 Essential Questions for Problem Solving

    ERIC Educational Resources Information Center

    Kress, Nancy Emerson

    2017-01-01

    One of the primary expectations that the author has for her students is for them to develop greater independence when solving complex and unique mathematical problems. The story of how the author supports her students as they gain confidence and independence with complex and unique problem-solving tasks, while honoring their expectations with…

  17. Students' and Teachers' Conceptual Metaphors for Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Yee, Sean P.

    2017-01-01

    Metaphors are regularly used by mathematics teachers to relate difficult or complex concepts in classrooms. A complex topic of concern in mathematics education, and most STEM-based education classes, is problem solving. This study identified how students and teachers contextualize mathematical problem solving through their choice of metaphors.…

  18. How Students Process Equations in Solving Quantitative Synthesis Problems? Role of Mathematical Complexity in Students' Mathematical Performance

    ERIC Educational Resources Information Center

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-01-01

    We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking,…

  19. Sensor placement in nuclear reactors based on the generalized empirical interpolation method

    NASA Astrophysics Data System (ADS)

    Argaud, J.-P.; Bouriquet, B.; de Caso, F.; Gong, H.; Maday, Y.; Mula, O.

    2018-06-01

    In this paper, we apply the so-called generalized empirical interpolation method (GEIM) to address the problem of sensor placement in nuclear reactors. This task is challenging due to an accumulation of difficulties, such as the complexity of the underlying physics and the constraints on the admissible sensor locations and their number. As a result, the placement still today relies strongly on the know-how and experience of engineers from different areas of expertise. The present methodology contributes to making this process more systematic and, in turn, to simplifying and accelerating the procedure.
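
    A minimal sketch of the greedy selection at the heart of (G)EIM, in the simplified case where sensors are pointwise evaluations (GEIM proper replaces point evaluations with general linear functionals). The snapshot matrix, whose columns are hypothetical simulated fields over candidate locations, is an assumed input.

    ```python
    import numpy as np

    def geim_pointwise(snapshots, n_sensors):
        """Greedy EIM point selection: snapshots is (n_locations, n_fields)."""
        points, basis = [], []
        # start from the snapshot with the largest norm
        u = snapshots[:, np.argmax(np.linalg.norm(snapshots, axis=0))]
        for _ in range(n_sensors):
            p = int(np.argmax(np.abs(u)))      # sensor where residual peaks
            points.append(p)
            basis.append(u / u[p])             # normalized interpolation basis
            B = np.column_stack(basis)
            # interpolate every snapshot at the chosen points, keep residuals
            C = np.linalg.solve(B[points, :], snapshots[points, :])
            R = snapshots - B @ C
            # next candidate: the worst-approximated field
            u = R[:, np.argmax(np.linalg.norm(R, axis=0))]
        return points
    ```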

  20. Experimental characterization of composites. [load test methods

    NASA Technical Reports Server (NTRS)

    Bert, C. W.

    1975-01-01

    The experimental characterization of composite materials is generally more complicated than that of ordinary homogeneous, isotropic materials because composites behave in a much more complex fashion, due to macroscopic anisotropy and lamination effects. Problems concerning the static uniaxial tension test for composite materials are considered, along with approaches for conducting static uniaxial compression tests and static uniaxial bending tests. Studies of static shear properties are discussed, taking into account in-plane shear, twisting shear, and thickness shear. Attention is given to static multiaxial loading, systematized experimental programs for the complete characterization of static properties, and dynamic properties.

  1. A Review of Computational Intelligence Methods for Eukaryotic Promoter Prediction.

    PubMed

    Singh, Shailendra; Kaur, Sukhbir; Goel, Neelam

    2015-01-01

    In past decades, the prediction of genes in DNA sequences has attracted the attention of many researchers, but due to the complex structure of DNA it is extremely difficult to locate gene positions correctly. A large number of regulatory regions present in DNA help in the transcription of a gene. The promoter is one such region, and finding its location is a challenging problem. Various computational methods for promoter prediction have been developed over the past few years. This paper reviews these promoter prediction methods. Several difficulties and pitfalls encountered by these methods are also detailed, along with future research directions.

  2. Metagenomic Assembly: Overview, Challenges and Applications

    PubMed Central

    Ghurye, Jay S.; Cepeda-Espinoza, Victoria; Pop, Mihai

    2016-01-01

    Advances in sequencing technologies have led to the increased use of high throughput sequencing in characterizing the microbial communities associated with our bodies and our environment. Critical to the analysis of the resulting data are sequence assembly algorithms able to reconstruct genes and organisms from complex mixtures. Metagenomic assembly involves new computational challenges due to the specific characteristics of the metagenomic data. In this survey, we focus on major algorithmic approaches for genome and metagenome assembly, and discuss the new challenges and opportunities afforded by this new field. We also review several applications of metagenome assembly in addressing interesting biological problems. PMID:27698619
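
    As a toy illustration of the assembly machinery such surveys cover, the following builds the de Bruijn graph that underlies most modern (meta)genome assemblers; real metagenomic assemblers layer coverage-aware error correction, repeat resolution, and scaffolding on top of this idea. The reads are invented and error-free.

    ```python
    from collections import defaultdict

    def de_bruijn_graph(reads, k):
        """Map each (k-1)-mer to the (k-1)-mers that follow it in some read."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]   # toy error-free reads
    for node, succs in de_bruijn_graph(reads, k=4).items():
        print(node, "->", sorted(succs))
    ```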

  3. Rotor dynamic considerations for large wind power generator systems

    NASA Technical Reports Server (NTRS)

    Ormiston, R. A.

    1973-01-01

    Successful large, reliable, low maintenance wind turbines must be designed with full consideration for minimizing dynamic response to aerodynamic, inertial, and gravitational forces. Much of existing helicopter rotor technology is applicable to this problem. Compared with helicopter rotors, large wind turbines are likely to be relatively less flexible with higher dimensionless natural frequencies. For very large wind turbines, low power output per unit weight and stresses due to gravitational forces are limiting factors. The need to reduce rotor complexity to a minimum favors the use of cantilevered (hingeless) rotor configurations where stresses are relieved by elastic deformations.

  4. A Cellular Automata Model of Infection Control on Medical Implants

    PubMed Central

    Prieto-Langarica, Alicia; Kojouharov, Hristo; Chen-Charpentier, Benito; Tang, Liping

    2011-01-01

    S. epidermidis infections on medically implanted devices are a common problem in modern medicine due to the abundance of the bacteria. Once inside the body, S. epidermidis gather in communities called biofilms and can become extremely hard to eradicate, causing the patient serious complications. We simulate the complex S. epidermidis-Neutrophils interactions in order to determine the optimum conditions for the immune system to be able to contain the infection and avoid implant rejection. Our cellular automata model can also be used as a tool for determining the optimal amount of antibiotics for combating biofilm formation on medical implants. PMID:23543851
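
    An illustrative, deliberately simplified cellular-automaton update in the spirit of such models; the states, neighborhood, and probabilities below are assumptions for the sketch, not the rules of the cited model.

    ```python
    import numpy as np

    EMPTY, BACTERIUM, NEUTROPHIL = 0, 1, 2

    def step(grid, p_divide=0.3, p_kill=0.8, rng=None):
        """One synchronous update on a toroidal 2D lattice (4-neighborhood)."""
        rng = rng or np.random.default_rng(0)
        new = grid.copy()
        n, m = grid.shape
        for i in range(n):
            for j in range(m):
                if grid[i, j] != BACTERIUM:
                    continue
                nbrs = [((i + di) % n, (j + dj) % m)
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                # an adjacent neutrophil kills the bacterium with prob p_kill
                if any(grid[a, b] == NEUTROPHIL for a, b in nbrs):
                    if rng.random() < p_kill:
                        new[i, j] = EMPTY
                        continue
                # otherwise the bacterium may divide into an empty neighbor
                empty = [(a, b) for a, b in nbrs if grid[a, b] == EMPTY]
                if empty and rng.random() < p_divide:
                    new[empty[rng.integers(len(empty))]] = BACTERIUM
        return new
    ```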

  5. Robust intelligent flight control for hypersonic vehicles. Ph.D. Thesis - Massachusetts Inst. of Technology

    NASA Technical Reports Server (NTRS)

    Chamitoff, Gregory Errol

    1992-01-01

    Intelligent optimization methods are applied to the problem of real-time flight control for a class of airbreathing hypersonic vehicles (AHSV). The extreme flight conditions that will be encountered by single-stage-to-orbit vehicles, such as the National Aerospace Plane, present a tremendous challenge to the entire spectrum of aerospace technologies. Flight control for these vehicles is particularly difficult due to the combination of nonlinear dynamics, complex constraints, and parametric uncertainty. An approach that utilizes all available a priori and in-flight information to perform robust, real time, short-term trajectory planning is presented.

  6. Kernel canonical-correlation Granger causality for multiple time series

    NASA Astrophysics Data System (ADS)

    Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu

    2011-04-01

    Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
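
    A bare-bones sketch of the linear, canonical-correlation variant of the idea (the paper's contribution is the kernelized, nonlinear extension): causality from x to y is scored by how much appending the past of x to the past of y raises the leading canonical correlation with the present of y. The lag count and the simple difference score are illustrative assumptions.

    ```python
    import numpy as np

    def lead_cca(A, B):
        """Leading canonical correlation between the columns of A and B."""
        Qa, _ = np.linalg.qr(A - A.mean(0))
        Qb, _ = np.linalg.qr(B - B.mean(0))
        return np.linalg.svd(Qa.T @ Qb, compute_uv=False)[0]

    def cca_granger(x, y, p=2):
        """Score of x -> y using p past lags; x and y are (T, dims) arrays."""
        T = len(y)
        past = lambda z: np.hstack([z[p - k - 1:T - k - 1] for k in range(p)])
        y_now, y_past, x_past = y[p:], past(y), past(x)
        full = lead_cca(y_now, np.hstack([y_past, x_past]))
        restricted = lead_cca(y_now, y_past)
        return full - restricted   # > 0 suggests the past of x helps predict y
    ```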

  7. Improving Quantum Gate Simulation using a GPU

    NASA Astrophysics Data System (ADS)

    Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.

    2008-11-01

    Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular for solving general-purpose algorithms. As the simulation of quantum computers results in a problem with exponential complexity, it is advisable to perform the computation in parallel, such as on the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the new CUDA software/hardware architecture developed recently by NVIDIA.
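
    One compact way to cross-check such a simulator: on a full state vector, the QFT coincides with a normalized inverse DFT, so the whole transform can be sketched in a few lines (GPU implementations like the one discussed instead map amplitudes to SIMD threads and apply Hadamard and controlled-phase gates stage by stage).

    ```python
    import numpy as np

    def qft(state):
        """QFT|x> = N^{-1/2} sum_y exp(+2*pi*i*x*y/N)|y>, i.e. a scaled ifft."""
        return np.fft.ifft(state) * np.sqrt(len(state))

    psi = np.zeros(8, dtype=complex)
    psi[3] = 1.0                                  # basis state |3> on 3 qubits
    out = qft(psi)
    print(np.allclose(np.linalg.norm(out), 1.0))  # unitarity preserved
    print(out)  # amplitudes exp(2*pi*i*3*y/8) / sqrt(8)
    ```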

  8. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
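
    For readers less familiar with the baseline being extended, here is a minimal standard EM iteration for a two-component one-dimensional Gaussian mixture; the model and notation are the textbook ones, not the partitioned variant proposed in the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_gmm2(x, iters=100):
        """Standard EM for a two-component 1-D Gaussian mixture."""
        mu = np.array([x.min(), x.max()], dtype=float)
        sd = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each point
            dens = pi * norm.pdf(x[:, None], mu, sd)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: responsibility-weighted maximum-likelihood updates
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return pi, mu, sd
    ```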

  9. Laser-induced dissociation processes of protonated glucose: dehydration reactions vs cross-ring dissociation

    NASA Astrophysics Data System (ADS)

    Dyakov, Y. A.; Kazaryan, M. A.; Golubkov, M. G.; Gubanova, D. P.; Bulychev, N. A.; Kazaryan, S. M.

    2018-04-01

    Studying the processes that occur in biological systems under irradiation is critically important for understanding how such systems work. One of the main problems motivating interest in photo-induced excitation and ionization of biomolecules is the need to identify them by various mass spectrometry (MS) methods. While the analysis of small molecules became a standard MS technique long ago, recognition of large molecules, especially carbohydrates, is still a difficult problem and requires sophisticated techniques and complicated computer analysis. Due to the large variety of substances in the samples, as well as the complexity of the processes occurring after excitation/ionization of the molecules, the recognition efficiency of MS techniques for carbohydrates is still not high enough. Additional theoretical and experimental analysis of ionization and dissociation processes in various kinds of polysaccharides, beginning with the simplest ones, is therefore necessary. In this work, we extend previous theoretical and experimental studies of saccharides and concentrate on protonated glucose. Particular attention is paid to the cross-ring dissociation and water-loss reactions, due to their importance for identifying various isomers of carbohydrate molecules (for example, distinguishing α- and β-glucose).

  10. Wind turbines using self-excited three-phase induction generators: an innovative solution for voltage-frequency control

    NASA Astrophysics Data System (ADS)

    Brudny, J. F.; Pusca, R.; Roisse, H.

    2008-08-01

    A considerable number of communities throughout the world, most of them isolated, need hybrid energy solutions either for rural electrification or for the reduction of diesel use. Despite several research projects and demonstrations conducted in recent years, wind-diesel technology remains complex and much too costly. Induction generators are the most robust and common choice for wind energy systems, but this option poses a serious challenge for electrical regulation. When a wind turbine is used in an off-grid configuration, either continuously or intermittently, precise and robust regulation is difficult to attain. The voltage parameter regulation option, as experienced at several remote sites (on islands and in the arctic, for example), is a safe, reliable and relatively simple technology, but it does not optimize the wave quality and creates instabilities. These difficulties stem from the fact that no theory is available to describe the system, owing to the inverse nature of the problem. In order to address and solve the problem of the unstable operation of this wind turbine generator, an innovative approach is described, based on a different induction generator single-phase equivalent circuit.

  11. The artificial object detection and current velocity measurement using SAR ocean surface images

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Strotov, Valery; Ershov, Maksim; Muraviev, Vadim; Feldman, Alexander; Smirnov, Sergey

    2017-10-01

    Because water covers wide areas of the Earth's surface, remote sensing is the most appropriate way of getting information about the ocean environment for vessel tracking, security purposes, ecological studies and other applications. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control and ocean currents mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation and object discrimination. The proposed approach to ocean currents mapping is based on the Doppler effect. The results of computer modeling on real SAR images are presented. Based on these results it is concluded that the proposed approaches can be used in maritime applications.
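
    As an illustration of the "detection of dark areas" step only, a simple local-statistics threshold followed by size filtering; the window size, threshold factor, and minimum area are hypothetical parameters, and the published pipeline's prescreening and classification stages are considerably more elaborate.

    ```python
    import numpy as np
    from scipy import ndimage

    def dark_area_candidates(img, win=9, k=1.5, min_pixels=50):
        """Flag pixels darker than (local mean - k * local std), drop small blobs."""
        img = np.asarray(img, dtype=float)
        mean = ndimage.uniform_filter(img, win)
        var = ndimage.uniform_filter(img ** 2, win) - mean ** 2
        mask = img < mean - k * np.sqrt(np.maximum(var, 0))
        labels, n = ndimage.label(mask)                 # connected components
        sizes = np.bincount(labels.ravel())[1:]         # component pixel counts
        keep = np.flatnonzero(sizes >= min_pixels) + 1  # labels large enough
        return np.isin(labels, keep)
    ```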

  12. The effects of monitoring environment on problem-solving performance.

    PubMed

    Laird, Brian K; Bailey, Charles D; Hester, Kim

    2018-01-01

    While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and the theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.

  13. Teaching Problem Solving; the Effect of Algorithmic and Heuristic Problem Solving Training in Relation to Task Complexity and Relevant Aptitudes.

    ERIC Educational Resources Information Center

    de Leeuw, L.

    Sixty-four fifth- and sixth-grade pupils were taught number series extrapolation by either an algorithmic, fully prescribed problem-solving method or a heuristic, less prescribed method. The training problems fell into two categories of complexity. There were 16 subjects in each cell of the 2 by 2 design used. Aptitude Treatment…

  14. Incremental validity of estimated cannabis grams as a predictor of problems and cannabinoid biomarkers: Evidence from a clinical trial.

    PubMed

    Tomko, Rachel L; Baker, Nathaniel L; McClure, Erin A; Sonne, Susan C; McRae-Clark, Aimee L; Sherman, Brian J; Gray, Kevin M

    2018-01-01

    Quantifying cannabis use is complex due to a lack of a standardized packaging system that contains specified amounts of constituents. A laboratory procedure has been developed for estimating physical quantity of cannabis use by utilizing a surrogate substance to represent cannabis, and weighing the amount of the surrogate to determine typical use in grams. This secondary analysis utilized data from a multi-site, randomized, controlled pharmacological trial for adult cannabis use disorder (N=300), sponsored by the National Drug Abuse Treatment Clinical Trials Network, to test the incremental validity of this procedure. In conjunction with the Timeline Followback, this physical scale-based procedure was used to determine whether average grams per cannabis administration predicted urine cannabinoid levels (11-nor-9-carboxy-Δ9-tetrahydrocannabinol) and problems due to use, after accounting for self-reported number of days used (in the past 30 days) and number of administrations per day in a 12-week clinical trial for cannabis use disorder. Likelihood ratio tests suggest that model fit was significantly improved when grams per administration and relevant interactions were included in the model predicting urine cannabinoid level (χ2 = 98.3; p < 0.05) and in the model predicting problems due to cannabis use (χ2 = 6.4; p < 0.05), relative to a model that contained only simpler measures of quantity and frequency. This study provides support for the use of a scale-based method for quantifying cannabis use in grams. This methodology may be useful when precise quantification is necessary (e.g., measuring reduction in use in a clinical trial). Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Evolution of plastic anisotropy for high-strain-rate computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.; Maudlin, P.J.

    1994-12-01

    A model for anisotropic material strength, and for changes in the anisotropy due to plastic strain, is described. This model has been developed for use in high-rate, explicit, Lagrangian multidimensional continuum-mechanics codes. The model handles anisotropies in single-phase materials, in particular the anisotropies due to crystallographic texture--preferred orientations of the single-crystal grains. Textural anisotropies, and the changes in these anisotropies, depend overwhelmingly on the crystal structure of the material and on the deformation history. The changes, particularly for complex deformations, are not amenable to simple analytical forms. To handle this problem, the material model described here includes a texture code, or micromechanical calculation, coupled to a continuum code. The texture code updates grain orientations as a function of tensor plastic strain, and calculates the yield strength in different directions. A yield function is fitted to these yield points. For each computational cell in the continuum simulation, the texture code tracks a particular set of grain orientations. The orientations will change due to the tensor strain history, and the yield function will change accordingly. Hence, the continuum code supplies a tensor strain to the texture code, and the texture code supplies an updated yield function to the continuum code. Since significant texture changes require relatively large strains--typically, a few percent or more--the texture code is not called very often, and the increase in computer time is not excessive. The model was implemented using a finite-element continuum code and a texture code specialized for hexagonal-close-packed crystal structures. The results for several uniaxial stress problems and an explosive-forming problem are shown.

  16. Optimized cross-resonance gate for coupled transmon systems

    NASA Astrophysics Data System (ADS)

    Kirchhoff, Susanna; Keßler, Torsten; Liebermann, Per J.; Assémat, Elie; Machnes, Shai; Motzoi, Felix; Wilhelm, Frank K.

    2018-04-01

    The cross-resonance (CR) gate is an entangling gate for fixed-frequency superconducting qubits. While being simple and extensible, it is comparatively slow, at 160 ns, and thus of limited fidelity due to ongoing incoherent processes. Using two different optimal control algorithms, we estimate the quantum speed limit for a controlled-NOT (CNOT) gate in this system to be 10 ns, indicating a potential for great improvements. We show that the ability to approach this limit depends strongly on the choice of ansatz used to describe optimized control pulses and limitations placed on their complexity. Using a piecewise-constant ansatz, with a single carrier and bandwidth constraints, we identify an experimentally feasible 70-ns pulse shape. Further, an ansatz based on the two dominant frequencies involved in the optimal control problem allows for an optimal solution more than twice as fast again, at under 30 ns, with smooth features and limited complexity. This is twice as fast as gate realizations using tunable-frequency, resonantly coupled qubits. Compared to current CR-gate implementations, we project our scheme will provide a sixfold speed-up and thus a sixfold reduction in fidelity loss due to incoherent effects.

  17. I feel you-monitoring environmental variables related to asthma in an integrated real-time frame.

    PubMed

    Berenguer, Anabela Gonçalves

    2015-09-11

    The study of asthma and other complex diseases has proven to be a "moving target" for researchers due to their complex aetiology, difficulty of definition, and immeasurable environmental effects. A large number of studies on the contribution of both genetic and environmental factors often yield contradictory results, in part due to the highly heterogeneous nature of asthma. Recent literature has focused on the epigenetic signatures of asthma caused by environmental factors, highlighting the importance of environment. However, unlike the genetic techniques, environmental assessment still lacks accuracy. A plausible solution to this problem would be an individual-based environmental exposure assessment relying on new technologies such as personal real-time environmental sensors. This could enable assessment of the whole environmental exposure (or exposome), matching in precision the genome that is emphasized in most studies so far. In addition, measuring the whole array of biological molecules responding to environmental action could help place the disease in context. The current perspective comprises a beyond-genetics, integrated vision of omics technology coupled with real-time environmental measures, aimed at enhancing our comprehension of the genesis of the disease.

  18. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    PubMed

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches such as clustering can be incorporated into the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and give more precise segmentation results than EM. We report that EM has higher recall values and lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
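
    A minimal intensity-only k-means segmentation sketch of the kind evaluated in such studies; as the authors note, real cell images additionally require spatial information, which this toy version ignores.

    ```python
    import numpy as np

    def kmeans_segment(img, k=2, iters=25, seed=0):
        """Cluster pixel intensities with plain k-means; returns a label image."""
        rng = np.random.default_rng(seed)
        x = img.reshape(-1).astype(float)
        centers = rng.choice(x, size=k, replace=False)   # random initial centers
        for _ in range(iters):
            # assign each pixel to its nearest center, then recompute centers
            labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return labels.reshape(img.shape)
    ```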

  19. Ulysses, one year after the launch

    NASA Astrophysics Data System (ADS)

    Petersen, H.

    1991-12-01

    Ulysses has now been underway for one year in a huge heliocentric orbit. A late change in some of the blankets' external material was required to prevent electrical charging due to contamination by nozzle outgassing products. Test results are shown covering various ranges of plasma parameters and sample temperatures. Even clean materials show a few volts of charging due to imperfections in the conductive film. The thermal environment in the Shuttle cargo bay proved to be slightly different from prelaunch predictions: less warm with doors closed, and less cold with doors opened. Temperatures experienced in orbit are nominal. A problem was caused by the complex effect of a Sun-induced thermal gradient in a sensitive boom on the dynamic stability of the spacecraft. A user interface program was an invaluable tool to ease computations with the mathematical models, eliminate error risk and provide configuration control.

  20. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  1. Identifying problems and generating recommendations for enhancing complex systems: applying the abstraction hierarchy framework as an analytical tool.

    PubMed

    Xu, Wei

    2007-12-01

    This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.

  2. Evaluation of coastal management: Study case in the province of Alicante, Spain.

    PubMed

    Palazón, A; Aragonés, L; López, I

    2016-12-01

    Beaches are complex systems that can be studied from different points of view and serve more missions than just protecting the coast. Managing them consists of assigning solutions to problems, and for this to be done correctly all the factors involved have to be taken into account. In order to understand how management is done on the coast of the province of Alicante, surveys were conducted among the managers of its 19 coastal municipalities, covering the 91 beaches. The aim of the surveys is to identify the problems and situations related to management, depending on factors such as the level of urbanization and the type of sediment. In addition, it was investigated whether this management aims to protect the coastline and maintain the flora and fauna, or is merely recreational, since the main economic activity is tourism. Beaches are conceived of as products offered to the user, a view that reflects their economic importance in an area where sun-and-beach tourism accounts for a special share of the GDP. Ignorance of the major problems in their physical functioning, and the inability to solve them, is due to a complex administrative system that regulates the coastal zone inefficiently. An integrated approach is essential for complete and effective management of the coastal environment. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    NASA Astrophysics Data System (ADS)

    Smith, David J.

    2018-04-01

    The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly uses basic linear algebra operations, with a full implementation available on GitHub. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
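
    For orientation, the velocity field of a collection of regularized stokeslets has a closed form under the commonly used blob of Cortez's method; the sketch below evaluates it directly. The regularization parameter and point sets are placeholders, and the paper's nearest-neighbour step would additionally interpolate the traction from a coarser force discretisation onto the quadrature points.

    ```python
    import numpy as np

    def reg_stokeslet_velocity(x_eval, x_force, forces, eps, mu=1.0):
        """Velocity at points x_eval due to regularized stokeslets at x_force.
        Uses the standard 3-D blob kernel S_ij = d_ij (r^2 + 2 eps^2) / R^3
        + r_i r_j / R^3, with R = sqrt(r^2 + eps^2), scaled by 1/(8 pi mu)."""
        u = np.zeros(x_eval.shape)
        for x0, f in zip(x_force, forces):
            r = x_eval - x0                        # (n, 3) separation vectors
            r2 = np.sum(r * r, axis=1)
            R3 = (r2 + eps ** 2) ** 1.5
            u += ((r2 + 2 * eps ** 2)[:, None] * f
                  + r * (r @ f)[:, None]) / R3[:, None]
        return u / (8 * np.pi * mu)
    ```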

  4. Printed circuit boards: a review on the perspective of sustainability.

    PubMed

    Canal Marques, André; Cabrera, José-María; Malfatti, Célia de Fraga

    2013-12-15

    Modern life increasingly requires newer equipment and more technology. In addition, the fact that society is highly consumerist makes the amount of discarded equipment, as well as the amount of waste from the manufacture of new products, increase at an alarming rate. Printed circuit boards, which form the basis of the electronics industry, are technological waste that is difficult to dispose of; their recycling is complex and expensive due to the diversity of materials and components and their difficult separation. Currently, printed circuit boards also have a fixing (soldering) problem: the industry is migrating from traditional Pb-Sn alloys to lead-free alloys without a definitive choice. This replacement is an attempt to minimize the problem of Pb toxicity, but it does not change the problem of separating the components for later reuse and/or recycling, and it leads to other problems, such as temperature rise, delamination, flaws, risks of mechanical shocks and the formation of "whiskers". This article presents a literature review on printed circuit boards, showing their structure and materials, the environmental problems related to the boards, some of the different alternatives for recycling, and some solutions being studied to reduce and/or replace the solder, in order to minimize its impact on the printed circuit boards. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Error, blame, and the law in health care--an antipodean perspective.

    PubMed

    Runciman, William B; Merry, Alan F; Tito, Fiona

    2003-06-17

    Patients are frequently harmed by problems arising from the health care process itself. Addressing these problems requires understanding the role of errors, violations, and system failures in their genesis. Problem-solving is inhibited by a tendency to blame those involved, often inappropriately. This has been aggravated by the need to attribute blame before compensation can be obtained through tort and the human failing of attributing blame simply because there has been a serious outcome. Blaming and punishing for errors that are made by well-intentioned people working in the health care system drives the problem of iatrogenic harm underground and alienates people who are best placed to prevent such problems from recurring. On the other hand, failure to assign blame when it is due is also undesirable and erodes trust in the medical profession. Understanding the distinction between blameworthy behavior and inevitable human errors and appreciating the systemic factors that underlie most failures in complex systems are essential for the response to a harmed patient to be informed, fair, and effective in improving safety. It is important to meet society's needs to blame and exact retribution when appropriate. However, this should not be a prerequisite for compensation, which should be appropriately structured, fair, timely, and, ideally, properly funded as an intrinsic part of health care and social security systems.

  6. Transformations of software design and code may lead to reduced errors

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1983-01-01

    The capability of programmers and non-programmers to specify problem solutions by developing example solutions (and, for the programmers, also by writing computer programs) was investigated; each method of specification was carried out at various levels of problem complexity. The level of difficulty of each problem was reflected by the number of steps needed by the user to develop a solution. Machine processing of the user inputs permitted inferences to be developed about the algorithms required to solve a particular problem. The interactive feedback of processing results led users to a more precise definition of the desired solution. Two participant groups (programmers and bookkeepers/accountants) worked with three levels of problem complexity and three levels of processor complexity. The experimental task required specification of a logic for the solution of a Navy task force problem.

  7. How can we improve problem solving in undergraduate biology? Applying lessons from 30 years of physics education research.

    PubMed

    Hoskinson, A-M; Caballero, M D; Knight, J K

    2013-06-01

    If students are to successfully grapple with authentic, complex biological problems as scientists and citizens, they need practice solving such problems during their undergraduate years. Physics education researchers have investigated student problem solving for the past three decades. Although physics and biology problems differ in structure and content, the instructional purposes align closely: explaining patterns and processes in the natural world and making predictions about physical and biological systems. In this paper, we discuss how research-supported approaches developed by physics education researchers can be adopted by biologists to enhance student problem-solving skills. First, we compare the problems that biology students are typically asked to solve with authentic, complex problems. We then describe the development of research-validated physics curricula emphasizing process skills in problem solving. We show that solving authentic, complex biology problems requires many of the same skills that practicing physicists and biologists use in representing problems, seeking relationships, making predictions, and verifying or checking solutions. We assert that acquiring these skills can help biology students become competent problem solvers. Finally, we propose how biology scholars can apply lessons from physics education in their classrooms and inspire new studies in biology education research.

  8. Solution of Inverse Kinematics for 6R Robot Manipulators With Offset Wrist Based on Geometric Algebra.

    PubMed

    Fu, Zhongtao; Yang, Wenyu; Yang, Zhen

    2013-08-01

    In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with offset wrist. The inverse kinematics problem of these manipulators is difficult to solve because their kinematics equations, stated mathematically, are complex, highly nonlinear, coupled, and admit multiple solutions. We therefore apply the theory of geometric algebra to model the kinematics of 6R robot manipulators simply and to generate closed-form kinematics equations, reformulate the problem as a generalized eigenvalue problem with a symbolic elimination technique, and then obtain 16 solutions. Finally, a spray painting robot, which conforms to this type of manipulator, is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that this method has a large advantage over the classical methods in geometric intuition, computation, and real-time performance, can be directly extended to all serial robot manipulators, and can be completely automated, providing a new tool for the analysis and application of general robot manipulators.

  9. Solving complex band structure problems with the FEAST eigenvalue algorithm

    NASA Astrophysics Data System (ADS)

    Laux, S. E.

    2012-08-01

    With straightforward extension, the FEAST eigenvalue algorithm [Polizzi, Phys. Rev. B 79, 115112 (2009)] is capable of solving the generalized eigenvalue problems representing traveling-wave problems—as exemplified by the complex band-structure problem—even though the matrices involved are complex, non-Hermitian, and singular, and hence outside the originally stated range of applicability of the algorithm. The obtained eigenvalues/eigenvectors, however, contain spurious solutions which must be detected and removed. The efficiency and parallel structure of the original algorithm are unaltered. The complex band structures of Si layers of varying thicknesses and InAs nanowires of varying radii are computed as test problems.
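
    FEAST itself locates eigenvalues by contour integration over a search region, but the post-processing described here (detecting and discarding spurious eigenpairs) can be illustrated with a dense solver and a residual test; the tolerance is an assumed parameter.

    ```python
    import numpy as np
    from scipy.linalg import eig

    def gen_eig_filtered(A, B, tol=1e-8):
        """Solve A x = lam B x and keep only eigenpairs with small residual."""
        w, V = eig(A, B)
        keep = []
        for i, lam in enumerate(w):
            if not np.isfinite(lam):
                continue                  # infinite eigenvalues of singular B
            x = V[:, i]
            res = np.linalg.norm(A @ x - lam * (B @ x))
            scale = np.linalg.norm(A @ x) + abs(lam) * np.linalg.norm(B @ x)
            if res <= tol * max(scale, 1e-300):
                keep.append(i)
        return w[keep], V[:, keep]
    ```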

  10. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in the early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence this study focuses on that application. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, as reported in the ensuing chapters. An important element of any detection system is the algorithm used for extracting signatures. Here, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, with results available in the literature. However, due to the relatively high level of complexity and randomness inherent in electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions with analytical techniques. Here, an attempt has been made to avoid such oversimplification when using the proposed swarm-based optimization algorithms.
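
    A minimal textbook global-best PSO, the kind of baseline against which modified variants such as the one proposed here are typically compared; the inertia and acceleration coefficients are conventional assumed values.

    ```python
    import numpy as np

    def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Global-best particle swarm minimization of f over a box."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            # velocity: inertia + pull toward personal and global bests
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()
        return gbest, pval.min()

    best, fbest = pso(lambda p: np.sum(p ** 2), dim=4)   # sphere test function
    print(best, fbest)
    ```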

  11. Integrated research in natural resources: the key role of problem framing.

    Treesearch

    Roger N. Clark; George H. Stankey

    2006-01-01

    Integrated research is about achieving holistic understanding of complex biophysical and social issues and problems. It is driven by the need to improve understanding about such systems and to improve resource management by using the results of integrated research processes. Traditional research tends to fragment complex problems, focusing more on the pieces...

  12. An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups

    ERIC Educational Resources Information Center

    Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi

    2012-01-01

    The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…

  13. Investigation of model-based physical design restrictions (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl

    2005-05-01

    As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.

  14. An efficient hybrid technique in RCS predictions of complex targets at high frequencies

    NASA Astrophysics Data System (ADS)

    Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe

    2017-09-01

    Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object, due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, considering the advantages and avoiding the disadvantages of each of them. The new combination yields a very efficient and accurate method to analyze the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained with the proposed approach for some simple cases against the rigorous Method of Moments (MoM). Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for the computation of the RCS of really large and complex targets at high frequencies.

  15. Tracking of Maneuvering Complex Extended Object with Coupled Motion Kinematics and Extension Dynamics Using Range Extent Measurements

    PubMed Central

    Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin

    2017-01-01

    The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629

  16. The solution of the optimization problem of small energy complexes using linear programming methods

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.; Director, L. B.

    2016-11-01

    Linear programming methods were used to solve the optimization problem of schemes and operation modes of distributed-generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes that include renewable energy installations (solar, wind), diesel generators and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
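
    A toy single-period instance of such a dispatch problem, solvable with any LP routine; all costs, capacities, and the demand figure are invented for illustration.

    ```python
    from scipy.optimize import linprog

    demand = 100.0                        # kW load to cover (assumed)
    cost = [0.30, 0.00, 0.05]             # $/kWh: diesel, solar, storage (assumed)
    A_eq = [[1.0, 1.0, 1.0]]              # power balance: sum of sources = demand
    b_eq = [demand]
    bounds = [(0, 80), (0, 40), (0, 30)]  # per-source capacity limits (assumed)

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x, res.fun)  # expect solar and storage first, diesel for the rest
    ```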

  17. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    The steadily increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the utilization of knowledge-based systems must be supported by data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.

  18. Towards a dynamical scheduler for ALMA: a science - software collaboration

    NASA Astrophysics Data System (ADS)

    Avarias, Jorge; Toledo, Ignacio; Espada, Daniel; Hibbard, John; Nyman, Lars-Ake; Hiriart, Rafael

    2016-07-01

    State-of-the-art astronomical facilities are costly to build and operate, so it is essential that these facilities be operated as efficiently as possible, maximizing the scientific output while minimizing overhead times. Over the latest decades the scheduling problem has drawn research attention because new facilities have demonstrated that it is unfeasible to schedule observations manually, due to the complexity of satisfying the astronomical and instrumental constraints and to the number of scientific proposals to be reviewed and evaluated in near real time. In addition, the dynamic nature of some constraints makes this problem even more difficult. The Atacama Large Millimeter/submillimeter Array (ALMA) is a major collaborative effort between European (ESO), North American (NRAO) and East Asian (NAOJ) partners, operating on the Chilean Chajnantor plateau at 5,000 meters of altitude. During normal operations at least two independent arrays are available, aiming to achieve different types of science. Since ALMA does not observe in the visible spectrum, observations are not limited to night time only, so a 24/7 operation with as little downtime as possible is expected once the full operations state is reached. However, during preliminary operations (early science) ALMA has been operated on tight schedules, using around half of the day time to conduct scientific observations. The purpose of this paper is to explain how observation scheduling and its optimization are done within ALMA, giving details about the problem's complexity and its similarities to and differences from traditional scheduling problems found in the literature. The paper delves into the current recommendation system implementation and the difficulties found on the road to its deployment in production.

  19. Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications

    PubMed Central

    Stoppe, Jannis; Drechsler, Rolf

    2015-01-01

    The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC. PMID:25946632

  20. Analyzing SystemC Designs: SystemC Analysis Approaches for Varying Applications.

    PubMed

    Stoppe, Jannis; Drechsler, Rolf

    2015-05-04

    The complexity of hardware designs is still increasing according to Moore's law. With embedded systems being more and more intertwined and working together not only with each other, but also with their environments as cyber physical systems (CPSs), more streamlined development workflows are employed to handle the increasing complexity during a system's design phase. SystemC is a C++ library for the design of hardware/software systems, enabling the designer to quickly prototype, e.g., a distributed CPS without having to decide about particular implementation details (such as whether to implement a feature in hardware or in software) early in the design process. Thereby, this approach reduces the initial implementation's complexity by offering an abstract layer with which to build a working prototype. However, as SystemC is based on C++, analyzing designs becomes a difficult task due to the complex language features that are available to the designer. Several fundamentally different approaches for analyzing SystemC designs have been suggested. This work illustrates several different SystemC analysis approaches, including their specific advantages and shortcomings, allowing designers to pick the right tools to assist them with a specific problem during the design of a system using SystemC.

  1. A study of the structure of the ν1(HF) absorption band of the CH3CN…HF complex

    NASA Astrophysics Data System (ADS)

    Gromova, E. I.; Glazachev, E. V.; Bulychev, V. P.; Koshevarnikov, A. M.; Tokhadze, K. G.

    2015-09-01

    The ν1(HF) absorption band shape of the CH3CN…HF complex is studied in the gas phase at a temperature of 293 K. The spectra of gas mixtures CH3CN/HF are recorded in the region of 4000-3400 cm-1 at a resolution from 0.1 to 0.005 cm-1 with a Bruker IFS-120 HR vacuum Fourier spectrometer in a cell 10 cm in length with wedge-shaped sapphire windows. The procedure used to separate the residual water absorption allows more than ten fine-structure bands to be recorded on the low-frequency wing of the ν1(HF) band. It is shown that the fine structure of the band is formed primarily due to hot transitions from excited states of the low-frequency ν7 librational vibration. Geometrical parameters of the equilibrium nuclear configuration, the binding energy, and the dipole moment of the complex are determined from a sufficiently accurate quantum-chemical calculation. The frequencies and intensities for a number of spectral transitions of this complex are obtained in the harmonic approximation and from variational solutions of anharmonic vibrational problems.

  2. Cynefin as Reference Framework to Facilitate Insight and Decision-Making in Complex Contexts of Biomedical Research.

    PubMed

    Kempermann, Gerd

    2017-01-01

    The Cynefin scheme is a concept of knowledge management, originally devised to support decision-making in management but more generally applicable to situations in which complexity challenges the quality of insight, prediction, and decision. Despite the fact that life itself, and especially the brain and its diseases, are complex to the extent that complexity could be considered their cardinal feature, complex problems in biomedicine are often treated as if they were actually no more than the complicated sum of solvable sub-problems. Because of the emergent properties of complex contexts, this is not correct. With a set of clear criteria, Cynefin helps to set apart complex problems from "simple/obvious," "complicated," "chaotic," and "disordered" contexts in order to avoid misinterpreting the relevant causality structures. The distinction comes with insight into which specific kind of knowledge is possible in each of these categories and what the consequences are for resulting decisions and actions. From students' theses through the publication and grant-writing process to research politics, misinterpretation of complexity can have problematic or even dangerous consequences, especially in clinical contexts. Conceptualizing problems within a straightforward reference language like Cynefin improves clarity and stringency within projects and facilitates communication and decision-making about them.

  3. Prader-Willi Syndrome: Obesity due to Genomic Imprinting

    PubMed Central

    Butler, Merlin G

    2011-01-01

    Prader-Willi syndrome (PWS) is a complex neurodevelopmental disorder due to errors in genomic imprinting, with loss of imprinted genes that are paternally expressed from the chromosome 15q11-q13 region. Approximately 70% of individuals with PWS have a de novo deletion of the paternally derived 15q11-q13 region, of which there are two subtypes (i.e., larger Type I or smaller Type II); about 25% of cases have maternal disomy 15 (both chromosome 15s from the mother); and the remaining subjects have either defects in the imprinting center controlling the activity of imprinted genes or other chromosome 15 rearrangements. PWS is characterized by a particular facial appearance, infantile hypotonia, a poor suck and feeding difficulties, hypogonadism and hypogenitalism in both sexes, short stature and small hands and feet due to growth hormone deficiency, mild learning and behavioral problems (e.g., skin picking, temper tantrums) and hyperphagia leading to early childhood obesity. Obesity is a significant health problem if uncontrolled. PWS is considered the most common known genetic cause of morbid obesity in children. The chromosome 15q11-q13 region contains approximately 100 genes and transcripts, of which about 10 are imprinted and paternally expressed. This region can be divided into four groups: 1) a proximal non-imprinted region; 2) a PWS paternal-only expressed region containing protein-coding and non-coding genes; 3) an Angelman syndrome region containing maternally expressed genes; and 4) a distal non-imprinted region. This review summarizes the current understanding of the genetic causes, the natural history and the clinical presentation of individuals with PWS. PMID:22043168

  4. Development a heuristic method to locate and allocate the medical centers to minimize the earthquake relief operation time.

    PubMed

    Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan

    2013-01-01

    Location-allocation is a combinatorial optimization problem classified as NP-hard (non-deterministic polynomial-time hard). Therefore, the solution of such a problem should be shifted from exact to heuristic or metaheuristic methods due to the complexity of the problem. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so developing a proper method will reduce the time of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.
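
    The nested structure (an outer GA choosing locations, an inner optimizer allocating demand) can be sketched as follows. For brevity, this toy sketch replaces the paper's inner GA with a greedy nearest-center assignment and ignores capacities; every function name and parameter is illustrative, not the authors' implementation.

    ```python
    import random

    def allocation_cost(centers, demands):
        # Inner problem, simplified: send each demand point to its nearest
        # open center (the paper uses a second GA with capacities here).
        return sum(min((dx - cx) ** 2 + (dy - cy) ** 2 for cx, cy in centers)
                   for dx, dy in demands)

    def outer_ga(sites, demands, k=3, pop=30, gens=100):
        # Outer GA: evolve which k candidate sites to open.
        def fitness(sol):
            return -allocation_cost([sites[i] for i in sol], demands)

        population = [tuple(random.sample(range(len(sites)), k))
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop // 2]
            children = []
            while len(survivors) + len(children) < pop:
                a, b = random.sample(survivors, 2)
                child = set(random.sample(a + b, k))       # crossover
                if random.random() < 0.1 and child:        # mutation
                    child.pop()
                while len(child) < k:                      # repair duplicates
                    child.add(random.randrange(len(sites)))
                children.append(tuple(sorted(child)))
            population = survivors + children
        return max(population, key=fitness)
    ```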

  5. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data demonstrating the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  6. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
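
    The exchange step underlying any Hamiltonian replica exchange scheme, including one-dimensional variants like the HREST-BP method described here, is a Metropolis criterion on the potential energies of neighboring replicas. Below is a generic sketch (not the authors' implementation); the solute scaling and biasing potentials of HREST-BP would be folded into the replica Hamiltonians U_i and U_j.

    ```python
    import math, random

    def try_swap(x_i, x_j, U_i, U_j, beta=1.0):
        """Metropolis criterion for exchanging configurations between two
        replicas with Hamiltonians U_i and U_j (callables returning the
        potential energy of a configuration), at a common inverse
        temperature beta."""
        delta = beta * (U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j))
        if delta <= 0 or random.random() < math.exp(-delta):
            return x_j, x_i   # accept: swap configurations
        return x_i, x_j       # reject: keep configurations
    ```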

  7. Multi-scale modeling of tsunami flows and tsunami-induced forces

    NASA Astrophysics Data System (ADS)

    Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.

    2016-12-01

    The modeling of tsunami flows and tsunami-induced forces in coastal communities, with the incorporation of the constructed environment, is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with its transient variation in the vertical direction, around constructed environments on land. On the other hand, a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds-Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with the experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately in general, except for the period near the initial impact, where the depth-averaged model can fail to capture the complexities of the flow. Forces predicted using direct integration of the predicted pressure on structural surfaces from the 3D model and using momentum flux from the 2D model with the constructed environment are compared, which indicates that force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum-flux calculations, revealing the importance of incorporating the constructed environment in force prediction models.

  8. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from over-parameterization due to the limited data available for calibration. This leads to parameter non-uniqueness and equifinality, which may introduce considerable uncertainty into the simulation results. Finding the appropriate model structure supported by the available data is still a big challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model built from the perception of the catchment; this model is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these different models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the appropriate model structure for simulating the discharge of this spring is much simpler than the actual aquifer structure and the process understanding derived from the fieldwork investigation. A simple linear reservoir with two different outlets is enough to simulate this spring's discharge; the detailed runoff processes in the catchment are not needed in the conceptual model. A more complex model would need additional data to avoid serious deterioration of model predictions.
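
    The winning structure, a single linear reservoir with two outlets (one of them threshold-activated), is simple enough to sketch directly. The parameter names and values below are illustrative placeholders, not the calibrated values from the study.

    ```python
    def simulate_spring(recharge, k1=0.05, k2=0.3, threshold=20.0,
                        s0=0.0, dt=1.0):
        """One linear reservoir with two outlets: a baseflow outlet (k1,
        always active) and an overflow outlet (k2) that only drains the
        storage above a threshold."""
        storage, discharge = s0, []
        for r in recharge:                       # e.g. daily recharge (mm)
            q = k1 * storage                     # lower outlet
            if storage > threshold:
                q += k2 * (storage - threshold)  # upper, threshold-driven outlet
            storage += (r - q) * dt
            discharge.append(q)
        return discharge
    ```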

  9. Improvement of DGGE analysis by modifications of PCR protocols for analysis of microbial community members with low abundance.

    PubMed

    Wang, Yong-Feng; Zhang, Fang-Qiu; Gu, Ji-Dong

    2014-06-01

    Denaturing gradient gel electrophoresis (DGGE) is a powerful technique to reveal the community structure and composition of microorganisms in complex natural environments and samples. However, positive and reproducible polymerase chain reaction (PCR) products, which are difficult to acquire for some samples due to the low abundance of the target microorganisms, significantly impair the effective application of DGGE. Thus, nested PCR is often introduced to generate positive PCR products from complex samples, but it introduces a problem of its own: the total number of thermocycles in nested PCR is usually unacceptably high, which skews the community structure through the generation of random or mismatched PCR products on the DGGE gel, as demonstrated in this study. Furthermore, nested PCR cannot resolve the uneven-representation issue with PCR products of complex samples with unequal richness of microbial populations. To solve these two problems of nested PCR, the general protocol was modified and improved in this study. First, a general PCR procedure was used to amplify the target genes with PCR primers without any guanine-cytosine (GC) clamp; the resulting PCR products were then purified and diluted to 0.01 μg ml(-1). Subsequently, the diluted PCR products were used as templates for re-amplification with the same PCR primers carrying the GC clamp for 17 cycles, and the products were finally subjected to DGGE analysis. We demonstrated that this is a much more reliable approach for obtaining a high-quality DGGE profile with high reproducibility. We therefore recommend the adoption of this improved protocol for analyzing microorganisms of low abundance in complex samples with the DGGE fingerprinting technique, to avoid biased results.

  10. General theory for multiple input-output perturbations in complex molecular systems. 1. Linear QSPR electronegativity models in physical, organic, and medicinal chemistry.

    PubMed

    González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

    In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach the solution of a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science: Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases we have an even larger list of related problems without known solutions: we need to know the change in all these properties after a perturbation of the initial boundary conditions, i.e., when we test large sets of similar but different compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no general-purpose QSPR perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intramolecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). It predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in the boundary conditions (solvent, temperature, temperature of addition, or time of reaction), and also accounts for changes in chemical structure (connectivity structure and/or chirality patterns in substrate, product, electrophile agent, organolithium, and ligand of the asymmetric catalyst). The second model classifies more than 150,000 cases with 85-100% Ac, Sn, and Sp. The data contain experimental shifts in up to 18 different pharmacological parameters determined in >3,000 assays of ADMET (Absorption, Distribution, Metabolism, Elimination, and Toxicity) properties and/or interactions between 31,723 drugs and 100 targets (metabolizing enzymes, drug transporters, or organisms). The third model classifies more than 260,000 cases of perturbations in the self-aggregation of drugs and surfactants to form micelles, with Ac, Sn, and Sp of 94-95%. It predicts changes in 8 physicochemical and/or thermodynamic output parameters of self-aggregation (critical micelle concentration, aggregation number, degree of ionization, surface area, enthalpy, free energy, entropy, heat capacity) due to perturbations, where the perturbations refer to changes in initial temperature, solvent, salt, salt concentration, and/or structure of the anion or cation of more than 150 different drugs and surfactants. QSPR-Perturbation theory models may be useful for the multi-objective optimization of organic synthesis, physicochemical properties, biological activity, metabolism, and distribution profiles towards the design of new drugs, surfactants, asymmetric ligands for catalysts, and other materials.

  11. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  12. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
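
    The paper's estimator identifies the unknown mass properties online with a least-squares technique. A generic recursive least-squares (RLS) update of the kind such a self-tuning controller relies on is sketched below; the regressor contents and the forgetting factor are assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.99):
        """One recursive least-squares step. theta: (n,1) parameter estimate
        (e.g. unknown mass properties); phi: (n,1) regressor assembled from
        gyro and camera data; y: scalar measurement; lam: forgetting factor."""
        K = P @ phi / (lam + (phi.T @ P @ phi).item())    # gain vector
        theta = theta + K * (y - (phi.T @ theta).item())  # innovation update
        P = (P - K @ phi.T @ P) / lam                     # covariance update
        return theta, P
    ```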

  13. A computational model of the human hand 93-ERI-053

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objective of the Computational Hand Modeling project was to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never been attempted successfully before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.

  14. Shielding and activation calculations around the reactor core for the MYRRHA ADS design

    NASA Astrophysics Data System (ADS)

    Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.

    2017-09-01

    In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been carried out to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing the complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields due to the system in operation and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database of all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.

  15. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    NASA Astrophysics Data System (ADS)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

    This paper presents the application of a newly developed free-surface flow model to the practical yet challenging problem of overflow at weirs. Since the model takes advantage of the strengths of both the level set and volume-of-fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in the literature. A very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  16. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  17. Dynamics of inductors for heating of the metal under deformation

    NASA Astrophysics Data System (ADS)

    Zimin, L. S.; Yeghiazaryan, A. S.; Protsenko, A. N.

    2018-01-01

    Current issues in creating powerful induction-heating systems for hot sheet rolling in mechanical engineering and metallurgy are discussed. Electrodynamic and vibroacoustic problems occurring due to the induction heating of objects with complex shapes, particularly the heating of slabs prior to rolling, are analysed. A numerical mathematical model using the method of related contours and the principle of virtual displacements is recommended for the electrodynamic calculations. For the numerical solution of the vibrational problem, it is reasonable to use the finite element method (FEM). In general, for calculating the distributed forces, the Biot-Savart-Laplace law was used, providing the determination of the current density of the skin layer in the slab. The form of the optimal inductor design, based on maximum stiffness, was synthesized while studying the vibrodynamic model of the "inductor-metal" system, which provided an allowable sound level meeting all established sanitary standards.

  18. Improved method for predicting protein fold patterns with ensemble classifiers.

    PubMed

    Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C

    2012-01-27

    Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
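
    As a rough illustration of the ensemble idea (not the authors' code or their exact base learners), a soft-voting ensemble over a protein feature matrix might be assembled in scikit-learn as below; X and y are stand-ins for the 188- or 20-dimensional feature vectors and the fold-class labels.

    ```python
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def build_ensemble():
        # Three dissimilar base classifiers combined by soft voting.
        return VotingClassifier(
            estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                        ("svm", SVC(probability=True)),
                        ("knn", KNeighborsClassifier(n_neighbors=5))],
            voting="soft")

    # Five-fold cross-validation, matching the evaluation protocol above:
    # scores = cross_val_score(build_ensemble(), X, y, cv=5)
    ```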

  19. Immunodeficiency and laser magnetic therapy in urology

    NASA Astrophysics Data System (ADS)

    Maati, Moufagued; Rozanov, Vladimir V.; Avdoshin, V. P.

    1996-11-01

    The importance of the immunodeficiency problem has grown recently, not only due to the appearance of AIDS but also, to a great extent, as a result of the development and active practical use of methods for investigating immunological parameters. All major pharmaceutical firms are organizing the creation of drugs influencing different phases of immunity but, unfortunately, the problem of their adverse effects and associated complications remains an obstacle to this day. A great number of investigations proving the good effect of laser-magnetic therapy on the immune system have been carried out. In particular, blood counts and immunologic tests change after intravenous laser irradiation of blood. Intravenous laser irradiation of blood results in an increase in lymphocytes, T-immunostimulation, stabilization of T-lymphocyte subpopulations, an increase in T-lymphocyte helper activity and a decrease in suppressor activity. Under this laser action the number of circulating immune complexes is decreased, while blood serum bactericidal activity and lysozyme levels are increased.

  20. Firearms in Frail Hands: An ADL or A Public Health Crisis!

    PubMed

    Patel, Dupal; Syed, Quratulain; Messinger-Rapport, Barbara J; Rader, Erin

    2015-06-01

    The incidence of neurocognitive disorders, which may impair the ability of older adults to perform activities of daily living (ADLs), rises with age. Depressive symptoms are also common in older adults and may affect ADLs. Safe storage and utilization of firearms are complex ADLs, which require intact judgment, executive function, and visuospatial ability, and may be affected by cognitive impairment. Depression or cognitive impairment may cause paranoia, delusions, disinhibition, apathy, or aggression and thereby limit the ability to safely utilize firearms. These problems may be superimposed upon impaired mobility, arthritis, visual impairment, or poor balance. Inadequate attention to personal protection may also cause hearing impairment and accidents. In this article, we review the data on prevalence of firearms access among older adults; safety concerns due to age-related conditions; barriers to addressing this problem; indications prompting screening for firearms access; and resources available to patients, caregivers, and health care providers. © The Author(s) 2014.

  1. Middle Eastern masculinities in the age of new reproductive technologies: male infertility and stigma in Egypt and Lebanon.

    PubMed

    Inhorn, Marcia C

    2004-06-01

    Worldwide, male infertility contributes to more than half of all cases of childlessness; yet, it is a reproductive health problem that is poorly studied and understood. This article examines the problem of male infertility in two Middle Eastern locales, Cairo, Egypt, and Beirut, Lebanon, where men may be at increased risk of male infertility because of environmental and behavioral factors. It is argued that male infertility may be particularly problematic for Middle Eastern men in their pronatalist societies; there, both virility and fertility are typically tied to manhood. Thus, male infertility is a potentially emasculating condition, surrounded by secrecy and stigma. Furthermore, the new reproductive technology called intracytoplasmic sperm injection (ICSI), designed specifically to overcome male infertility, may paradoxically create additional layers of stigma and secrecy, due to the complex moral and marital dilemmas associated with Islamic restrictions on third-party donation of gametes.

  2. Calculation of stress intensity factors in an isotropic multicracked plate: Part 2: Symbolic/numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.

    1992-01-01

    Analytical derivations of stress intensity factors (SIF's) of a multicracked plate can be complex and tedious. Recent advances, however, in intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIF's at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcrack's location, orientation, size, and interaction are taken into account.

  3. ElGamal cryptosystem with embedded compression-crypto technique

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Yin, Lee Souk; Hung, Chang Ee; Hussin, Che Haziqah Che

    2014-12-01

    The key distribution problem in symmetric cryptography has been solved by the emergence of asymmetric cryptosystems. Due to their mathematical complexity, computational efficiency becomes a major problem in real-life applications of asymmetric cryptosystems. This scenario has encouraged various research efforts into enhancing the computational efficiency of asymmetric cryptosystems. The ElGamal cryptosystem is one of the most established asymmetric cryptosystems; with proper parameters, it is able to provide a good level of information security. On the other hand, the Compression-Crypto technique is a technique used to reduce the number of plaintexts to be encrypted from k ∈ Z+, k > 2, down to only 2 plaintexts: instead of encrypting k plaintexts, we only need to encrypt these 2. In this paper, we embed the Compression-Crypto technique into the ElGamal cryptosystem. To show that the embedded ElGamal cryptosystem works, we provide proofs of the decryption processes that recover the encrypted plaintexts.
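
    For reference, textbook ElGamal over a prime field looks as follows (a toy sketch with deliberately tiny parameters; real deployments need large primes, and the paper's compression step, which maps the k > 2 plaintexts down to the 2 values actually encrypted, would run before encrypt):

    ```python
    import random

    p, g = 30803, 2                    # toy public prime and generator
    x = random.randrange(2, p - 1)     # private key
    h = pow(g, x, p)                   # public key

    def encrypt(m):
        """Standard ElGamal: m must satisfy 0 <= m < p."""
        y = random.randrange(2, p - 1)            # ephemeral key
        return pow(g, y, p), (m * pow(h, y, p)) % p

    def decrypt(c1, c2):
        s = pow(c1, x, p)                         # shared secret g^(xy)
        return (c2 * pow(s, -1, p)) % p           # modular inverse (Python 3.8+)

    assert decrypt(*encrypt(1234)) == 1234
    ```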

  4. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
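
    The super-particle bookkeeping at the heart of the method can be sketched compactly. Following the coalescence rule of Shima et al. (2009), when two super-particles interact, the one with the smaller multiplicity takes on the merged attributes while the other keeps the unpaired remainder. The four predicted quantities named in the abstract appear as fields in this schematic sketch, which is not the McSnow code itself:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SuperParticle:
        xi: int          # multiplicity: number of real particles represented
        m_ice: float     # ice mass (kg)
        m_rime: float    # rime mass (kg)
        v_rime: float    # rime volume (m^3)
        n_mono: int      # number of monomers in the aggregate

    def aggregate(a, b):
        """One aggregation event in the super-droplet spirit: the xi_min
        real particles of each super-particle pair up and merge (the
        xi_a == xi_b corner case is ignored in this sketch)."""
        if a.xi < b.xi:
            a, b = b, a          # ensure a has the larger multiplicity
        a.xi -= b.xi             # a keeps its unpaired remainder
        b.m_ice += a.m_ice       # b now represents the merged aggregates
        b.m_rime += a.m_rime
        b.v_rime += a.v_rime
        b.n_mono += a.n_mono
        return a, b
    ```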

  5. A complex multi-notch astronomical filter to suppress the bright infrared sky.

    PubMed

    Bland-Hawthorn, J; Ellis, S C; Leon-Saval, S G; Haynes, R; Roth, M M; Löhmannsröben, H-G; Horton, A J; Cuby, J-G; Birks, T A; Lawrence, J S; Gillingham, P; Ryder, S D; Trinh, C

    2011-12-06

    A long-standing and profound problem in astronomy is the difficulty in obtaining deep near-infrared observations due to the extreme brightness and variability of the night sky at these wavelengths. A solution to this problem is crucial if we are to obtain the deepest possible observations of the early Universe, as redshifted starlight from distant galaxies appears at these wavelengths. The atmospheric emission between 1,000 and 1,800 nm arises almost entirely from a forest of extremely bright, very narrow hydroxyl emission lines that varies on timescales of minutes. The astronomical community has long envisaged the prospect of selectively removing these lines, while retaining high throughput between them. Here we demonstrate such a filter for the first time, presenting results from the first on-sky tests. Its use on current 8 m telescopes and future 30 m telescopes will open up many new research avenues in the years to come.

  6. Two implementations of the Expert System for the Flight Analysis System (ESFAS) project

    NASA Technical Reports Server (NTRS)

    Wang, Lui

    1988-01-01

    A comparison is made between two of the most sophisticated expert-system building tools, the Automated Reasoning Tool (ART) and the Knowledge Engineering Environment (KEE). The same problem domain (ESFAS) was used in making the comparison. The Expert System for the Flight Analysis System (ESFAS) acts as an intelligent front end for the Flight Analysis System (FAS). FAS is a complex, configuration-controlled set of interrelated processors (FORTRAN routines) which will be used by the Mission Planning and Analysis Div. (MPAD) to design and analyze Shuttle and potential Space Station missions. Implementations of ESFAS are described. The two versions represent very different programming paradigms: ART uses rules and KEE uses objects. Due to the tools' philosophical differences, KEE is implemented using a depth-first traversal algorithm, whereas ART uses a user-directed traversal method. Either tool could be used to solve this particular problem.

  7. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows solving large-size problems approximation-free as well. We achieve this, in the case of a modal representation, by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling the simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  8. Data reliability in complex directed networks

    NASA Astrophysics Data System (ADS)

    Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir

    2013-12-01

    The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.

  9. Microwave-mediated magneto-optical trap for polar molecules

    NASA Astrophysics Data System (ADS)

    Dizhou, Xie; Wenhao, Bu; Bo, Yan

    2016-05-01

    Realizing a molecular magneto-optical trap has been a dream for cold molecular physicists for a long time. However, due to the complex energy levels and the small effective Lande g-factor of the excited states, the traditional magneto-optical trap (MOT) scheme does not work very well for polar molecules. One way to overcome this problem is the switching MOT, which requires very fast switching of both the magnetic field and the laser polarizations. Switching laser polarizations is relatively easy, but fast switching of the magnetic field is experimentally challenging. Here we propose an alternative approach, the microwave-mediated MOT, which requires a slight change of the current experimental setup to solve the problem. We calculate the MOT force and compare it with the traditional MOT and the switching MOT scheme. The results show that we can operate a good MOT with this simple setup. Project supported by the Fundamental Research Funds for the Central Universities of China.

  10. Correlation of predicted and measured thermal stresses on an advanced aircraft structure with dissimilar materials. [hypersonic heating simulation

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    Additional information was added to a growing data base from which estimates of finite element model complexity can be made with respect to thermal stress analysis. The manner in which temperatures were smeared to the finite element grid points was examined from the point of view of its impact on thermal stress calculations. The general comparison of calculated and measured thermal stresses is quite good, and there is little doubt that the finite element approach provided by NASTRAN results in correct thermal stress calculations. Discrepancies did exist between measured and calculated values in the skin and the skin/frame junctures. The problems with predicting skin thermal stress were attributed to inadequate temperature inputs to the structural model rather than modeling insufficiencies. The discrepancies occurring at the skin/frame juncture were most likely due to insufficient modeling elements rather than temperature problems.

  11. The Gaseous Explosive Reaction : A Study of the Kinetics of Composite Fuels

    NASA Technical Reports Server (NTRS)

    Stevens, F W

    1929-01-01

    This report deals with the results of a series of studies of the kinetics of gaseous explosive reactions where the fuel under observation, instead of being a simple gas, is a known mixture of simple gases. In the practical application of the gaseous explosive reaction as a source of power in the gas engine, the fuels employed are composite, with characteristics that are apt to be due to the characteristics of their components and hence may be somewhat complex. The simplest problem that could be proposed in an investigation either of the thermodynamics or kinetics of the gaseous explosive reaction of a composite fuel would seem to be a separate study of the reaction characteristics of each component of the fuel and then a study of the reaction characteristics of the various known mixtures of those components forming composite fuels more and more complex. (author)

  12. Accidental versus operational oil spills from shipping in the Baltic Sea: risk governance and management strategies.

    PubMed

    Hassler, Björn

    2011-03-01

    Marine governance of oil transportation is complex. Due to difficulties in effectively monitoring procedures on vessels en voyage, incentives to save costs by not following established regulations on issues such as cleaning of tanks, crew size, and safe navigation may be substantial. The issue of problem structure is placed in focus, that is, to what degree the specific characteristics and complexity of intentional versus accidental oil spill risks affect institutional responses. It is shown that whereas the risk of accidental oil spills primarily has been met by technical requirements on the vessels in combination with Port State control, attempts have been made to curb intentional pollution by for example increased surveillance and smart governance mechanisms such as the No-Special-Fee system. It is suggested that environmental safety could be improved by increased use of smart governance mechanisms tightly adapted to key actors' incentives to alter behavior in preferable directions.

  13. Reference Materials: Significance, General Requirements, and Demand.

    PubMed

    Kiełbasa, Anna; Gadzała-Kopciuch, Renata; Buszewski, Bogusław

    2016-05-03

    Reference materials play an important part in the quality control of measurements. The rapid development of new scientific disciplines such as proteomics, metabolomics, and genomics also necessitates the development of new reference materials. This is a great challenge due to the complexity of producing new reference materials and the difficulties associated with achieving their homogeneity and stability. Tissue CRMs are of particular importance; they are among the most complex and time-consuming matrices to prepare. Tissue is the place of transformation and accumulation of many substances (e.g., metabolites, which are intermediate or end products of metabolic processes), and trace amounts of many substances in tissues must be determined with adequate precision and accuracy. To meet the needs stemming from research and from the problems and challenges faced by chemists, analysts, and toxicologists, the number of certified reference materials should be continuously increased.

  14. Test and Evaluation for Enhanced Security: A Quantitative Method to Incorporate Expert Knowledge into Test Planning Decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzo, Davinia; Blackburn, Mark

    Complex systems are comprised of technical, social, political and environmental factors, as well as the programmatic factors of cost, schedule and risk. Testing these systems for enhanced security requires expert knowledge in many different fields. It is important to test these systems to ensure effectiveness, but testing is limited due to cost, schedule, safety, feasibility and a myriad of other reasons. Without an effective decision framework for Test and Evaluation (T&E) planning that can take into consideration technical as well as programmatic factors and leverage expert knowledge, security in complex systems may not be assessed effectively. Therefore, this paper covers the identification of the current T&E planning problem and an approach to include the full variety of factors and leverage expert knowledge in T&E planning through the use of Bayesian Networks (BN).
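
    To make the BN idea concrete, here is a minimal two-node example of encoding an expert judgment and querying it by enumeration; the nodes and probabilities are invented for illustration and are not from the paper:

    ```python
    # Expert-elicited (invented) judgments: probability the test budget is
    # available, and probability the test is effective given budget status.
    p_budget = 0.7
    p_eff_given_budget = {True: 0.9, False: 0.5}

    # Marginal P(test effective), by enumeration over the parent node:
    p_effective = (p_budget * p_eff_given_budget[True]
                   + (1 - p_budget) * p_eff_given_budget[False])   # 0.78

    # Diagnostic query via Bayes' rule: P(budget | test effective) ~ 0.81
    p_budget_given_eff = p_budget * p_eff_given_budget[True] / p_effective
    ```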

  15. Magnesium degradation as determined by artificial neural networks.

    PubMed

    Willumeit, Regine; Feyerabend, Frank; Huber, Norbert

    2013-11-01

    Magnesium degradation under physiological conditions is a highly complex process in which temperature, the use of cell culture growth medium and the presence of CO2, O2 and proteins can influence the corrosion rate and the composition of the resulting corrosion layer. Due to the complexity of this process it is almost impossible to predict the parameters that are most important and whether some parameters have a synergistic effect on the corrosion rate. Artificial neural networks are a mathematical tool that can be used to approximate and analyse non-linear problems with multiple inputs. In this work we present the first analysis of corrosion data obtained using this method, which reveals that CO2 and the composition of the buffer system play a crucial role in the corrosion of magnesium, whereas O2, proteins and temperature play a less prominent role. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
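
    As an indication of how such a network can be set up and probed (a sketch with an invented column layout, not the authors' network architecture), a small regression ANN in scikit-learn might be:

    ```python
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # X columns might encode temperature, CO2, O2, buffer system and protein
    # content; y is the measured corrosion rate. The layout is illustrative.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    # model.fit(X, y)
    # Sensitivity screening: perturb one input at a time and watch the output,
    # which is how an ANN can rank parameters such as CO2 vs. temperature.
    ```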

  16. Pricing strategy in a dual-channel and remanufacturing supply chain system

    NASA Astrophysics Data System (ADS)

    Jiang, Chengzhi; Xu, Feng; Sheng, Zhaohan

    2010-07-01

    This article addresses the pricing strategy problems in a supply chain system where the manufacturer sells original products and remanufactured products via indirect retailer channels and direct Internet channels. Due to the complexity of that system, agent technologies that provide a new way for analysing complex systems are used for modelling. Meanwhile, in order to reduce the computational load of searching procedure for optimal prices and profits, a learning search algorithm is designed and implemented within the multi-agent supply chain model. The simulation results show that the proposed model can find out optimal prices of original products and remanufactured products in both channels, which lead to optimal profits of the manufacturer and the retailer. It is also found that the optimal profits are increased by introducing direct channel and remanufacturing. Furthermore, the effect of customer preference, direct channel cost and remanufactured unit cost on optimal prices and profits are examined.

  17. Rapid Inspection of Aerospace Structures - Is It Autonomous Yet?

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Backes, Paul; Joffe, Benjamin

    1996-01-01

    The trend towards increased usage of aging aircraft has added a great deal of urgency to the ongoing need for low-cost, rapid, simple-to-operate, reliable and efficient NDE methods for the detection and characterization of flaws in aircraft structures. In many cases, inspection is complex due to the limitations of current technology and the need to disassemble aircraft structures and test them in lab conditions. To overcome these limitations, reliable field inspection tools are being developed for rapid NDE of large and complex-shaped structures that can operate in harsh, hostile and remote conditions with minimal human interface. In recent years, to address the need for rapid inspection in field conditions, numerous portable scanners were developed using NDE methods including ultrasonics, shearography and thermography. This paper is written with emphasis on ultrasonic NDE scanners, their evolution and the expected direction of growth.

  18. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts from the electrocardiogram (ECG) owing to their small number of computations, but they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed-LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture can enhance the operating efficiency of the adaptive filter and save power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
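
    Functionally (ignoring the delayed-update and pipelining details of the hardware), a variable step-size LMS canceller can be sketched in a few lines; the step-size adaptation rule below is a common textbook one, used here as an assumption rather than the paper's exact rule:

    ```python
    import numpy as np

    def vss_lms(d, x, n_taps=16, mu=1e-3, mu_min=1e-4, mu_max=1e-1,
                alpha=0.97, gamma=1e-2):
        """Variable step-size LMS: d is the noisy ECG, x the noise
        reference; the error e is the cleaned ECG sample."""
        w = np.zeros(n_taps)
        out = np.zeros(len(d))
        for n in range(n_taps, len(d)):
            u = x[n - n_taps:n][::-1]                       # tap-delay line
            e = d[n] - w @ u                                # a-priori error
            mu = np.clip(alpha * mu + gamma * e * e,        # adapt step size
                         mu_min, mu_max)
            w = w + 2 * mu * e * u                          # LMS weight update
            out[n] = e
        return out
    ```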

  19. Analytic network process model for sustainable lean and green manufacturing performance indicator

    NASA Astrophysics Data System (ADS)

    Aminuddin, Adam Shariff Adli; Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik

    2014-09-01

    Sustainable manufacturing is regarded as the most complex manufacturing paradigm to date, as it holds the widest scope of requirements. In addition, its three major pillars of economy, environment and society, though distinct, overlap in some of their elements. Even though the concept of sustainability is not new, the development of its performance indicators still needs much improvement due to their multifaceted nature, which requires an integrated approach. This paper proposes the best combination of criteria for forming a robust sustainable-manufacturing performance indicator via the Analytic Network Process (ANP). The integrated lean, green and sustainable ANP model can be used to comprehend the complex decision system of sustainability assessment. The findings show that green manufacturing is more sustainable than lean manufacturing, and that procurement practice is the most important criterion in the sustainable-manufacturing performance indicator.
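
    The numerical core of ANP is raising a column-stochastic supermatrix of pairwise-derived influence weights to a high power; the columns of the limit matrix give the global priorities. A generic sketch follows, with a 3x3 example matrix that is invented rather than taken from the paper:

    ```python
    import numpy as np

    def limit_priorities(supermatrix, power=64):
        """Normalize columns, then raise the supermatrix to a high power
        by repeated squaring; for a primitive stochastic matrix the
        columns converge to the global priority vector."""
        W = np.array(supermatrix, dtype=float)
        W = W / W.sum(axis=0)              # weighted (column-stochastic) form
        for _ in range(int(np.log2(power))):
            W = W @ W
        return W[:, 0]                     # any column of the limit matrix

    # Invented influence weights among three clusters (e.g. lean, green,
    # procurement), for illustration only:
    weights = limit_priorities([[0.2, 0.5, 0.3],
                                [0.5, 0.2, 0.4],
                                [0.3, 0.3, 0.3]])
    ```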

  20. Tracked robot controllers for climbing obstacles autonomously

    NASA Astrophysics Data System (ADS)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its tracks configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity to learning. The controllers have been demonstrated in box and stair climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.

  1. Prader-Willi Syndrome: Clinical Aspects

    PubMed Central

    Elena, Grechi; Bruna, Cammarata; Benedetta, Mariani; Stefania, Di Candia; Giuseppe, Chiumello

    2012-01-01

    Prader-Willi Syndrome (PWS) is a complex multisystem genetic disorder that shows great variability, with clinical features that change over a patient's life. The syndrome is due to the loss of expression of several genes encoded on the proximal long arm of chromosome 15 (15q11.2–q13). The complex phenotype is most probably caused by a hypothalamic dysfunction that is responsible for hormonal dysfunctions and for the absence of a sense of satiety. For this reason a Prader-Willi (PW) child develops hyperphagia during early infancy, which can lead to obesity and its complications. During infancy many PW children display a range of behavioural problems that become more noticeable in adolescence and adulthood and interfere substantially with quality of life. Early diagnosis of PWS is important for effective long-term management, and an early multidisciplinary approach is fundamental to improve quality of life, prevent complications, and prolong life expectancy. PMID:23133744

  2. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location cost and due window size cost. Optimality results are discussed for the common/slack due window assignment problems, and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
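
    For orientation, the criterion being minimized has the standard common-due-window form; the sketch below simply evaluates it for given completion times. The penalty weights are illustrative, and the function states the objective only, not the paper's O(n log n) algorithms.

    ```python
    def due_window_cost(C, d1, d2, alpha=1.0, beta=1.0, gamma=0.5, delta=0.5):
        """Objective for a common due window [d1, d2]: earliness + tardiness
        penalties plus costs on window location and size (a sketch of the
        criterion, not of the assignment algorithms)."""
        early = sum(max(d1 - c, 0) for c in C)   # completion before the window
        tardy = sum(max(c - d2, 0) for c in C)   # completion after the window
        return alpha * early + beta * tardy + gamma * d1 + delta * (d2 - d1)

    # Toy usage with four completion times and a window [3, 6].
    print(due_window_cost([2, 4, 7, 9], d1=3, d2=6))
    ```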

  3. Optimal lunar soft landing trajectories using taboo evolutionary programming

    NASA Astrophysics Data System (ADS)

    Mutyalarao, M.; Raj, M. Xavier James

    A safe lunar landing is a key factor in undertaking effective lunar exploration. A lunar lander mission consists of four phases: the launch phase, the Earth-Moon transfer phase, the circumlunar phase and the landing phase. The landing phase can involve either hard landing or soft landing. Hard landing means the vehicle lands under the influence of gravity without any deceleration measures, whereas soft landing reduces the vertical velocity of the vehicle before landing. Therefore, for the safety of the astronauts as well as of the vehicle, a lunar soft landing at an acceptable velocity is essential, and it is important to design the optimal soft landing trajectory with minimum fuel consumption. Optimization of lunar soft landing is a complex optimal control problem. In this paper, an analysis of lunar soft landing from a parking orbit around the Moon has been carried out. A two-dimensional trajectory optimization problem is attempted; the problem is complex due to the presence of system constraints. To obtain the time history of the control parameters, the problem is converted into a two-point boundary value problem using Pontryagin's maximum principle. Taboo Evolutionary Programming (TEP) is a stochastic method developed in recent years and successfully implemented in several fields of research; it combines the features of taboo search and single-point-mutation evolutionary programming. Identifying the best unknown parameters of the problem under consideration is the central idea of many space trajectory optimization problems. The TEP technique is used in the present methodology for the best estimation of the initial unknown parameters by minimizing an objective function expressed in terms of fuel requirements. The optimal estimation subsequently results in an optimal trajectory design for a module soft-landing on the Moon from a lunar parking orbit. Numerical simulations demonstrate that the proposed approach is highly efficient and reduces fuel consumption. Comparison with results available in the literature shows that the solution of the present algorithm is better than that of some existing algorithms. Keywords: soft landing, trajectory optimization, evolutionary programming, control parameters, Pontryagin principle.
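
    A minimal sketch of the taboo evolutionary programming loop under stated assumptions: single-point Gaussian mutation, greedy survivor selection, and a bounded taboo list that rejects offspring too close to previously abandoned points. The toy quadratic objective merely stands in for the fuel-cost function; every constant is illustrative.

    ```python
    import numpy as np

    def tep_minimize(f, lb, ub, pop=30, gens=200, taboo_radius=0.05, seed=0):
        """Taboo evolutionary programming sketch: single-point Gaussian
        mutation plus a taboo list of abandoned solutions."""
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        X = rng.uniform(lb, ub, (pop, lb.size))
        fit = np.array([f(x) for x in X])
        taboo = []
        for _ in range(gens):
            for i in range(pop):
                child = X[i].copy()
                j = rng.integers(lb.size)                      # single-point mutation
                child[j] += rng.normal(0, 0.1 * (ub[j] - lb[j]))
                child = np.clip(child, lb, ub)
                if any(np.linalg.norm(child - t) < taboo_radius for t in taboo):
                    continue                                   # taboo: skip revisits
                fc = f(child)
                if fc < fit[i]:                                # greedy survivor choice
                    taboo.append(X[i].copy())                  # abandon the parent
                    X[i], fit[i] = child, fc
            taboo = taboo[-200:]                               # bounded taboo memory
        best = int(fit.argmin())
        return X[best], fit[best]

    # Toy 2-parameter objective standing in for the fuel cost.
    sol, cost = tep_minimize(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                             lb=[-5, -5], ub=[5, 5])
    print(sol, cost)
    ```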

  4. High field hyperpolarization-EXSY experiment for fast determination of dissociation rates in SABRE complexes.

    PubMed

    Hermkens, Niels K J; Feiters, Martin C; Rutjes, Floris P J T; Wijmenga, Sybren S; Tessari, Marco

    2017-03-01

    SABRE (Signal Amplification By Reversible Exchange) is a nuclear spin hyperpolarization technique based on the reversible concurrent binding of small molecules and para-hydrogen (p-H2) to an iridium metal complex in solution. At low magnetic field, spontaneous conversion of p-H2 spin order to enhanced longitudinal magnetization of the nuclear spins of the other ligands occurs. Subsequent complex dissociation results in hyperpolarized substrate molecules in solution. The lifetime of this complex plays a crucial role in the attained SABRE NMR signal enhancements. Depending on the ligands, vastly different dissociation rates have previously been measured using EXSY or selective inversion experiments. However, both these approaches are generally time-consuming due to the long recycle delays (up to 2 min) necessary to reach thermal equilibrium for the nuclear spins of interest. In the case of dilute solutions, signal averaging aggravates the problem, further extending the experimental time. Here, a new approach is proposed based on coherent hyperpolarization transfer to substrate protons in asymmetric complexes at high magnetic field. We have previously shown that such asymmetric complexes are important for the application of SABRE to dilute substrates. Our results demonstrate that a series of high-sensitivity EXSY spectra can be collected in a short experimental time thanks to the NMR signal enhancement and a much shorter recycle delay. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Exploiting node mobility for energy optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    El-Moukaddem, Fatme Mohammad

    Wireless Sensor Networks (WSNs) have become increasingly available for data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit the sheer amount of data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies such as batteries or small solar panels. The availability of numerous low-cost robotic units (e.g. Robomote and Khepera) has made it possible to construct sensor networks consisting of mobile sensor nodes. It has been shown that the controlled mobility offered by mobile sensors can be exploited to improve the energy efficiency of a network. In this thesis, we propose schemes that use mobile sensor nodes to reduce the energy consumption of data-intensive WSNs. Our approaches differ from previous work in two main aspects. First, our approaches do not require complex motion planning of mobile nodes, and hence can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless communications into a holistic optimization framework. We consider three problems arising from the limited energy in the sensor nodes. In the first problem, the network consists of mostly static nodes and contains only a few mobile nodes. In the second and third problems, we assume that essentially all nodes in the WSN are mobile. We first study a new problem called max-data mobile relay configuration (MMRC) that finds the positions of a set of mobile sensors, referred to as relays, that maximize the total amount of data gathered by the network during its lifetime. We show that the MMRC problem is surprisingly complex even for a trivial network topology due to the joint consideration of the energy consumption of both wireless communication and mechanical locomotion. We present optimal MMRC algorithms and practical distributed implementations for several important network topologies and applications. Second, we consider the problem of minimizing the total energy consumption of a network. We design an iterative algorithm that improves a given configuration by relocating nodes to new positions. We show that this algorithm converges to the optimal configuration for the given transmission routes. Moreover, we propose an efficient distributed implementation that does not require explicit synchronization. Finally, we consider the problem of maximizing the lifetime of the network. We propose an approach that exploits the mobility of the nodes to balance the energy consumption throughout the network. We develop efficient algorithms for single and multiple round approaches. For all three problems, we evaluate the efficiency of our algorithms through simulations. Our simulation results based on realistic energy models obtained from existing mobile and static sensor platforms show that our approaches significantly improve the network's performance and outperform existing approaches.
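
    To illustrate the kind of joint objective involved, the sketch below positions a single mobile relay between a source and a sink, trading transmission energy that grows quadratically with hop distance against locomotion energy proportional to the distance moved. The energy constants and the gradient-descent solver are assumptions for illustration, not the thesis's MMRC algorithms.

    ```python
    import numpy as np

    def relay_energy(p, src, dst, p0, k_tx=1.0, k_move=0.5):
        """Total energy: two transmission hops (proportional to distance
        squared) plus locomotion from the relay's initial position p0."""
        return (k_tx * np.linalg.norm(src - p)**2
                + k_tx * np.linalg.norm(p - dst)**2
                + k_move * np.linalg.norm(p - p0))

    def best_relay_position(src, dst, p0, steps=2000, lr=1e-3):
        """Gradient descent on the joint cost (k_tx=1, k_move=0.5 as above)."""
        p = p0.copy()
        for _ in range(steps):
            g = 2 * (p - src) + 2 * (p - dst)      # gradient of the two hops
            d = np.linalg.norm(p - p0)
            if d > 1e-12:
                g += 0.5 * (p - p0) / d            # subgradient of motion term
            p -= lr * g
        return p

    src, dst, p0 = np.array([0., 0.]), np.array([10., 0.]), np.array([3., 4.])
    p = best_relay_position(src, dst, p0)
    print(p, relay_energy(p, src, dst, p0))
    ```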

  6. A Model for Developing Improvements to Critical Thinking Skills across the Community College Curriculum

    ERIC Educational Resources Information Center

    McGarrity, DeShawn N.

    2013-01-01

    Society is faced with more complex problems than in the past because of rapid advancements in technology. These complex problems require multi-dimensional problem-solving abilities that are consistent with higher-order thinking skills. Bok (2006) posits that over 90% of U.S. faculty members consider critical thinking skills essential for…

  7. A Real-Life Case Study of Audit Interactions--Resolving Messy, Complex Problems

    ERIC Educational Resources Information Center

    Beattie, Vivien; Fearnley, Stella; Hines, Tony

    2012-01-01

    Real-life accounting and auditing problems are often complex and messy, requiring the synthesis of technical knowledge in addition to the application of generic skills. To help students acquire the necessary skills to deal with these problems effectively, educators have called for the use of case-based methods. Cases based on real situations (such…

  8. Student Learning of Complex Earth Systems: A Model to Guide Development of Student Expertise in Problem-Solving

    ERIC Educational Resources Information Center

    Holder, Lauren N.; Scherer, Hannah H.; Herbert, Bruce E.

    2017-01-01

    Engaging students in problem-solving concerning environmental issues in near-surface complex Earth systems involves developing student conceptualization of the Earth as a system and applying that scientific knowledge to the problems using practices that model those used by professionals. In this article, we review geoscience education research…

  9. Utilizing a Sense of Community Theory in Order to Optimize Interagency Response to Complex Contingencies

    DTIC Science & Technology

    2010-06-01

    The effectiveness of interagency response by the United States during complex contingency operations depends on a "whole of nation" approach to solving complex problems. Psychological sense of community (PSOC) theory provides the link that explains how such a community can form, drawing on symbols of membership such as clothes, signs, art, architecture, logos, landmarks, and flags that people can recognize.

  10. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
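
    The abstract refers to a direct method; the sketch below instead illustrates the constrained problem itself with a brute-force scan over the common phase, solving a real least squares problem at each candidate phase by stacking real and imaginary parts. All names are illustrative, and the scan is a stand-in for the paper's direct solution.

    ```python
    import numpy as np

    def phase_constrained_lstsq(A, b, n_phases=360):
        """Least squares x = u * exp(i*phi) with real u and one common phase.
        Coarse scan over phi in [0, pi); u may be negative, so [0, pi) suffices."""
        best = (np.inf, None, None)
        for phi in np.linspace(0, np.pi, n_phases, endpoint=False):
            Ar = A * np.exp(1j * phi)
            B = np.vstack([Ar.real, Ar.imag])    # stack real/imag: real LS
            c = np.concatenate([b.real, b.imag])
            u, *_ = np.linalg.lstsq(B, c, rcond=None)
            r = np.linalg.norm(B @ u - c)
            if r < best[0]:
                best = (r, u, phi)
        _, u, phi = best
        return u * np.exp(1j * phi), phi

    # Toy usage: recover a vector whose elements share the phase 0.7 rad.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
    x_true = rng.normal(size=3) * np.exp(1j * 0.7)
    x_hat, phi = phase_constrained_lstsq(A, A @ x_true)
    ```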

  11. Anodal Transcranial Direct Current Stimulation of the Prefrontal Cortex Enhances Complex Verbal Associative Thought

    ERIC Educational Resources Information Center

    Cerruti, Carlo; Schlaug, Gottfried

    2009-01-01

    The remote associates test (RAT) is a complex verbal task with associations to both creative thought and general intelligence. RAT problems require not only lateral associations and the internal production of many words but also a convergent focus on a single answer. Complex problem-solving of this sort may thus require both substantial verbal…

  12. Less Drinking, Yet More Problems: Understanding African American Drinking and Related Problems

    PubMed Central

    Zapolski, Tamika C. B.; Pedersen, Sarah L.; McCarthy, Denis M.; Smith, Gregory T.

    2013-01-01

    Researchers have found that, compared to European Americans, African Americans report later initiation of drinking, lower rates of use, and lower levels of use across almost all age groups. Nevertheless, African Americans also have higher levels of alcohol problems than European Americans. After reviewing current data regarding these trends, we provide a theory to understand this apparent paradox as well as to understand variability in risk among African Americans. Certain factors appear to operate as both protective factors against heavy use and risk factors for negative consequences from use. For example, African American culture is characterized by norms against heavy alcohol use or intoxication, which protects against heavy use but which also provides within group social disapproval when use does occur. African Americans are more likely to encounter legal problems from drinking than European Americans, even at the same levels of consumption, perhaps thus resulting in reduced consumption but more problems from consumption. There appears to be one particular group of African Americans, low-income African American men, who are at the highest risk for alcoholism and related problems. We theorize that this effect is due to the complex interaction of residential discrimination, racism, age of drinking, and lack of available standard life reinforcers (e.g., stable employment and financial stability). Further empirical research will be needed to test our theories and otherwise move this important field forward. A focus on within group variation in drinking patterns and problems is necessary. We suggest several new avenues of inquiry. PMID:23477449

  13. Behavior problems and prevalence of asthma symptoms among Brazilian children.

    PubMed

    Feitosa, Caroline A; Santos, Darci N; Barreto do Carmo, Maria B; Santos, Letícia M; Teles, Carlos A S; Rodrigues, Laura C; Barreto, Mauricio L

    2011-09-01

    Asthma is the most common chronic disease in childhood and has been designated a public health problem due to the increase in its prevalence in recent decades, the amount of health service expenditure it absorbs, and the absence of consensus about its etiology. The relationships among psychosocial factors and the occurrence, symptomatology, and severity of asthma have recently been considered. There is still controversy about the association between asthma and a child's mental health, since the pathways through which this relationship is established are complex and not well researched. This study aims to investigate whether behavior problems are associated with the prevalence of asthma symptoms in a large urban center in Latin America. It is a cross-sectional study of 869 children between 6 and 12 years old residing in Salvador, Brazil. The International Study of Asthma and Allergies in Childhood (ISAAC) instrument was used to evaluate the prevalence of asthma symptoms, and the Child Behavior Checklist (CBCL) was employed to evaluate behavioral problems. 19.26% (n=212) of the children presented symptoms of asthma, and 35% were classified as having clinical behavioral problems. A Poisson regression model with robust variance demonstrated a statistically significant association between the presence of behavioral problems and the occurrence of asthma symptoms (PR: 1.43; 95% CI: 1.10-1.85). These results suggest an association between behavioral problems and pediatric asthma, and support the inclusion of mental health care in the provision of services for asthma morbidity. Copyright © 2011 Elsevier Inc. All rights reserved.
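
    For readers who want to reproduce this kind of estimate, here is a hedged sketch of a Poisson regression with a robust (sandwich) variance estimator, a common way to obtain prevalence ratios for a binary outcome. The data below are simulated stand-ins, not the study's data, so the resulting PR will differ from the published one.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: asthma symptoms (0/1) vs. clinical behaviour problems (0/1).
    rng = np.random.default_rng(0)
    behav = rng.binomial(1, 0.35, 869)
    asthma = rng.binomial(1, np.where(behav == 1, 0.25, 0.17))

    X = sm.add_constant(behav)
    fit = sm.GLM(asthma, X, family=sm.families.Poisson()).fit(cov_type="HC0")
    pr = np.exp(fit.params[1])               # prevalence ratio
    lo, hi = np.exp(fit.conf_int()[1])       # 95% CI on the PR
    print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```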

  14. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem: it has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints, and so far little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the cost of fault reasoning as the function to minimize. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is obtained. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for a multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for the problem of multi-constraint, multi-objective fault diagnosis and reasoning of complex systems.
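
    A much-simplified sketch of the path-construction step, assuming a boolean reachability matrix and a single heuristic matrix that aggregates the three objectives into one edge score (the actual IMACO maintains a Pareto set rather than a scalar cost). The pseudo-random-proportional rule and the evaporation/reinforcement steps follow the standard ant-colony pattern; all parameters are illustrative.

    ```python
    import numpy as np

    def aco_reasoning_path(reach, eta, n_ants=20, n_iter=50,
                           alpha=1.0, beta=2.0, rho=0.1, q0=0.9, seed=0):
        """Ant path construction from symptom node 0 to cause node n-1.
        reach : boolean reachability matrix constraining feasible moves
        eta   : positive heuristic scores aggregating the objectives per edge
        """
        rng = np.random.default_rng(seed)
        n = reach.shape[0]
        tau = np.ones((n, n))                    # pheromone
        best_path, best_cost = None, np.inf
        for _ in range(n_iter):
            for _ in range(n_ants):
                path, node = [0], 0
                while node != n - 1:
                    mask = reach[node] & ~np.isin(np.arange(n), path)
                    allowed = np.where(mask)[0]
                    if allowed.size == 0:        # dead end: discard this ant
                        break
                    score = tau[node, allowed]**alpha * eta[node, allowed]**beta
                    if rng.random() < q0:        # pseudo-random-proportional rule
                        nxt = int(allowed[score.argmax()])
                    else:
                        nxt = int(rng.choice(allowed, p=score / score.sum()))
                    path.append(nxt)
                    node = nxt
                if node == n - 1:
                    cost = sum(1.0 / eta[a, b] for a, b in zip(path, path[1:]))
                    if cost < best_cost:
                        best_path, best_cost = path, cost
            tau *= 1 - rho                       # evaporation
            if best_path is not None:            # reinforce the best path found
                for a, b in zip(best_path, best_path[1:]):
                    tau[a, b] += 1.0 / best_cost
        return best_path

    # Toy 4-node reasoning graph: 0 -> {1, 2} -> 3, with edge 0->1 preferred.
    reach = np.array([[0, 1, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]], dtype=bool)
    eta = np.ones((4, 4))
    eta[0, 1] = 2.0
    print(aco_reasoning_path(reach, eta))
    ```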

  15. A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks

    NASA Astrophysics Data System (ADS)

    Gong, Jie; Zhou, Sheng; Niu, Zhisheng

    The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
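
    As a toy version of the dynamic program, the sketch below takes the state to be the number of active base stations in each time slot, enforces a QoS-derived lower bound per slot, and charges a switching penalty proportional to the number of BSs that change mode. The real formulation tracks which BSs are active and exploits cooperation among neighbours; the per-slot minimum counts and cost constants here are assumptions.

    ```python
    import numpy as np

    def bs_sleep_schedule(min_active, n_bs, e_active=1.0, e_switch=0.5):
        """DP over time slots: state = number of active BSs.
        min_active[t] is the smallest count meeting the blocking-probability
        QoS in slot t; the cost trades active-BS energy against switching."""
        T = len(min_active)
        INF = float("inf")
        cost = np.full((T, n_bs + 1), INF)
        prev = np.zeros((T, n_bs + 1), int)
        for k in range(min_active[0], n_bs + 1):
            cost[0, k] = k * e_active
        for t in range(1, T):
            for k in range(min_active[t], n_bs + 1):
                trans = cost[t - 1, :] + e_switch * np.abs(np.arange(n_bs + 1) - k)
                prev[t, k] = int(trans.argmin())
                cost[t, k] = trans.min() + k * e_active
        # Backtrack the cheapest feasible schedule.
        k = int(cost[-1].argmin())
        plan = [k]
        for t in range(T - 1, 0, -1):
            k = prev[t, k]
            plan.append(k)
        return plan[::-1]

    # Toy traffic profile over five slots with 5 BSs.
    print(bs_sleep_schedule([2, 1, 1, 3, 4], n_bs=5))
    ```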

  16. The distribution of the zeros of the Hermite-Padé polynomials for a pair of functions forming a Nikishin system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhmanov, E A; Suetin, S P

    2013-09-30

    The distribution of the zeros of the Hermite-Padé polynomials of the first kind for a pair of functions with an arbitrary even number of common branch points lying on the real axis is investigated under the assumption that this pair of functions forms a generalized complex Nikishin system. It is proved (Theorem 1) that the zeros have a limiting distribution, which coincides with the equilibrium measure of a certain compact set having the S-property in a harmonic external field. The existence problem for S-compact sets is solved in Theorem 2. The main idea of the proof of Theorem 1 consists in replacing a vector equilibrium problem in potential theory by a scalar problem with an external field and then using the general Gonchar-Rakhmanov method, which was worked out in the solution of the '1/9'-conjecture. The relation of the result obtained here to some results and conjectures due to Nuttall is discussed. Bibliography: 51 titles.
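
    For readers outside the area, the objects in question are the type I (first kind) Hermite-Padé polynomials for the pair (f_1, f_2); the record does not restate the definition, so the normalization below is the standard one and should be read as an assumption:

    ```latex
    % Type I Hermite-Pade polynomials for f_1, f_2 holomorphic at infinity:
    % nontrivial Q_{n,0}, Q_{n,1}, Q_{n,2} with deg Q_{n,j} <= n such that
    \[
      R_n(z) = Q_{n,0}(z) + Q_{n,1}(z)\,f_1(z) + Q_{n,2}(z)\,f_2(z)
             = O\!\bigl(z^{-2n-2}\bigr), \qquad z \to \infty .
    \]
    ```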

  17. Working Memory Capacity and Fluid Intelligence: Maintenance and Disengagement.

    PubMed

    Shipstead, Zach; Harrison, Tyler L; Engle, Randall W

    2016-11-01

    Working memory capacity and fluid intelligence have been demonstrated to be strongly correlated traits. Typically, high working memory capacity is believed to facilitate reasoning through accurate maintenance of relevant information. In this article, we present a proposal reframing this issue, such that tests of working memory capacity and fluid intelligence are seen as measuring complementary processes that facilitate complex cognition. Respectively, these are the ability to maintain access to critical information and the ability to disengage from or block outdated information. In the realm of problem solving, high working memory capacity allows a person to represent and maintain a problem accurately and stably, so that hypothesis testing can be conducted. However, as hypotheses are disproven or become untenable, disengaging from outdated problem solving attempts becomes important so that new hypotheses can be generated and tested. From this perspective, the strong correlation between working memory capacity and fluid intelligence is due not to one ability having a causal influence on the other but to separate attention-demanding mental functions that can be contrary to one another but are organized around top-down processing goals. © The Author(s) 2016.

  18. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, improving solution efficiency. The set of nonlinear constraints (named the complicating constraints) that makes the solution of the model complex and time-consuming is eliminated from step one; the complicating constraints are added only in the second step, in which a solution of the complete model is found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful in cases where computation time is a critical factor for obtaining an optimized solution in due time.
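
    A toy sketch of the two-step pattern using scipy, under stated assumptions: the objective and constraints are invented, with one nonlinear inequality standing in for the complicating constraint set; step one solves the relaxed model, and step two warm-starts the complete model from that solution.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    obj = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
    lin_cons = [{"type": "ineq", "fun": lambda x: 5 - x[0] - x[1]}]         # simple
    all_cons = lin_cons + [{"type": "ineq", "fun": lambda x: 4 - x[0]*x[1]}]  # + complicating

    step1 = minimize(obj, x0=np.zeros(2), constraints=lin_cons)   # relaxed model
    step2 = minimize(obj, x0=step1.x, constraints=all_cons)       # warm-started full model
    print(step1.x, "->", step2.x)
    ```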

  19. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic accumulation of numerical error; otherwise, the global optimization problem can be NP-hard and the nonlinear system can be genuinely chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
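
    The reformulation step (though not the canonical duality machinery) is easy to illustrate: discretize the dynamics and treat the whole trajectory as the unknown of a least squares problem whose residuals are the defects of the update equation. The logistic map below is one of the paper's test cases, but the solver choice and settings are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Recast a discretized system as a global least squares problem over the
    # whole trajectory: logistic map x_{k+1} = r * x_k * (1 - x_k).
    r, T, x0 = 3.7, 50, 0.3

    def residuals(x):
        x = np.concatenate([[x0], x])
        return x[1:] - r * x[:-1] * (1 - x[:-1])   # zero along a true trajectory

    sol = least_squares(residuals, x0=np.full(T, 0.5), bounds=(0, 1))
    print(sol.cost)   # near zero when a consistent trajectory is found
    ```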

  20. Generalist solutions to complex problems: generating practice-based evidence - the example of managing multi-morbidity

    PubMed Central

    2013-01-01

    Background A growing proportion of people are living with long term conditions. The majority have more than one. Dealing with multi-morbidity is a complex problem for health systems: for those designing and implementing healthcare as well as for those providing the evidence informing practice. Yet the concept of multi-morbidity (the presence of two or more diseases) is a product of the design of health care systems which define health care need on the basis of disease status. So does the solution lie in an alternative model of healthcare? Discussion Strengthening generalist practice has been proposed as part of the solution to tackling multi-morbidity. Generalism is a professional philosophy of practice, deeply known to many practitioners, and described as expertise in whole person medicine. But generalism lacks the evidence base needed by policy makers and planners to support service redesign. The challenge is to fill this practice-research gap in order to critically explore if and when generalist care offers a robust alternative to management of this complex problem. We need practice-based evidence to fill this gap. By recognising generalist practice as a ‘complex intervention’ (intervening in a complex system), we outline an approach to evaluate impact using action-research principles. We highlight the implications for those who both commission and undertake research in order to tackle this problem. Summary Answers to the complex problem of multi-morbidity won’t come from doing more of the same. We need to change systems of care, and so the systems for generating evidence to support that care. This paper contributes to that work through outlining a process for generating practice-based evidence of generalist solutions to the complex problem of person-centred care for people with multi-morbidity. PMID:23919296
