Sample records for constrained multiple-objective reliability

  1. Improving the Performance of Highly Constrained Water Resource Systems using Multiobjective Evolutionary Algorithms and RiverWare

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2015-12-01

    Instead of building new infrastructure to increase their supply reliability, water resource managers are often tasked with better management of current systems. The managers often have existing simulation models that aid their planning but lack methods for efficiently generating and evaluating planning alternatives. This presentation discusses how multiobjective evolutionary algorithm (MOEA) decision support can be used with the sophisticated water infrastructure model, RiverWare, in highly constrained water planning environments. We first discuss a study that performed a many-objective tradeoff analysis of water supply in the Tarrant Regional Water District (TRWD) in Texas. RiverWare is combined with the Borg MOEA to solve a seven-objective problem that includes systemwide performance objectives and individual reservoir storage reliability. Decisions within the formulation balance supply in multiple reservoirs and control pumping between the eastern and western parts of the system. The RiverWare simulation model is forced by two stochastic hydrology scenarios to show how management changes between wet and dry conditions. The second part of the presentation suggests how a broader set of RiverWare-MOEA studies can inform tradeoffs in other systems, especially in political situations where multiple actors are in conflict over finite water resources. By incorporating quantitative representations of diverse parties' objectives during the search for solutions, MOEAs may provide support for negotiations and lead to more widely beneficial water management outcomes.

  2. Multiple utility constrained multi-objective programs using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed

    2018-03-01

    A utility function is an important tool for representing a decision maker's (DM's) preference. We adjoin utility functions to multi-objective optimization problems. In current studies, usually one utility function is used for each objective function. Situations may arise, however, in which a single objective has multiple utility functions. Here, we consider a constrained multi-objective problem in which each objective has multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.

  3. Flexible Unicast-Based Group Communication for CoAP-Enabled Devices †

    PubMed Central

    Ishaq, Isam; Hoebeke, Jeroen; Van den Abeele, Floris; Rossey, Jen; Moerman, Ingrid; Demeester, Piet

    2014-01-01

    Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations, we propose an intermediate level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore, we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility with a trade-off in increased communication overhead. PMID:24901978
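
    The unicast fan-out this entry describes can be sketched as a client-side loop that expands one logical group request into parallel unicast requests and aggregates the results. The sketch below is illustrative only, assuming a hypothetical `coap_get` helper rather than the authors' entity implementation; a real client would use an actual CoAP library.

    ```python
    import asyncio

    # Hypothetical helper (assumption): stands in for a unicast CoAP GET;
    # a real implementation would use a CoAP client library.
    async def coap_get(uri: str) -> bytes:
        raise NotImplementedError

    async def query_group(member_uris: list[str]) -> dict[str, object]:
        """Fan one logical group request out as parallel unicast requests
        and aggregate the per-member responses."""
        results = await asyncio.gather(
            *(coap_get(uri) for uri in member_uris), return_exceptions=True
        )
        # A failed member yields an exception object rather than a payload,
        # so the group entity can report partial success to the client.
        return dict(zip(member_uris, results))
    ```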

  4. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  5. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    NASA Astrophysics Data System (ADS)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. The facilities located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network, leading to high complexity. Since the collection centres play an important role in this network, the reliability of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constrained mixed-integer linear programming (BOIFPCCMILP) model. Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  6. Developing Inventory and Monitoring Programs Based on Multiple Objectives

    Treesearch

    Daniel L. Schmoldt; David L. Peterson; David G. Silsbee

    1995-01-01

    Resource inventory and monitoring (I&M) programs in national parks combine multiple objectives in order to create a plan of action over a finite time horizon. Because all program activities are constrained by time and money, it is critical to plan I&M activities that make the best use of available agency resources. However, multiple objectives complicate a...

  7. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
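
    The series-of-single-objective-problems idea is close in spirit to the well-known epsilon-constraint method: optimize one objective while the others are capped by acceptability thresholds, sweeping the thresholds to trace the Pareto front. A minimal sketch with two toy objectives (not the paper's exact formulation) follows.

    ```python
    import numpy as np
    from scipy.optimize import NonlinearConstraint, minimize

    # Two toy objectives over x in R^2; their minima conflict.
    f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

    pareto_points = []
    for threshold in np.linspace(0.1, 2.0, 10):
        # Threshold-of-acceptability constraint: f2 must stay acceptable
        # while f1 is minimized, limiting the area of search.
        cap = NonlinearConstraint(f2, -np.inf, threshold)
        res = minimize(f1, x0=np.array([0.5, 0.5]), constraints=[cap])
        if res.success:
            pareto_points.append((f1(res.x), f2(res.x)))
    ```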

  8. Application of the Constrained Admissible Region Multiple Hypothesis Filter to Initial Orbit Determination of a Break-up

    NASA Astrophysics Data System (ADS)

    Kelecy, Tom; Shoemaker, Michael; Jah, Moriba

    2013-08-01

    A break-up in Low Earth Orbit (LEO) is simulated for 10 objects having area-to-mass ratios (AMRs) ranging from 0.1 to 10.0 m2/kg. The Constrained Admissible Region Multiple Hypothesis Filter (CAR-MHF) is applied to determine and characterize the orbit and atmospheric drag parameters (CdA/m) simultaneously for each of the 10 objects with no a priori orbit or drag information. The results indicate that CAR-MHF shows promise for accurate, unambiguous and autonomous determination of the orbit and drag states.

  9. Launch and Assembly Reliability Analysis for Human Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant; Gelito, Justin; Stromgren, Chel; Cirillo, William; Goodliff, Kandyce

    2012-01-01

    NASA's future human space exploration strategy includes single and multi-launch missions to various destinations including cis-lunar space, near Earth objects such as asteroids, and ultimately Mars. Each campaign is being defined by Design Reference Missions (DRMs). Many of these missions are complex, requiring multiple launches and assembly of vehicles in orbit. Certain missions also have constrained departure windows to the destination. These factors raise concerns regarding the reliability of launching and assembling all required elements in time to support planned departure. This paper describes an integrated methodology for analyzing launch and assembly reliability in any single DRM or set of DRMs, starting with flight hardware manufacturing and ending with final departure to the destination. A discrete event simulation is built for each DRM that includes the pertinent risk factors, including but not limited to: manufacturing completion; ground transportation; ground processing; launch countdown; ascent; rendezvous and docking; assembly; and orbital operations leading up to trans-destination-injection. Each reliability factor can be selectively activated or deactivated so that the most critical risk factors can be identified. This enables NASA to prioritize mitigation actions so as to improve mission success.
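
    The selectively activated risk factors lend themselves to a simple Monte Carlo reading: treat each phase as a Bernoulli success and toggle phases on or off to see which one drives campaign reliability. The probabilities below are illustrative assumptions, not NASA figures.

    ```python
    import random

    # Illustrative per-phase success probabilities (assumptions, not NASA data).
    PHASES = {
        "manufacturing": 0.99,
        "ground_processing": 0.98,
        "launch_countdown": 0.97,
        "ascent": 0.98,
        "rendezvous_docking": 0.99,
    }

    def campaign_success(active: dict[str, bool]) -> bool:
        """One replication: a deactivated phase is treated as risk-free,
        mirroring the paper's toggling of individual reliability factors."""
        return all(
            random.random() < p
            for phase, p in PHASES.items()
            if active.get(phase, True)
        )

    def estimate_reliability(active: dict[str, bool], n: int = 100_000) -> float:
        return sum(campaign_success(active) for _ in range(n)) / n

    # Deactivating one factor and re-running shows how much it drives
    # campaign reliability, e.g.:
    baseline = estimate_reliability({})
    without_ascent_risk = estimate_reliability({"ascent": False})
    ```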

  10. Multi-Objective vs. Single Objective Calibration of a Hydrologic Model using Either Different Hydrologic Signatures or Complementary Data Sources

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.

    2015-12-01

    Hydrologic models are traditionally calibrated against discharge. Recent studies have shown, however, that only a few global model parameters are constrained by the integral discharge measurements. It is therefore advisable to use additional information to calibrate those models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine these multiple objectives into one single objective function, allowing the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or the focus on different model processes. A first aim of this study is to compare the two approaches employing the mesoscale Hydrologic Model mHM at several distinct river basins over Europe and North America. This comparison allows the identification of the single-objective solution on the Pareto front. It is elucidated whether this position is determined by the weighting and scaling of the multiple objectives when combining them into the single objective. The second principal aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine whether additional information would help to constrain additional model parameters. The additional information is either multiple data sources or multiple signatures of one measurement. It is evaluated whether specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only the NSE of the discharge time series. It is further assessed whether the use of these signatures imposes conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted with the use of additional observations such as soil moisture or snow height. The gain of using an auxiliary dataset is determined using the parametric sensitivity on the respective modeled variable.
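
    The first strategy named above, collapsing several calibration objectives into one scalar target, can be illustrated with a weighted sum of NSE-type terms. A minimal sketch (the weights and the pairing of discharge with snow data are illustrative, not the study's exact setup):

    ```python
    import numpy as np

    def nse(obs: np.ndarray, sim: np.ndarray) -> float:
        """Nash-Sutcliffe efficiency; 1 indicates a perfect fit."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def combined_objective(obs_q, sim_q, obs_swe, sim_swe, w=0.5):
        """Weighted-sum scalarization of a discharge objective and a snow
        objective for a single-objective calibration algorithm. The choice
        of w (and of any rescaling) is exactly what fixes where the
        resulting solution lands on the Pareto front."""
        return w * (1.0 - nse(obs_q, sim_q)) + (1.0 - w) * (1.0 - nse(obs_swe, sim_swe))
    ```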

  11. Developing and Evaluating a Machine-Scorable, Constrained Constructed-Response Item.

    ERIC Educational Resources Information Center

    Braun, Henry I.; And Others

    The use of constructed response items in large scale standardized testing has been hampered by the costs and difficulties associated with obtaining reliable scores. The advent of expert systems may signal the eventual removal of this impediment. This study investigated the accuracy with which expert systems could score a new, non-multiple choice…

  12. Thermo-Mechanical Modeling and Analysis for Turbopump Assemblies

    NASA Technical Reports Server (NTRS)

    Platt, Mike; Marsh, Matt

    2003-01-01

    Life, reliability, and cost are strongly impacted by steady and transient thermo-mechanical effects. The design cycle can suffer major setbacks when working a transient stress/deflection issue. Balancing objectives against constraints is always difficult. This requires assembly-level analysis early in the design cycle.

  13. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    NASA Astrophysics Data System (ADS)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study we develop a surrogate-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain values. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets that belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. Reliability is incorporated as the percentage of surrogate models in the ensemble that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that constraint violation increases as the reliability level is reduced. Thus, ensemble-surrogate-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
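
    The reliability measure described here is simply the fraction of ensemble members whose prediction keeps the constraint satisfied. A minimal sketch, assuming each surrogate is a callable from a pumping vector to a predicted salinity (an assumption about the interface, not the paper's code):

    ```python
    import numpy as np

    def ensemble_reliability(surrogates, pumping, salinity_limit) -> float:
        """Fraction of ensemble surrogate models whose predicted salinity
        satisfies the imposed limit for this candidate pumping solution."""
        predictions = np.array([model(pumping) for model in surrogates])
        return float(np.mean(predictions <= salinity_limit))

    # A candidate is retained only if it meets the target level, e.g.
    # ensemble_reliability(models, x, limit) >= 0.99.
    ```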

  14. Chapter 6: The scientific basis for conserving forest carnivores: considerations for management

    Treesearch

    L. Jack Lyon; Keith B. Aubry; William J. Zielinski; Steven W. Buskirk; Leonard F. Ruggiero

    1994-01-01

    The reviews presented in previous chapters reveal substantial gaps in our knowledge about marten, fisher, lynx, and wolverine. These gaps severely constrain our ability to design reliable conservation strategies. This problem will be explored in depth in Chapter 7. In this chapter, our objective is to discuss management considerations resulting from what we currently...

  15. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  16. WE-AB-209-12: Quasi Constrained Multi-Criteria Optimization for Automated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, W.T.; Siebers, J.V.

    Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population-derived objectives.

  17. Mid-level perceptual features distinguish objects of different real-world sizes.

    PubMed

    Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A

    2016-01-01

    Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms (unrecognizable textures that loosely preserve an object's form). However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes.

  18. Using the Analytic Hierarchy Process for Decision-Making in Ecosystem Management

    Treesearch

    Daniel L. Schmoldt; David L. Peterson

    1997-01-01

    Land management activities on public lands combine multiple objectives in order to create a plan of action over a finite time horizon. Because management activities are constrained by time and money, it is critical to make the best use of available agency resources. The Analytic Hierarchy Process (AHP) offers a structure for multi-objective decisionmaking so that...

  19. A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP

    PubMed Central

    Chun, Seung-Man; Park, Jong-Tae

    2017-01-01

    Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation. PMID:28085109
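
    The CoAP reliability mechanism the paper builds on is the confirmable-message retransmission scheme of RFC 7252: a randomized initial timeout that doubles on every retry. A sketch of that schedule (standard protocol parameters, not the paper's mobility extensions):

    ```python
    import random

    # Confirmable-message transmission parameters from RFC 7252.
    ACK_TIMEOUT = 2.0        # seconds
    ACK_RANDOM_FACTOR = 1.5
    MAX_RETRANSMIT = 4

    def retransmission_schedule() -> list[float]:
        """Timeouts for one confirmable message: the initial timeout is
        drawn uniformly from [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR]
        and doubles for each of the MAX_RETRANSMIT retransmissions; this
        exponential backoff is what provides reliability over lossy links."""
        timeout = random.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
        schedule = []
        for _ in range(MAX_RETRANSMIT + 1):
            schedule.append(timeout)
            timeout *= 2.0
        return schedule
    ```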

  1. Methods for constraining fine structure constant evolution with OH microwave transitions.

    PubMed

    Darling, Jeremy

    2003-07-04

    We investigate the constraints that OH microwave transitions in megamasers and molecular absorbers at cosmological distances may place on the evolution of the fine structure constant α = e²/ħc. The centimeter OH transitions are a combination of hyperfine splitting and lambda doubling that can constrain the cosmic evolution of α from a single species, avoiding systematic errors in α measurements from multiple species which may have relative velocity offsets. The most promising method compares the 18 and 6 cm OH lines, includes a calibration of systematic errors, and offers multiple determinations of α in a single object. Comparisons of OH lines to the HI 21 cm line and CO rotational transitions also show promise.
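
    The single-species comparison works because different OH transitions scale with different powers of α, so lines from the same gas acquire slightly different apparent redshifts if α evolves. A schematic of the relation, with generic sensitivity coefficients K_i (illustrative assumptions, not values from the paper):

    ```latex
    % Each transition frequency scales as \nu_i \propto \alpha^{K_i}, so the
    % apparent redshifts of two lines from the same absorbing gas differ by
    \begin{equation}
      z_1 - z_2 \simeq (K_2 - K_1)\,(1 + z)\,\frac{\Delta\alpha}{\alpha},
    \end{equation}
    % and any velocity offset between the 18 cm and 6 cm OH lines therefore
    % bounds \Delta\alpha/\alpha without cross-species systematics.
    ```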

  2. A parametric LQ approach to multiobjective control system design

    NASA Technical Reports Server (NTRS)

    Kyr, Douglas E.; Buchner, Marc

    1988-01-01

    The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.

  3. Multiple objective optimization in reliability demonstration test

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang

    2016-10-01

    Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test and proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.
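
    The zero-failure test arithmetic behind the producer's-risk problem is compact enough to sketch: pass only if all n units survive, and size n so the consumer's risk is acceptable (a standard textbook formulation, not the paper's Pareto machinery):

    ```python
    import math

    def zero_failure_sample_size(r_lower: float, consumer_risk: float) -> int:
        """Smallest n with P(pass | R = r_lower) = r_lower**n <= beta,
        i.e. n >= ln(beta) / ln(r_lower)."""
        return math.ceil(math.log(consumer_risk) / math.log(r_lower))

    def probability_of_passing(r_true: float, n: int) -> float:
        """Chance a product with true reliability r_true passes the
        zero-failure test; the producer's risk is the complement."""
        return r_true ** n

    n = zero_failure_sample_size(0.90, 0.05)   # n = 29 units
    # Even a genuinely good product (R = 0.98) passes only about 56% of
    # the time, the weakness motivating the multi-objective treatment.
    p_pass = probability_of_passing(0.98, n)
    ```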

  4. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria are of interest, including structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  5. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591
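
    Ranking individuals in a constrained multiobjective EA commonly uses a feasibility rule layered on Pareto dominance. The comparator below is that generic rule (a stand-in for illustration; CMBOA's specific operators differ):

    ```python
    def dominates(f_a, f_b) -> bool:
        """Pareto dominance for minimization objective vectors."""
        return all(a <= b for a, b in zip(f_a, f_b)) and any(
            a < b for a, b in zip(f_a, f_b)
        )

    def constrained_dominates(f_a, viol_a, f_b, viol_b) -> bool:
        """Feasibility rule: feasible beats infeasible, smaller total
        constraint violation beats larger, and Pareto dominance decides
        between two feasible solutions."""
        if viol_a == 0 and viol_b > 0:
            return True
        if viol_a > 0 and viol_b > 0:
            return viol_a < viol_b
        if viol_a > 0 and viol_b == 0:
            return False
        return dominates(f_a, f_b)
    ```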

  6. Feasibility, Test-Retest Reliability, and Interrater Reliability of the Modified Ashworth Scale and Modified Tardieu Scale in Persons with Profound Intellectual and Multiple Disabilities

    ERIC Educational Resources Information Center

    Waninge, A.; Rook, R. A.; Dijkhuizen, A.; Gielen, E.; van der Schans, C. P.

    2011-01-01

    Caregivers of persons with profound intellectual and multiple disabilities (PIMD) often describe the quality of the daily movements of these persons in terms of flexibility or stiffness. Objective outcome measures for flexibility and stiffness are muscle tone or level of spasticity. Two instruments used to grade muscle tone and spasticity are the…

  7. MQ-MAC: A Multi-Constrained QoS-Aware Duty Cycle MAC for Heterogeneous Traffic in Wireless Sensor Networks

    PubMed Central

    Monowar, Muhammad Mostafa; Rahman, Md. Obaidur; Hong, Choong Seon; Lee, Sungwon

    2010-01-01

    Energy conservation is one of the most pressing research issues nowadays for power-constrained wireless sensor networks (WSNs), and hence several duty-cycle based MAC protocols have been devised for WSNs in the last few years. However, the assimilation of diverse applications with different QoS requirements (i.e., delay and reliability) within the same network also necessitates devising a generic duty-cycle based MAC protocol that can achieve both delay and reliability guarantees, termed multi-constrained QoS, while preserving energy efficiency. To address this, in this paper, we propose a multi-constrained QoS-aware duty-cycle MAC for heterogeneous traffic in WSNs (MQ-MAC). MQ-MAC classifies the traffic based on their multi-constrained QoS demands. Through extensive simulation using ns-2, we evaluate the performance of MQ-MAC. MQ-MAC provides the desired delay and reliability guarantees according to the nature of the traffic classes and achieves energy efficiency. PMID:22163439

  8. Construction of Valid and Reliable Test for Assessment of Students

    ERIC Educational Resources Information Center

    Osadebe, P. U.

    2015-01-01

    The study was carried out to construct a valid and reliable test in Economics for secondary school students. Two research questions were drawn to guide the establishment of validity and reliability for the Economics Achievement Test (EAT). It is a multiple-choice objective test of 100 items with five options each. A sample of 1000 students was randomly…

  9. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  10. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised by disabled nodes. To construct a secure and reliable network, designing an adaptive route strategy that trades off the energy consumption and network lifetime of the aggregation is of great importance. In this paper, we address the reliable data aggregation route problem for WSNs. First, to ensure nodes work properly, we propose a data aggregation route algorithm which improves the energy efficiency in the WSN. The construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO with a multi-objective fitness function, combined with a phenotype sharing function and a penalty function, to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944

  11. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    PubMed

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  12. Action anticipation in human infants reveals assumptions about anteroposterior body-structure and action.

    PubMed

    Hernik, Mikolaj; Fearon, Pasco; Csibra, Gergely

    2014-04-22

    Animal actions are almost universally constrained by the bilateral body-plan. For example, the direction of travel tends to be constrained by the orientation of the animal's anteroposterior axis. Hence, an animal's behaviour can reliably guide the identification of its front and back, and its orientation can reliably guide action prediction. We examine the hypothesis that the evolutionarily ancient relation between anteroposterior body-structure and behaviour guides our cognitive processing of agents and their actions. In a series of studies, we demonstrate that, after limited exposure, human infants as young as six months of age spontaneously encode a novel agent as having a certain axial direction with respect to its actions and rely on it when anticipating the agent's further behaviour. We found that such encoding is restricted to objects exhibiting cues of agency and does not depend on generalization from features of familiar animals. Our research offers a new tool for investigating the perception of animate agency and supports the proposal that the underlying cognitive mechanisms have been shaped by basic biological adaptations in humans.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.

    The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch’s t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
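
    The proposed test reduces to comparing reliability-index samples from before and after deployment without assuming equal variances. A minimal sketch with hypothetical SAIFI values (illustrative numbers, not PNWSGD data):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual SAIFI values (interruptions per customer) before
    # and after the smart grid deployment; illustrative, not project data.
    before = np.array([1.42, 1.35, 1.50, 1.47, 1.38])
    after = np.array([1.21, 1.28, 1.15, 1.24, 1.19])

    # equal_var=False selects Welch's t-test, which does not assume the
    # two periods share a common variance.
    t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
    improved = p_value < 0.05 and after.mean() < before.mean()
    ```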

  14. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints, and with fuzziness in the right-hand side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions for the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. In addition, the model can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.

  15. A multi-disciplinary approach to fire management strategy, suppression costs, community interaction, and organizational performance

    Treesearch

    Anne E. Black; Krista Gebert; Sarah McCaffrey; Toddi Steelman; Janie Canton-Thompson

    2009-01-01

    Wildland fire management must balance the multiple objectives of protecting life, property, and resources; reducing hazardous fuels; and restoring ecosystems. These Federal policy imperatives, varied yet connected, must be met under an increasingly constrained budget. A key to management success is effectively exercising the full range of management flexibility in...

  16. Discovery of wide low and very low-mass binary systems using Virtual Observatory tools

    NASA Astrophysics Data System (ADS)

    Gálvez-Ortiz, M. C.; Solano, E.; Lodieu, N.; Aberasturi, M.

    2017-04-01

    The frequency of multiple systems and their properties are key constraints on stellar formation and evolution. Formation mechanisms of very low-mass (VLM) objects are still under considerable debate, and an accurate assessment of their multiplicity and orbital properties is essential for constraining current theoretical models. Taking advantage of Virtual Observatory capabilities, we looked for comoving low-mass and VLM binary (or multiple) systems using the UKIDSS Large Area Survey (LAS) DR10, SDSS DR9 and the 2MASS catalogues. Other catalogues (WISE, GLIMPSE, SuperCosmos, etc.) were used to derive the physical parameters of the systems. We report the identification of 36 low-mass and VLM (˜M0-L0 spectral types) candidate binary/multiple systems (separations between 200 and 92 000 au), whose physical association is confirmed through common proper motion, distance and low probability of chance alignment. This new list of systems notably increases the previous sampling of the mass-separation parameter space (˜100). We have also found 50 low-mass objects that we can classify as ˜L0-T2 according to their photometric information. Only one of these objects presents a common proper motion high-mass companion. Although we could not constrain the age of the majority of the candidates, probably most of them are still bound, except four that may be undergoing disruption. We suggest that our sample could be divided into two populations: one of tightly bound wide VLM systems that are expected to last more than 10 Gyr, and another of weakly bound wide VLM systems that will dissipate within a few Gyr.

  17. Many-objective optimization and visual analytics reveal key trade-offs for London's water supply

    NASA Astrophysics Data System (ADS)

    Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.

    2015-12-01

    In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows that many-objective evolutionary optimization coupled with state-of-the-art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto approximate portfolios cluster into distinct groups of water supply options; for example implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost reliability constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.

  18. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow the contribution of noisy, erroneous or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.

  19. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  20. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was applied to the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed with a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed for comparison with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II; (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
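
    One common way to fold surrogate uncertainty into a chance constraint is a deterministic equivalent built from the prediction mean and standard deviation. The sketch below assumes roughly Gaussian surrogate error (a simplification for illustration; the paper's CCP formulation may differ in detail):

    ```python
    from scipy.stats import norm

    def chance_constraint_ok(mu: float, sigma: float, limit: float,
                             confidence: float = 0.95) -> bool:
        """Deterministic equivalent of Pr(g(x) <= limit) >= confidence for
        a surrogate prediction with mean mu and std sigma. Raising the
        confidence tightens the constraint, which is why remediation cost
        and time grow with the confidence level in the reported results."""
        return mu + norm.ppf(confidence) * sigma <= limit
    ```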

  1. Tactile recognition and localization using object models: the case of polyhedra on a plane.

    PubMed

    Gaston, P C; Lozano-Perez, T

    1984-03-01

    This paper discusses how data from multiple tactile sensors may be used to identify and locate one object, from among a set of known objects. We use only local information from sensors: 1) the position of contact points and 2) ranges of surface normals at the contact points. The recognition and localization process is structured as the development and pruning of a tree of consistent hypotheses about pairings between contact points and object surfaces. In this paper, we deal with polyhedral objects constrained to lie on a known plane, i.e., having three degrees of positioning freedom relative to the sensors. We illustrate the performance of the algorithm by simulation.
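
    The hypothesize-and-prune scheme can be sketched as a recursive walk over the tree of contact-to-surface pairings, cutting any branch whose partial pairing fails a local consistency test; the `consistent` predicate is supplied by the caller (the paper's checks use contact positions and surface-normal ranges):

    ```python
    def interpretations(contacts, surfaces, consistent, pairing=()):
        """Yield complete pairings of tactile contacts to object surfaces,
        pruning subtrees as soon as a partial pairing is inconsistent with
        a single rigid placement of the object."""
        if len(pairing) == len(contacts):
            yield pairing
            return
        for surface in surfaces:
            candidate = pairing + (surface,)
            # Prune: no descendant of an inconsistent node can be valid.
            if consistent(candidate, contacts):
                yield from interpretations(contacts, surfaces, consistent, candidate)

    # e.g. list(interpretations(contact_list, ["face_A", "face_B"], my_check))
    ```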

  2. Quantifying arm nonuse in individuals poststroke.

    PubMed

    Han, Cheol E; Kim, Sujin; Chen, Shuya; Lai, Yi-Hsuan; Lee, Jeong-Yoon; Osu, Rieko; Winstein, Carolee J; Schweighofer, Nicolas

    2013-06-01

    Arm nonuse, defined as the difference between what the individual can do when constrained to use the paretic arm and what the individual does when given a free choice to use either arm, has not yet been quantified in individuals poststroke. Our objectives were (1) to quantify nonuse poststroke and (2) to develop and test a novel, simple, objective, reliable, and valid instrument, the Bilateral Arm Reaching Test (BART), to quantify arm use and nonuse poststroke. First, we quantify nonuse with the Quality of Movement (QOM) subscale of the Actual Amount of Use Test (AAUT) by subtracting the AAUT QOM score in the spontaneous-use condition from the AAUT QOM score in a subsequent constrained-use condition. Second, we quantify arm use and nonuse with BART by comparing reaching performance to visual targets projected over a 2D horizontal hemi-workspace in a spontaneous-use condition (in which participants are free to use either arm on each trial) with reaching performance in a constrained-use condition. All participants (N = 24) with chronic stroke and with mild to moderate impairment exhibited nonuse with the AAUT QOM. Nonuse with BART had excellent test-retest reliability and good external validity. BART is the first instrument that can be used repeatedly and practically in the clinic to quantify the effects of neurorehabilitation on arm use and nonuse, and in the laboratory for advancing theoretical knowledge about the recovery of arm use and the development of nonuse and "learned nonuse" after stroke.
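
    The nonuse score itself is simple arithmetic on the two AAUT QOM conditions, as the abstract defines it:

    ```python
    def nonuse(qom_constrained: float, qom_spontaneous: float) -> float:
        """Nonuse per the abstract's definition: what the individual can do
        when constrained to the paretic arm minus what the individual does
        under free choice."""
        return qom_constrained - qom_spontaneous
    ```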

  3. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.

  4. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, on the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e., the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data and on real audio-magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
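
    A toy rendition of this kind of inversion (the forward operator, coefficients, and Occam-like penalty below are placeholders, not the authors' actual formulation): PSO searches a box-constrained model space, minimizing data misfit plus a vertical roughness term.

      import numpy as np

      rng = np.random.default_rng(1)

      def forward(model):
          # Placeholder forward operator; a real AMT/MT kernel would go here.
          return np.cumsum(model)

      def objective(model, data, alpha=0.1):
          misfit = np.sum((forward(model) - data) ** 2)
          roughness = np.sum(np.diff(model) ** 2)   # Occam-like vertical constraint
          return misfit + alpha * roughness

      def pso(data, n_particles=40, n_dim=5, iters=200, lo=0.0, hi=10.0):
          x = rng.uniform(lo, hi, (n_particles, n_dim))   # resistivity models
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.array([objective(p, data) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, n_dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)                  # box constraints
              f = np.array([objective(p, data) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g

      true_model = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
      data = forward(true_model) + rng.normal(0, 0.05, 5)
      print(pso(data))

    Note how no initial model is required: the swarm is initialized uniformly over the constrained box, which is the property the abstract emphasizes.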

  5. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem that minimises the data misfit while penalising with a wave-equation constraint term; the trade-off between the two terms is set by a scaling factor that balances their contributions to the value of the objective function. If this parameter is too large, the wave-equation term dominates and effectively imposes a hard constraint on the inversion. If it is too small, the solution is poorly constrained, since the optimisation essentially minimises the data misfit alone without taking into account the physics that explains the data. This paper introduces a new approach that recasts the WRI formulation as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function even when the latter's trade-off parameter is appropriately scaled and adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
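
    Schematically, and in generic WRI notation that may differ from the authors' exact symbols, the contrast between the two formulations can be written as

      J_{add}(m, u)  = \| P u - d \|_2^2 + \lambda^2 \| A(m) u - q \|_2^2
      J_{mult}(m, u) = \| P u - d \|_2^2 \cdot \| A(m) u - q \|_2^2

    where u is the reconstructed wavefield, P samples u at the receiver locations, d is the recorded data, A(m) is the wave-equation operator for model m, and q is the source term. In the additive form the scaling factor \lambda must be chosen or adapted; the multiplicative form removes that choice, which is the automation this abstract claims.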

  6. Shuttle Liquid Fly Back Booster Configuration Options

    NASA Technical Reports Server (NTRS)

    Healy, T. J., Jr.

    1998-01-01

    This paper surveys the basic configuration options available for a Liquid Fly Back Booster (LFBB) integrated with the Space Shuttle system. The background of the development of the LFBB concept is given. The influence of the main booster engine (BME) installations and the Fly Back Engine (FBE) installation on the aerodynamic configurations is also discussed. Limits on the LFBB configuration design space imposed by the existing Shuttle flight and ground elements are also described. The objective of the paper is to put the constraints and design space for an LFBB in perspective. The goal of the work is to define LFBB configurations that significantly improve the safety, operability, reliability, and performance of the Shuttle system and dramatically lower operations costs.

  7. Deployable antenna kinematics using tensegrity structure design

    NASA Astrophysics Data System (ADS)

    Knight, Byron Franklin

    With vast changes in spacecraft development over the last decade, a new, cheaper approach was needed for deployable kinematic systems such as parabolic antenna reflectors. Historically, these mesh-surface reflectors have resembled folded umbrellas, with incremental redesigns utilized to save packaging size. These systems are typically over-constrained designs, the assumption being that the high reliability necessary for space operations requires this level of conservatism. But with the rapid commercialization of space, smaller launch platforms and satellite buses have demanded much higher efficiency from all space equipment than can be achieved through this incremental approach. This work applies an approach called tensegrity to deployable antenna development. Kenneth Snelson, a student of R. Buckminster Fuller, invented tensegrity structures in 1948. Such structures use a minimum number of compression members (struts); stability is maintained using tension members (ties). The novelty introduced in this work is that the ties are elastic, allowing the struts to extend or contract, and in this way changing the surface of the antenna. Previously, the University of Florida developed an approach to quantify the stability and motion of parallel manipulators. This approach was applied to deployable tensegrity antenna structures. Based on the kinematic analyses for the 3-3 (octahedron) and 4-4 (square anti-prism) structures, the 6-6 (hexagonal anti-prism) analysis was completed, which establishes usable structural parameters. The primary objective of this work was to prove the stability of this class of deployable structures and their potential application to space structures. The secondary objective was to define special motions for tensegrity antennas that meet the subsystem design requirements, such as addressing multiple antenna-feed locations. This work combines the historical experiences of the artist (Snelson), the mathematician (Ball), and the space systems engineer (Wertz) to develop a new, practical design approach. This kinematic analysis of tensegrity structures blends these differences to provide the design community with a new approach to lightweight, robust, adaptive structures with the high reliability that space demands. Additionally, by applying Screw Theory, a tensegrity antenna structure can be commanded to move along a screw axis, thereby meeting the requirement to address multiple feed locations.

  8. Multi-objects recognition for distributed intelligent sensor networks

    NASA Astrophysics Data System (ADS)

    He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.

    2008-04-01

    This paper proposes an innovative approach to multi-object recognition in intelligent sensor networks for homeland security and defense. Unlike the conventional way of information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality, and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and ultimately to achieve the mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
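
    A compact sketch of the feature-scaling and SVM steps (segmentation is application-specific and omitted; the class labels, dimensions, and interpolation-based scaling below are invented for illustration):

      import numpy as np
      from sklearn.svm import SVC

      def rescale_features(raw, n_dims=64):
          """Map a variable-length feature vector onto a fixed dimension
          by linear interpolation (the 'scaling and sampling' step)."""
          raw = np.asarray(raw, dtype=float)
          old = np.linspace(0.0, 1.0, raw.size)
          new = np.linspace(0.0, 1.0, n_dims)
          return np.interp(new, old, raw)

      # Toy training set: segmented regions of varying size, with labels
      rng = np.random.default_rng(2)
      X = np.array([rescale_features(rng.random(rng.integers(20, 200)))
                    for _ in range(100)])
      y = rng.integers(0, 3, 100)        # e.g. 0=soldier, 1=tank, 2=clutter

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(X[:5]))

    Whatever the raw region size, every object reaches the SVM in the same number of dimensions, which is the point of the scaling step.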

  9. Construction of Economics Achievement Test for Assessment of Students

    ERIC Educational Resources Information Center

    Osadebe, P. U.

    2014-01-01

    The study was carried out to construct a valid and reliable test in Economics for secondary school students. Two research questions were drawn to guide the establishment of validity and reliability for the Economics Achievement Test (EAT). It is a multiple-choice objective test of 100 items with five options each. A sample of 1000 students was randomly…

  10. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code have been developed to compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives that the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
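
    The product structure implied by the abstract can be illustrated in a few lines (failure rates and correction times here are made up, and the real model tracks simulated system state rather than constant rates):

      import numpy as np

      # Hypothetical constant failure rates (per hour) for system elements
      failure_rates = {"thruster": 2e-6, "valve": 1e-6, "tank": 5e-8}

      def p_survive(t_hours):
          """Probability all elements survive to time t (exponential model)."""
          lam = sum(failure_rates.values())
          return np.exp(-lam * t_hours)

      # Orbit corrections scheduled at mission times (hours); the probability
      # of accomplishing correction k is the probability of surviving to it
      correction_times = [500, 4000, 9000, 15000]
      for k, t in enumerate(correction_times, 1):
          print(f"P(corrections 1..{k} succeed) = {p_survive(t):.5f}")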

  11. Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong

    One approach to guaranteeing the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, the objective is to place the minimum number of SGs at a subset of candidate locations such that the network is connected (or 2-connected) from any underwater sensor node (USN) to the base station. We propose polynomial-time approximation algorithms for the connected and survivable C-SGP problems, respectively. Simulations are conducted to verify the efficiency of our algorithms.

  12. Developing objectives with multiple stakeholders: adaptive management of horseshoe crabs and Red Knots in the Delaware Bay

    USGS Publications Warehouse

    McGowan, Conor P.; Lyons, James E.; Smith, David

    2015-01-01

    Structured decision making (SDM) is an increasingly utilized approach and set of tools for addressing complex decisions in environmental management. SDM is a value-focused thinking approach that places paramount importance on first establishing clear management objectives that reflect core values of stakeholders. To be useful for management, objectives must be transparently stated in unambiguous and measurable terms. We used these concepts to develop consensus objectives for the multiple stakeholders of horseshoe crab harvest in Delaware Bay. Participating stakeholders first agreed on a qualitative statement of fundamental objectives, and then worked to convert those objectives to specific and measurable quantities, so that management decisions could be assessed. We used a constraint-based approach where the conservation objectives for Red Knots, a species of migratory shorebird that relies on horseshoe crab eggs as a food resource during migration, constrained the utility of crab harvest. Developing utility functions to effectively reflect the management objectives allowed us to incorporate stakeholder risk aversion even though different stakeholder groups were averse to different or competing risks. While measurable objectives and quantitative utility functions seem scientific, developing these objectives was fundamentally driven by the values of the participating stakeholders.

  13. Developing Objectives with Multiple Stakeholders: Adaptive Management of Horseshoe Crabs and Red Knots in the Delaware Bay

    NASA Astrophysics Data System (ADS)

    McGowan, Conor P.; Lyons, James E.; Smith, David R.

    2015-04-01

    Structured decision making (SDM) is an increasingly utilized approach and set of tools for addressing complex decisions in environmental management. SDM is a value-focused thinking approach that places paramount importance on first establishing clear management objectives that reflect core values of stakeholders. To be useful for management, objectives must be transparently stated in unambiguous and measurable terms. We used these concepts to develop consensus objectives for the multiple stakeholders of horseshoe crab harvest in Delaware Bay. Participating stakeholders first agreed on a qualitative statement of fundamental objectives, and then worked to convert those objectives to specific and measurable quantities, so that management decisions could be assessed. We used a constraint-based approach where the conservation objectives for Red Knots, a species of migratory shorebird that relies on horseshoe crab eggs as a food resource during migration, constrained the utility of crab harvest. Developing utility functions to effectively reflect the management objectives allowed us to incorporate stakeholder risk aversion even though different stakeholder groups were averse to different or competing risks. While measurable objectives and quantitative utility functions seem scientific, developing these objectives was fundamentally driven by the values of the participating stakeholders.

  14. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-03-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building, and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near the borehole DSDP-258 located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth-migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP), leg 369.

  15. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-06-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2-D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching, to estimate the uncertainties of the depths of key horizons near the Deep Sea Drilling Project (DSDP) borehole 258 (DSDP-258) located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites, adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program, leg 369.
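
    A minimal sketch of the Gaussian-process-emulation-plus-history-matching idea, using a toy one-parameter depth-conversion "simulator" and the conventional three-sigma implausibility cutoff (the simulator, numbers, and one-parameter setup are illustrative only):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(3)

      def slow_simulator(v):
          # Stand-in for the expensive depth conversion: depth (km) of a
          # horizon at 1 s two-way time with average velocity v (m/s).
          return v * 1.0 / 2.0 / 1000.0

      # Train the emulator on a handful of expensive runs
      V = rng.uniform(1500, 3000, 15).reshape(-1, 1)
      Z = np.array([slow_simulator(v[0]) for v in V])
      gp = GaussianProcessRegressor(kernel=RBF(500.0), normalize_y=True).fit(V, Z)

      # History matching: rule out velocities whose emulated depth is
      # implausible given the observed depth at the borehole
      z_obs, obs_var = 1.25, 0.02 ** 2
      v_grid = np.linspace(1500, 3000, 500).reshape(-1, 1)
      mu, sd = gp.predict(v_grid, return_std=True)
      implausibility = np.abs(mu - z_obs) / np.sqrt(sd ** 2 + obs_var)
      not_ruled_out = v_grid[implausibility < 3.0]
      print(not_ruled_out.min(), not_ruled_out.max())

    Velocities that survive the cutoff define the posterior-plausible space from which multiple depth realizations can then be drawn.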

  16. McMAC: Towards a MAC Protocol with Multi-Constrained QoS Provisioning for Diverse Traffic in Wireless Body Area Networks

    PubMed Central

    Monowar, Muhammad Mostafa; Hassan, Mohammad Mehedi; Bajaber, Fuad; Al-Hussein, Musaed; Alamri, Atif

    2012-01-01

    The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multi-constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on their multi-constrained QoS demands and introduces a novel superframe structure based on the “transmit-whenever-appropriate” principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantee according to the requirements of a particular traffic class while achieving energy efficiency. PMID:23202224

  17. Gaining insight into the T2*-T2 relationship in surface NMR free-induction decay measurements

    NASA Astrophysics Data System (ADS)

    Grombacher, Denys; Auken, Esben

    2018-05-01

    One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T2* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit the parameter T2* is closely linked to a related parameter T2, which is more directly tied to pore geometry. If T2* ≈ T2, the FID can provide valuable insight into relative pore size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T2* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T2* and the underlying T2 could be further constrained, the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an approach employing an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T2*-T2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T2 and T1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models, each describing a distinct but plausible T2*-T2 relationship. We hypothesize that forward models given poor T2 estimates will produce poor data fits when using the complex inversion, while forward models given reliable T2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T2*-T2 relationship can be constrained by identifying the range of T2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.
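
    The proposed parameter search can be illustrated with a deliberately simple stand-in forward model (the real method solves the full Bloch equations; everything below is an assumption for illustration): loop over candidate T2 values, fit the data under each assumption, and keep the candidates whose best-fit residual stays near the noise floor.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.linspace(0, 0.5, 200)                 # time axis, s

      def fid(amplitude, t2, delta_b):
          """Toy FID: pore-geometry decay (T2) plus an extra dephasing rate
          delta_b from field inhomogeneity, so 1/T2* = 1/T2 + delta_b."""
          return amplitude * np.exp(-t * (1.0 / t2 + delta_b))

      observed = fid(1.0, 0.15, 4.0) + rng.normal(0, 0.01, t.size)

      candidates, fits = np.linspace(0.05, 0.4, 36), []
      for t2 in candidates:
          # grid-search amplitude and delta_b for this assumed T2
          amps = np.linspace(0.5, 1.5, 41)
          dbs = np.linspace(0.0, 10.0, 41)
          resid = min(np.sum((observed - fid(a, t2, db)) ** 2)
                      for a in amps for db in dbs)
          fits.append(resid)
      fits = np.array(fits)
      ok = candidates[fits < 1.5 * fits.min()]
      print(f"T2 plausibly in [{ok.min():.3f}, {ok.max():.3f}] s")

    With this mono-exponential stand-in, T2 and the inhomogeneity rate trade off against each other, so only a one-sided bound on T2 emerges; that degeneracy is exactly the ambiguity the full-Bloch forward model is meant to resolve.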

  18. Multiple exposure photographic (MEP) technique: an objective assessment of sperm motility in infertility management.

    PubMed

    Adetoro, O O

    1988-06-01

    Multiple exposure photography (MEP), an objective technique, was used to determine the percentage of motile sperm in semen samples from 41 males being investigated for infertility. This technique was compared with the conventional subjective assessment of sperm motility by ordinary microscopy. A satisfactory correlation was observed between the two methods in the assessment of percentage sperm motility, but the MEP estimation was more consistent and reliable. The value of this technique of sperm motility study in the developing world is discussed.

  19. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method.

    PubMed

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed with a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed for comparison with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.
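
    The chance-constrained ingredient can be made concrete in a few lines. Assuming the surrogate error is Gaussian, the constraint P(g(x) <= 0) >= alpha is replaced by its deterministic equivalent (a sketch; the paper's formulation may differ in detail):

      import numpy as np
      from scipy.stats import norm

      def chance_constraint(surrogate_mean, surrogate_std, confidence=0.95):
          """Deterministic equivalent of P(g(x) <= 0) >= confidence when the
          surrogate error is treated as Gaussian: mean + z_alpha * std <= 0."""
          z = norm.ppf(confidence)
          return surrogate_mean + z * surrogate_std <= 0.0

      # Example: a design whose surrogate predicts a residual contaminant
      # level of -0.8 (below target) with standard deviation 0.5
      for conf in (0.80, 0.90, 0.95, 0.99):
          print(conf, chance_constraint(-0.8, 0.5, conf))

    Raising the confidence level tightens the feasible set, which is why remediation cost and time grow with it, as the abstract reports.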

  20. The Role of Search Speed in the Contextual Cueing of Children's Attention.

    PubMed

    Darby, Kevin; Burling, Joseph; Yoshida, Hanako

    2014-01-01

    The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using an object search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicates that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document the effect in children have yielded mixed results. We examined the effect of search speed on contextual cueing with school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates this effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the role of attention toward contextually cued information.

  1. The Role of Search Speed in the Contextual Cueing of Children’s Attention

    PubMed Central

    Darby, Kevin; Burling, Joseph; Yoshida, Hanako

    2013-01-01

    The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using an object search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicates that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document the effect in children have yielded mixed results. We examined the effect of search speed on contextual cueing with school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates this effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the role of attention toward contextually cued information. PMID:24505167

  2. The Advantages of Normalizing Electromyography to Ballistic Rather than Isometric or Isokinetic Tasks.

    PubMed

    Suydam, Stephen M; Manal, Kurt; Buchanan, Thomas S

    2017-07-01

    Isometric tasks have been a standard for electromyography (EMG) normalization, stemming from the anatomic and physiologic stability observed during contraction. Ballistic dynamic tasks have the benefit of eliciting maximum EMG signals for normalization, despite having the potential for greater signal variability. The purpose of this study was to compare maximum voluntary isometric contraction (MVIC) to nonisometric tasks with increasing degrees of extrinsic variability, i.e., joint range of motion, velocity, rate of contraction, etc., to determine whether the ballistic tasks, which elicit larger peak EMG signals, are more reliable than the constrained MVIC. Fifteen subjects performed MVIC, isokinetic, maximum countermovement jump, and sprint tasks while EMG was collected from 9 muscles in the quadriceps, hamstrings, and lower leg. The results revealed that the unconstrained ballistic tasks were more reliable than the constrained MVIC and isokinetic tasks for all triceps surae muscles. The EMG from sprinting was more reliable than the constrained cases for both the hamstrings and vasti. The most reliable EMG signals occurred when the body was permitted its natural, unconstrained motion. These results suggest that EMG is best normalized using ballistic tasks to provide the greatest within-subject reliability, which beneficially yields maximum EMG values.

  3. Air Force Technical Objective Document, FY89.

    DTIC Science & Technology

    1988-04-01

    threat warning; multimegawatt stand-off jammers; a family of new, broadband, active decoy expendables; E4? subsystems and EW suites for Military...and monolithic integrated circuits. (3) Microwave TWTs Develop microwave tube technology and selected thermionic power sources and amplifiers for ECM...Improved design reliability and multiple application of tube technology are stressed. Improve Traveling Wave Tube (TWT) reliability by instrumenting a TWT

  4. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
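
    The progressive hedging mechanics can be shown on a toy scenario problem with quadratic costs (this stand-in has closed-form subproblem solutions; the actual SCUC/SCED subproblems are mixed-integer programs):

      import numpy as np

      # Scenario data: scenario s has cost 0.5*a_s*(x - b_s)^2 with probability p_s
      a = np.array([1.0, 2.0, 4.0])
      b = np.array([10.0, 6.0, 3.0])     # scenario-wise ideal commitments
      p = np.array([0.3, 0.4, 0.3])
      rho = 1.0                          # PHA penalty parameter

      x = b.copy()                       # per-scenario first-stage decisions
      w = np.zeros(3)                    # PHA multipliers
      for it in range(100):
          xbar = np.dot(p, x)            # probability-weighted consensus
          w += rho * (x - xbar)          # dual update
          # Each augmented subproblem has a closed-form minimizer:
          # min_x 0.5*a*(x-b)^2 + w*x + (rho/2)*(x - xbar)^2
          x = (a * b - w + rho * xbar) / (a + rho)
      print(f"consensus first-stage decision: {np.dot(p, x):.4f}")

    The multipliers drive the per-scenario decisions toward a single implementable (non-anticipative) commitment, which is how PHA keeps the stochastic model tractable.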

  5. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

    Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.

  6. Inventory of Motive of Preference for Conventional Paper-and-Pencil Tests: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Eser, Mehmet Taha; Dogan, Nuri

    2017-01-01

    Purpose: The objective of this study is to develop the Inventory of Motive of Preference for Conventional Paper-And-Pencil Tests and to evaluate students' motives for preferring written tests, short-answer tests, true/false tests or multiple-choice tests. This will add a measurement tool to the literature with valid and reliable results to help…

  7. Environmental constraints shaping constituent order in emerging communication systems: Structural iconicity, interactive alignment and conventionalization.

    PubMed

    Christensen, Peer; Fusaroli, Riccardo; Tylén, Kristian

    2016-01-01

    Where does linguistic structure come from? Recent gesture elicitation studies have indicated that constituent order (corresponding to for instance subject-verb-object, or SVO in English) may be heavily influenced by human cognitive biases constraining gesture production and transmission. Here we explore the alternative hypothesis that syntactic patterns are motivated by multiple environmental and social-interactional constraints that are external to the cognitive domain. In three experiments, we systematically investigate different motivations for structure in the gestural communication of simple transitive events. The first experiment indicates that, if participants communicate about different types of events, manipulation events (e.g. someone throwing a cake) and construction events (e.g. someone baking a cake), they spontaneously and systematically produce different constituent orders, SOV and SVO respectively, thus following the principle of structural iconicity. The second experiment shows that participants' choice of constituent order is also reliably influenced by social-interactional forces of interactive alignment, that is, the tendency to re-use an interlocutor's previous choice of constituent order, thus potentially overriding affordances for iconicity. Lastly, the third experiment finds that the relative frequency distribution of referent event types motivates the stabilization and conventionalization of a single constituent order for the communication of different types of events. Together, our results demonstrate that constituent order in emerging gestural communication systems is shaped and stabilized in response to multiple external environmental and social factors: structural iconicity, interactive alignment and distributional frequency. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The Initial Development of Object Knowledge by a Learning Robot

    PubMed Central

    Modayil, Joseph; Kuipers, Benjamin

    2008-01-01

    We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188

  9. Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.

    2013-01-01

    NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler of mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation involves several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking; assembly; and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
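
    A back-of-the-envelope Monte Carlo version of such a campaign simulation (all probabilities, delays, and the window handling below are invented for illustration; the real model is far more detailed):

      import numpy as np

      rng = np.random.default_rng(5)

      def campaign_success(n_launches=4, spares=1, window_days=60,
                           pad_turnaround=10, p_launch_ok=0.98,
                           p_assembly_ok=0.99, n_sim=50_000):
          """Estimate P(all elements launched and assembled before the window closes)."""
          successes = 0
          for _ in range(n_sim):
              day, flown, attempts_left = 0.0, 0, n_launches + spares
              while flown < n_launches and attempts_left > 0:
                  day += pad_turnaround + rng.exponential(3.0)  # processing + weather
                  if day > window_days:
                      break                                     # window closed
                  attempts_left -= 1
                  if rng.random() < p_launch_ok and rng.random() < p_assembly_ok:
                      flown += 1
              successes += (flown == n_launches)
          return successes / n_sim

      print(f"no spare : {campaign_success(spares=0):.3f}")
      print(f"one spare: {campaign_success(spares=1):.3f}")

    Comparing the spares=0 and spares=1 runs shows the kind of reliability gain from spare launch capability that the abstract highlights.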

  10. The validity of three tests of temperament in guppies (Poecilia reticulata).

    PubMed

    Burns, James G

    2008-11-01

    Differences in temperament (consistent differences among individuals in behavior) can have important effects on fitness-related activities such as dispersal and competition. However, evolutionary ecologists have put limited effort into validating their tests of temperament. This article attempts to validate three standard tests of temperament in guppies: the open-field test, emergence test, and novel-object test. Through multiple reliability trials, and comparison of results between different types of test, this study establishes the confidence that can be placed in these temperament tests. The open-field test is shown to be a good test of boldness and exploratory behavior; the open-field test was reliable when tested in multiple ways. There were problems with the emergence test and novel-object test, which leads one to conclude that the protocols used in this study should not be considered valid tests for this species. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  11. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
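
    For reference, LOLP at a single snapshot can be computed exactly by convolving unit outage distributions into a capacity outage probability table; a minimal sketch with invented unit data:

      import numpy as np

      def lolp(unit_caps, unit_fors, load):
          """LOLP = P(available capacity < load) for one snapshot.

          unit_caps : capacity of each committed unit (MW)
          unit_fors : forced outage rate of each unit
          """
          total = sum(unit_caps)
          prob = np.zeros(total + 1)        # prob[c] = P(c MW on forced outage)
          prob[0] = 1.0
          for cap, q in zip(unit_caps, unit_fors):
              new = prob * (1.0 - q)        # unit available
              new[cap:] += prob[:-cap] * q  # unit out: shift by its capacity
              prob = new
          spare = total - load              # outages above this level shed load
          if spare < 0:
              return 1.0
          return float(prob[int(spare) + 1:].sum())

      units = [100, 100, 50, 50]            # MW
      fors = [0.02, 0.02, 0.04, 0.04]
      print(f"LOLP at 260 MW load: {lolp(units, fors, 260):.5f}")

    The cost of rebuilding this table inside a UC optimization is the intractability the abstract refers to, and what the simplified LOLP formulations try to avoid.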

  12. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
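
    The abstract's description maps onto an objective of the following generic form (a sketch in standard notation; the paper's exact symbols and norms may differ):

      \min_{W} \sum_{t=1}^{T} \| y_t - X_t w_t \|_2^2
               + \lambda_1 \sum_{r=1}^{R} \| W_r \|_2
               + \lambda_2 \sum_{t=1}^{T-1} \| w_{t+1} - w_t \|_2^2
               + \lambda_3 \sum_{t=1}^{T-1} \| X_{t+1} w_{t+1} - X_t w_t \|_2^2

    Here w_t is the regression weight vector learned from the time-point-t imaging data, W_r groups the weights of brain region r across all time-points (the group regularization term), the \lambda_2 term is the fused smoothness penalty on successive weight vectors, and the \lambda_3 term is the output smoothness penalty on successive model outputs.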

  13. Reliability of the Achilles tendon tap reflex evoked during stance using a pendulum hammer.

    PubMed

    Mildren, Robyn L; Zaback, Martin; Adkin, Allan L; Frank, James S; Bent, Leah R

    2016-01-01

    The tendon tap reflex (T-reflex) is often evoked in relaxed muscles to assess spinal reflex circuitry. Factors contributing to reflex excitability are modulated to accommodate specific postural demands. Thus, there is a need to be able to assess this reflex in a state where spinal reflex circuitry is engaged in maintaining posture. The aim of this study was to determine whether a pendulum hammer could provide controlled stimuli to the Achilles tendon and evoke reliable muscle responses during normal stance. A second aim was to establish appropriate stimulus parameters for experimental use. Fifteen healthy young adults stood on a forceplate while taps were applied to the Achilles tendon under conditions in which postural sway was constrained (by providing centre of pressure feedback) or unconstrained (no feedback) from an invariant release angle (50°). Twelve participants repeated this testing approximately six months later. Within one experimental session, tap force and T-reflex amplitude were found to be reliable regardless of whether postural sway was constrained (tap force ICC=0.982; T-reflex ICC=0.979) or unconstrained (tap force ICC=0.968; T-reflex ICC=0.964). T-reflex amplitude was also reliable between experimental sessions (constrained ICC=0.894; unconstrained ICC=0.890). When a T-reflex recruitment curve was constructed, optimal mid-range responses were observed using a 50° release angle. These results demonstrate that reliable Achilles T-reflexes can be evoked in standing participants without the need to constrain posture. The pendulum hammer provides a simple method to allow researchers and clinicians to gather information about reflex circuitry in a state where it is involved in postural control. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. MONSS: A multi-objective nonlinear simplex search approach

    NASA Astrophysics Data System (ADS)

    Zapotecas-Martínez, Saúl; Coello Coello, Carlos A.

    2016-01-01

    This article presents a novel methodology for dealing with continuous box-constrained multi-objective optimization problems (MOPs). The proposed algorithm adopts a nonlinear simplex search scheme in order to obtain multiple elements of the Pareto optimal set. The search is directed by a well-distributed set of weight vectors, each of which defines a scalarization problem that is solved by deforming a simplex according to the movements described by Nelder and Mead's method. Considering an MOP with n decision variables, the simplex is constructed using n+1 solutions which minimize different scalarization problems defined by n+1 neighbor weight vectors. All solutions found in the search are used to update a set of solutions considered to be the minima for each separate problem. In this way, the proposed algorithm collectively obtains multiple trade-offs among the different conflicting objectives, while maintaining a proper representation of the Pareto optimal front. In this article, it is shown that a well-designed strategy using just mathematical programming techniques can be competitive with respect to the state-of-the-art multi-objective evolutionary algorithms against which it was compared.
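
    A condensed sketch of the decomposition idea using SciPy's Nelder-Mead on a weighted Tchebycheff scalarization (the scalarization choice, the bi-objective test problem, and the omission of box constraints are simplifications for illustration):

      import numpy as np
      from scipy.optimize import minimize

      def f(x):
          """Bi-objective test problem: minimize both components."""
          return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 2) ** 2 + x[1] ** 2])

      z_star = np.zeros(2)                   # ideal point (assumed known here)

      def tchebycheff(x, w):
          return np.max(w * np.abs(f(x) - z_star))

      # One scalarized subproblem per weight vector, each solved by Nelder-Mead
      weights = [np.array([a, 1.0 - a]) for a in np.linspace(0.05, 0.95, 19)]
      front = []
      for w in weights:
          res = minimize(tchebycheff, x0=np.array([1.0, 1.0]), args=(w,),
                         method="Nelder-Mead")
          front.append(f(res.x))
      for pt in front[::6]:
          print(np.round(pt, 3))

    Each weight vector contributes one point toward the Pareto front approximation, mirroring how the method aggregates the minima of its scalarized subproblems.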

  15. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward error correction, interleaving, and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to automatic repeat request (ARQ) methods to implement this transport service are proposed, which we term "leaky ARQ". For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited for traffic capacity improvement using scheduling of retransmissions.
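
    The essence of leaky ARQ in a few lines (a toy model, not the paper's protocol: each copy is corrupt with a fixed probability, and every arriving copy is passed up to the application immediately):

      import random

      random.seed(6)

      def leaky_arq(p_corrupt=0.3, max_tx=10):
          """Deliver the first arriving copy at once (possibly corrupt), then
          keep retransmitting until an uncorrupted copy arrives."""
          deliveries = []
          for attempt in range(1, max_tx + 1):
              corrupt = random.random() < p_corrupt
              deliveries.append(("corrupt" if corrupt else "clean", attempt))
              if not corrupt:
                  break        # reliable version has arrived; stop
          return deliveries

      print(leaky_arq())       # e.g. [('corrupt', 1), ('clean', 2)]

    The first entry bounds the perceptual latency; the final clean entry marks when asymptotic reliability is reached.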

  16. Stock management in hospital pharmacy using chance-constrained model predictive control.

    PubMed

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed with a view to its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
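
    A greatly simplified, single-drug stand-in for the chance-constrained step (the real controller optimizes over many drugs and constraints; here the stock-out chance constraint is folded in through a Gaussian demand quantile):

      import numpy as np
      from scipy.stats import norm

      def order_quantity(stock, horizon, mu_demand, sigma_demand,
                         service_level=0.95, max_order=500):
          """One receding-horizon step: order enough that the probability of
          a stock-out over the horizon is at most 1 - service_level."""
          # P(stock + order >= total demand) >= service_level, with total
          # demand ~ N(horizon*mu, sqrt(horizon)*sigma)
          needed = (horizon * mu_demand
                    + norm.ppf(service_level) * np.sqrt(horizon) * sigma_demand)
          return float(np.clip(needed - stock, 0.0, max_order))

      stock = 120.0
      for day in range(5):     # re-plan daily, as MPC does
          order = order_quantity(stock, horizon=7, mu_demand=30, sigma_demand=8)
          demand = np.random.default_rng(day).normal(30, 8)
          stock = stock + order - demand
          print(f"day {day}: order {order:6.1f}, stock {stock:6.1f}")

    Tightening service_level is the conservativeness-versus-efficiency dial the abstract mentions: higher confidence means larger safety stock.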

  17. Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B.

    2003-12-01

    Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems is the hydraulic conductivity of the aquifer, on which predictions of contaminant flow and transport depend. For a remediation solution to be reliable in practice, it must be robust to the potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework to obtain solutions that are both reliable and Pareto optimal. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. However, with multiple objectives the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers: a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.

  18. Developing inventory and monitoring programs based on multiple objectives

    NASA Astrophysics Data System (ADS)

    Schmoldt, Daniel L.; Peterson, David L.; Silsbee, David G.

    1994-09-01

    Resource inventory and monitoring (I&M) programs in national parks combine multiple objectives in order to create a plan of action over a finite time horizon. Because all program activities are constrained by time and money, it is critical to plan I&M activities that make the best use of available agency resources. However, multiple objectives complicate a relatively straightforward allocation process. The analytic hierarchy process (AHP) offers a structure for multiobjective decision making so that decision-makers’ preferences can be formally incorporated in seeking potential solutions. Within the AHP, inventory and monitoring program objectives and decision criteria are organized into a hierarchy. Pairwise comparisons among decision elements at any level of the hierarchy provide a ratio scale ranking of those elements. The resulting priority values for all projects are used as each project’s contribution to the value of an overall I&M program. These priorities, along with budget and personnel constraints, are formulated as a zero/one integer programming problem that can be solved to select those projects that produce the best program. An extensive example illustrates how this approach is being applied to I&M projects in national parks in the Pacific Northwest region of the United States. The proposed planning process provides an analytical framework for multicriteria decisionmaking that is rational, consistent, explicit, and defensible.
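
    The selection step can be sketched as a zero/one knapsack over AHP-derived priorities (project names, priority values, and costs below are invented; brute force suffices at this scale, while larger programs would use an integer programming solver):

      from itertools import combinations

      # AHP-derived priority (value) and cost for each candidate I&M project
      projects = {"vegetation map": (0.28, 90), "stream gauging": (0.22, 60),
                  "bird survey": (0.18, 40), "glacier monitoring": (0.20, 120),
                  "visitor impacts": (0.12, 30)}
      budget = 200

      # Zero/one selection maximizing total priority under the budget constraint
      best_value, best_set = 0.0, ()
      names = list(projects)
      for r in range(len(names) + 1):
          for subset in combinations(names, r):
              value = sum(projects[p][0] for p in subset)
              cost = sum(projects[p][1] for p in subset)
              if cost <= budget and value > best_value:
                  best_value, best_set = value, subset
      print(best_set, round(best_value, 2))

    The AHP supplies the pairwise-comparison-derived priorities; the zero/one program then turns them into a defensible project portfolio under the budget constraint.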

  19. Fast grasping of unknown objects using cylinder searching on a single point cloud

    NASA Astrophysics Data System (ADS)

    Lei, Qujiang; Wisse, Martijn

    2017-03-01

    Grasping of unknown objects with neither appearance data nor object models given in advance is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to post-process the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp searching, the surface normals of the target object are then used to constrain the synthesis of the cylinder grasp candidates. Operability analysis is then used to select all executable grasp candidates, followed by force balance optimization to choose the most reliable grasp as the final grasp execution. To verify the effectiveness of the algorithm, simulations on a Universal Robots UR5 arm and an under-actuated Lacquey Fetch gripper are used to examine its performance, and successful results are obtained.

  20. What limits tool use in nonhuman primates? Insights from tufted capuchin monkeys (Sapajus spp.) and chimpanzees (Pan troglodytes) aligning three-dimensional objects to a surface

    PubMed Central

    la Cour, L. T.; Stone, B. W.; Hopkins, W.; Menzel, C.; Fragaszy, D.

    2013-01-01

    Perceptuomotor functions that support using hand tools can be examined in other manipulation tasks, such as alignment of objects to surfaces. We examined tufted capuchin monkeys’ and chimpanzees’ performance at aligning objects to surfaces while managing one or two spatial relations to do so. We presented 6 subjects of each species with a single stick to place into a groove, two sticks of equal length to place into two grooves, or two sticks joined as a T to place into a T-shaped groove. Tufted capuchins and chimpanzees performed equivalently on these tasks, aligning the straight stick to within 22.5° of parallel to the groove in approximately half of their attempts to place it, and taking more attempts to place the T stick than two straight sticks. The findings provide strong evidence that tufted capuchins and chimpanzees do not reliably align even one prominent axial feature of an object to a surface, and that managing two concurrent allocentric spatial relations in an alignment problem is significantly more challenging to them than managing two sequential relations. In contrast, humans from two years of age display very different perceptuomotor abilities in a similar task: they align sticks to a groove reliably on each attempt, and they readily manage two allocentric spatial relations concurrently. Limitations in aligning objects and in managing two or more relations at a time significantly constrain how nonhuman primates can use hand tools. PMID:23820935

  1. Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Bai, P.

    2017-12-01

    Hydrological model parameters are commonly calibrated by observed streamflow data. This calibration strategy is questioned when the modeled hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective calibration scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations without significant deterioration in the accuracy of streamflow simulations than the single-objective calibration. In addition, the improvements of TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in the parameter estimation to improve the performances of hydrological models.
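
    One common way to realize such a calibration scheme is to combine the two criteria into a weighted objective over Nash-Sutcliffe efficiencies (a sketch only; the paper may use a different efficiency measure or a Pareto-based scheme, and the toy model below is invented):

      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency (1 is perfect)."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def calibration_objective(params, model, q_obs, twsc_obs, w=0.5):
          """Weighted compromise between streamflow and GRACE TWSC skill;
          minimize this over the model parameters."""
          q_sim, twsc_sim = model(params)
          return (w * (1.0 - nse(q_sim, q_obs))
                  + (1.0 - w) * (1.0 - nse(twsc_sim, twsc_obs)))

      # Toy demonstration with a fake two-output model
      rng = np.random.default_rng(7)
      q_obs, twsc_obs = rng.random(120) * 50, rng.standard_normal(120) * 5
      toy_model = lambda p: (q_obs * p[0], twsc_obs * p[1])
      print(calibration_objective(np.array([1.02, 0.95]), toy_model, q_obs, twsc_obs))

    Setting w = 1 recovers the traditional streamflow-only calibration the abstract compares against.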

  2. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we consider the problem of constrained redundancy allocation for a series system with interval-valued component reliabilities. To maximize overall system reliability under limited resource constraints, the problem is formulated, via a penalty function technique, as an unconstrained integer programming problem with interval coefficients, and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation, and elitism. As a special case, the corresponding problem in which the lower and upper bounds of the interval-valued component reliabilities coincide has also been solved. The model is illustrated with numerical examples, and the results of the series redundancy allocation problem with fixed component reliabilities are compared with existing results in the literature. Finally, sensitivity analyses are shown graphically to study the stability of the developed GA with respect to the different GA parameters.
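
    A bare-bones rendition of the ingredients named in the abstract: interval system reliability for a series system with redundancy, a penalty for the resource constraint, and an integer GA with tournament selection, uniform crossover, uniform mutation, and elitism (all data are invented; intervals are ranked by their centre for simplicity, which is one of several possible interval orderings):

      import numpy as np

      rng = np.random.default_rng(8)

      r_lo = np.array([0.70, 0.75, 0.80])   # interval-valued component reliabilities
      r_hi = np.array([0.80, 0.85, 0.90])
      cost = np.array([3.0, 4.0, 5.0])
      budget = 40.0

      def fitness(x):
          """Interval system reliability of a series system with redundancy x,
          penalised for violating the cost constraint."""
          R_lo = np.prod(1.0 - (1.0 - r_lo) ** x)
          R_hi = np.prod(1.0 - (1.0 - r_hi) ** x)
          penalty = max(0.0, np.dot(cost, x) - budget)
          return 0.5 * (R_lo + R_hi) - penalty   # rank intervals by their centre

      pop = rng.integers(1, 6, (30, 3))         # redundancy levels 1..5
      for _ in range(200):
          f = np.array([fitness(ind) for ind in pop])
          parents = pop[[max(rng.integers(0, 30, 2), key=lambda i: f[i])
                         for _ in range(30)]]   # binary tournament selection
          mask = rng.random((30, 3)) < 0.5      # uniform crossover
          children = np.where(mask, parents, np.roll(parents, 1, axis=0))
          mutate = rng.random((30, 3)) < 0.1    # uniform mutation
          children[mutate] = rng.integers(1, 6, mutate.sum())
          children[0] = pop[f.argmax()]         # elitism
          pop = children
      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print(best, fitness(best))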

  3. Numerical System Solver Developed for the National Cycle Program

    NASA Technical Reports Server (NTRS)

    Binder, Michael P.

    1999-01-01

    As part of the National Cycle Program (NCP), a powerful new numerical solver has been developed to support the simulation of aeropropulsion systems. This software uses a hierarchical object-oriented design. It can provide steady-state and time-dependent solutions to nonlinear and even discontinuous problems typically encountered when aircraft and spacecraft propulsion systems are simulated. It also can handle constrained solutions, in which one or more factors may limit the behavior of the engine system. Time-dependent simulation capabilities include adaptive time-stepping and synchronization with digital control elements. The NCP solver is playing an important role in making the NCP a flexible, powerful, and reliable simulation package.

  4. Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.

    2016-01-01

    The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty, a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.

  5. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The input delay process is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
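
    As a toy illustration of the delay model assumed above, the sketch below samples a discrete-time finite-state Markov chain governing the random input delay; the three delay states and the transition matrix are invented for the example, not taken from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical transition matrix for the input-delay chain; the three
    # states stand for a delay of 0, 1 or 2 sampling periods (rows sum to 1).
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.3, 0.6]])

    def simulate_delay_chain(P, steps, state=0):
        """Sample the finite-state Markov chain that governs the input delay,
        as in the MJLS reformulation sketched in the abstract."""
        path = [state]
        for _ in range(steps):
            state = rng.choice(len(P), p=P[state])
            path.append(state)
        return path
    ```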

  6. Reliability of a single objective measure in assessing sleepiness.

    PubMed

    Sunwoo, Bernie Y; Jackson, Nicholas; Maislin, Greg; Gurubhagavatula, Indira; George, Charles F; Pack, Allan I

    2012-01-01

    To evaluate the reliability of single objective tests in assessing sleepiness. Subjects who completed polysomnography underwent a 4-nap multiple sleep latency test (MSLT) the following day. Prior to each nap opportunity on the MSLT, subjects performed the psychomotor vigilance test (PVT) and divided attention driving task (DADT). Results of single versus multiple test administrations were compared using the intraclass correlation coefficient (ICC) and adjusted for test administration order effects to explore time of day effects. Measures were explored as continuous and binary (i.e., impaired or not impaired). Community-based sample evaluated at a tertiary, university-based sleep center. 372 adult commercial vehicle operators oversampled for increased obstructive sleep apnea risk. N/A. As continuous measures, ICCs were as follows: MSLT 0.45, PVT median response time 0.69, PVT number of lapses 0.51, 10-min DADT tracking error 0.87, 20-min DADT tracking error 0.90. Based on binary outcomes, ICCs were: MSLT 0.63, PVT number of lapses 0.85, 10-min DADT 0.95, 20-min DADT 0.96. Statistically significant time of day effects were seen in both the MSLT and PVT but not the DADT. Correlation between the ESS and the different objective tests was strongest for the MSLT, range [-0.270 to -0.195], and persisted across all time points. Single DADT and PVT administrations are reliable measures of sleepiness. A single MSLT administration can reasonably discriminate individuals with MSL < 8 minutes. These results support the use of a single administration of some objective tests of sleepiness when performed under controlled conditions in routine clinical care.
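
    For readers wanting to reproduce this kind of single-rating reliability analysis, here is a minimal one-way random-effects ICC, i.e., ICC(1,1), computed from repeated administrations; the layout of the score matrix is an assumption.

    ```python
    import numpy as np

    def icc_oneway(scores):
        """ICC(1,1) from an (n subjects x k administrations) score matrix, the
        single-rating, one-way random-effects reliability reported above."""
        x = np.asarray(scores, float)
        n, k = x.shape
        subj_means = x.mean(axis=1)
        msb = k * np.sum((subj_means - x.mean()) ** 2) / (n - 1)      # between-subject
        msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject
        return (msb - msw) / (msb + (k - 1) * msw)
    ```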

  7. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained by this kind of integral signal; they can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate such models. One common approach is to combine these multiple objectives (MO) into a single objective (SO) function, allowing the use of an SO optimization algorithm. Another strategy is to consider the different objectives separately and apply an MO Pareto optimization algorithm. In this study, two major research questions are addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e., transfer coefficients. The model uses grid cells as the primary hydrologic unit and accounts for processes such as snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations use the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm with the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the PA-DDS Pareto fronts. The MO calibrations are hence preferable, since they supply multiple equivalent solutions from which the user can choose according to specific needs. Sequential single-objective parameter screening was employed prior to the calibrations, reducing the number of parameters by at least 50% in the different catchments and for the different single objectives. The screened single-objective calibrations converged faster and screening is hence beneficial when using DDS on single objectives. The parameter screening technique is then generalized to multiple objectives and applied before calibration with the PA-DDS algorithm. Two alternatives of this MO screening are tested. The comparison of calibration results using all parameters versus only screened parameters shows, for both alternatives, that the PA-DDS algorithm does not profit in terms of trade-off size or the number of function evaluations required to reach converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses; this automatic reduction may differ for other search algorithms. It is therefore hypothesized that whether prior screening benefits parameter estimation depends on the chosen optimization algorithm.

  8. Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Ross, Brian; Imada, Janine

    Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
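
    A minimal sketch of the kind of Pareto filter that could be applied to the saved solutions; the (candidate, objectives) representation and the minimisation convention are assumptions for illustration.

    ```python
    def nondominated(solutions):
        """Filter (candidate, objectives) pairs, keeping those not dominated on
        any objective; all objectives are to be minimised here."""
        front = []
        for cand, obj in solutions:
            dominated = any(
                all(o2 <= o1 for o1, o2 in zip(obj, other)) and
                any(o2 < o1 for o1, o2 in zip(obj, other))
                for _, other in solutions)
            if not dominated:
                front.append((cand, obj))
        return front
    ```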

  9. Constructing objective tests

    NASA Astrophysics Data System (ADS)

    Aubrecht, Gordon J.; Aubrecht, Judith D.

    1983-07-01

    True-false or multiple-choice tests can be useful instruments for evaluating student progress. We examine strategies for planning objective tests which serve to test the material covered in science (physics) courses. We also examine strategies for writing questions for tests within a test blueprint. The statistical basis for judging the quality of test items is discussed. Reliability, difficulty, and discrimination indices are defined and examples presented. Our recommendations are rather easily put into practice.
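
    As a concrete example of the item statistics mentioned above, the sketch below computes difficulty (proportion correct) and a discrimination index from a 0/1 response matrix; the upper/lower 27% grouping is one common convention, not necessarily the authors'.

    ```python
    import numpy as np

    def item_statistics(responses):
        """responses: (students x items) matrix of 0/1 scores. Difficulty is the
        proportion answering correctly; discrimination contrasts the top and
        bottom 27% of students ranked by total score."""
        r = np.asarray(responses)
        order = np.argsort(r.sum(axis=1))
        g = max(1, int(round(0.27 * len(order))))
        low, high = r[order[:g]], r[order[-g:]]
        difficulty = r.mean(axis=0)
        discrimination = high.mean(axis=0) - low.mean(axis=0)
        return difficulty, discrimination
    ```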

  10. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on galaxy number density, the two-point correlation function and the galaxy group multiplicity function, we constrain the HOD parameters of mock observations generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to those we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
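
    A bare-bones ABC rejection loop conveying the idea behind the method (the population Monte Carlo variant used in the paper adapts the tolerance over successive populations); the `simulate`, `prior_sample`, and `distance` callables are placeholders for the HOD forward model, prior, and summary-statistic metric.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def abc_rejection(observed, simulate, prior_sample, distance, eps, n_keep):
        """Plain ABC rejection: keep parameter draws whose forward-simulated
        summary statistics land within `eps` of the observed summaries."""
        accepted = []
        while len(accepted) < n_keep:
            theta = prior_sample(rng)                  # draw from the prior
            if distance(simulate(theta, rng), observed) < eps:
                accepted.append(theta)
        return np.array(accepted)
    ```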

  11. Quality Evaluation of Raw Moutan Cortex Using the AHP and Gray Correlation-TOPSIS Method

    PubMed Central

    Zhou, Sujuan; Liu, Bo; Meng, Jiang

    2017-01-01

    Background: Raw Moutan cortex (RMC) is an important Chinese herbal medicine. Comprehensive and objective quality evaluation of Chinese herbal medicines has been one of the most important issues in modern herb development. Objective: To evaluate and compare the quality of RMC using the weighted gray correlation-Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method. Materials and Methods: The percentage composition of gallic acid, catechin, oxypaeoniflorin, paeoniflorin, quercetin, benzoylpaeoniflorin, and paeonol in different batches of RMC was determined, and MATLAB programming was then used to construct the gray correlation-TOPSIS assessment model for quality evaluation of RMC. Results: The quality evaluation results of model evaluation and objective evaluation were consistent, reliable, and stable. Conclusion: The gray correlation-TOPSIS model can be well applied to the quality evaluation of traditional Chinese medicine with multiple components and has broad prospects in application. SUMMARY The experiment constructs a model to evaluate the quality of RMC using the weighted gray correlation-TOPSIS method. Results show the model is reliable and provides a feasible way of evaluating the quality of traditional Chinese medicine with multiple components. PMID:28839384
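
    A minimal TOPSIS ranking sketch of the kind underlying the model above, with the AHP/gray-correlation weighting step abstracted into a given weight vector and all marker concentrations treated as benefit criteria; both simplifications are assumptions for illustration.

    ```python
    import numpy as np

    def topsis(matrix, weights):
        """Score batches from a (batches x markers) data matrix: vector-normalise
        each marker column, apply the weights, then rank by relative closeness
        to the ideal and anti-ideal batches."""
        m = np.asarray(matrix, float)
        v = np.asarray(weights, float) * m / np.linalg.norm(m, axis=0)
        ideal, anti = v.max(axis=0), v.min(axis=0)
        d_pos = np.linalg.norm(v - ideal, axis=1)      # distance to ideal
        d_neg = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
        return d_neg / (d_pos + d_neg)                 # higher = better quality
    ```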

  12. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  13. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  14. The Relationship of Expert-System Scored Constrained Free-Response Items to Multiple-Choice and Open-Ended Items.

    ERIC Educational Resources Information Center

    Bennett, Randy Elliot; And Others

    1990-01-01

    The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)

  15. Constrained non-linear multi-objective optimisation of preventive maintenance scheduling for offshore wind farms

    NASA Astrophysics Data System (ADS)

    Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian

    2018-05-01

    Offshore wind farms are an emerging source of renewable energy that has been shown to have tremendous potential in recent years. In this blooming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the nondominated sorting genetic algorithm II, designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules can be obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.

  16. Multiple-foil microabrasion package (A0023)

    NASA Technical Reports Server (NTRS)

    Mcdonnell, J. A. M.; Ashworth, D. G.; Carey, W. C.; Flavill, R. P.; Jennison, R. C.

    1984-01-01

    The specific scientific objectives of this experiment are to measure the spatial distribution, size, velocity, radiance, and composition of microparticles in near-Earth space. The technological objectives are to measure erosion rates resulting from microparticle impacts and to evaluate thin-foil meteor 'bumpers'. The combinations of sensitivity and reliability in this experiment will provide up to 1000 impacts per month for laboratory analysis and will extend current sensitivity limits by 5 orders of magnitude in mass.

  17. Development of a Rubric to Improve Critical Thinking

    ERIC Educational Resources Information Center

    Hildenbrand, Kasee J.; Schultz, Judy A.

    2012-01-01

    Context: Health care professionals, including athletic trainers are confronted daily with multiple complex problems that require critical thinking. Objective: This research attempts to develop a reliable process to assess students' critical thinking in a variety of athletic training and kinesiology courses. Design: Our first step was to create a…

  18. Pareto-Optimal Estimates of California Precipitation Change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  19. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  20. Satellite magnetic modeling of north African hot spots

    NASA Technical Reports Server (NTRS)

    Phillips, R. J.; Brown, C. R.

    1985-01-01

    The primary objective of the MAGSAT mission was to measure the intensity and direction of magnetization of the Earth's crust. A significant effort was directed to the large crustal anomalies first delineated by the POGO mission. The MAGSAT data are capable of spatial resolution of the crustal field to 250 km wavelength, with reliability limits of less than 1 nT in the mean. The difficulty in dealing with all but the most robust of the MAGSAT anomalies is that often there is nothing more than the magnetic fields themselves to constrain geophysical models of the interior, and no independent means of assessing the quality of the crustal anomaly data in interpreting the subsurface is available.

  1. Constraining uncertainties in water supply reliability in a tropical data scarce basin

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte

    2015-04-01

    Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability are often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data-scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela, with an area of approximately 2100 km², was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r > 98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
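
    A minimal sketch of the extended Budyko approach under parameter uncertainty: runoff from Fu's form of the Budyko curve, with the shape parameter sampled from a plausible range and reliability read off as the fraction of samples in which supply meets demand. The parameter range and the Monte Carlo set-up are illustrative assumptions, not the study's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fu_runoff(P, PET, w):
        """Fu's form of the Budyko curve: actual evaporation from precipitation
        P and potential evaporation PET; runoff is the water-balance residual."""
        E = P * (1.0 + PET / P - (1.0 + (PET / P) ** w) ** (1.0 / w))
        return P - E

    def supply_reliability(P, PET, demand, w_lo=1.5, w_hi=3.5, n=2000):
        """Fraction of parameter samples in which monthly runoff meets demand.
        The uniform range for Fu's shape parameter w is illustrative only."""
        P, PET = np.asarray(P, float), np.asarray(PET, float)
        w = rng.uniform(w_lo, w_hi, size=(n, 1))        # parameter uncertainty
        Q = fu_runoff(P[None, :], PET[None, :], w)      # (n samples x months)
        return (Q >= demand).mean(axis=0)               # reliability per month
    ```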

  2. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not extrapolate at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300

  3. Optimizing multiple reliable forward contracts for reservoir allocation using multitime scale streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward

    2017-03-01

    Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multitime scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability, and the pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea is useful to develop a real-world perspective and needs a suitable institutional environment.
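
    The core sizing idea can be conveyed in a few lines: a contract at a given reliability level is bounded by the flow level that the forecast ensemble exceeds with that frequency. This quantile rule is a simplification of the full optimization model described above, not the model itself.

    ```python
    import numpy as np

    def contract_size(ensemble, reliability):
        """Size a forward contract from an ensemble streamflow forecast so that
        it can be met with the specified reliability: take the flow level
        exceeded by `reliability` of the ensemble members."""
        return np.quantile(np.asarray(ensemble, float), 1.0 - reliability)

    # e.g. a 90%-reliable contract from a 50-member monthly forecast:
    # contract_size(forecast_members, 0.90)
    ```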

  4. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Brian James

    There is a scientific need to obtain new data to constrain and refine next-generation multi-phase equations of state (EOS) for metals. Experiments are needed to locate phase boundaries, determine transition kinetic times, and obtain EOS and Hugoniot data for relevant phases. The objective of the current work was to examine the multiphase properties of cerium, including the dynamic melt boundary and the low-pressure solid-solid phase transition through the critical point. These objectives were addressed by performing plate impact experiments that used multiple experimental configurations, including front-surface impact experiments to directly measure transition kinetics, multislug experiments that used the overtake method to measure sound speeds at pressure, and preheat experiments to map out phase boundaries. Preliminary data and analysis obtained for cerium will be presented.

  6. Multi-physics optimization of three-dimensional microvascular polymeric components

    NASA Astrophysics Data System (ADS)

    Aragón, Alejandro M.; Saksena, Rajat; Kozola, Brian D.; Geubelle, Philippe H.; Christensen, Kenneth T.; White, Scott R.

    2013-01-01

    This work discusses the computational design of microvascular polymeric materials, which aim at mimicking the behavior found in some living organisms that contain a vascular system. The optimization of the topology of the embedded three-dimensional microvascular network is carried out by coupling a multi-objective constrained genetic algorithm with a finite-element based physics solver, the latter validated through experiments. The optimization is carried out on multiple conflicting objective functions, namely the void volume fraction left by the network, the energy required to drive the fluid through the network and the maximum temperature when the material is subjected to thermal loads. The methodology presented in this work results in a viable alternative for the multi-physics optimization of these materials for active-cooling applications.

  7. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single-objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
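
    A minimal sketch of the simulated-annealing acceptance rule applied to one weighted-aggregation subproblem; a weighted sum is used here for simplicity, though decomposition methods also commonly use Tchebycheff aggregation, so this is an illustrative choice rather than the paper's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sa_accept(f_new, f_old, weights, temperature):
        """Acceptance rule for one subproblem (all objectives minimised):
        improvements are always accepted; deteriorations are accepted with a
        Boltzmann probability that shrinks as the temperature is lowered."""
        g_new, g_old = np.dot(weights, f_new), np.dot(weights, f_old)
        if g_new <= g_old:
            return True
        return rng.random() < np.exp((g_old - g_new) / temperature)
    ```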

  8. New nonlinear control algorithms for multiple robot arms

    NASA Technical Reports Server (NTRS)

    Tarn, T. J.; Bejczy, A. K.; Yun, X.

    1988-01-01

    Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and a simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output-decoupled system. Linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that choosing a general output equation makes it possible to simultaneously superimpose the position and velocity error feedback with the force-torque error feedback in the task space.

  9. Three-dimensional object recognition based on planar images

    NASA Astrophysics Data System (ADS)

    Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.

    1993-01-01

    This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation, based on a test of 600 recognition cycles, the system demonstrated an accuracy of above 80% with recognition times well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be clinically controlled as in any industrial robotic vision system.

  10. EXTINCTION LAWS TOWARD STELLAR SOURCES WITHIN A DUSTY CIRCUMSTELLAR MEDIUM AND IMPLICATIONS FOR TYPE IA SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagao, Takashi; Maeda, Keiichi; Nozawa, Takaya, E-mail: nagao@kusastro.kyoto-u.ac.jp

    Many astronomical objects are surrounded by dusty environments. In such dusty objects, multiple scattering of photons by circumstellar (CS) dust grains can effectively alter extinction properties. In this paper, we systematically investigate the effects of multiple scattering on extinction laws for steady-emission sources surrounded by a dusty CS medium, using a radiation transfer simulation based on the Monte Carlo technique. In particular, we focus on whether and how the extinction properties are affected by the properties of CS dust grains, by adopting various dust grain models. We confirm that the behavior of the (effective) extinction laws is highly dependent on the properties of CS grains, especially the total-to-selective extinction ratio R_V, which characterizes the extinction law and can be either increased or decreased compared with the case without multiple scattering. We find that the criterion for this behavior is given by a ratio of albedos in the B and V bands. We also find that either small silicate grains or polycyclic aromatic hydrocarbons are necessary for realizing a low value of R_V, as often measured toward SNe Ia, if multiple scattering by CS dust is responsible for their non-standard extinction laws. Using the derived relations between the properties of dust grains and the resulting effective extinction laws, we propose that the extinction laws toward dusty objects could be used to constrain the properties of dust grains in CS environments.

  11. Evaluation of Ages in the Lunar Highlands with Implications for the Evolution of the Moon

    NASA Astrophysics Data System (ADS)

    Borg, L. E.; Gaffney, A. M.; Carlson, R. W.

    2012-12-01

    The lunar highlands are composed of rocks from the ferroan anorthosite (FAN) and Mg-suites. These samples have been extensively studied because they record most of the major events associated with the formation and evolution of the Earth's Moon. Despite their potential to constrain the timing of these events, chronologic investigations are often ambiguous; in most cases because absolute ages and/or initial isotopic compositions are inconsistent with stratigraphic and petrologic relationships of various rock suites inferred from mineralogical and geochemical studies. The problem is exacerbated by the fact that most samples are difficult to date due to their small size and nearly monomineralic nature, as well as isotopic disturbances associated with impacts. Here several criteria are used to assess the reliability of lunar ages, including: (1) concordance between multiple chronometers, (2) linearity of individual isochrons, (3) resistance of the chronometers to disruption by impact or contamination, (4) consistency between initial isotopic compositions and the petrogenesis of samples, and (5) reasonableness of the elemental concentrations of mineral fractions. If only those samples that meet 4 out of 5 of these criteria are used to constrain lunar chronology, many of the apparent conflicts between chronometry and petrology disappear. For example, this analysis demonstrates that the most ancient ages reported for lunar samples are some of the least reliable. The oldest ages determined on both FAN and Mg-suite highland rocks with confidence are in fact ~4.35 Ga. This age is concordant with 142Nd mare source formation ages and a peak in zircon ages, suggesting it represents a major event at ~4.35 Ga. In contrast, several apparently reliable KREEP model ages are older at ~4.48 Ga. If these older model ages are correct, they may represent the solidification age of the Moon, whereas the 4.35 Ga event could reflect secondary magmatism and cumulate re-equilibration associated with density overturn of primordial magma ocean cumulates.

  12. Pursuing the Qualities of a "Good" Test

    ERIC Educational Resources Information Center

    Coniam, David

    2014-01-01

    This article examines the issue of the quality of teacher-produced tests, limiting itself in the current context to objective, multiple-choice tests. The article investigates a short, two-part 20-item English language test. After a brief overview of the key test qualities of reliability and validity, the article examines the two subtests in terms…

  13. Robust X-ray angular correlations for the study of meso-structures

    DOE PAGES

    Lhermitte, Julien R.; Tian, Cheng; Stein, Aaron; ...

    2017-05-08

    As self-assembling nanomaterials become more sophisticated, it is becoming increasingly important to measure the structural order of finite-sized assemblies of nano-objects. These mesoscale clusters represent an acute challenge to conventional structural probes, owing to the range of implicated size scales (10 nm to several micrometres), the weak scattering signal and the dynamic nature of meso-clusters in native solution environments. The high X-ray flux and coherence of modern synchrotrons present an opportunity to extract structural information from these challenging systems, but conventional ensemble X-ray scattering averages out crucial information about local particle configurations. Conversely, a single meso-cluster scatters too weakly to recover the full diffraction pattern. Using X-ray angular cross-correlation analysis, it is possible to combine multiple noisy measurements to obtain robust structural information. This paper explores the key theoretical limits and experimental challenges that constrain the application of these methods to probing structural order in real nanomaterials. A metric is presented to quantify the signal-to-noise ratio of angular correlations, and it is used to identify several experimental artifacts that arise. In particular, it is found that background scattering, data masking and inter-cluster interference profoundly affect the quality of correlation analyses. A robust workflow is demonstrated for mitigating these effects and extracting reliable angular correlations from realistic experimental data.

  14. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  15. An experimental investigation of fault tolerant software structures in an avionics application

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Eckhardt, Dave E., Jr.

    1989-01-01

    The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array have been developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. Results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible since higher reliability can be obtained using an acceptance check with a modest reliability.
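
    A minimal sketch of the voting idea, majority-voting the decisions of diverse failure detection and isolation algorithms before any further processing; the tie-breaking policy is an assumption for illustration, not the experiment's actual voter.

    ```python
    from collections import Counter

    def majority_vote(decisions):
        """Consensus over the outputs of diverse failure detection and isolation
        algorithms; without a strict majority, fall back to the first algorithm
        (one simple, arbitrary tie-break policy)."""
        winner, count = Counter(decisions).most_common(1)[0]
        return winner if count > len(decisions) // 2 else decisions[0]

    # majority_vote(["sensor3 failed", "sensor3 failed", "no failure"])
    # -> "sensor3 failed"
    ```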

  16. Do needs for security and certainty predict cultural and economic conservatism? A cross-national analysis.

    PubMed

    Malka, Ariel; Soto, Christopher J; Inzlicht, Michael; Lelkes, Yphtach

    2014-06-01

    We examine whether individual differences in needs for security and certainty predict conservative (vs. liberal) position on both cultural and economic political issues and whether these effects are conditional on nation-level characteristics and individual-level political engagement. Analyses with cross-national data from 51 nations reveal that valuing conformity, security, and tradition over self-direction and stimulation (a) predicts ideological self-placement on the political right, but only among people high in political engagement and within relatively developed nations, ideologically constrained nations, and non-Eastern European nations, (b) reliably predicts right-wing cultural attitudes and does so more strongly within developed and ideologically constrained nations, and (c) on average predicts left-wing economic attitudes but does so more weakly among people high in political engagement, within ideologically constrained nations, and within non-Eastern European nations. These findings challenge the prevailing view that needs for security and certainty organically yield a broad right-wing ideology and that exposure to political discourse better equips people to select the broad ideology that is most need satisfying. Rather, these findings suggest that needs for security and certainty generally yield culturally conservative but economically left-wing preferences and that exposure to political discourse generally weakens the latter relation. We consider implications for the interactive influence of personality characteristics and social context on political attitudes and discuss the importance of assessing multiple attitude domains, assessing political engagement, and considering national characteristics when studying the psychological origins of political attitudes.

  17. Test-Retest Reliability of the Multiple Sleep Latency Test in Narcolepsy without Cataplexy and Idiopathic Hypersomnia

    PubMed Central

    Trotti, Lynn Marie; Staab, Beth A.; Rye, David B.

    2013-01-01

    Study Objectives: Differentiation of narcolepsy without cataplexy from idiopathic hypersomnia relies entirely upon the multiple sleep latency test (MSLT). However, the test-retest reliability for these central nervous system hypersomnias has never been determined. Methods: Patients with narcolepsy without cataplexy, idiopathic hypersomnia, and physiologic hypersomnia who underwent two diagnostic multiple sleep latency tests were identified retrospectively. Correlations between the mean sleep latencies on the two studies were evaluated, and we probed for demographic and clinical features associated with reproducibility versus change in diagnosis. Results: Thirty-six patients (58% women, mean age 34 years) were included. Inter-test interval was 4.2 ± 3.8 years (range 2.5 months to 16.9 years). Mean sleep latencies on the first and second tests were 5.5 (± 3.7 SD) and 7.3 (± 3.9) minutes, respectively, with no significant correlation (r = 0.17, p = 0.31). A change in diagnosis occurred in 53% of patients, and was accounted for by a difference in the mean sleep latency (N = 15, 42%) or the number of sleep onset REM periods (N = 11, 31%). The only feature predictive of a diagnosis change was a history of hypnagogic or hypnopompic hallucinations. Conclusions: The multiple sleep latency test demonstrates poor test-retest reliability in a clinical population of patients with central nervous system hypersomnia evaluated in a tertiary referral center. Alternative diagnostic tools are needed. Citation: Trotti LM; Staab BA; Rye DB. Test-retest reliability of the multiple sleep latency test in narcolepsy without cataplexy and idiopathic hypersomnia. J Clin Sleep Med 2013;9(8):789-795. PMID:23946709

  18. Utilization of wireless structural health monitoring as decision making tools for a condition and reliability-based assessment of railroad bridges

    NASA Astrophysics Data System (ADS)

    Flanigan, Katherine A.; Johnson, Nephi R.; Hou, Rui; Ettouney, Mohammed; Lynch, Jerome P.

    2017-04-01

    The ability to quantitatively assess the condition of railroad bridges facilitates objective evaluation of their robustness in the face of hazard events. Of particular importance is the need to assess the condition of railroad bridges in networks that are exposed to multiple hazards. Data collected from structural health monitoring (SHM) can be used to better maintain a structure by prompting preventative (rather than reactive) maintenance strategies and supplying quantitative information to aid in recovery. To that end, a wireless monitoring system is validated and installed on the Harahan Bridge which is a hundred-year-old long-span railroad truss bridge that crosses the Mississippi River near Memphis, TN. This bridge is exposed to multiple hazards including scour, vehicle/barge impact, seismic activity, and aging. The instrumented sensing system targets non-redundant structural components and areas of the truss and floor system that bridge managers are most concerned about based on previous inspections and structural analysis. This paper details the monitoring system and the analytical method for the assessment of bridge condition based on automated data-driven analyses. Two primary objectives of monitoring the system performance are discussed: 1) monitoring fatigue accumulation in critical tensile truss elements; and 2) monitoring the reliability index values associated with sub-system limit states of these members. Moreover, since the reliability index is a scalar indicator of the safety of components, quantifiable condition assessment can be used as an objective metric so that bridge owners can make informed damage mitigation strategies and optimize resource management on single bridge or network levels.

  19. A Microsoft Kinect-Based Point-of-Care Gait Assessment Framework for Multiple Sclerosis Patients.

    PubMed

    Gholami, Farnood; Trojan, Daria A; Kovecses, Jozsef; Haddad, Wassim M; Gholami, Behnood

    2017-09-01

    Gait impairment is a prevalent and important difficulty for patients with multiple sclerosis (MS), a common neurological disorder. An easy-to-use tool to objectively evaluate gait in MS patients in a clinical setting can assist clinicians in performing an objective assessment. The overall objective of this study is to develop a framework to quantify gait abnormalities in MS patients using the Microsoft Kinect for Windows sensor, an inexpensive, easy-to-use, portable camera. Specifically, we aim to evaluate its feasibility for utilization in a clinical setting, assess its reliability, evaluate the validity of the gait indices obtained, and evaluate a novel set of gait indices based on the concept of dynamic time warping. In this study, ten ambulatory MS patients and ten age- and sex-matched normal controls were studied at one session in a clinical setting, with gait assessment using a Kinect camera. The expanded disability status scale (EDSS) clinical ambulation score was calculated for the MS subjects, and patients completed the Multiple Sclerosis Walking Scale (MSWS). Based on this study, we established the potential feasibility of using a Microsoft Kinect camera in a clinical setting. Seven of the eight gait indices obtained using the proposed method were reliable, with intraclass correlation coefficients ranging from 0.61 to 0.99. All eight MS gait indices were significantly different from those of the controls (p-values less than 0.05). Finally, seven of the eight MS gait indices were correlated with the objective and subjective gait measures (Pearson's correlation coefficients greater than 0.40). This study shows that the Kinect camera is an easy-to-use tool for assessing gait in MS patients in a clinical setting.
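
    As an illustration of the dynamic-time-warping concept behind the proposed gait indices, here is the classic DTW distance between two one-dimensional gait signals; how the study maps Kinect joint trajectories onto its specific indices is not detailed here, so this is a generic sketch.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic time warping between two 1-D gait signals; the warping
        makes the comparison tolerant of differences in walking speed."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```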

  20. Characteristics of objective daytime sleep among individuals with earthquake-related posttraumatic stress disorder: A pilot community-based polysomnographic and multiple sleep latency test study.

    PubMed

    Zhang, Yan; Li, Yun; Zhu, Hongru; Cui, Haofei; Qiu, Changjian; Tang, Xiangdong; Zhang, Wei

    2017-01-01

    Little is known about the objective sleep characteristics of patients with posttraumatic stress disorder (PTSD). The present study examines the association between PTSD symptom severity and objective daytime sleep characteristics measured using the Multiple Sleep Latency Test (MSLT) in therapy-naïve patients with earthquake-related PTSD. A total of 23 PTSD patients and 13 trauma-exposed non-PTSD (TEN-PTSD) subjects completed one-night in-lab polysomnography (PSG) followed by a standard MSLT. 8 of the 23 PTSD patients received paroxetine treatment. Compared to the TEN-PTSD subjects, no significant nighttime sleep disturbances were detected by PSG in the subjects with PTSD; however, a shorter mean MSLT value was found in the subjects with PTSD. After adjustment for age, sex, and body mass index, PTSD symptoms, particularly hyperarousal, were found to be independently associated with a shorter MSLT value. Further, the mean MSLT value increased significantly after therapy in PTSD subjects. A shorter MSLT value may be a reliable index of the medical severity of PTSD, while an improvement in MSLT values might also be a reliable marker for evaluating therapeutic efficacy in PTSD patients. Copyright © 2016. Published by Elsevier Ireland Ltd.

  1. Multi-objective Decision Based Available Transfer Capability in Deregulated Power System Using Heuristic Approaches

    NASA Astrophysics Data System (ADS)

    Pasam, Gopi Krishna; Manohar, T. Gowri

    2016-09-01

    Determination of available transfer capability (ATC) requires the use of experience, intuition and exact judgment in order to meet several significant aspects of the deregulated environment. Based on these points, this paper proposes two heuristic approaches to compute ATC. The first proposed heuristic algorithm integrates five methods, known as continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back propagation neural network and adaptive neuro-fuzzy inference system, to obtain ATC. The second proposed heuristic model is used to obtain multiple ATC values, from which a specific ATC value is selected based on a number of social, economic and deregulated environmental constraints and on specific applications like optimization, on-line monitoring and ATC forecasting; this is known as multi-objective decision based optimal ATC. The validity of the results obtained through these proposed methods is scrupulously verified on various buses of the IEEE 24-bus reliable test system. The results and conclusions presented in this paper are very useful for the planning, operation and maintenance of reliable power in any power system, and for its monitoring in an on-line environment of a deregulated power system. In this way, the proposed heuristic methods contribute the best possible approach to assessing multiple-objective ATC using integrated methods.

  2. Light, nutrients, and food-chain length constrain planktonic energy transfer efficiency across multiple trophic levels

    PubMed Central

    Dickman, Elizabeth M.; Newell, Jennifer M.; González, María J.; Vanni, Michael J.

    2008-01-01

    The efficiency of energy transfer through food chains [food chain efficiency (FCE)] is an important ecosystem function. It has been hypothesized that FCE across multiple trophic levels is constrained by the efficiency at which herbivores use plant energy, which depends on plant nutritional quality. Furthermore, the number of trophic levels may also constrain FCE, because herbivores are less efficient in using plant production when they are constrained by carnivores. These hypotheses have not been tested experimentally in food chains with 3 or more trophic levels. In a field experiment manipulating light, nutrients, and food-chain length, we show that FCE is constrained by algal food quality and food-chain length. FCE across 3 trophic levels (phytoplankton to carnivorous fish) was highest under low light and high nutrients, where algal quality was best as indicated by taxonomic composition and nutrient stoichiometry. In 3-level systems, FCE was constrained by the efficiency at which both herbivores and carnivores converted food into production; a strong nutrient effect on carnivore efficiency suggests a carryover effect of algal quality across 3 trophic levels. Energy transfer efficiency from algae to herbivores was also higher in 2-level systems (without carnivores) than in 3-level systems. Our results support the hypothesis that FCE is strongly constrained by light, nutrients, and food-chain length and suggest that carryover effects across multiple trophic levels are important. Because many environmental perturbations affect light, nutrients, and food-chain length, and many ecological services are mediated by FCE, it will be important to apply these findings to various ecosystem types. PMID:19011082

  3. Radar based autonomous sensor module

    NASA Astrophysics Data System (ADS)

    Styles, Tim

    2016-10-01

    Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.

  4. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty, but many recent studies suggest that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation design under uncertainty. Here, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer, in which a scavenger well was used to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, under the single best model, variances that stem from uncertainty in the model structure are ignored. Second, when the best model does not have a dominant model weight, it may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability; however, under the single best model the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA and found that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We then employed chance-constrained optimization with the HBMA framework to find the optimal location and pumpage of the scavenger well. Using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. We conclude that the optimal pumping rate is sensitive to the prediction variance, and that the prediction variance in turn changes with the extraction rate: a very high extraction rate drives the prediction variances of chloride concentration at the production wells toward zero regardless of which HBMA model is used.
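
    The averaging arithmetic behind such a framework can be illustrated with a toy sketch (ours, with made-up weights and predictions): each leaf model's weight is the product of conditional weights along its hierarchy path, the BMA variance adds within-model and between-model terms, and a normal approximation turns the chance constraint into a cumulative-probability check.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Toy hierarchical model averaging: leaf weight = product of conditional
    # weights along the hierarchy path (e.g., structure * parameters * kriging).
    leaves = [
        # (path weight, predicted chloride mean, prediction variance)
        (0.6 * 0.7, 120.0, 15.0),
        (0.6 * 0.3, 135.0, 20.0),
        (0.4 * 0.5, 110.0, 12.0),
        (0.4 * 0.5, 150.0, 30.0),
    ]
    w, mu, var = (np.array(v) for v in zip(*leaves))

    bma_mean = np.sum(w * mu)
    # Total variance = weighted within-model variance + between-model spread.
    bma_var = np.sum(w * var) + np.sum(w * (mu - bma_mean) ** 2)

    # Chance constraint P(concentration <= c_max) >= reliability, normal approx.
    c_max, target_reliability = 140.0, 0.90
    achieved = norm.cdf(c_max, loc=bma_mean, scale=np.sqrt(bma_var))
    print(bma_mean, bma_var, achieved >= target_reliability)
    ```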

  5. Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.

    PubMed

    Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald

    2017-12-14

The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae differ significantly from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, the much smaller size of larvae relative to the container makes the detection of water impurities inevitable, which either degrades the tracking of zebrafish larvae or requires very strict video imaging conditions; both typically produce unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larval zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate research on zebrafish segmentation and tracking, a dataset with annotated ground truth is also presented. The software is publicly accessible.

  6. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure is demonstrated on a high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). To account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to pose the optimization problem. Enhancements to the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation have been developed. The K-S function procedure transforms a constrained problem with multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation, giving the designer the capability to emphasize specific design objectives during the optimization process. The demonstration utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus treating both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach, and an approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure, making it suitable for design applications in an industrial setting.
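
    To make the K-S transformation concrete, the sketch below is a minimal, self-contained illustration (our own, with stand-in quadratic objectives and one constraint, not the NASA code): weighted objectives and constraints are folded into one smooth envelope function, which is then handed to an unconstrained BFGS optimizer.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    RHO = 50.0  # draw-down factor: larger RHO hugs max(f_i) more tightly

    def ks(values, rho=RHO):
        """Kreisselmeier-Steinhauser envelope: a smooth upper bound on max(values)."""
        m = np.max(values)
        return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho  # max-shift avoids overflow

    def aggregate(x, weights=(0.7, 0.3)):
        f1 = (x[0] - 1.0) ** 2 + x[1] ** 2        # stand-in "aerodynamic" objective
        f2 = x[0] ** 2 + (x[1] + 2.0) ** 2        # stand-in "sonic boom" objective
        g1 = x[0] + x[1] - 3.0                    # constraint in the form g1 <= 0
        return ks(np.array([weights[0] * f1, weights[1] * f2, g1]))

    # The constrained multiobjective problem is now a single unconstrained one.
    res = minimize(aggregate, x0=np.zeros(2), method="BFGS")
    print(res.x, res.fun)
    ```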

  7. The Origin of Stellar Species: constraining stellar evolution scenarios with Local Group galaxy surveys

    NASA Astrophysics Data System (ADS)

    Sarbadhicary, Sumit; Badenes, Carles; Chomiuk, Laura; Maldonado, Jessica; Caprioli, Damiano; Heger, Mairead; Huizenga, Daniel

    2018-01-01

Our understanding of the progenitors of many stellar species, such as supernovae, massive and low-mass He-burning stars, is limited because of many poorly constrained aspects of stellar evolution theory. For my dissertation, I have focused on using Local Group galaxy surveys to constrain stellar evolution scenarios by measuring delay-time distributions (DTDs). The DTD is the hypothetical occurrence rate of a stellar object per elapsed time after a brief burst of star formation. It is the measured distribution of timescales on which stars evolve, and therefore serves as a powerful observational constraint on theoretical progenitor models. The DTD can be measured from a survey of stellar objects and a set of star-formation histories of the host galaxy, and is particularly effective in the Local Group, where high-quality star-formation histories are available from resolved stellar populations. I am currently calculating an SN DTD with supernova remnants (SNRs) in order to provide the strongest constraints on the progenitors of thermonuclear and core-collapse supernovae. However, most SNRs do not have reliable age measurements and their evolution depends on the ambient environment. For this reason, I wrote a radio light curve model of an SNR population to extract the visibility times and rates of supernovae - crucial ingredients for the DTD - from an SNR survey. The model uses observational constraints on the local environments from multi-wavelength surveys, accounts for missing SNRs and employs the latest models of shock-driven particle acceleration. The final calculation of the SN DTD in the Local Group is awaiting completion of a systematic SNR catalog from deep radio-continuum images, now in preparation by a group led by Dr. Laura Chomiuk. I have also calculated DTDs for the LMC population of RR Lyrae and Cepheid variables, which serve as important distance calibrators and stellar population tracers. We find that Cepheids can have delay times between 10 Myr and 1 Gyr, while RR Lyrae can have delay times < 10 Gyr. These observations cannot be explained by models using mass and metallicity alone. In future projects, I will apply the DTD technique to constrain the supergiant and pre-supernova evolutionary models.
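
    The core of the DTD measurement reduces to a linear inversion: expected counts in each region are the star-formation history convolved with the DTD, scaled by a visibility time. The snippet below is a toy numpy illustration with synthetic data (all numbers invented), recovering a DTD by non-negative least squares.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Expected counts per spatial cell j: N_j = sum_i SFH[j, i] * DTD[i] * t_vis,
    # where SFH[j, i] is the stellar mass formed in age bin i and t_vis is how
    # long an object (e.g., an SNR) stays visible.
    rng = np.random.default_rng(0)
    n_cells, n_age_bins = 200, 5
    sfh = rng.uniform(0.0, 1.0, size=(n_cells, n_age_bins))   # mass per age bin
    true_dtd = np.array([5.0, 2.0, 1.0, 0.3, 0.0])            # events / mass / time
    t_vis = 1.0
    counts = rng.poisson(sfh @ true_dtd * t_vis)              # Poisson-sampled survey

    dtd_hat, _ = nnls(sfh * t_vis, counts.astype(float))      # non-negative recovery
    print(true_dtd, dtd_hat.round(2))
    ```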

  8. Space Launch System (SLS) Safety, Mission Assurance, and Risk Mitigation

    NASA Technical Reports Server (NTRS)

    May, Todd

    2013-01-01

    SLS Driving Objectives: I. Safe: a) Human-rated to provide safe and reliable systems for human missions. b) Protecting the public, NASA workforce, high-value equipment and property, and the environment from potential harm. II. Affordable: a) Maximum use of common elements and existing assets, infrastructure, and workforce. b) Constrained budget environment. c) Competitive opportunities for affordability on-ramps. III. Sustainable: a) Initial capability: 70 metric tons (t), 2017-2021. 1) Serves as primary transportation for Orion and exploration missions. 2) Provides back-up capability for crew/cargo to ISS. b) Evolved capability: 105 t and 130 t, post-2021. 1) Offers large volume for science missions and payloads. 2) Modular and flexible, right-sized for mission requirements.

  9. Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset

    NASA Astrophysics Data System (ADS)

    Eyre, T.; Van der Baan, M.

    2016-12-01

Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained by the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data are usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms, including those with high tensile components. The ability of each geometry to constrain the true source mechanisms is therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both geometries, P- and S-wave amplitudes recorded on three-component sensors need to be included in the inversion to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results, but their reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions; more care must therefore be taken when interpreting the results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and mechanisms are calculated and the results discussed, including the biases caused by the array geometry. The majority of the events are located within the target reservoir; however, a small, seemingly disconnected cluster of events appears 100 m above the reservoir.

  10. Silviculture for multiple objectives in the Douglas-fir region.

    Treesearch

    R.O. Curtis; D.S. DeBell; C.A. Harrington; D.P. Lavender; J.B. St. Clair; J.C. Tappeiner; J.D. Walstad

    1998-01-01

    Silvicultural knowledge and practice have been evolving in the Pacific Northwest for nearly a century. Most research and management activities to date have focused on two major topics: (1) methods to regenerate older, naturally established forests after fire or timber harvest; and (2) growth and management of young stands. Today forest managers can reliably regenerate...

  11. Design and Implementation of Replicated Object Layer

    NASA Technical Reports Server (NTRS)

    Koka, Sudhir

    1996-01-01

One of the widely used techniques for constructing fault-tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and makes the system fault-tolerant to network partitions. ROL facilitates building distributed fault-tolerant applications by handling the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications that could be built on top of ROL.

  12. Flight Testing of the Capillary Pumped Loop 3 Experiment

    NASA Technical Reports Server (NTRS)

    Ottenstein, Laura; Butler, Dan; Ku, Jentung; Cheung, Kwok; Baldauff, Robert; Hoang, Triem

    2002-01-01

    The Capillary Pumped Loop 3 (CAPL 3) experiment was a multiple evaporator capillary pumped loop experiment that flew in the Space Shuttle payload bay in December 2001 (STS-108). The main objective of CAPL 3 was to demonstrate in micro-gravity a multiple evaporator capillary pumped loop system, capable of reliable start-up, reliable continuous operation, and heat load sharing, with hardware for a deployable radiator. Tests performed on orbit included start-ups, power cycles, low power tests (100 W total), high power tests (up to 1447 W total), heat load sharing, variable/fixed conductance transition tests, and saturation temperature change tests. The majority of the tests were completed successfully, although the experiment did exhibit an unexpected sensitivity to shuttle maneuvers. This paper describes the experiment, the tests performed during the mission, and the test results.

  13. DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.

    PubMed

    Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou

    2016-07-07

In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation-constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is provided through extensive experimental evaluation, giving a global view for understanding the deep learning object detection pipeline.
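
    As a rough schematic of the def-pooling idea (heavily simplified and in plain numpy, not the paper's layer), the snippet below pools the best part score within a shift window while charging a quadratic penalty for the deformation:

    ```python
    import numpy as np

    def def_pool(score_map, anchor, max_shift=2, penalty=0.1):
        """Pool the best penalized part score around an anchor location:
        each shift (dy, dx) pays penalty * (dy^2 + dx^2)."""
        h, w = score_map.shape
        ay, ax = anchor
        best = -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                y, x = ay + dy, ax + dx
                if 0 <= y < h and 0 <= x < w:
                    best = max(best, score_map[y, x] - penalty * (dy * dy + dx * dx))
        return best

    scores = np.random.default_rng(1).normal(size=(8, 8))  # toy part-score map
    print(def_pool(scores, anchor=(4, 4)))
    ```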

  14. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which simulates riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, limiting such applications to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation discusses the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and level to constrain a hydraulic model. The hydrologic model is a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data; the model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used in a multi-objective calibration protocol. The effectiveness with which each data source, or combination of data sources, constrained the parameter space is quantified and discussed.

  15. Interactive Multiple Object Tracking (iMOT)

    PubMed Central

    Thornton, Ian M.; Bülthoff, Heinrich H.; Horowitz, Todd S.; Rynning, Aksel; Lee, Seong-Whan

    2014-01-01

    We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks which moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together. Overall, these findings suggest: 1) There is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context. PMID:24498288

16. Enhancing model prediction reliability through improved soil representation and constrained model auto-calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  17. Navigating Financial and Supply Reliability Tradeoffs in Regional Drought Portfolios

    NASA Astrophysics Data System (ADS)

    Zeff, H. B.; Herman, J. D.; Characklis, G. W.; Reed, P. M.

    2013-12-01

Rising development costs and growing concerns over environmental impacts have led many communities to explore more diversified regional portfolio-type approaches to managing their water supplies. These strategies coordinate existing supply infrastructure with other 'assets' such as conservation measures or water transfers, reducing the capacity and costs required to meet demand by providing greater adaptability to changing hydrologic conditions. For many water utilities, however, this additional flexibility can also cause unexpected reductions in revenue (i.e. conservation) or increased costs (i.e. transfers), fluctuations that can be very difficult for a regulated entity to manage. Thus, despite the advantages, concerns over the resulting financial disruptions provide a disincentive for utilities to develop more adaptive methods, potentially limiting the role of some very effective tools. This study seeks to design portfolio strategies that employ financial instruments (e.g. contingency funds, index insurance) to reduce fluctuations in revenues and costs and therefore do not sacrifice financial stability for improved performance (e.g. lower expected costs, high reliability). This work describes the development of regional water supply portfolios in the 'Research Triangle' region of North Carolina, an area comprising four rapidly growing municipalities supplied by nine surface water reservoirs in two separate river basins. Disparities in growth rates and the respective individual storage capacities of the reservoirs provide the region with the opportunity to increase the efficiency of the regional supply infrastructure through inter-utility water transfers, even as each utility engages in its own conservation activities. The interdependence of multiple utilities navigating shared conveyance and treatment infrastructure to engage in transfers forces water managers to consider regional objectives, as the actions of any one utility can affect the others. Results indicate the inclusion of inter-utility water transfers allows the water utilities to improve on regional operational objectives (i.e. higher reliability and lower restriction frequencies) at a lower expected cost, while financial mitigation tools introduce a tradeoff between expected costs and cost variability. Financial mitigation schemes, including both third-party financial insurance contracts and contingency funds (i.e. self-insurance), were able to reduce cost variability at a lower expected cost than mitigation schemes which use self-insurance alone. The dynamics of the Research Triangle scenario (e.g. rapid population growth, constrained supply, and sensitivity to cost/revenue swings) suggest that this work may have the potential to more generally inform utilities on the effects of coordinated regional water supply planning and the resulting financial implications of more flexible, portfolio-type management techniques.

  18. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC term from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence, and the recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. This approach provides higher-resolution images than the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, owing to the subtraction of the two DC terms in the cost function. We demonstrate the approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object, retrieving a high-resolution image free of DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, reconstructing a high-resolution quantitative phase microscope image, and by imaging yeast cells.
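
    The final back-propagation step can be sketched compactly with the Fresnel transfer function (a generic textbook form, not the authors' code; the wavelength, pixel pitch and distance below are placeholders, and `field` stands in for the field recovered by the constrained optimization):

    ```python
    import numpy as np

    def fresnel_propagate(field, wavelength, pixel, z):
        """Propagate a complex field a distance z (negative z back-propagates)
        using the paraxial Fresnel transfer function in the Fourier domain."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel)
        fy = np.fft.fftfreq(ny, d=pixel)
        FX, FY = np.meshgrid(fx, fy)
        H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    field = np.ones((256, 256), dtype=complex)        # placeholder recovered field
    image_plane = fresnel_propagate(field, wavelength=633e-9, pixel=5e-6, z=-0.05)
    ```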

  19. M-OSCE as a method to measure dental hygiene students' critical thinking: a pilot study.

    PubMed

    McComas, Martha J; Wright, Rebecca A; Mann, Nancy K; Cooper, Mary D; Jacks, Mary E

    2013-04-01

    Educators in all academic disciplines have been encouraged to utilize assessment strategies to evaluate students' critical thinking. The purpose of this study was to assess the viability of the modified objective structured clinical examination (m-OSCE) to evaluate critical thinking in dental hygiene education. This evaluation utilized a convenience sample of senior dental hygiene students. Students participated in the m-OSCE in which portions of a patient case were revealed at four stations. The exam consisted of multiple-choice questions intended to measure students' ability to utilize critical thinking skills. Additionally, there was one fill-in-the-blank question and a treatment plan that was completed at the fifth station. The results of this study revealed that the m-OSCE did not reliably measure dental hygiene students' critical thinking. Statistical analysis found no satisfactory reliability within the multiple-choice questions and moderately reliable results within the treatment planning portion of the examination. In addition, the item analysis found gaps in students' abilities to transfer clinical evidence/data to basic biomedical knowledge as demonstrated through the multiple-choice questioning results. This outcome warrants further investigation of the utility of the m-OSCE, with a focus on modifications to the evaluation questions, grading rubric, and patient case.

  20. Risk-based analysis and decision making in multi-disciplinary environments

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Cornford, Steven L.; Moran, Kelly

    2003-01-01

A risk-based decision-making process conceived of and developed at JPL and NASA has been used to help plan and guide novel technology applications for use on spacecraft. These applications exemplify key challenges inherent in multi-disciplinary design of novel technologies deployed in mission-critical settings. 1) Cross-disciplinary concerns are numerous (e.g., spacecraft involve navigation, propulsion, telecommunications). These concerns are cross-coupled and interact in multiple ways (e.g., electromagnetic interference, heat transfer). 2) Time and budget pressures constrain development, and operational resources constrain the resulting system (e.g., mass, volume, power). 3) Spacecraft are critical systems that must operate correctly the first time in only partially understood environments, with no chance for repair. 4) Past experience provides only a partial guide: new mission concepts are enhanced and enabled by new technologies, for which past experience is lacking. The decision-making process rests on quantitative assessments of the relationships between three classes of information - objectives (the things the system is to accomplish and constraints on its operation and development), risks (whose occurrence detracts from objectives), and mitigations (options for reducing the likelihood and/or severity of risks). The process successfully guides experts to pool their knowledge, using custom-built software to support information gathering and decision-making.

  1. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

Aeroelastic optimization of a system essentially consists of determining the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.

  2. A dissociation of objective and subjective workload measures in assessing the impact of speech controls in advanced helicopters

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Bortolussi, Michael R.

    1988-01-01

    Among the new technologies that are expected to aid helicopter designers are speech controls. Proponents suggest that speech controls could reduce the potential for manual control overloads and improve time-sharing performance in environments that have heavy demands for manual control. This was tested in a simulation of an advanced single-pilot, scout/attack helicopter. Objective performance indicated that the speech controls were effective in decreasing the interference of discrete responses during moments of heavy flight control activity. However, subjective ratings indicated that the use of speech controls required extra effort to speak precisely and to attend to feedback. Although the operational reliability of speech controls must be improved, the present results indicate that reliable speech controls could enhance the time-sharing efficiency of helicopter pilots. Furthermore, the results demonstrated the importance of using multiple assessment techniques to completely assess a task. Neither the objective nor the subjective measures alone provided complete information. It was the contrast between the measures that was most informative.

  3. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because those algorithms rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.

  4. Inhibitory Control Interacts with Core Knowledge in Toddlers' Manual Search for an Occluded Object

    ERIC Educational Resources Information Center

    Baker, Sara T.; Gjersoe, Nathalia L.; Sibielska-Woch, Kasia; Leslie, Alan M.; Hood, Bruce M.

    2011-01-01

    Core knowledge theories advocate the primacy of fundamental principles that constrain cognitive development from early infancy. However, there is concern that core knowledge of object properties does not constrain older preschoolers' reasoning during manual search. Here we address in detail both failure and success on two well-established search…

  5. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    ERIC Educational Resources Information Center

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
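
    The dynamic-programming subdivision step can be sketched as follows (our illustration for one-dimensional data, assuming the object order is already given; the quadratic-assignment heuristic that finds the order is not shown): contiguous segments are chosen to minimize the total within-cluster sum of squared deviations.

    ```python
    import numpy as np

    def segment_cost(prefix, prefix_sq, i, j):
        """Within-segment SSE of items i..j-1, from prefix sums."""
        n = j - i
        s = prefix[j] - prefix[i]
        return (prefix_sq[j] - prefix_sq[i]) - s * s / n

    def order_constrained_kmeans(x, k):
        """Optimal split of ordered data x into k contiguous clusters."""
        n = len(x)
        prefix = np.concatenate(([0.0], np.cumsum(x)))
        prefix_sq = np.concatenate(([0.0], np.cumsum(np.asarray(x) ** 2)))
        dp = np.full((k + 1, n + 1), np.inf)
        cut = np.zeros((k + 1, n + 1), dtype=int)
        dp[0, 0] = 0.0
        for c in range(1, k + 1):
            for j in range(c, n + 1):
                for i in range(c - 1, j):
                    cost = dp[c - 1, i] + segment_cost(prefix, prefix_sq, i, j)
                    if cost < dp[c, j]:
                        dp[c, j], cut[c, j] = cost, i
        bounds, j = [], n                 # recover segment boundaries
        for c in range(k, 0, -1):
            i = cut[c, j]
            bounds.append((i, j))
            j = i
        return dp[k, n], bounds[::-1]

    sse, clusters = order_constrained_kmeans([1.0, 1.2, 1.1, 5.0, 5.2, 9.9, 10.1], 3)
    print(sse, clusters)   # expected segments: (0, 3), (3, 5), (5, 7)
    ```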

  6. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
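
    The essence of stack ordering can be sketched in a few lines (a simplified illustration, not the authors' code; `simulate` is a hypothetical stand-in for the groundwater model): failing realizations are promoted to the top of the stack, so subsequent infeasible designs are rejected after only a few model runs.

    ```python
    import numpy as np

    def evaluate(design, realizations, order, max_failures, simulate):
        """Return (feasible, model_runs). `order` is updated in place so the
        most critical realizations sit on top of the stack for the next design."""
        failures, runs, promoted = 0, 0, []
        for idx in list(order):
            runs += 1
            if not simulate(design, realizations[idx]):
                failures += 1
                promoted.append(idx)
                if failures > max_failures:   # early exit: target reliability unreachable
                    break
        for idx in reversed(promoted):        # promote failing realizations
            order.remove(idx)
            order.insert(0, idx)
        return failures <= max_failures, runs

    rng = np.random.default_rng(2)
    reals = rng.normal(size=500)              # e.g., 500 equally probable parameter fields
    order = list(range(500))
    sim = lambda design, r: design + r > 0.0  # toy feasibility test
    # A 95% nominal reliability over 500 realizations tolerates 25 failures.
    print(evaluate(0.5, reals, order, max_failures=25, simulate=sim))
    print(evaluate(0.1, reals, order, max_failures=25, simulate=sim))  # fails even faster
    ```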

  7. Smart Acoustic Network Using Combined FSK-PSK, Adaptive Beamforming and Equalization

    DTIC Science & Technology

    2002-09-30

    sonar data transmission from underwater vehicle during mission. The two-year objectives for the high-reliability acoustic network using multiple... sonar laboratory) and used for acoustic networking during underwater vehicle operation. The joint adaptive coherent path beamformer method consists...broadband communications transducer, while the low noise preamplifier conditions received signals for analog to digital conversion. External user

  8. Smart Acoustic Network Using Combined FSK-PSK, Adaptive, Beamforming and Equalization

    DTIC Science & Technology

    2001-09-30

    sonar data transmission from underwater vehicle during mission. The two-year objectives for the high-reliability acoustic network using multiple... sonar laboratory) and used for acoustic networking during underwater vehicle operation. The joint adaptive coherent path beamformer method consists...broadband communications transducer, while the low noise preamplifier conditions received signals for analog to digital conversion. External user

  9. The Role of Age and Setting in Adolescents' First Drinking Experience for Predicting College Problem Drinking

    ERIC Educational Resources Information Center

    Yaeger, Jeffrey P.; Moreno, Megan A.

    2017-01-01

    Objective: The purpose of this study was to determine the reliability of longitudinally reporting age at first drink (AFD), and to test AFD and setting of first drink (SFD) as predictors of collegiate problem drinking. Participants: 338 first-year college students were interviewed multiple times during their first academic year, from May 2011…

  10. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only by streamflow data, especially in regions where vertical processes significantly dominate horizontal ones. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior: hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storage. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) and a set of model state variables (i.e., water storages) to observations. The framework is set up as a multi-objective optimization, in which multiple performance criteria simultaneously evaluate the fidelity of the model to streamflow observations and to observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by information from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, MESH (Modélisation Environnementale Communautaire - Surface and Hydrology), for the Saskatchewan River basin, with Pareto Archived Dynamically Dimensioned Search (PA-DDS) as the multi-objective optimization engine. The significance of the developed framework is demonstrated by comparison with a conventional calibration against streamflow observations alone. Incorporating water storage data into the model identification process can further constrain the posterior parameter space, more comprehensively evaluate model fidelity, and yield more credible predictions.

  11. Improved method for retinotopy constrained source estimation of visual evoked responses

    PubMed Central

    Hagler, Donald J.; Dale, Anders M.

    2011-01-01

    Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418

  12. Reliability of Classifying Multiple Sclerosis Disease Activity Using Magnetic Resonance Imaging in a Multiple Sclerosis Clinic

    PubMed Central

    Altay, Ebru Erbayat; Fisher, Elizabeth; Jones, Stephen E.; Hara-Cleaver, Claire; Lee, Jar-Chi; Rudick, Richard A.

    2013-01-01

    Objective To assess the reliability of new magnetic resonance imaging (MRI) lesion counts by clinicians in a multiple sclerosis specialty clinic. Design An observational study. Setting A multiple sclerosis specialty clinic. Patients Eighty-five patients with multiple sclerosis participating in a National Institutes of Health–supported longitudinal study were included. Intervention Each patient had a brain MRI scan at entry and 6 months later using a standardized protocol. Main Outcome Measures The number of new T2 lesions, newly enlarging T2 lesions, and gadolinium-enhancing lesions were measured on the 6-month MRI using a computer-based image analysis program for the original study. For this study, images were reanalyzed by an expert neuroradiologist and 3 clinician raters. The neuroradiologist evaluated the original image pairs; the clinicians evaluated image pairs that were modified to simulate clinical practice. New lesion counts were compared across raters, as was classification of patients as MRI active or inactive. Results Agreement on lesion counts was highest for gadolinium-enhancing lesions, intermediate for new T2 lesions, and poor for enlarging T2 lesions. In 18% to 25% of the cases, MRI activity was classified differently by the clinician raters compared with the neuroradiologist or computer program. Variability among the clinical raters for estimates of new T2 lesions was affected most strongly by the image modifications that simulated low image quality and different head position. Conclusions Between-rater variability in new T2 lesion counts may be reduced by improved standardization of image acquisitions, but this approach may not be practical in most clinical environments. Ultimately, more reliable, robust, and accessible image analysis methods are needed for accurate multiple sclerosis disease-modifying drug monitoring and decision making in the routine clinic setting. PMID:23599930

  13. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time at which it passes a node of the network, but they do not observe the object's exact movement along an edge, which leads to uncertainty. Modeling and querying the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied, in four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are performed on simulated data, studying the performance of the index, defining the precision and recall of the result set, and discussing how the query arguments affect them.
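
    A minimal version of the uncertainty model can be sketched as follows (our own illustration, not the paper's exact formulation): between two RFID readings, the object's position on the edge is bounded by minimum and maximum travel speeds, and a range query is scored by assuming a uniform position within the feasible interval.

    ```python
    def position_interval(t, t1, t2, L, v_min, v_max):
        """Feasible [lo, hi] positions along an edge of length L at time
        t1 <= t <= t2, given readings at nodes A (t1) and B (t2)."""
        lo = max(v_min * (t - t1), L - v_max * (t2 - t))   # must still reach B by t2
        hi = min(v_max * (t - t1), L - v_min * (t2 - t))   # cannot overshoot B
        return max(lo, 0.0), min(hi, L)

    def range_query_probability(query_lo, query_hi, interval):
        """Probability the object lies in [query_lo, query_hi], assuming a
        uniform distribution over the feasible interval."""
        lo, hi = interval
        if hi <= lo:
            return 0.0
        overlap = max(0.0, min(hi, query_hi) - max(lo, query_lo))
        return overlap / (hi - lo)

    iv = position_interval(t=30.0, t1=0.0, t2=60.0, L=1000.0, v_min=10.0, v_max=25.0)
    print(iv, range_query_probability(400.0, 600.0, iv))   # (300, 700) -> 0.5
    ```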

  14. Chandra and XMM Observations of the ADC Source 0921-630

    NASA Technical Reports Server (NTRS)

    Kallman, T. R.; Angelini, L.; Boroson, B.; Cottam, J.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We analyze observations of the low mass X-ray binary 2S0921-63 obtained with the gratings and CCDs on Chandra and XMM. This object is a high inclination system showing evidence for an accretion disk corona (ADC). Such a corona has the potential to constrain the properties of the heated accretion disk in this system, and other LMXBs by extension. We find evidence for line emission which is generally consistent with that found by previous experiments, although we are able to detect more lines. For the first time in this source, we find that the iron K line has multiple components. We set limits on the line widths and velocity offsets, and we fit the spectra to photoionization models and discuss the implications for accretion disk corona models. For the first time in any ADC source we use these fits, together with density constraints based on the O VII line ratio, in order to constrain the flux in the medium-ionization region of the ADC. Under various assumptions about the source luminosity this constrains the location of the emitting region. These estimates, together with estimates for the emission measure, favor a scenario in which the intrinsic luminosity of the source is comparable to what we observe.

  15. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL).

    PubMed

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-08-10

In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, Wireless Sensor Networks (WSNs), an important component of the IoT, can utilize RPL to design efficient routing protocols for a specific application, increasing the ubiquity of networks built from resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts by describing Agricultural Low-power and Lossy Networks (A-LLNs), which comply with the LLN framework, and clarifying the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that adapts RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.

  16. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL)

    PubMed Central

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-01-01

In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, Wireless Sensor Networks (WSNs), an important component of the IoT, can utilize RPL to design efficient routing protocols for a specific application, increasing the ubiquity of networks built from resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts by describing Agricultural Low-power and Lossy Networks (A-LLNs), which comply with the LLN framework, and clarifying the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that adapts RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds. PMID:26266411
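
    The composite-metric idea can be illustrated with a small sketch (weights, metric names and values are ours, not the paper's): each candidate parent advertises its context, and a node ranks parents by a weighted combination of energy, reliability, robustness and resource terms, picking the lowest rank.

    ```python
    def composite_rank(parent, weights):
        """Lower is better for every term, so 'goodness' values are inverted."""
        return (weights["energy"] * (1.0 - parent["residual_energy"])   # 0..1
                + weights["reliability"] * (parent["etx"] / 10.0)       # expected tx count
                + weights["robustness"] * parent["link_churn"]          # 0..1
                + weights["resources"] * parent["queue_load"])          # 0..1

    weights = {"energy": 0.4, "reliability": 0.3, "robustness": 0.2, "resources": 0.1}
    parents = [
        {"id": "A", "residual_energy": 0.9, "etx": 1.5, "link_churn": 0.10, "queue_load": 0.2},
        {"id": "B", "residual_energy": 0.5, "etx": 1.1, "link_churn": 0.05, "queue_load": 0.6},
    ]
    best = min(parents, key=lambda p: composite_rank(p, weights))
    print(best["id"])
    ```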

  17. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    PubMed

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects across consecutive frames and the low discriminability between objects' appearances. In this paper, we propose a robust online multi-object tracking method that handles these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose the multi-object tracking problem into small subproblems based on tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets grow sequentially with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide a representation rich enough to distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
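
    The confidence-and-association strategy can be illustrated schematically (our simplification, with invented parameters): tracklet confidence grows with length and detection score and decays with missed frames, and a gated Hungarian assignment associates tracklets with detections.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def tracklet_confidence(length, mean_det_score, missed, beta=0.2):
        """Toy confidence: continuity decays with missed frames; detectability
        is the mean detection score; longer tracklets are trusted more."""
        continuity = np.exp(-beta * missed)
        return continuity * mean_det_score * min(1.0, length / 10.0)

    def associate(cost, max_cost=50.0):
        """Hungarian assignment, discarding pairs above a gating cost."""
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

    conf = tracklet_confidence(length=12, mean_det_score=0.8, missed=1)
    cost = np.array([[5.0, 80.0],          # e.g., combined appearance + motion distance
                     [60.0, 8.0]])
    print(conf, associate(cost))           # high-confidence tracklets go first
    ```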

  18. Modular multiplication in GF(p) for public-key cryptography

    NASA Astrophysics Data System (ADS)

    Olszyna, Jakub

Modular multiplication forms the basis of modular exponentiation, which is the core operation of the RSA cryptosystem; it is also present in many other cryptographic algorithms, including those based on ECC and HECC. Hence, an efficient implementation of PKC relies on an efficient implementation of modular multiplication. The paper presents a survey of the most common algorithms for modular multiplication, along with hardware architectures especially suitable for cryptographic applications in energy-constrained environments. The motivation for studying low-power and area-efficient modular multiplication algorithms comes from enabling public-key security for ultra-low-power devices that must operate in constrained environments such as wireless sensor networks. Serial architectures for GF(p) are analyzed and presented. Finally, the proposed architectures are verified and compared according to the amount of power dissipated throughout the operation.
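
    One algorithm such a survey typically covers is Montgomery multiplication, which replaces the trial division by p with multiplies, masks and shifts; the Python sketch below is our own illustration of the textbook algorithm with small illustrative parameters (the serial hardware architectures discussed in the paper apply the same idea bit- or word-serially).

    ```python
    # Textbook Montgomery modular multiplication (illustrative sketch; Python >= 3.8
    # for pow(p, -1, R)). Operands live in the Montgomery domain a_bar = a*R mod p
    # with R = 2^k > p, so each multiplication avoids any division by p.

    def montgomery_setup(p, k):
        R = 1 << k
        p_inv = pow(p, -1, R)          # p^{-1} mod R (p odd, R a power of two)
        return R, (R - p_inv) % R      # p' = -p^{-1} mod R

    def mont_mul(a_bar, b_bar, p, R, p_prime, k):
        t = a_bar * b_bar
        m = ((t & (R - 1)) * p_prime) & (R - 1)   # m = (t mod R) * p' mod R
        u = (t + m * p) >> k                      # exact division by R
        return u - p if u >= p else u             # single conditional subtraction

    p, k = 101, 7                                 # R = 128 > p
    R, p_prime = montgomery_setup(p, k)
    a, b = 57, 33
    a_bar, b_bar = (a * R) % p, (b * R) % p       # into the Montgomery domain
    c_bar = mont_mul(a_bar, b_bar, p, R, p_prime, k)
    c = mont_mul(c_bar, 1, p, R, p_prime, k)      # back out of the domain
    assert c == (a * b) % p
    print(c)
    ```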

  19. The Differential Effects of Reward on Space- and Object-Based Attentional Allocation

    PubMed Central

    Shomstein, Sarah

    2013-01-01

    Estimating reward contingencies and allocating attentional resources to a subset of relevant information are the most important contributors to increasing the adaptability of an organism. Although recent evidence suggests that reward- and attention-based guidance recruits overlapping cortical regions and has similar effects on sensory responses, the exact nature of the relationship between the two remains elusive. Here, using event-related fMRI on human participants, we contrasted the effects of reward on space- and object-based selection in the same experimental setting. Reward was either distributed randomly or biased toward a particular object. Behavioral and neuroimaging results show that space- and object-based attention are influenced by reward differently. Space-based attentional allocation is mandatory, integrating reward information over time, whereas object-based attentional allocation is a default setting that is completely replaced by the reward signal. Nonadditivity of the effects of reward and object-based attention was observed consistently at multiple levels of analysis in early visual areas as well as in control regions. These results provide strong evidence that space- and object-based allocation are two independent attentional mechanisms, and suggest that reward serves to constrain attentional selection. PMID:23804086

  20. The Essential Properties of Yoga Questionnaire (EPYQ): Psychometric Properties.

    PubMed

    Park, Crystal L; Elwy, A Rani; Maiya, Meghan; Sarkin, Andrew J; Riley, Kristen E; Eisen, Susan V; Gutierrez, Ian; Finkelstein-Fox, Lucy; Lee, Sharon Y; Casteel, Danielle; Braun, Tosca; Groessl, Erik J

    2018-03-02

    Yoga interventions are heterogeneous and vary along multiple dimensions. These dimensions may affect mental and physical health outcomes in different ways or through different mechanisms. However, most studies of the effects of yoga on health do not adequately describe or quantify the components of the interventions being implemented. This lack of detail prevents researchers from making comparisons across studies and limits our understanding of the relative effects of different aspects of yoga interventions. To address this problem, we developed the Essential Properties of Yoga Questionnaire (EPYQ), which allows researchers to objectively characterize their interventions. We present here the reliability and validity data from the final phases of this measure-development project. Analyses identified fourteen key dimensions of yoga interventions measured by the EPYQ: acceptance/compassion, bandhas, body awareness, breathwork, instructor mention of health benefits, individual attention, meditation and mindfulness, mental and emotional awareness, physicality, active postures, restorative postures, social aspects, spirituality, and yoga philosophy. The EPYQ demonstrated good reliability, as assessed by internal consistency and test-retest reliability analysis, and evidence suggests that the EPYQ is a valid measure of multiple dimensions of yoga. The measure is ready for use by clinicians and researchers. Results indicate that, currently, trained objective raters should score interventions to avoid reference frame errors and potential rating bias, but alternative approaches may be developed. The EPYQ will allow researchers to link specific yoga dimensions to identifiable health outcomes and optimize the design of yoga interventions for specific conditions.

  1. A new potential for the numerical simulations of electrolyte solutions on a hypersphere

    NASA Astrophysics Data System (ADS)

    Caillol, Jean-Michel

    1993-12-01

    We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes—and related models—on a hypersphere. In this new approach, the system is viewed as a single-component fluid of charged bihard spheres constrained to move at the surface of a four-dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This new simulation technique enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and reliability of the method.
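
    Widom's insertion method mentioned above estimates the excess chemical potential from the average Boltzmann factor of randomly inserted test particles. The sketch below states the generic estimator under placeholder configuration and energy objects; it does not reproduce the paper's hypersphere potential.

```python
import numpy as np

rng = np.random.default_rng(0)

def widom_mu_excess(configurations, insertion_energy, beta, n_trials=1000):
    """mu_ex = -(1/beta) * ln < exp(-beta * dU) >, where dU is the energy
    of a randomly inserted test particle, averaged over stored
    canonical-ensemble configurations."""
    boltzmann = []
    for config in configurations:
        for _ in range(n_trials):
            du = insertion_energy(config, rng)  # placeholder pair-energy sum
            boltzmann.append(np.exp(-beta * du))
    return -np.log(np.mean(boltzmann)) / beta

# Toy check: an ideal gas (dU = 0 always) gives mu_ex = 0.
print(widom_mu_excess([None] * 10, lambda config, rng: 0.0, beta=1.0))
```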

  2. Comments on "The multisynapse neural network and its application to fuzzy clustering".

    PubMed

    Yu, Jian; Hao, Pengwei

    2005-05-01

    In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.
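
    For reference, the objective-functional fuzzy c-partition problem at issue has the standard constrained form below (a generic statement, not a quotation from either paper). Minimizing the Lagrangian L = J_m + Σ_k λ_k(Σ_i u_ik − 1) is equivalent to the constrained problem only when stationarity and the constraints hold simultaneously, which is the equivalence the comment argues FBACN fails to preserve.

```latex
\min_{U,V}\; J_m(U,V) = \sum_{k=1}^{n}\sum_{i=1}^{c} u_{ik}^{m}\,\lVert x_k - v_i \rVert^{2}
\quad \text{subject to} \quad
\sum_{i=1}^{c} u_{ik} = 1 \;\; \forall k, \qquad u_{ik} \ge 0.
```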

  3. Validating Neuro-QoL short forms and targeted scales with people who have multiple sclerosis.

    PubMed

    Miller, Deborah M; Bethoux, Francois; Victorson, David; Nowinski, Cindy J; Buono, Sarah; Lai, Jin-Shei; Wortman, Katy; Burns, James L; Moy, Claudia; Cella, David

    2016-05-01

    Multiple sclerosis (MS) is a chronic, progressive, and disabling disease of the central nervous system with dramatic variations in the combination and severity of symptoms it can produce. The lack of reliable disease-specific health-related quality of life (HRQL) measures for use in clinical trials prompted the development of the Neurology Quality of Life (Neuro-QOL) instrument, which includes 13 scales that assess physical, emotional, cognitive, and social domains, for use in a variety of neurological illnesses. The objective of this research paper is to conduct an initial assessment of the reliability and validity of the Neuro-QOL short forms (SFs) in MS. We assessed reliability, concurrent validity, known-groups validity, and responsiveness between cross-sectional and longitudinal data in 161 recruited MS patients. Internal consistency was high for all measures (α = 0.81-0.95) and ICCs were within the acceptable range (0.76-0.91); concurrent and known-groups validity were highest with the Global HRQL question. Longitudinal assessment was limited by the lack of disease progression in the group. The Neuro-QOL SFs demonstrate good internal consistency, test-retest reliability, and concurrent and known-groups validity in this MS population, supporting the validity of Neuro-QOL in adults with MS. © The Author(s), 2015.

  4. Optimal SSN Tasking to Enhance Real-time Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.

    2016-09-01

    Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator-configured, state information-gain metrics to determine observation priorities. This improves the efficiency with which sensors target objects as awareness changes and new information is needed, rather than solely at predefined frequencies. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.

  5. Integrated models to support multiobjective ecological restoration decisions.

    PubMed

    Fraser, Hannah; Rumpff, Libby; Yen, Jian D L; Robinson, Doug; Wintle, Brendan A

    2017-12-01

    Many objectives motivate ecological restoration, including improving vegetation condition, increasing the range and abundance of threatened species, and improving species richness and diversity. Although models have been used to examine the outcomes of ecological restoration, few researchers have attempted to develop models to account for multiple, potentially competing objectives. We developed a combined state-and-transition, species-distribution model to predict the effects of restoration actions on vegetation condition and extent, bird diversity, and the distribution of several bird species in southeastern Australian woodlands. The actions reflected several management objectives. We then validated the models against an independent data set and investigated how the best management decision might change when objectives were valued differently. We also used model results to identify effective restoration options for vegetation and bird species under a constrained budget. In the examples we evaluated, no one action (improving vegetation condition and extent, increasing bird diversity, or increasing the probability of occurrence for threatened species) provided the best outcome across all objectives. In agricultural lands, the optimal management actions for promoting the occurrence of the Brown Treecreeper (Climacteris picumnus), an iconic threatened species, resulted in little improvement in the extent of the vegetation and a high probability of decreased vegetation condition. This result highlights that the best management action in any situation depends on how much the different objectives are valued. In our example scenario, no management or weed control were most likely to be the best management options to satisfy multiple restoration objectives. Our approach to exploring trade-offs in management outcomes through integrated modeling and structured decision-support approaches has wide application for situations in which trade-offs exist between competing conservation objectives. © 2017 Society for Conservation Biology.

  6. Development and validation of a visual grading scale for assessing image quality of AP pelvis radiographic images

    PubMed Central

    England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter

    2016-01-01

    Objective: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiographs. Methods: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. Results: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). Conclusion: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. Advances in knowledge: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality. PMID:26943836
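
    For reference, the internal-reliability statistic reported above, Cronbach's alpha, is computed from the per-item variances and the variance of the total score. The snippet below is the textbook formula applied to a made-up ratings matrix, not code or data from the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (raters x items) matrix; alpha = k/(k-1) * (1 - sum of item
    variances / variance of the summed total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of scale items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.normal(3, 1, size=(30, 1))            # shared "image quality" factor
demo = base + rng.normal(0, 0.7, size=(30, 24))  # 30 raters, 24 correlated items
print(round(cronbach_alpha(demo), 2))
```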

  7. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.
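
    The primitive behind non-dominated sorting in such algorithms is the Pareto-dominance test. The sketch below shows that test and the extraction of the first front for a toy population of candidate models; NSGA-specific machinery (successive fronts, crowding distance, genetic operators) is omitted.

```python
def dominates(f, g):
    """f dominates g if it is no worse in every objective and strictly
    better in at least one (all objectives are misfits to be minimized)."""
    return (all(fi <= gi for fi, gi in zip(f, g))
            and any(fi < gi for fi, gi in zip(f, g)))

def first_pareto_front(population):
    """Return the non-dominated members: the estimated Pareto-optimal set."""
    return [p for p in population
            if not any(dominates(q["objs"], p["objs"])
                       for q in population if q is not p)]

# Toy population: each candidate carries one misfit per data component.
pop = [{"objs": (0.2, 0.9)}, {"objs": (0.5, 0.5)}, {"objs": (0.6, 0.6)}]
print(first_pareto_front(pop))   # third member is dominated by the second
```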

  8. FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation

    NASA Technical Reports Server (NTRS)

    Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank

    2004-01-01

    Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.

  9. Toward automated formation of microsphere arrangements using multiplexed optical tweezers

    NASA Astrophysics Data System (ADS)

    Rajasekaran, Keshav; Bollavaram, Manasa; Banerjee, Ashis G.

    2016-09-01

    Optical tweezers offer certain advantages such as multiplexing using a programmable spatial light modulator, flexibility in the choice of the manipulated object and the manipulation medium, precise control, easy object release, and minimal object damage. However, automated manipulation of multiple objects in parallel, which is essential for efficient and reliable formation of micro-scale assembly structures, poses a difficult challenge. There are two primary research issues in addressing this challenge. First, the presence of a stochastic Langevin force giving rise to Brownian motion requires motion control for all the manipulated objects at fast rates of several Hz. Second, the object dynamics are non-linear and difficult to represent analytically due to the interaction of the multiple optical traps manipulating neighboring objects. As a result, automated controllers have not been realized for tens of objects, particularly for three-dimensional motions with guaranteed collision avoidance. In this paper, we model the effect of interacting optical traps on microspheres with significant Brownian motions in stationary fluid media, and develop simplified state-space representations. These representations are used to design a model predictive controller to coordinate the motions of several spheres in real time. Preliminary experiments demonstrate the utility of the controller in automatically forming desired arrangements of varying configurations starting with randomly dispersed microspheres.

  10. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    NASA Astrophysics Data System (ADS)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.

  11. A 3000-year record of ground-rupturing earthquakes along the central North Anatolian fault near Lake Ladik, Turkey

    USGS Publications Warehouse

    Fraser, J.; Pigati, J.S.; Hubert-Ferrari, A.; Vanneste, K.; Avsar, U.; Altinok, S.

    2009-01-01

    The North Anatolian fault (NAF) is a ~1500-km-long, arcuate, dextral strike-slip fault zone in northern Turkey that extends from the Karliova triple junction to the Aegean Sea. East of Bolu, the fault zone exhibits evidence of a sequence of large (Mw >7) earthquakes during the twentieth century that migrated from east to west. Prolonged human occupation in this region provides an extensive, but not exhaustive, historical record of large earthquakes prior to the twentieth century that covers much of the last 2000 yr. In this study, we extend our knowledge of rupture events in the region by evaluating the stratigraphy and chronology of sediments exposed in a paleoseismic trench across a splay of the NAF at Destek, ~6.5 km east of Lake Ladik (40.868° N, 36.121° E). The trenched fault strand forms an uphill-facing scarp and associated sediment trap below a small catchment area. The trench exposed a narrow fault zone that has juxtaposed a sequence of weakly defined paleosols interbedded with colluvium against highly fractured bedrock. We mapped magnetic susceptibility variations on the trench walls and found evidence for multiple visually unrecognized colluvial wedges. This technique was also used to constrain a predominantly dip-slip style of displacement on this fault splay. Sediments exposed in the trench were dated using both charcoal and terrestrial gastropod shells to constrain the timing of the earthquake events. While the gastropod shells consistently yielded 14C ages that were too old (by ~900 yr), we obtained highly reliable 14C ages from the charcoal by dating multiple components of the sample material. Our radiocarbon chronology constrains the timing of seven large earthquakes over the past 3000 yr prior to the 1943 Tosya earthquake, including event ages of (2σ error): A.D. 1437-1788, A.D. 1034-1321, A.D. 549-719, A.D. 17-585 (1-3 events), 35 B.C.-A.D. 28, 700-392 B.C., 912-596 B.C. Our results indicate an average interevent time of 385 ± 166 yr (1σ).

  12. Blinded evaluation of interrater reliability of an operative competency assessment tool for direct laryngoscopy and rigid bronchoscopy.

    PubMed

    Ishman, Stacey L; Benke, James R; Johnson, Kaalan Erik; Zur, Karen B; Jacobs, Ian N; Thorne, Marc C; Brown, David J; Lin, Sandra Y; Bhatti, Nasir; Deutsch, Ellen S

    2012-10-01

    OBJECTIVES To confirm interrater reliability using blinded evaluation of a skills-assessment instrument to assess the surgical performance of resident and fellow trainees performing pediatric direct laryngoscopy and rigid bronchoscopy in simulated models. DESIGN Prospective, paired, blinded observational validation study. SUBJECTS Paired observers from multiple institutions simultaneously evaluated residents and fellows who were performing surgery in an animal laboratory or using high-fidelity manikins. The evaluators had no previous affiliation with the residents and fellows and did not know their year of training. INTERVENTIONS One- and 2-page versions of an objective structured assessment of technical skills (OSATS) assessment instrument composed of global and task-specific surgical items were used to evaluate surgical performance. RESULTS Fifty-two evaluations were completed by 17 attending evaluators. The instrument agreement for the 2-page assessment was 71.4% when measured as a binary variable (ie, competent vs not competent) (κ = 0.38; P = .08). Evaluation as a continuous variable revealed a 42.9% percentage agreement (κ = 0.18; P = .14). The intraclass correlation was 0.53, considered substantial/good interrater reliability (69% reliable). For the 1-page instrument, agreement was 77.4% when measured as a binary variable (κ = 0.53; P = .0015). Agreement when evaluated as a continuous measure was 71.0% (κ = 0.54; P < .001). The intraclass correlation was 0.73, considered high interrater reliability (85% reliable). CONCLUSIONS The OSATS assessment instrument is an effective tool for evaluating surgical performance among trainees with acceptable interrater reliability in a simulator setting. Reliability was good for both the 1- and 2-page OSATS checklists, and both serve as excellent tools to provide immediate formative feedback on operational competency.

  13. Measurement of in vivo local shear modulus using MR elastography multiple-phase patchwork offsets.

    PubMed

    Suga, Mikio; Matsuda, Tetsuya; Minato, Kotaro; Oshiro, Osamu; Chihara, Kunihiro; Okamoto, Jun; Takizawa, Osamu; Komori, Masaru; Takahashi, Takashi

    2003-07-01

    Magnetic resonance elastography (MRE) is a method that can visualize the propagating and standing shear waves in an object being measured. The quantitative value of a shear modulus can be calculated by estimating the local shear wavelength. Low-frequency mechanical motion must be used for soft, tissue-like objects because a propagating shear wave rapidly attenuates at a higher frequency. Moreover, a propagating shear wave is distorted by reflections from the boundaries of objects. However, the distortions are minimal around the wave front of the propagating shear wave. Therefore, we can avoid the effect of reflection on a region of interest (ROI) by adjusting the duration of mechanical vibrations. Thus, the ROI is often shorter than the propagating shear wavelength. In the MRE sequence, a motion-sensitizing gradient (MSG) is synchronized with mechanical cyclic motion. MRE images with multiple initial phase offsets can be generated with increasing delays between the MSG and mechanical vibrations. This paper proposes a method for measuring the local shear wavelength using MRE multiple initial phase patchwork offsets that can be used when the size of the object being measured is shorter than the local wavelength. To confirm the reliability of the proposed method, computer simulations, a simulated tissue study and in vitro and in vivo studies were performed.

  14. Pareto frontier analyses based decision making tool for transportation of hazardous waste.

    PubMed

    Das, Arup; Mazumder, T N; Gupta, A K

    2012-08-15

    Transportation of hazardous wastes through a region poses an immense threat to development along its road network. The risk to the population exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation, one that accounts for the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all the concerned stakeholders. The objective of this study is to design a methodology for routing hazardous wastes between the generating units and the disposal facilities through a capacity-constrained network. The proposed methodology uses an a posteriori method with a multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of transportation of hazardous wastes in the Kolkata Metropolitan Area is also provided to elucidate the methodology. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system, while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  16. A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.

    2012-04-01

    This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.

  17. Sequential detection of temporal communities by estrangement confinement.

    PubMed

    Kawadia, Vikas; Sreenivasan, Sameet

    2012-01-01

    Temporal communities are the result of a consistent partitioning of nodes across multiple snapshots of an evolving network, and they provide insights into how dense clusters in a network emerge, combine, split and decay over time. To reliably detect temporal communities we need to not only find a good community partition in a given snapshot but also ensure that it bears some similarity to the partition(s) found in the previous snapshot(s), a particularly difficult task given the extreme sensitivity of community structure yielded by current methods to changes in the network structure. Here, motivated by the inertia of inter-node relationships, we present a new measure of partition distance called estrangement, and show that constraining estrangement enables one to find meaningful temporal communities at various degrees of temporal smoothness in diverse real-world datasets. Estrangement confinement thus provides a principled approach to uncovering temporal communities in evolving networks.

  18. The reliability and validity study of the Kinesthetic and Visual Imagery Questionnaire in individuals with Multiple Sclerosis

    PubMed Central

    Tabrizi, Yousef Moghadas; Zangiabadi, Nasser; Mazhari, Shahrzad; Zolala, Farzaneh

    2013-01-01

    Objective Motor imagery (MI) has recently been considered as an adjunct to physical rehabilitation in patients with multiple sclerosis (MS). It is necessary to assess MI abilities and benefits in patients with MS by using a reliable tool. The Kinesthetic and Visual Imagery Questionnaire (KVIQ) was recently developed to assess MI ability in patients with stroke and other disabilities. Considering the different underlying pathologies, the present study aimed to examine the validity and reliability of the KVIQ in MS patients. Method Fifteen MS patients were assessed using the KVIQ in 2 sessions (5-14 days apart) by the same examiner. In the second session, the participants also completed a revised MI questionnaire (MIQ-R) as the gold standard. Intra-class correlation coefficients (ICCs) were measured to determine test-retest reliability. Spearman's correlation analysis was performed to assess concurrent validity with the MIQ-R. Furthermore, the internal consistency (Cronbach's alpha) and factorial structure of the KVIQ were studied. Results The test-retest reliability of the KVIQ was good (ICCs: total KVIQ=0.89, visual KVIQ=0.85, and kinesthetic KVIQ=0.93), and the concurrent validity between the KVIQ and MIQ-R was good (r=0.79). The KVIQ had good internal consistency, with a high Cronbach's alpha (alpha=0.84). Factorial analysis showed a bi-factorial structure of the KVIQ, with the visual factor explaining 57.6% and the kinesthetic factor 32.4% of the variance. Conclusions The results of the present study revealed that the KVIQ is a valid and reliable tool for assessing MI in MS patients. PMID:24271091

  19. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  20. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization is assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability are established. A set of sufficient conditions for the existence of a region of parameter identifiability is derived, and a computation procedure employing interval arithmetic is provided for finding the regions of parameter identifiability. If the vector of true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  1. Multi-objective optimization of GENIE Earth system models.

    PubMed

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  2. Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.

    2012-01-01

    Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…

  3. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.

    Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near-minimal probability-of-error decision algorithm.

  5. The reliability of photoneutron cross sections for 90,91,92,94Zr

    NASA Astrophysics Data System (ADS)

    Varlamov, V. V.; Davydov, A. I.; Ishkhanov, B. S.; Orlin, V. N.

    2018-05-01

    Data on partial photoneutron reaction cross sections (γ,1n) and (γ,2n) for 90,91,92,94Zr obtained at Livermore (USA) and for 90Zr obtained at Saclay (France) were analyzed. The experimental data were obtained using quasimonoenergetic photon beams from the annihilation in flight of relativistic positrons. The method of photoneutron multiplicity sorting, based on measuring the neutron energy, was used to separate the partial reactions. The analysis is guided by objective physical criteria for data reliability. Large systematic uncertainties were found in the partial cross sections, since they do not satisfy these criteria. To obtain reliable cross sections for the partial (γ,1n) and (γ,2n) and total (γ,1n) + (γ,2n) reactions on 90,91,92,94Zr and the (γ,3n) reaction on 94Zr, the experimental-theoretical method was used. It is based on the experimental neutron yield cross section, which is largely independent of the neutron multiplicity, and on theoretical equations of the combined photonucleon reaction model (CPNRM). The newly evaluated data are compared with the experimental ones, and the reasons for the noticeable disagreements between them are discussed.

  6. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    PubMed Central

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme advances packets farther toward the sink than the usual route-selection decisions of comparable ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results have shown the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362

  7. A 3-year prospective study of the effects of adjuvant treatments on cognition in women with early stage breast cancer

    PubMed Central

    Jenkins, V; Shilling, V; Deutsch, G; Bloomfield, D; Morris, R; Allan, S; Bishop, H; Hodson, N; Mitra, S; Sadler, G; Shah, E; Stein, R; Whitehead, S; Winstanley, J

    2006-01-01

    The neuropsychological performance of 85 women with early stage breast cancer scheduled for chemotherapy, 43 women scheduled for endocrine therapy and/or radiotherapy and 49 healthy control subjects was assessed at baseline (T1), postchemotherapy (or 6 months) (T2) and at 18 months (T3). Repeated measures analysis found no significant interactions or main effect of group after controlling for age and intelligence. Using a calculation to examine performance at an individual level, reliable decline on multiple tasks was seen in 20% of chemotherapy patients, 26% of nonchemotherapy patients and 18% of controls at T2 (18%, 14 and 11%, respectively, at T3). Patients who had experienced a treatment-induced menopause were more likely to show reliable decline on multiple measures at T2 (OR = 2.6, 95% confidence interval (CI) 0.823-8.266, P = 0.086). Psychological distress, quality of life measures and self-reported cognitive failures did not impact on objective tests of cognitive function, but were significantly associated with each other. The results show that a few women experienced objective, measurable change in their concentration and memory following standard adjuvant therapy, but the majority were either unaffected or even improved over time. PMID:16523200

  8. An inexact chance-constrained programming model for water quality management in Binhai New Area of Tianjin, China.

    PubMed

    Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R

    2011-04-15

    In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and thus the applicability of the modeling process is greatly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environment's capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. The results are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.
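
    As a generic illustration of the CCP ingredient (not the paper's specific model), a linear chance constraint with a random right-hand side admits a deterministic equivalent through the quantile of its distribution:

```latex
\Pr\!\left\{ \sum_{j} a_{ij}\, x_j \le b_i \right\} \ge 1 - p_i
\quad\Longleftrightarrow\quad
\sum_{j} a_{ij}\, x_j \le F_{b_i}^{-1}(p_i),
```

    where F_{b_i} is the cumulative distribution function of b_i and p_i is the admissible risk of violating constraint i; the interval-valued coefficients are then handled through the usual ILP lower- and upper-bound submodels.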

  9. Constrained multiple indicator kriging using sequential quadratic programming

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Erhan Tercan, A.

    2012-11-01

    Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDFs). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper, a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff, under unbiasedness and order-relation constraints, and solving the constrained indicator kriging system by sequential quadratic programming. A computer code is written in the Matlab environment to implement the developed algorithm, and the method is applied to thickness data.
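
    A much-simplified illustration of the role SQP plays: the snippet below does not solve the full constrained kriging system for the weights, but uses the same solver family (SciPy's SLSQP) to impose the order relations, projecting raw per-cutoff MIK estimates onto the set of valid CCDF values. All numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

raw = np.array([0.15, 0.10, 0.42, 0.38, 0.70, 1.05])   # violates order/bounds

objective = lambda f: np.sum((f - raw) ** 2)            # stay close to raw MIK
constraints = [{"type": "ineq", "fun": lambda f, i=i: f[i + 1] - f[i]}
               for i in range(len(raw) - 1)]            # F_{i+1} >= F_i
bounds = [(0.0, 1.0)] * len(raw)                        # 0 <= F_i <= 1

res = minimize(objective, np.clip(raw, 0.0, 1.0), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(np.round(res.x, 3))   # a valid nondecreasing CCDF closest to the raw one
```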

  10. Robust fuzzy control subject to state variance and passivity constraints for perturbed nonlinear systems with multiplicative noises.

    PubMed

    Chang, Wen-Jer; Huang, Bo-Jyun

    2014-11-01

    The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach that satisfies multiple performance constraints for perturbed nonlinear systems with multiplicative noises. Finally, a numerical example for the control of a perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Prioritization of Stockpile Maintenance with Layered Pareto Fronts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Sarah E.; Anderson-Cook, Christine M.; Lu, Lu

    Difficult choices are required for a decision-making process where resources and budgets are increasingly constrained. This study demonstrates a structured decision-making approach using layered Pareto fronts to identify priorities about how to allocate funds between munitions stockpiles based on their estimated reliability, the urgency of needing available units, and the consequences if adequate numbers of units are not available. This case study, while specific to the characteristics of a group of munitions stockpiles, illustrates the general process of structured decision-making based on first identifying appropriate metrics that summarize the important dimensions of the decision, and then objectively eliminating non-contenders from further consideration. Finally, the final subjective stage incorporates user priorities to select the four stockpiles to receive additional maintenance and surveillance funds based on understanding the trade-offs and robustness to various user priorities.
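
    The layering idea can be sketched as repeated peeling of non-dominated sets, so that near-optimal stockpiles survive the objective screening stage before subjective priorities are applied. The metrics and their orientation (all minimized) below are illustrative assumptions, not the study's data.

```python
def dominates(a, b):
    """a dominates b when no worse in every metric, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def layered_pareto_fronts(items, n_layers=2):
    """Peel off the non-dominated front, remove it, and repeat; anything
    beyond the kept layers is objectively eliminated from consideration."""
    remaining, layers = list(items), []
    for _ in range(n_layers):
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        layers.append(front)
        remaining = [p for p in remaining if p not in front]
    return layers

# (unreliability, urgency rank, consequence rank) for invented stockpiles
stockpiles = [(0.2, 3, 2), (0.1, 4, 4), (0.3, 1, 1), (0.25, 2, 3), (0.4, 4, 4)]
print(layered_pareto_fronts(stockpiles))
```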

  13. The welfare effects of integrating renewable energy into electricity markets

    NASA Astrophysics Data System (ADS)

    Lamadrid, Alberto J.

    The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services, for both individual decision makers and social planners, with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability and, in the case of a Stochastic, Security-Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day, and compare the effects of collocating ESS at wind sites with locating the same amount of storage (deferrable demand) at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes. The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.

  14. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.

  15. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    PubMed

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Novel work is utilizing constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for by tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. The aim of this study was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region-of-interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in the ipsilesional (0.86-1.00; p<0.05) and contralesional hemispheres (0.94-1.00; p<0.0001) for CST and SLF fibers, and excellent reliability for all metrics in callosal fibers (0.85-1.00; p<0.05). ICCs ranged from poor to excellent for tract number and tract volume in the ipsilesional (-0.11 to 0.92; p≤0.57) and contralesional hemispheres (-0.27 to 0.93; p≤0.64) for CST and SLF fibers. Like other select DW-MRI approaches, CSD-based tractography is a reliable approach to evaluate FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume. Copyright © 2015 Elsevier B.V. All rights reserved.
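
    For reference, one common form of the statistic reported above is the two-way random-effects, absolute-agreement ICC(2,1); the abstract does not state which ICC model the authors used, so the sketch below is a generic illustration on invented FA values.

```python
import numpy as np

def icc_2_1(Y):
    """Shrout-Fleiss ICC(2,1) for a (subjects x sessions/raters) matrix,
    computed from the two-way ANOVA mean squares."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # sessions
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

fa = np.array([[0.45, 0.47], [0.52, 0.50], [0.38, 0.40],
               [0.61, 0.59], [0.49, 0.51]])   # 5 subjects, 2 sessions
print(round(icc_2_1(fa), 2))
```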

  16. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability levels of different system topologies. We next propose a new methodology in which system topologies, constrained by a specified reliability level, are automatically generated. The path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
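
    For independent components, the reliability-block-diagram step mentioned first reduces to simple series/parallel arithmetic: series blocks multiply reliabilities, while redundant (parallel) blocks multiply failure probabilities. The feeder below and its component reliabilities are invented for illustration.

```python
def series(*reliabilities):
    """All blocks must work: reliabilities multiply."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(*reliabilities):
    """Redundant paths: the block fails only if every path fails."""
    q = 1.0
    for r in reliabilities:
        q *= (1.0 - r)
    return 1.0 - q

# Hypothetical MEA feeder: generator -> two redundant buses -> load switch
r_topology = series(0.999, parallel(0.98, 0.98), 0.995)
print(round(r_topology, 5))
```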

  17. Interpreting the Strongly Lensed Supernova iPTF16geu: Time Delay Predictions, Microlensing, and Lensing Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, Anupreeta; Oguri, Masamune; More, Surhud

    2017-02-01

    We present predictions for time delays between multiple images of the gravitationally lensed supernova iPTF16geu, which was recently discovered by the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia, for which the intrinsic luminosity is usually well known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hr reported by Goobar et al. but places a stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from expected values, suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems for measuring the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. However, the observed total magnification of iPTF16geu is larger than expected given its redshift. This may be a further indication of ongoing microlensing in this system.

  18. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
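
    For orientation, a simplified one-facet version of the coefficient reported here (persons p crossed with stations s, with n_s stations) takes the familiar form below; the study's actual design had additional facets (sites, versions), so this is illustrative only.

    ```latex
    % One-facet generalizability coefficient for a persons-by-stations
    % (p x s) design, from the G-study variance components:
    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{ps,e} / n_s}
    ```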

  19. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.

  20. A pragmatic decision model for inventory management with heterogeneous suppliers

    NASA Astrophysics Data System (ADS)

    Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa

    2018-05-01

    For enterprises, it is imperative that the trade-off between the cost of inventory and risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead-time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
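
    The abstract does not reproduce the model's equations, so the toy below is only a hedged sketch of the kind of integer program involved: an order is split between a reliable and an unreliable supplier, with the unreliable supplier's lateness risk folded into an expected per-unit penalty. All numbers (costs, capacities, p_late, the penalty) are invented.

    ```python
    # Toy order-splitting integer program (not the paper's actual model).
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    demand = 100
    c_reliable, c_unreliable = 12.0, 9.0
    p_late = 0.3          # assumed chance the unreliable delivery is late
    penalty = 8.0         # assumed per-unit cost of a late/short unit

    # Decision vector x = [q_reliable, q_unreliable]; expected unit costs:
    c = np.array([c_reliable, c_unreliable + p_late * penalty])

    cons = LinearConstraint(np.array([[1.0, 1.0]]), lb=demand, ub=demand)
    res = milp(c, constraints=cons,
               integrality=np.ones(2),            # both quantities integer
               bounds=Bounds(lb=0, ub=[80, 80]))  # supplier capacities
    print(res.x, res.fun)   # optimal split and expected total cost
    ```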

  1. Online anomaly detection in wireless body area networks for reliable healthcare monitoring.

    PubMed

    Salem, Osman; Liu, Yaning; Mehaoua, Ahmed; Boutaba, Raouf

    2014-09-01

    In this paper, we propose a lightweight approach for online detection of faulty measurements by analyzing the data collected from medical wireless body area networks. The proposed framework performs sequential data analysis using a smart phone as a base station, and takes into account the constrained resources of the smart phone, such as processing power and storage capacity. The main objective is to raise alarms only when patients enter an emergency situation, and to discard false alarms triggered by faulty measurements or ill-behaved sensors. The proposed approach is based on the Haar wavelet decomposition and the Hampel filter for spatial analysis, and on nonseasonal Holt-Winters forecasting for temporal analysis. Our objective is to reduce false alarms resulting from unreliable measurements and to reduce unnecessary healthcare intervention. We apply our proposed approach to a real physiological dataset. Our experimental results demonstrate the effectiveness of our approach in achieving good detection accuracy with a low false alarm rate. The simplicity and the processing speed of our proposed framework make it useful and efficient for real-time diagnosis.
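
    As a hedged illustration of one ingredient named above, the sketch below implements a basic Hampel filter: a sample is flagged when it deviates from the window median by more than a few robust standard deviations (1.4826 × MAD). The window size, threshold, and heart-rate values are illustrative, not the paper's settings.

    ```python
    # Basic Hampel filter for flagging faulty sensor readings.
    import numpy as np

    def hampel(x, window=5, n_sigma=3.0):
        x = np.asarray(x, dtype=float)
        flags = np.zeros(len(x), dtype=bool)
        for i in range(len(x)):
            lo, hi = max(0, i - window), min(len(x), i + window + 1)
            med = np.median(x[lo:hi])
            mad = np.median(np.abs(x[lo:hi] - med))
            if np.abs(x[i] - med) > n_sigma * 1.4826 * mad:
                flags[i] = True
        return flags

    heart_rate = np.array([72, 74, 73, 75, 190, 74, 73, 76, 75, 74])  # one spike
    print(hampel(heart_rate))  # only the spurious 190 bpm sample is flagged
    ```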

  2. Reliability of psychophysiological responses across multiple motion sickness stimulation tests

    NASA Technical Reports Server (NTRS)

    Stout, C. S.; Toscano, W. B.; Cowings, P. S.

    1995-01-01

    Although there is general agreement that a high degree of variability exists between subjects in their autonomic nervous system responses to motion sickness stimulation, very little evidence exists that examines the reproducibility of autonomic responses within subjects during motion sickness stimulation. Our objectives were to examine the reliability of autonomic responses and symptom levels across five testing occasions using the (1) final minute of testing, (2) change in autonomic response and the change in symptom level, and (3) strength of the relationship between the change in symptom level and the change in autonomic responses across the entire motion sickness test. The results indicate that, based on the final minute of testing, the autonomic responses of heart rate, blood volume pulse, and respiration rate are moderately stable across multiple tests. Changes in heart rate, blood volume pulse, respiration rate, and symptoms throughout the test duration are less stable across the tests. Finally, autonomic responses and symptom levels are significantly related across the entire motion sickness test.

  3. The Spatial Distribution of Attention within and across Objects

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2011-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a “grouped array”). Using a modified version of the Egly et al. object cuing task, we systematically manipulated within-object distance and object boundaries. Four major findings are reported: 1) spatial attention forms a gradient across the attended object; 2) object boundaries limit the distribution of this gradient, with the spread of attention constrained by a boundary; 3) boundaries within an object operate similarly to across-object boundaries: we observed object-based effects across a discontinuity within a single object, without the demand to divide or switch attention between discrete object representations; and 4) the gradient of spatial attention across an object directly modulates perceptual sensitivity, implicating a relatively early locus for the grouped array representation. PMID:21728455

  4. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  5. A new assessment tool for patients with multiple sclerosis from Spanish-speaking countries: validation of the Brief International Cognitive Assessment for MS (BICAMS) in Argentina.

    PubMed

    Vanotti, Sandra; Smerbeck, Audrey; Benedict, Ralph H B; Caceres, Fernando

    2016-10-01

    The Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS) is an international assessment tool for monitoring cognitive function in multiple sclerosis (MS) patients. BICAMS comprises the Symbol Digit Modalities Test (SDMT), the California Verbal Learning Test - Second Edition (CVLT II) and the Brief Visuospatial Memory Test - Revised (BVMT-R). Our objective was to validate and assess the reliability of BICAMS as applied in Argentina and to obtain normative data in Spanish for this population. The sample was composed of 50 MS patients and 100 healthy controls (HC). In order to test its reliability, BICAMS was re-administered in a subset of 25 patients. The sample's average age was 43.42 ± 10.17 years, and average years of schooling were 14.86 ± 2.78. About 74% of the participants were women. The groups did not differ in age, years of schooling, or gender. The MS group performed significantly worse than the HC group across the three neuropsychological tests, yielding the following Cohen's d values: SDMT: .85; CVLT I: .87; and BVMT-R: .40. The mean raw scores for Argentina normative data were as follows: SDMT: 56.71 ± 10.85; CVLT I: 60.88 ± 10.46; and BVMT-R: 23.44 ± 5.84. Finally, test-retest reliability coefficients for each test were as follows: SDMT: r = .95; CVLT I: r = .87; and BVMT-R: r = .82. This BICAMS version is reliable and useful as a monitoring tool for identifying MS patients with cognitive impairment.

  6. Constraining Multiple Grammars

    ERIC Educational Resources Information Center

    Hopp, Holger

    2014-01-01

    This article offers the author's commentary on the Multiple Grammars (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper in the present issue. Multiple Grammars advances the claim that optionality is a constitutive characteristic of any one grammar, with interlanguage grammars being perhaps the clearest examples of a…

  7. Tag-to-Tag Interference Suppression Technique Based on Time Division for RFID.

    PubMed

    Khadka, Grishma; Hwang, Suk-Seung

    2017-01-01

    Radio-frequency identification (RFID) is a tracking technology that enables immediate automatic object identification and rapid data sharing for a wide variety of modern applications, using radio waves for data transmission from a tag to a reader. RFID is already well established in technical areas, and many companies have developed corresponding standards and measurement techniques. In the construction industry, effective monitoring of materials and equipment is an important task, and RFID helps to improve monitoring and controlling capabilities, in addition to enabling automation for construction projects. However, on construction sites, there are many tagged objects and multiple RFID tags that may interfere with each other's communications. This reduces the reliability and efficiency of the RFID system. In this paper, we propose an anti-collision algorithm for communication between multiple tags and a reader. In order to suppress interference signals from multiple neighboring tags, the proposed algorithm employs the time-division (TD) technique, where tags in the interrogation zone are assigned a specific time slot so that at every instant in time, a reader communicates with tags using the specific time slot. We present representative computer simulation examples to illustrate the performance of the proposed anti-collision technique for multiple RFID tags.

  8. Multiply-Constrained Semantic Search in the Remote Associates Test

    ERIC Educational Resources Information Center

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  9. Free energy from molecular dynamics with multiple constraints

    NASA Astrophysics Data System (ADS)

    den Otter, W. K.; Briels, W. J.

    In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.

  10. Validity of an Observation Method for Assessing Pain Behavior in Individuals With Multiple Sclerosis

    PubMed Central

    Cook, Karon F.; Roddey, Toni S.; Bamer, Alyssa M.; Amtmann, Dagmar; Keefe, Francis J

    2012-01-01

    Context Pain is a common and complex experience for individuals who live with multiple sclerosis (MS) that interferes with physical, psychological and social function. A valid and reliable tool for quantifying observed pain behaviors in MS is critical to understanding how pain behaviors contribute to pain-related disability in this clinical population. Objectives To evaluate the reliability and validity of a pain behavioral observation protocol in individuals who have MS. Methods Community-dwelling volunteers with multiple sclerosis (N=30), back pain (N=5), or arthritis (N=8) were recruited based on clinician referrals, advertisements, fliers, web postings, and participation in previous research. Participants completed measures of pain severity, pain interference, and self-reported pain behaviors and were videotaped doing typical activities (e.g., walking, sitting). Two coders independently recorded frequencies of pain behaviors by category (e.g., guarding, bracing) and inter-rater reliability statistics were calculated. Naïve observers reviewed videotapes of individuals with MS and rated their pain. Spearman correlations were calculated between pain behavior frequencies and self-reported pain and pain ratings by naïve observers. Results Inter-rater reliability estimates supported the reliability of pain codes in the MS sample. Kappa coefficients ranged from moderate agreement (sighing = 0.40) to substantial agreement (guarding = 0.83). These values were comparable to those obtained in the combined back pain and arthritis sample. Concurrent validity was supported by correlations with self-reported pain (0.46-0.53) and with self-reports of pain behaviors (0.58). Construct validity was supported by the finding of a 0.87 correlation between total pain behaviors observed by coders and mean pain ratings by naïve observers. Conclusion Results support use of the pain behavior observation protocol for assessing pain behaviors of individuals with MS. Valid assessments of pain behaviors of individuals with MS could lead to creative interventions in the management of chronic pain in this population. PMID:23159684
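
    As a small illustration of the agreement statistic reported here, the sketch below computes Cohen's kappa for two coders' per-interval codes of a single behavior category; the codes are fabricated for illustration and are not the study's data.

    ```python
    # Inter-rater agreement for one behavior category
    # (1 = behavior present in the interval, 0 = absent).
    from sklearn.metrics import cohen_kappa_score

    coder_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
    coder_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0]
    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"kappa = {kappa:.2f}")  # ~0.83, in the 'substantial agreement' range
    ```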

  11. Experimental and evaluated photoneutron cross sections for 197Au

    NASA Astrophysics Data System (ADS)

    Varlamov, V.; Ishkhanov, B.; Orlin, V.

    2017-10-01

    There is a serious well-known problem of noticeable disagreements between the partial photoneutron cross sections obtained in various experiments. Such data were mainly determined using quasimonoenergetic annihilation photon beams and the method of neutron multiplicity sorting at Lawrence Livermore National Laboratory (USA) and Centre d'Etudes Nucleaires of Saclay (France). The analysis of experimental cross sections employing new objective physical data reliability criteria has shown that many of those are not reliable. The IAEA Coordinated Research Project (CRP) on photonuclear data evaluation was approved. The experimental and previously evaluated cross sections of the partial photoneutron reactions (γ,1n) and (γ,2n) on 197Au were analyzed using the new data reliability criteria. The data evaluated using the new experimental-theoretical method noticeably differ from both experimental data and data previously evaluated using nuclear modeling codes gnash, gunf, alice-f, and others. These discrepancies need to be resolved.

  12. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE PAGES

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.; ...

    2017-02-10

    Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of possible network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Importantly, the importance of individual species linkages differed dramatically among cobble and sand sub-habitats within a single lake, even though species composition was identical. Furthermore, because the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce the possibilities to a small number of plausible alternatives.

  13. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.

    Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of possible network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Importantly, the importance of individual species linkages differed dramatically among cobble and sand sub-habitats within a single lake, even though species composition was identical. Furthermore, because the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce the possibilities to a small number of plausible alternatives.

  14. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities, as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
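
    A minimal sketch of one microcanonical cascade branch, the building block being modified above: each interval's volume is split exactly between its two halves, so mass is conserved at every level. The beta-distributed weights are an illustrative choice, and the sketch deliberately omits the paper's two modifications (wet first/last intervals and scale-dependent sigmoid weight functions).

    ```python
    # One plain microcanonical random cascade branch (illustrative weights).
    import numpy as np

    rng = np.random.default_rng(42)

    def disaggregate(volume, levels):
        """Split a coarse volume into 2**levels sub-interval volumes."""
        series = [volume]
        for _ in range(levels):
            nxt = []
            for v in series:
                w = rng.beta(2.0, 2.0)          # weight in (0, 1), assumed form
                nxt.extend([w * v, (1.0 - w) * v])
            series = nxt
        return np.array(series)

    daily = 24.0                                # mm of rain in one day
    fine = disaggregate(daily, 5)               # 32 sub-intervals (45 min each)
    print(fine.sum())                           # 24.0 -- mass conserved exactly
    ```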

  15. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.
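
    A hedged sketch of the particle-cloud-to-GMM step described above, using an off-the-shelf EM implementation; the synthetic range/range-rate samples and the number of mixture components are illustrative assumptions, not the authors' actual data or settings.

    ```python
    # Convert a particle-cloud PAR into a Gaussian Mixture Model via EM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic accepted samples (range [km], range-rate [km/s]) from a PAR
    particles = np.column_stack([
        rng.normal(38000.0, 500.0, 2000),
        rng.normal(0.1, 0.05, 2000),
    ])

    gmm = GaussianMixture(n_components=3, covariance_type='full').fit(particles)
    # gmm.weights_, gmm.means_, gmm.covariances_ can now seed a Bayesian filter
    print(gmm.means_)
    ```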

  16. Research pressure instrumentation for NASA space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Anderson, P. J.; Nussbaum, P.; Gustafson, G.

    1985-01-01

    The breadboard feasibility model of a silicon piezoresistive pressure transducer suitable for space shuttle main engine (SSME) applications was demonstrated. The development of pressure instrumentation for the SSME was examined. The objective is to develop prototype pressure transducers which are targeted to meet the SSME performance design goals and to fabricate, test and deliver a total of 10 prototype units. The analysis addresses how the many advantages of silicon piezoresistive strain sensing technology can be utilized effectively to achieve state-of-the-art pressure sensors with high reliability, accuracy and ease of manufacture. Integration of multiple functions on a single chip is the key attribute of the technology.

  17. Reliable inference of light curve parameters in the presence of systematics

    NASA Astrophysics Data System (ADS)

    Gibson, Neale P.

    2016-10-01

    Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.
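
    One common recipe for the marginalisation advocated here, sketched below under the assumption of BIC-derived approximate model weights (not a quote of the paper's equations): average each systematics model's parameter estimate with evidence-like weights, inflating the variance by the between-model scatter. All input numbers are hypothetical.

    ```python
    # Marginalise a transit depth over competing systematics models.
    import numpy as np

    # Per-model outputs from separate light-curve fits (depth, variance, BIC)
    depths = np.array([0.01021, 0.01034, 0.01018, 0.01040])
    var = np.array([1.2e-8, 1.5e-8, 1.1e-8, 2.0e-8])
    bic = np.array([512.3, 514.1, 511.8, 517.6])

    w = np.exp(-0.5 * (bic - bic.min()))   # BIC-based approximate weights
    w /= w.sum()

    depth_marg = np.sum(w * depths)
    # Variance combines within-model uncertainty and between-model scatter
    var_marg = np.sum(w * (var + (depths - depth_marg) ** 2))
    print(depth_marg, np.sqrt(var_marg))
    ```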

  18. MRMaid, the web-based tool for designing multiple reaction monitoring (MRM) transitions.

    PubMed

    Mead, Jennifer A; Bianco, Luca; Ottone, Vanessa; Barton, Chris; Kay, Richard G; Lilley, Kathryn S; Bond, Nicholas J; Bessant, Conrad

    2009-04-01

    Multiple reaction monitoring (MRM) of peptides uses tandem mass spectrometry to quantify selected proteins of interest, such as those previously identified in differential studies. Using this technique, the specificity of precursor to product transitions is harnessed for quantitative analysis of multiple proteins in a single sample. The design of transitions is critical for the success of MRM experiments, but predicting signal intensity of peptides and fragmentation patterns ab initio is challenging given existing methods. The tool presented here, MRMaid (pronounced "mermaid") offers a novel alternative for rapid design of MRM transitions for the proteomics researcher. The program uses a combination of knowledge of the properties of optimal MRM transitions taken from expert practitioners and literature with MS/MS evidence derived from interrogation of a database of peptide identifications and their associated mass spectra. The tool also predicts retention time using a published model, allowing ordering of transition candidates. By exploiting available knowledge and resources to generate the most reliable transitions, this approach negates the need for theoretical prediction of fragmentation and the need to undertake prior "discovery" MS studies. MRMaid is a modular tool built around the Genome Annotating Proteomic Pipeline framework, providing a web-based solution with both descriptive and graphical visualizations of transitions. Predicted transition candidates are ranked based on a novel transition scoring system, and users may filter the results by selecting optional stringency criteria, such as omitting frequently modified residues, constraining the length of peptides, or omitting missed cleavages. Comparison with published transitions showed that MRMaid successfully predicted the peptide and product ion pairs in the majority of cases with appropriate retention time estimates. As the data content of the Genome Annotating Proteomic Pipeline repository increases, the coverage and reliability of MRMaid are set to increase further. MRMaid is freely available over the internet as an executable web-based service at www.mrmaid.info.

  19. MRMaid, the Web-based Tool for Designing Multiple Reaction Monitoring (MRM) Transitions*

    PubMed Central

    Mead, Jennifer A.; Bianco, Luca; Ottone, Vanessa; Barton, Chris; Kay, Richard G.; Lilley, Kathryn S.; Bond, Nicholas J.; Bessant, Conrad

    2009-01-01

    Multiple reaction monitoring (MRM) of peptides uses tandem mass spectrometry to quantify selected proteins of interest, such as those previously identified in differential studies. Using this technique, the specificity of precursor to product transitions is harnessed for quantitative analysis of multiple proteins in a single sample. The design of transitions is critical for the success of MRM experiments, but predicting signal intensity of peptides and fragmentation patterns ab initio is challenging given existing methods. The tool presented here, MRMaid (pronounced “mermaid”) offers a novel alternative for rapid design of MRM transitions for the proteomics researcher. The program uses a combination of knowledge of the properties of optimal MRM transitions taken from expert practitioners and literature with MS/MS evidence derived from interrogation of a database of peptide identifications and their associated mass spectra. The tool also predicts retention time using a published model, allowing ordering of transition candidates. By exploiting available knowledge and resources to generate the most reliable transitions, this approach negates the need for theoretical prediction of fragmentation and the need to undertake prior “discovery” MS studies. MRMaid is a modular tool built around the Genome Annotating Proteomic Pipeline framework, providing a web-based solution with both descriptive and graphical visualizations of transitions. Predicted transition candidates are ranked based on a novel transition scoring system, and users may filter the results by selecting optional stringency criteria, such as omitting frequently modified residues, constraining the length of peptides, or omitting missed cleavages. Comparison with published transitions showed that MRMaid successfully predicted the peptide and product ion pairs in the majority of cases with appropriate retention time estimates. As the data content of the Genome Annotating Proteomic Pipeline repository increases, the coverage and reliability of MRMaid are set to increase further. MRMaid is freely available over the internet as an executable web-based service at www.mrmaid.info. PMID:19011259

  20. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  1. Diagnostics of models and observations in the contexts of exoplanets, brown dwarfs, and very low-mass stars.

    NASA Astrophysics Data System (ADS)

    Kopytova, Taisiya

    2016-01-01

    When studying isolated brown dwarfs and directly imaged exoplanets with insignificant orbital motion, we have to rely on theoretical models to determine basic parameters such as mass, age, effective temperature, and surface gravity. While stellar and atmospheric models are rapidly evolving, we need a powerful tool to test and calibrate them. In my thesis, I focussed on comparing interior and atmospheric models with observational data, in an effort to take into account various systematic effects that can significantly influence the data analysis. As a first step, about 460 candidate members of the Hyades were screened for companions using diffraction-limited imaging observations (both our own data and archival data). As a result I could establish the single-star sequence for the Hyades comprising about 250 stars (Kopytova et al. 2015, accepted to A&A). Open clusters contain many coeval objects of the same chemical composition and age, spanning a range of masses. We compare the obtained sequence with a set of theoretical isochrones, identifying systematic offsets and revealing probable issues in the models. However, there are many cases when it is impossible to test models before comparing them with observations. As a second step, we apply atmospheric models for constraining parameters of WISE 0855-07, the coolest known Y dwarf (Kopytova et al. 2014, ApJ 797, 3). We demonstrate the limits of constraining effective temperature and the presence/absence of water clouds. As a third step, we introduce a novel method to take into account the above-mentioned systematics. We construct a "systematics vector" that allows us to reveal problematic wavelength ranges when fitting atmospheric models to observed near-infrared spectra of brown dwarfs and exoplanets (Kopytova et al., in prep.). This approach plays a crucial role when retrieving abundances for these objects, in particular the C/O ratio. The latter parameter is an important key to formation scenarios of brown dwarfs and exoplanets. We show how to constrain the C/O ratio while eliminating systematic effects, which significantly improves the reliability of the final result and our conclusions about the formation history of certain exoplanets and brown dwarfs.

  2. Convergent and invariant object representations for sight, sound, and touch.

    PubMed

    Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T

    2015-09-01

    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. © 2015 Wiley Periodicals, Inc.
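
    A hedged sketch of the crossmodal classification logic described above: train a linear classifier on response patterns from one modality and test it on the same objects presented in another. The voxel patterns below are simulated with a shared object code plus noise; nothing here is the study's data or exact pipeline.

    ```python
    # Crossmodal MVPA sketch: train on "sight" patterns, test on "touch".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 60, 200
    labels = np.repeat([0, 1, 2], n_trials // 3)       # three objects

    # Simulated patterns sharing an object code across modalities, plus noise
    code = rng.normal(size=(3, n_voxels))
    sight = code[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
    touch = code[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

    clf = LogisticRegression(max_iter=1000).fit(sight, labels)  # train on sight
    print("crossmodal accuracy:", clf.score(touch, labels))    # test on touch
    ```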

  3. Two-dimensional and 3-D images of thick tissue using time-constrained times-of-flight and absorbance spectrophotometry

    NASA Astrophysics Data System (ADS)

    Benaron, David A.; Lennox, M.; Stevenson, David K.

    1992-05-01

    Reconstructing deep-tissue images in real time using spectrophotometric data from optically diffusing thick tissues has been problematic. Continuous wave applications (e.g., pulse oximetry, regional cerebral saturation) ignore both the multiple paths traveled by the photons through the tissue and the effects of scattering, allowing scalar measurements but only under limited conditions; interferometry works poorly in thick, highly-scattering media; frequency-modulated approaches may not allow full deconvolution of scattering and absorbance; and pulsed-light techniques allow for preservation of information regarding the multiple paths taken by light through the tissue, but reconstruction is both computation intensive and limited by the relative surface area available for detection of photons. We have developed a picosecond times-of-flight and absorbance (TOFA) optical system, time-constrained to measure only photons with a narrow range of path lengths and arriving within a narrow angle of the emitter-detector axis. The delay until arrival of the earliest arriving photons is a function of both the scattering and absorbance of the tissues in a direct line between the emitter and detector, reducing the influence of surrounding tissues. Measurement using a variety of emitter and detector locations produces spatial information which can be analyzed in a standard 2-D grid, or subject to computer reconstruction to produce tomographic images representing 3-D structure. Using such a technique, we have been able to demonstrate the principles of tc-TOFA, detect and localize diffusive and/or absorptive objects suspended in highly scattering media (such as blood admixed with yeast), and perform simple 3-D reconstructions using phantom objects. We are now attempting to obtain images in vivo. Potential future applications include use as a research tool, and as a continuous, noninvasive, nondestructive monitor in diagnostic imaging, fetal monitoring, neurologic and cardiac assessment. The technique may lead to real-time optical imaging and quantitation of tissue oxygen delivery.

  4. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    PubMed

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  5. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation such as virtual visits, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve the unreliable depth values, a two-step segmentation algorithm using both the depth map and the graylevel image is applied to extract the object masks. First an edge detection segments the luminance image into regions and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  6. NDE reliability and probability of detection (POD) evolution and paradigm shift

    NASA Astrophysics Data System (ADS)

    Singh, Surendra

    2014-02-01

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed "Have Cracks - Will Travel", or in short "Have Cracks", by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of the human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list gaps between "hoped for" performance and that validated on fielded hardware, including failures.
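
    As a concrete illustration of the POD-estimation phase (phase 6), the sketch below fits a hit/miss POD curve by logistic regression on log flaw size and reads off the size at 90% POD (often called a90). The hit/miss data are fabricated, and a real MIL-HDBK-1823A analysis would use unregularized maximum likelihood with confidence bounds rather than this shortcut.

    ```python
    # Hit/miss POD curve via logistic regression on log flaw size.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    size = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])  # mm
    hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])   # fabricated outcomes

    X = np.log(size).reshape(-1, 1)
    model = LogisticRegression(C=1e6).fit(X, hit)    # weak penalty ~ plain MLE

    b0, b1 = model.intercept_[0], model.coef_[0, 0]
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)      # size where POD = 0.90
    print(f"a90 ≈ {a90:.2f} mm")
    ```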

  7. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    PubMed

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
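
    The stochastic layer aside, the deterministic core of a linear fractional program can be solved with the classical Charnes-Cooper transformation, sketched below on toy numbers (not the paper's MSW model): maximizing (c·x)/(d·x) over a polyhedron becomes an ordinary LP in y = t·x and t, assuming d·x > 0.

    ```python
    # Charnes-Cooper transform: max (c@x)/(d@x) s.t. A@x <= b, x >= 0.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 1.0])      # numerator, e.g., waste diverted (toy)
    d = np.array([2.0, 4.0])      # denominator, e.g., system cost (toy)
    A = np.array([[1.0, 1.0]])    # capacity constraint
    b = np.array([10.0])

    # Variables z = [y1, y2, t]; maximize c@y  s.t.  A@y - b*t <= 0, d@y = 1
    obj = np.concatenate([-c, [0.0]])                  # linprog minimizes
    A_ub = np.hstack([A, -b.reshape(-1, 1)])
    A_eq = np.concatenate([d, [0.0]]).reshape(1, -1)
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(1),
                  A_eq=A_eq, b_eq=np.array([1.0]), bounds=(0, None))
    y, t = res.x[:2], res.x[2]
    print("x* =", y / t, " ratio =", c @ y)            # optimal mix and ratio
    ```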

  8. Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking

    PubMed Central

    Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun

    2017-01-01

    Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sample strategy to alleviate the redundancies in training samples caused by the cyclical shifts, eliminate the inconsistencies between training samples and detection samples, and introduce space anisotropic regularization to constrain the correlation filter for alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for obtaining robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers in object tracking benchmarks (OTBs). PMID:29231876

  9. Hip range of motion and provocative physical examination tests reliability and agreement in asymptomatic volunteers

    PubMed Central

    Prather, H; Harris-Hayes, M; Hunt, D; Steger-May, K; Mathew, V; Clohisy, JC

    2012-01-01

    Objective The objectives of this study are the following: 1) report passive hip ROM in asymptomatic young adults, 2) report the intra-tester and inter-tester reliability of hip ROM measurements among testers of multiple disciplines, 3) report the results of provocative hip tests and tester agreement. Design descriptive epidemiology study Setting tertiary university Participants Twenty-eight young adult volunteers without musculoskeletal symptoms, history of disorder or surgery involving the lumbar spine or lower extremities were enrolled and completed the study. Methods Asymptomatic young adult volunteers completed questionnaires and were examined by two blinded examiners during a single session. The testers were physical therapists and physicians. Hip range of motion and provocative tests were completed by both examiners on each hip. Main Outcome Measurements Inter- and intra-rater reliability for ROM and agreement for provocative tests were determined. Results Twenty-eight asymptomatic adults with mean age 31 years (range 18–51 years) and mean modified Harris Hip Score of 99.5 ± 1.5 and UCLA Activity score of 8.8 ± 1.2 completed the study. Intra-rater agreement was excellent for all hip range of motion measurements, with intraclass correlation coefficients (ICCs) ranging from 0.76 to 0.97, with similar agreement whether the examiner was a physical therapist or a physician. Excellent inter-rater reliability was found for hip flexion ICC 0.87 (95% CI 0.78 to 0.92), supine internal rotation ICC 0.75 (95% CI 0.60 to 0.84) and prone internal rotation ICC 0.79 (95% CI 0.66 to 0.87). The least reliable measurements were supine hip abduction (ICC 0.34) and supine external rotation (ICC 0.18). Agreement between examiners ranged from 96–100% for provocative hip tests, which included the hip impingement, resisted straight leg raise, FABER/Patrick's and log roll tests. Conclusions Specific hip ROM measures show excellent inter-rater reliability and provocative hip tests show good agreement among multiple examiners and medical disciplines. Further studies are needed to assess the utilization of these measurements and tests as part of a hip screening examination to assess for young adults at risk for intra-articular hip disorders prior to the onset of degenerative changes. PMID:20970757

  10. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions, and ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.

  11. New convergence results for the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Prato, M.

    2015-09-01

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem had been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point if the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
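
    A minimal sketch (not the paper's implementation) of the SGP iteration x_{k+1} = P(x_k - alpha * D_k * grad f(x_k)) on a box-constrained least-squares toy problem; the specific diagonal scaling is an assumption for illustration, while its clipping to [1/(1+z_k), 1+z_k] with summable z_k mirrors the convergence condition described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    lo, hi = 0.0, 1.0                      # box constraints

    def grad(x):                           # gradient of 0.5*||Ax - b||^2
        return A.T @ (A @ x - b)

    def project(x):                        # projection onto the box
        return np.clip(x, lo, hi)

    x = np.full(10, 0.5)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    alpha = 1.0 / L
    for k in range(500):
        # Diagonal scaling D_k; entries are clipped to [1/(1+z_k), 1+z_k]
        # with summable z_k, mirroring the paper's convergence condition.
        z = 1.0 / (k + 1) ** 2
        D = np.clip(1.0 / (np.abs(x) + 1e-6), 1.0 / (1.0 + z), 1.0 + z)
        x = project(x - alpha * D * grad(x))

    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
    ```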

  12. Individual strategy ratings improve the control for task difficulty effects in arithmetic problem solving paradigms.

    PubMed

    Tschentscher, Nadja; Hauk, Olaf

    2015-01-01

    Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found a superiority of strategy ratings as predictor of performance above objective features. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants' strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research.

  13. Individual strategy ratings improve the control for task difficulty effects in arithmetic problem solving paradigms

    PubMed Central

    Tschentscher, Nadja; Hauk, Olaf

    2015-01-01

    Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found a superiority of strategy ratings as predictor of performance above objective features. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants’ strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research. PMID:26321997
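
    A small sketch of the kind of ROC comparison described above, using scikit-learn: reaction times serve as the score, and we ask which complexity labeling (rating-based vs. feature-based) they separate better. The arrays are hypothetical, constructed only to illustrate the comparison.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 200
    # Hypothetical per-trial reaction times (seconds).
    rt = rng.gamma(shape=4.0, scale=0.4, size=n)

    # Two competing binary complexity labels for the same trials: one from
    # self-reported strategies, one from objective features (e.g., operand
    # size). Here the rating-based label is constructed to track RT more
    # closely, mimicking the reported result.
    complex_by_rating = (rt + rng.normal(0, 0.3, n) > np.median(rt)).astype(int)
    complex_by_feature = (rt + rng.normal(0, 0.9, n) > np.median(rt)).astype(int)

    print(f"AUC, rating-defined complexity:  "
          f"{roc_auc_score(complex_by_rating, rt):.3f}")
    print(f"AUC, feature-defined complexity: "
          f"{roc_auc_score(complex_by_feature, rt):.3f}")
    ```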

  14. Multiple-energy Techniques in Industrial Computerized Tomography

    DOE R&D Accomplishments Database

    Schneberk, D.; Martz, H.; Azevedo, S.

    1990-08-01

    Considerable effort is being applied to develop multiple-energy industrial CT techniques for materials characterization. Multiple-energy CT can provide reliable estimates of effective Z (Z_eff), weight fraction, and rigorous calculations of absolute density, all at the spatial resolution of the scanner. Currently, a wide variety of techniques exist for CT scanners, but each has certain problems and limitations. Ultimately, the best multi-energy CT technique would combine the qualities of accuracy, reliability, and wide range of application, and would require the smallest number of additional measurements. We have developed techniques for calculating material properties of industrial objects that differ somewhat from currently used methods. In this paper, we present our methods for calculating Z_eff, weight fraction, and density. We begin with the simplest case -- methods for multiple-energy CT using isotopic sources -- and proceed to multiple-energy work with x-ray machine sources. The methods discussed here are illustrated on CT scans of PBX-9502 high explosives, a Lexan-aluminum phantom, and a cylinder of glass beads used in a preliminary study to determine if CT can resolve three phases: air, water, and a high-Z oil. In the CT project at LLNL, we have constructed several CT scanners of varying scanning geometries using gamma- and x-ray sources. In our research, we employed two of these scanners: pencil-beam CAT for CT data using isotopic sources and video-CAT equipped with an IRT micro-focal x-ray machine source.
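
    To illustrate the flavor of a multiple-energy calculation, here is a toy two-energy basis decomposition (a textbook-style sketch, not LLNL's method): measured attenuation at two energies is decomposed into photoelectric-like and Compton-like components, from which an effective-Z proxy follows. The basis functions, measured values, and calibration constants `k` and `n` are all assumed for illustration.

    ```python
    import numpy as np

    # Assumed basis: mu(E) ~ a * E**-3 (photoelectric-like) + b (Compton-like).
    E1, E2 = 60.0, 120.0                 # keV, two scan energies
    mu1, mu2 = 0.40, 0.22                # measured attenuation (1/cm), toy values

    B = np.array([[E1 ** -3, 1.0],
                  [E2 ** -3, 1.0]])
    a, b = np.linalg.solve(B, np.array([mu1, mu2]))

    # Effective-Z proxy: the photoelectric/Compton ratio scales roughly as
    # Z**n with n ~ 3-4; k and n are hypothetical calibration constants.
    k, n = 10.0, 3.5
    z_eff = (a / (k * b)) ** (1.0 / n)
    print(f"photoelectric coeff a = {a:.3e}, Compton coeff b = {b:.3f}")
    print(f"effective-Z proxy    = {z_eff:.1f}")
    ```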

  15. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
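
    The core bookkeeping in any MGA is extracting the nondominated set from a population's objective values. A generic sketch (toy numbers, not the study's model) follows; all objectives are minimized, so maximization objectives such as water supplied or hydropower are negated first.

    ```python
    import numpy as np

    def nondominated(F):
        """Return a boolean mask of Pareto-nondominated rows of F.

        F: (n_solutions, n_objectives) array, all objectives minimized.
        """
        n = F.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            if not mask[i]:
                continue
            # j dominates i if j <= i in every objective and < in at least one.
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominated.any():
                mask[i] = False
        return mask

    # Toy population: columns = (-water supplied, -hydropower, operating cost).
    F = np.array([[-10.0, -5.0, 3.0],
                  [ -8.0, -7.0, 2.0],
                  [ -9.0, -4.0, 4.0],   # dominated by the first row
                  [-10.0, -5.0, 3.5]])  # dominated by the first row
    print(F[nondominated(F)])
    ```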

  16. Research pressure instrumentation for NASA Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Anderson, P. J.; Nussbaum, P.; Gustafson, G.

    1984-01-01

    The development of prototype pressure transducers which are targeted to meet the Space Shuttle Main Engine (SSME) performance design goals is discussed. The fabrication, testing and delivery of 10 prototype units is examined. Silicon piezoresistive strain sensing technology is used to achieve the objectives of advanced state-of-the-art pressure sensors in terms of reliability, accuracy and ease of manufacture. Integration of multiple functions on a single chip is the key attribute of this technology.

  17. Neural Dynamics of Multiple Object Processing in Mild Cognitive Impairment and Alzheimer's Disease: Future Early Diagnostic Biomarkers?

    PubMed

    Bagattini, Chiara; Mazza, Veronica; Panizza, Laura; Ferrari, Clarissa; Bonomini, Cristina; Brignani, Debora

    2017-01-01

    The aim of this study was to investigate the behavioral and electrophysiological dynamics of multiple object processing (MOP) in mild cognitive impairment (MCI) and Alzheimer's disease (AD), and to test whether its neural signatures may represent reliable diagnostic biomarkers. Behavioral performance and event-related potentials [N2pc and contralateral delay activity (CDA)] were measured in AD, MCI, and healthy controls during a MOP task, which consisted of enumerating a variable number of targets presented among distractors. AD patients showed an overall decline in accuracy for both small and large target quantities, whereas in MCI patients, only enumeration of large quantities was impaired. N2pc, a neural marker of attentive individuation, was spared in both AD and MCI patients. In contrast, CDA, which indexes visual short-term memory abilities, was altered in both groups of patients, with a non-linear pattern of amplitude modulation along the continuum of the disease: a reduction in AD and an increase in MCI. These results indicate that AD pathology shows a progressive decline in MOP, which is associated with the decay of visual short-term memory mechanisms. Crucially, CDA may be considered a useful neural signature both to distinguish between healthy and pathological aging and to characterize the different stages along the AD continuum, possibly becoming a reliable candidate for an early diagnostic biomarker of AD pathology.

  18. A Tabu-Search Heuristic for Deterministic Two-Mode Blockmodeling of Binary Network Matrices.

    PubMed

    Brusco, Michael; Steinley, Douglas

    2011-10-01

    Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. Using multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with the ideal block structure is the predominant approach for tackling this problem. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
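
    The objective function both heuristics minimize is easy to state in code. A sketch (hypothetical matrix and labels) that scores a two-mode blockmodel by counting inconsistencies, assigning each block whichever ideal type (complete or null) fits it better:

    ```python
    import numpy as np

    def blockmodel_cost(A, row_labels, col_labels):
        """Total inconsistencies of a two-mode blockmodel of binary matrix A.

        Each (row-cluster, col-cluster) block is treated as complete or null,
        whichever yields fewer inconsistencies (0s in complete blocks,
        1s in null blocks).
        """
        cost = 0
        for r in np.unique(row_labels):
            for c in np.unique(col_labels):
                block = A[np.ix_(row_labels == r, col_labels == c)]
                ones = int(block.sum())
                zeros = block.size - ones
                cost += min(ones, zeros)
        return cost

    A = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 1],
                  [0, 1, 1, 1]])
    rows = np.array([0, 0, 1, 1])
    cols = np.array([0, 0, 1, 1])
    print("inconsistencies:", blockmodel_cost(A, rows, cols))
    ```

    A tabu search would repeatedly relocate one row or column object to another cluster, forbid recently reversed moves for a fixed tenure, and keep the best cost seen so far.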

  19. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge, to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  20. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge, to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  1. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  2. Debris Object Orbit Initialization Using the Probabilistic Admissible Region with Asynchronous Heterogeneous Observations

    NASA Astrophysics Data System (ADS)

    Zaidi, W. H.; Faber, W. R.; Hussein, I. I.; Mercurio, M.; Roscoe, C. W. T.; Wilkins, M. P.

    One of the most challenging problems in treating space debris is the characterization of the orbit of a newly detected and uncorrelated measurement. The admissible region is defined as the set of physically acceptable orbits (i.e. orbits with negative energies) consistent with one or more measurements of a Resident Space Object (RSO). Given additional constraints on the orbital semi-major axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a Probabilistic Admissible Region (PAR), a concept introduced in 2014 as a Monte Carlo uncertainty representation approach using topocentric spherical coordinates. Ultimately, a PAR can be used to initialize a sequential Bayesian estimator and to prioritize orbital propagations in a multiple hypothesis tracking framework such as Finite Set Statistics (FISST). To date, measurements used to build the PAR have been collected concurrently and by the same sensor. In this paper, we allow measurements to have different time stamps. We also allow for non-collocated sensor collections; optical data can be collected by one sensor at a given time and radar data collected by another sensor located elsewhere. We then revisit first principles to link asynchronous optical and radar measurements using both the conservation of specific orbital energy and specific orbital angular momentum. The result from the proposed algorithm is an implicit-Bayesian and non-Gaussian representation of orbital state uncertainty.
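
    A schematic Monte Carlo illustration of the admissible-region idea (not the authors' PAR construction): for a hypothetical topocentric detection, candidate range/range-rate pairs are sampled, mapped to specific orbital energy, and only bound (negative-energy) candidates are retained. The site geometry, line-of-sight quantities, and sampling bounds are all assumed toy values.

    ```python
    import numpy as np

    MU = 398600.4418                              # km^3/s^2, Earth's GM
    R_SITE = np.array([6378.0, 0.0, 0.0])         # km, observer position (toy)
    V_SITE = np.array([0.0, 0.465, 0.0])          # km/s, observer velocity (toy)
    U_LOS = np.array([0.0, 0.0, 1.0])             # unit line of sight (measured)
    U_LOS_RATE = np.array([1e-4, 0.0, 0.0])       # line-of-sight rate, rad/s

    rng = np.random.default_rng(2)
    rho = rng.uniform(200.0, 50000.0, 20000)      # km, sampled range
    rho_dot = rng.uniform(-10.0, 10.0, 20000)     # km/s, sampled range-rate

    # Object position and velocity implied by each (rho, rho_dot) sample.
    r = R_SITE + rho[:, None] * U_LOS
    v = V_SITE + rho_dot[:, None] * U_LOS + rho[:, None] * U_LOS_RATE

    # Specific orbital energy; bound orbits require energy < 0.
    energy = 0.5 * np.einsum('ij,ij->i', v, v) - MU / np.linalg.norm(r, axis=1)
    admissible = energy < 0.0
    print(f"{admissible.mean():.1%} of samples are bound orbits")
    ```

    In the PAR setting, the hard cut on energy is replaced by weighting the samples with the measurement-noise statistics, and angular-momentum conservation supplies the analogous link between asynchronous optical and radar collections.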

  3. Reliability analysis and utilization of PEMs in space application

    NASA Astrophysics Data System (ADS)

    Jiang, Xiujie; Wang, Zhihua; Sun, Huixian; Chen, Xiaomin; Zhao, Tianlin; Yu, Guanghua; Zhou, Changyi

    2009-11-01

    More and more plastic encapsulated microcircuits (PEMs) are used in space missions to achieve high performance. Since PEMs are designed for use in terrestrial operating conditions, the successful usage of PEMs in the harsh space environment is closely related to reliability issues, which should be considered first. However, there is no ready-made methodology for PEMs in space applications. This paper discusses the reliability of PEMs used in space. The reliability analysis can be divided into five categories: radiation test, radiation hardness, screening test, reliability calculation and reliability assessment. One case study is also presented to illustrate the details of the process, in which a PEM part is used in Double-Star Project, a joint space program between the European Space Agency (ESA) and China. The influence of environmental constraints, including radiation, humidity, temperature and mechanical stress, on the PEM part has been considered. Both Double-Star Project satellites are still operating well in orbit.

  4. Approximation of reliabilities for multiple-trait model with maternal effects.

    PubMed

    Strabel, T; Misztal, I; Bertrand, J K

    2001-04-01

    Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gair, Jonathan R.; Tang, Christopher; Volonteri, Marta

    One of the sources of gravitational waves for the proposed space-based gravitational wave detector, the Laser Interferometer Space Antenna (LISA), are the inspirals of compact objects into supermassive black holes in the centers of galaxies--extreme-mass-ratio inspirals (EMRIs). Using LISA observations, we will be able to measure the parameters of each EMRI system detected to very high precision. However, the statistics of the set of EMRI events observed by LISA will be more important in constraining astrophysical models than extremely precise measurements for individual systems. The black holes to which LISA is most sensitive are in a mass range that is difficult to probe using other techniques, so LISA provides an almost unique window onto these objects. In this paper we explore, using Bayesian techniques, the constraints that LISA EMRI observations can place on the mass function of black holes at low redshift. We describe a general framework for approaching inference of this type--using multiple observations in combination to constrain a parametrized source population. Assuming that the scaling of the EMRI rate with the black-hole mass is known and taking a black-hole distribution given by a simple power law, dn/dln M = A_0 (M/M_*)^{alpha_0}, we find that LISA could measure the parameters to a precision of Delta(ln A_0) ~ 0.08 and Delta(alpha_0) ~ 0.03 for a reference model that predicts ~1000 events. Even with as few as 10 events, LISA should constrain the slope to a precision ~0.3, which is the current level of observational uncertainty in the low-mass slope of the black-hole mass function. We also consider a model in which A_0 and alpha_0 evolve with redshift, but find that EMRI observations alone do not have much power to probe such an evolution.
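
    A compact sketch of the inference framework described (simulated toy data, not the paper's analysis): for an inhomogeneous Poisson process with rate density dn/dln M = A_0 (M/M_*)^alpha over a fixed mass window, the log-likelihood of the observed masses is evaluated on a grid to obtain a posterior under flat priors. The mass window, M_*, and the grid ranges are assumptions.

    ```python
    import numpy as np

    M_STAR, M_LO, M_HI = 3e6, 1e5, 1e7     # solar masses (assumed window)
    rng = np.random.default_rng(3)

    # Simulate masses with slope alpha_true via inverse transform in ln M.
    alpha_true, n_events = 0.3, 1000
    u = rng.uniform(size=n_events)
    a, b = (M_LO / M_STAR) ** alpha_true, (M_HI / M_STAR) ** alpha_true
    m = M_STAR * (a + u * (b - a)) ** (1.0 / alpha_true)

    lnA = np.linspace(5.0, 6.3, 201)       # grid over ln A0
    alpha = np.linspace(0.1, 0.5, 201)     # grid over alpha0
    LNA, ALPHA = np.meshgrid(lnA, alpha, indexing='ij')

    # Integral of the rate density over the window, per unit A0.
    x_lo, x_hi = np.log(M_LO / M_STAR), np.log(M_HI / M_STAR)
    I = (np.exp(ALPHA * x_hi) - np.exp(ALPHA * x_lo)) / ALPHA

    # Poisson-process log-likelihood: N ln A0 + alpha * sum(ln(m/M*)) - A0 * I.
    s = np.log(m / M_STAR).sum()
    loglike = n_events * LNA + ALPHA * s - np.exp(LNA) * I
    post = np.exp(loglike - loglike.max())
    post /= post.sum()

    alpha_mean = (post * ALPHA).sum()
    alpha_sd = np.sqrt((post * (ALPHA - alpha_mean) ** 2).sum())
    print(f"alpha posterior: {alpha_mean:.3f} +/- {alpha_sd:.3f}")
    ```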

  6. Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis

    ERIC Educational Resources Information Center

    Juslin, Peter; Karlsson, Linnea; Olsson, Henrik

    2008-01-01

    There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…

  7. Evaluating the Effect of Minimizing Screws on Stabilization of Symphysis Mandibular Fracture by 3D Finite Element Analysis.

    PubMed

    Kharmanda, Ghias; Kharma, Mohamed-Yaser

    2017-06-01

    The objective of this work is to integrate structural optimization and reliability concepts into the mini-plate fixation strategy used in symphysis mandibular fractures. The structural reliability levels are then estimated considering a single failure mode and multiple failure modes. A 3-dimensional finite element model is developed in order to evaluate the ability to reduce the negative effects of stabilizing the fracture. A topology optimization process is used in the conceptual design stage to predict possible fixation layouts. In the detailed design stage, suitable mini-plates are selected taking into account the resulting topology and different anatomical considerations. Several muscle forces are considered in order to obtain realistic predictions. Since some muscles can be cut or harmed during surgery and cannot operate at maximum capacity, there is a strong motivation to introduce loading uncertainties in order to obtain reliable designs. The structural reliability analysis is carried out for a single failure mode and for multiple failure modes. The results are validated against a clinical case of a male patient with a symphysis fracture in which, although the upper plate fixation had four holes, only two screws were applied in order to protect an adjacent vital structure; this did not affect the stability of the fracture. The proposed strategy for optimizing bone plates leads to fewer complications and second surgeries, less patient discomfort, and shorter healing time.

  8. Multiple Asteroid Systems: Dimensions and Thermal Properties from Spitzer Space Telescope and Ground-based Observations

    NASA Technical Reports Server (NTRS)

    Marchis, F.; Enriquez, J. E.; Emery, J. P.; Mueller, M.; Baek, M.; Pollock, J.; Assafin, M.; Martins, R. Vieira; Berthier, J.; Vachier, F.; et al.

    2012-01-01

    We collected mid-IR spectra from 5.2 to 38 microns using the Spitzer Space Telescope Infrared Spectrograph of 28 asteroids representative of all established types of binary groups. Photometric light curves were also obtained for 14 of them during the Spitzer observations to provide the context of the observations and reliable estimates of their absolute magnitudes. The extracted mid-IR spectra were analyzed using a modified standard thermal model (STM) and a thermophysical model (TPM) that takes into account the shape and geometry of the large primary at the time of the Spitzer observation. We derived a reliable estimate of the size, albedo, and beaming factor for each of these asteroids, representing three main taxonomic groups: C, S, and X. For large (volume-equivalent system diameter D_eq > 130 km) binary asteroids, the TPM analysis indicates a low thermal inertia (Lambda <~ 100 J m^-2 s^-1/2 K^-1) and their emissivity spectra display strong mineral features, implying that they are covered with a thick layer of thermally insulating regolith. The smaller (surface-equivalent system diameter D_eff < 17 km) asteroids also show some emission lines of minerals, but they are significantly weaker, consistent with regoliths with coarser grains than those of the large binary asteroids. The average bulk densities of these multiple asteroids vary from 0.7-1.7 g/cm^3 (P-, C-type) to ~2 g/cm^3 (S-type). The highest density is estimated for the M-type (22) Kalliope (3.2 +/- 0.9 g/cm^3). The spectral energy distributions (SEDs) and emissivity spectra, made available as a supplementary document, could help to constrain the surface compositions of these asteroids.

  9. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
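
    Since the agglomeration step is cast as MST construction, here is a tiny sketch using SciPy (a generic illustration, not the authors' implementation): vertices stand for polygonal partitions, edge weights for spectral/textural dissimilarities, and the MST provides the merge order; the weight matrix is hypothetical.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree

    # Hypothetical dissimilarities between 5 polygonal partitions
    # (0 = no edge in the adjacency graph).
    W = np.array([[0, 2, 0, 6, 0],
                  [2, 0, 3, 8, 5],
                  [0, 3, 0, 0, 7],
                  [6, 8, 0, 0, 9],
                  [0, 5, 7, 9, 0]], dtype=float)

    mst = minimum_spanning_tree(csr_matrix(W))
    rows, cols = mst.nonzero()
    # Merging partitions in order of increasing edge weight yields the
    # fine-to-coarse hierarchy; edges cut above a dissimilarity threshold
    # delimit the object-oriented segments.
    for i, j in sorted(zip(rows, cols), key=lambda e: mst[e[0], e[1]]):
        print(f"merge partitions {i} and {j} at cost {mst[i, j]:.0f}")
    ```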

  10. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting, and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, giving a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wake effect of the wind turbines upon one another, to describe and evaluate the reduction in the power production capacity of the system depending on the layout distribution of the wind turbines.

  11. Multiwavelength Observations of Volatiles in Comets

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Charnley, Steven B.; Kuan, Yi-Jehng; Chuang, Yo-Ling; DiSanti, Michael A.; Bonev, Boncho P.; Remijan, Anthony J.

    2011-01-01

    Recently, there have been complementary observations from multiple facilities to try to unravel the chemical complexity of comets. Incorporating results from various techniques, including single-dish millimeter wavelength observations, interferometers, and/or IR spectroscopy, one can gain further insight into the abundances, production rates, distributions, and formation mechanisms of molecules in these objects [1]. Such studies have provided great detail on molecules with atypical chemistries, such as H2CO [2]. We report spectral observations of C/2006 M4 (SWAN), C/2007 N3 (Lulin), and C/2009 R1 (McNaught) with the Arizona Radio Observatory's SMT and 12-m telescopes, as well as the NRAO Green Bank telescope and IRTF/CSHELL. Multiple parent volatiles (HCN, CH3OH, CO, CH4, C2H6, and H2O) plus two photodissociation products (CS and OH) have been detected in these objects. We will present a comparison of molecular abundances in these comets to those observed in others, supporting a long-term effort of building a comet taxonomy based on composition. Previous work has revealed a range of abundances of parent species (from "organics-poor" to "organics-rich") with respect to water among comets [3,4,5]; however, the statistics are still poorly constrained and interpretations of the observed compositional diversity are uncertain.

  12. Multiple positive normalized solutions for nonlinear Schrödinger systems

    NASA Astrophysics Data System (ADS)

    Gou, Tianxiang; Jeanjean, Louis

    2018-05-01

    We consider the existence of multiple positive solutions to nonlinear Schrödinger systems posed on the whole space under prescribed L2-norm (mass) constraints; the masses are prescribed, and the frequencies are unknown and appear as Lagrange multipliers. Two parameter regimes are studied. In both cases, assuming that the prescribed masses are sufficiently small, we prove the existence of two positive solutions. The first one is a local minimizer, for which we establish the compactness of the minimizing sequences and also discuss the orbital stability of the associated standing waves. The second solution is obtained through a constrained mountain pass and a constrained linking, respectively.

  13. Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.

    PubMed

    Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee

    2017-07-01

    We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where the front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.

  14. Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing

    PubMed Central

    Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.

    2014-01-01

    Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station. PMID:25036330

  15. Integrated GNSS attitude determination and positioning for direct geo-referencing.

    PubMed

    Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J G

    2014-07-17

    Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station.

  16. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  17. Development of Probabilistic Life Prediction Methodologies and Testing Strategies for MEMS and CMC's

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2003-01-01

    This effort is to investigate probabilistic life prediction methodologies for ceramic matrix composites (CMCs) and MicroElectroMechanical Systems (MEMS) and to analyze designs that determine stochastic properties of MEMS. For CMCs this includes a brief literature survey regarding lifing methodologies. Also of interest for MEMS is the design of a proper test for the Weibull size effect in thin film (bulge test) specimens. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS - similar to the GRC-developed CARES/Life program for bulk ceramics. A main objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures/Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads. A second set of objectives is to determine the applicability/suitability of the CARES/Life methodology for CMC analysis, what changes would be needed to the methodology and software, and, if feasible, to run a demonstration problem. Also important is an evaluation of CARES/Life coupled to the ANSYS Probabilistic Design System (PDS) and the potential of coupling transient reliability analysis to the ANSYS PDS.
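
    To make the Weibull size effect concrete, a short sketch with illustrative constants (not CARES/Life): under Weibull statistics, the failure probability at stress sigma for a uniformly stressed volume V is P_f = 1 - exp[-(V/V0)(sigma/sigma0)^m], so the characteristic strength scales as (V0/V)^(1/m), which is exactly what a bulge test across specimen sizes should reveal. The modulus and reference values below are hypothetical.

    ```python
    import numpy as np

    m = 10.0          # Weibull modulus (hypothetical for a thin film)
    sigma0 = 2.0      # GPa, characteristic strength of reference volume V0
    V0 = 1.0          # reference stressed volume (arbitrary units)

    def p_fail(sigma, V):
        """Weibull failure probability for uniformly stressed volume V."""
        return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

    def char_strength(V):
        """Stress at which P_f = 1 - 1/e; shows the size effect."""
        return sigma0 * (V0 / V) ** (1.0 / m)

    for V in (0.1, 1.0, 10.0, 100.0):
        print(f"V = {V:6.1f}: characteristic strength = "
              f"{char_strength(V):.2f} GPa, "
              f"P_f at 1.5 GPa = {p_fail(1.5, V):.3f}")
    ```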

  18. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    NASA Astrophysics Data System (ADS)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is a wireless technology suitable for short distances. A wireless sensor network (WSN), in turn, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet wireless technique in sensor nodes creates limitations in network depth and placement. The introduction of the Scatternet solves these network restrictions, but with a lack of reliability in data transmission. When the depth of the network increases, routing becomes more difficult. No authors so far have focused on the reliability factors of Scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol (SCTP) that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in congested environments, making it suitable for surveillance applications.

  19. The sizing of hamstring grafts for anterior cruciate reconstruction: intra- and inter-observer reliability.

    PubMed

    Dwyer, Tim; Whelan, Daniel B; Khoshbin, Amir; Wasserstein, David; Dold, Andrew; Chahal, Jaskarndip; Nauth, Aaron; Murnaghan, M Lucas; Ogilvie-Harris, Darrell J; Theodoropoulos, John S

    2015-04-01

    The objective of this study was to establish the intra- and inter-observer reliability of hamstring graft measurement using cylindrical sizing tubes. Hamstring tendons (gracilis and semitendinosus) were harvested from ten cadavers by a single surgeon and whip stitched together to create ten 4-strand hamstring grafts. Ten sports medicine surgeons and fellows sized each graft independently using either hollow cylindrical sizers or block sizers in 0.5-mm increments; the chosen sizing technique was applied consistently to each graft. Surgeons moved sequentially from graft to graft and measured each hamstring graft twice. Surgeons were asked to state the measured proximal (femoral) and distal (tibial) diameter of each graft, as well as the diameter of the tibial and femoral tunnels that they would drill if performing an anterior cruciate ligament (ACL) reconstruction using that graft. Reliability was established using intra-class correlation coefficients. Overall, both the inter-observer and intra-observer agreement were >0.9, demonstrating excellent reliability. The inter-observer reliability for drill sizes was also excellent (>0.9). Excellent correlation was seen between cylindrical sizing and drill sizes (>0.9). Sizing of hamstring grafts by multiple surgeons demonstrated excellent intra-observer and inter-observer reliability, potentially validating clinical studies exploring ACL reconstruction outcomes by hamstring graft diameter when standard techniques are used. Level of evidence: III.

  20. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.

  1. Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement

    NASA Astrophysics Data System (ADS)

    Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano

    2018-03-01

    The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas-phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average three-dimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that an RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
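
    A skeletal RMC refinement loop (generic, not the authors' code): atomic positions in each replica are randomly perturbed, a model signal is recomputed, and moves are accepted with a Metropolis-like rule on the chi-square misfit to the experimental curve. Here `model_signal` is a stand-in for the XAS multiple-scattering calculation, and all numbers are toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy "experiment": bond-length distribution around the true value.
    R_TRUE, SIGMA = 2.69, 0.05
    data = np.histogram(rng.normal(R_TRUE, SIGMA, 5000),
                        bins=60, range=(2.3, 3.1), density=True)[0]

    def model_signal(pos):
        """Stand-in for the XAS signal: bond-length histogram of the replicas."""
        d = np.linalg.norm(pos[:, 0] - pos[:, 1], axis=1)
        return np.histogram(d, bins=60, range=(2.3, 3.1), density=True)[0]

    def chi2(pos):
        return ((model_signal(pos) - data) ** 2).sum()

    # 500 replicas of a diatomic "molecule" (two 3D positions each).
    pos = np.zeros((500, 2, 3))
    pos[:, 1, 0] = 2.8 + rng.normal(0, 0.05, 500)   # crude starting geometry
    cost = chi2(pos)

    for step in range(30000):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i, rng.integers(2)] += rng.normal(0, 0.03, 3)  # move one atom
        new_cost = chi2(trial)
        # Metropolis-like acceptance on the misfit.
        if new_cost < cost or rng.random() < np.exp(-(new_cost - cost) / 0.01):
            pos, cost = trial, new_cost

    d = np.linalg.norm(pos[:, 0] - pos[:, 1], axis=1)
    print(f"refined mean bond length ~ {d.mean():.3f} (target {R_TRUE})")
    ```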

  2. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldemir, Tunc; Denning, Richard; Catalyurek, Umit

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  3. Shielding Development for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Caffrey, Jarvis A.; Gomez, Carlos F.; Scharber, Luke L.

    2015-01-01

    Radiation shielding analysis and development for the Nuclear Cryogenic Propulsion Stage (NCPS) effort is currently in progress and preliminary results have enabled consideration for critical interfaces in the reactor and propulsion stage systems. Early analyses have highlighted a number of engineering constraints, challenges, and possible mitigating solutions. Performance constraints include permissible crew dose rates (shared with expected cosmic ray dose), radiation heating flux into cryogenic propellant, and material radiation damage in critical components. Design strategies in staging can serve to reduce radiation scatter and enhance the effectiveness of inherent shielding within the spacecraft while minimizing the required mass of shielding in the reactor system. Within the reactor system, shield design is further constrained by the need for active cooling with minimal radiation streaming through flow channels. Material selection and thermal design must maximize the reliability of the shield to survive the extreme environment through a long duration mission with multiple engine restarts. A discussion of these challenges and relevant design strategies are provided for the mitigation of radiation in nuclear thermal propulsion.

  4. The multiple sclerosis work difficulties questionnaire: translation and cross-cultural adaptation to Turkish and assessment of validity and reliability.

    PubMed

    Kahraman, Turhan; Özdoğar, Asiye Tuba; Honan, Cynthia Alison; Ertekin, Özge; Özakbaş, Serkan

    2018-05-09

    To linguistically and culturally adapt the Multiple Sclerosis Work Difficulties Questionnaire-23 (MSWDQ-23) for use in Turkey, and to examine its reliability and validity. Following standard forward-back translation of the MSWDQ-23, it was administered to 124 people with multiple sclerosis (MS). Validity was evaluated using related outcome measures, including those related to employment status and expectations, disability level, fatigue, walking, and quality of life. Randomly selected participants were asked to complete the MSWDQ-23 again to assess test-retest reliability. Confirmatory factor analysis on the MSWDQ-23 demonstrated a good fit for the data, and the internal consistency of each subscale was excellent. The test-retest reliabilities for the total score and for the psychological/cognitive barriers, physical barriers, and external barriers subscales were high. The MSWDQ-23 and its subscales were positively correlated with the employment, disability level, walking, and fatigue outcome measures. This study suggests that the Turkish version of the MSWDQ-23 has high reliability and adequate validity, and that it can be used to determine the difficulties faced by people with multiple sclerosis in the workplace. Moreover, the study provides evidence of the test-retest reliability of the questionnaire. Implications for rehabilitation: Multiple sclerosis affects young people of working age. Understanding work-related problems is crucial to enhancing the likelihood that people with multiple sclerosis maintain their jobs. The MSWDQ-23 is a valid and reliable measure of perceived workplace difficulties in people with multiple sclerosis; we present its Turkish validation. Professionals working in the field of vocational rehabilitation may benefit from using the MSWDQ-23 to predict current work outcomes and future employment expectations.

  5. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes, and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge this is the first study that analytically evaluates and compares the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the ability to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143
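
    A toy numerical comparison in the spirit of this analysis (not the paper's model, whose MAC-level details are far richer): 6LoWPAN fragmentation loses the whole datagram if any fragment is lost, while CoAP blockwise transfer recovers individual blocks through retransmissions at the cost of extra round trips. Here `p` is a per-fragment delivery probability and the retry limit is an assumption.

    ```python
    def sixlowpan_success(p: float, n_frags: int) -> float:
        # No per-fragment recovery: every fragment must arrive.
        return p ** n_frags

    def blockwise_success(p: float, n_blocks: int, retries: int = 4) -> float:
        # Each block is retransmitted until it succeeds or retries run out.
        per_block = 1.0 - (1.0 - p) ** (retries + 1)
        return per_block ** n_blocks

    for p in (0.99, 0.95, 0.85):
        n = 8  # fragments/blocks for a payload split eight ways
        print(f"p = {p:.2f}: 6LoWPAN = {sixlowpan_success(p, n):.3f}, "
              f"blockwise = {blockwise_success(p, n):.3f}")
    ```

    Even this crude model reproduces the qualitative trade-off reported above: blockwise transfer keeps delivery probability high as the channel degrades, at the price of additional transmissions and hence delay.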

  6. Dynamic replanning on demand of UAS constellations performing ISR missions

    NASA Astrophysics Data System (ADS)

    Stouch, Daniel W.; Zeidman, Ernest; Callahan, William; McGraw, Kirk

    2011-05-01

    Unmanned aerial systems (UAS) have proven themselves to be indispensable in providing intelligence, surveillance, and reconnaissance (ISR) over the battlefield. Constellations of heterogeneous, multi-purpose UAS are being tasked to provide ISR in an unpredictable environment. This necessitates the dynamic replanning of critical missions as weather conditions change, new observation targets are identified, aircraft are lost or equipment malfunctions, and new airspace restrictions are introduced. We present a method to generate coordinated mission plans for constellations of UAS with multiple flight goals and potentially competing objectives, and update them on demand as the operational situation changes. We use a fast evolutionary algorithm-based, multi-objective optimization technique. The updated flight routes maintain continuity by considering where the ISR assets have already flown and where they still need to go. Both the initial planning and replanning take into account factors such as area of analysis coverage, restricted operating zones, maximum control station range, adverse weather effects, military terrain value, and sensor performance. Our results demonstrate that by constraining the space of potential solutions using an intelligently formed air maneuver network with a subset of potential airspace corridors and navigational waypoints, we can ensure global optimization for multiple objectives considering the situation both before and after the replanning is initiated. We employ sophisticated visualization techniques using a geographic information system to help the user 'look under the hood' of the algorithms to understand the effectiveness and viability of the generated ISR mission plans and identify potential gaps in coverage.

  7. Space shuttle hypergolic bipropellant RCS engine design study, Bell model 8701

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research program was conducted to define the level of the current technology base for reaction control system rocket engines suitable for space shuttle applications. The project consisted of engine analyses, design, fabrication, and tests. The specific objectives were: (1) extrapolation of current engine design experience to the design of an RCS engine with the required safety, reliability, performance, and operational capability, (2) demonstration of multiple reuse capability, and (3) identification of current design and technology deficiencies and critical areas for future effort.

  8. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we mainly obtained benefits from congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  9. Development and validation of a visual grading scale for assessing image quality of AP pelvis radiographic images.

    PubMed

    Mraity, Hussien A A B; England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter

    2016-01-01

    The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiography. Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
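
    For readers unfamiliar with the reliability statistic used above, the following minimal sketch computes Cronbach's alpha from a respondents-by-items score matrix. The data are synthetic; only the 24-item width mirrors the scale described above.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = scale items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(150, 1))                 # shared quality signal
items = latent + 0.5 * rng.normal(size=(150, 24))  # 24 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")      # high for correlated items
```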

  10. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
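
    The voltage/frequency compromise at the heart of DVFS can be sketched numerically. The levels, capacitance, and cycle count below are hypothetical, not drawn from the paper: lowering the supply voltage cuts dynamic energy quadratically while stretching the task's runtime (and hence the schedule's makespan).

```python
# Hypothetical DVFS states; dynamic power ~ C*V^2*f, runtime ~ cycles/f,
# so dynamic energy per task ~ C*V^2*cycles: it falls with voltage while
# the runtime grows, which is the schedule-quality/energy compromise.
LEVELS = [(1.2, 2.0e9), (1.0, 1.4e9), (0.8, 0.8e9)]  # (volts, Hz), assumed
CYCLES = 2.0e9                                       # work in one task
C = 1.0e-9                                           # switched capacitance

for v, f in LEVELS:
    runtime = CYCLES / f
    energy = C * v**2 * f * runtime
    print(f"V={v:.1f}V f={f/1e9:.1f}GHz t={runtime:.2f}s E={energy:.2f}J")
```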

  11. Intermittent dust mass loss from activated asteroid P/2013 P5 (PANSTARRS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno, F.; Pozuelos, F.; Licandro, J.

    We present observations and models of the dust environment of activated asteroid P/2013 P5 (PANSTARRS). The object displayed a complex morphology during the observations, with the presence of multiple tails. We combined our own observations, all made with instrumentation attached to the 10.4 m Gran Telescopio Canarias on La Palma, with previously published Hubble Space Telescope images to build a model aimed at fitting all the observations. Altogether, the data cover a full three-month period of observations which can be explained by intermittent dust loss. The most plausible scenario is that of an asteroid rotating with the spinning axis oriented perpendicular to the orbit plane and losing mass from the equatorial region, consistent with rotational break-up. Assuming that the ejection velocity of the particles (v ∼ 0.02-0.05 m s^-1) corresponds to the escape velocity, the object diameter is constrained to ∼30-130 m for bulk densities of 3000-1000 kg m^-3.
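
    The quoted size range follows from the escape-velocity assumption for a homogeneous sphere, v_esc = sqrt(8*pi*G*rho/3) * R. A quick numerical check (Python; only the constants stated above are used):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def diameter_m(v_esc: float, rho: float) -> float:
    """Diameter for which v_esc equals the surface escape speed."""
    return 2.0 * v_esc / math.sqrt(8.0 * math.pi * G * rho / 3.0)

print(diameter_m(0.02, 3000.0))  # ~31 m: slow ejecta, dense body
print(diameter_m(0.05, 1000.0))  # ~134 m: faster ejecta, porous body
```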

  12. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.

  13. What does music express? Basic emotions and beyond.

    PubMed

    Juslin, Patrik N

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners and, if so, what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions, though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.

  14. Inter-rater Reliability of Sustained Aberrant Movement Patterns as a Clinical Assessment of Muscular Fatigue

    PubMed Central

    Aerts, Frank; Carrier, Kathy; Alwood, Becky

    2016-01-01

    Background: The assessment of clinical manifestation of muscle fatigue is an effective procedure in establishing therapeutic exercise dose. Few studies have evaluated physical therapist reliability in establishing muscle fatigue through detection of changes in quality of movement patterns in a live setting. Objective: The purpose of this study is to evaluate the inter-rater reliability of physical therapists’ ability to detect altered movement patterns due to muscle fatigue. Design: A reliability study in a live setting with multiple raters. Participants: Forty-four healthy individuals (ages 19-35) were evaluated by six physical therapists in a live setting. Methods: Participants were evaluated by physical therapists for altered movement patterns during resisted shoulder rotation. Each participant completed a total of four tests: right shoulder internal rotation, right shoulder external rotation, left shoulder internal rotation and left shoulder external rotation. Results: For all tests combined, the inter-rater reliability for a single rater, ICC(2,1), was .65 (95% CI: .60, .71). This corresponds to moderate inter-rater reliability between physical therapists. Limitations: The results of this study apply only to healthy participants and therefore cannot be generalized to a symptomatic population. Conclusion: Moderate inter-rater reliability was found between physical therapists in establishing muscle fatigue through the observation of sustained altered movement patterns during dynamic resistive shoulder internal and external rotation. PMID:27347241
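
    A minimal sketch of the ICC(2,1) computation (two-way random effects, absolute agreement, single rater, per Shrout and Fleiss) on synthetic data shaped like the study above (44 participants, 6 raters); the noise level is invented.

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1) from ANOVA mean squares; x: n targets by k raters."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
latent = rng.normal(size=(44, 1))                  # true fatigue signal
scores = latent + 0.7 * rng.normal(size=(44, 6))   # 44 subjects, 6 raters
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```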

  15. Constrained tri-sphere kinematic positioning system

    DOEpatents

    Viola, Robert J

    2010-12-14

    A scalable and adaptable, six-degree-of-freedom, kinematic positioning system is described. The system can position objects supported on top of, or suspended from, jacks comprising constrained joints. The system is compatible with extreme low temperature or high vacuum environments. When constant adjustment is not required, a removable motor unit is available.

  16. Reflections on How Color Term Acquisition Is Constrained

    ERIC Educational Resources Information Center

    Pitchford, Nicola J.

    2006-01-01

    Compared with object word learning, young children typically find learning color terms to be a difficult linguistic task. In this reflections article, I consider two questions that are fundamental to investigations into the developmental acquisition of color terms. First, I consider what constrains color term acquisition and how stable these…

  17. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    Multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
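
    The combination step can be sketched compactly. In the snippet below the member weights and error spreads are assumed rather than estimated by EM, as they would be in a real BMA application; the point is that the predictive density is a weighted mixture of member error distributions.

```python
import numpy as np
from scipy.stats import norm

forecasts = np.array([12.0, 15.0, 9.5])   # three members, one day (m3/s)
weights = np.array([0.5, 0.3, 0.2])       # assumed BMA weights (sum to 1)
sigmas = np.array([2.0, 3.0, 2.5])        # assumed member error spreads

mean = np.dot(weights, forecasts)          # expected BMA prediction

def bma_pdf(q: float) -> float:
    """Predictive density: mixture of Gaussians centred on the members."""
    return float(np.dot(weights, norm.pdf(q, loc=forecasts, scale=sigmas)))

print(mean, bma_pdf(mean))
```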

  18. Balancing habitat delivery for breeding marsh birds and nonbreeding waterfowl: An integrated waterbird management and monitoring approach at Clarence Cannon National Wildlife Refuge, Missouri

    USGS Publications Warehouse

    Loges, Brian W.; Lyons, James E.; Tavernia, Brian G.

    2017-08-23

    The Clarence Cannon National Wildlife Refuge (CCNWR) in the Mississippi River flood plain of eastern Missouri provides high quality emergent marsh and moist-soil habitat benefitting both nesting marsh birds and migrating waterfowl. Staff of CCNWR manipulate water levels and vegetation in the 17 units of the CCNWR to provide conditions favorable to these two important guilds. Although both guilds include focal species at multiple planning levels and complement objectives to provide a diversity of wetland community types and water regimes, additional decision support is needed for choosing how much emergent marsh and moist-soil habitat should be provided through annual management actions. To develop decision guidance for balanced delivery of high-energy waterfowl habitat and breeding marsh bird habitat, two measurable management objectives were identified: nonbreeding Anas Linnaeus (dabbling duck) use-days and Rallus elegans (king rail) occupancy of managed units. Three different composite management actions were identified to achieve these objectives. Each composite management action is a unique combination of growing season water regime and soil disturbance. The three composite management actions are intense moist-soil management (moist-soil), intermediate moist-soil (intermediate), and perennial management, which idles soil disturbance (perennial). The two management objectives and three management options were used in a multi-criteria decision analysis to indicate resource allocations and inform annual decision making. Outcomes of the composite management actions were predicted in two ways and multi-criteria decision analysis was used with each set of predictions. First, outcomes were predicted using expert-elicitation techniques and a panel of subject matter experts. Second, empirical data from the Integrated Waterbird Management and Monitoring Initiative collected between 2010 and 2013 were used; where data were lacking, expert judgment was used. Also, a Bayesian decision model was developed that can be updated with monitoring data in an adaptive management framework. Optimal resource allocations were identified in the form of portfolios of composite management actions for the 17 units in the framework. A constrained optimization (linear programming) was used to maximize an objective function that was based on the sum of dabbling duck and king rail utility. The constraints, which included management costs and a minimum energetic carrying capacity (total moist-soil acres), were applied to balance habitat delivery for dabbling ducks and king rails. Also, the framework was constrained in some cases to apply certain management actions of interest to certain management units; these constraints allowed for a variety of hypothetical Habitat Management Plans, including one based on output from a hydrogeomorphic study of the refuge. The decision analysis thus created numerous refuge-wide scenarios, each representing a unique mix of options (one for each of 17 units) and associated benefits (i.e., outcomes with respect to two management objectives). Prepared in collaboration with the U.S. Fish and Wildlife Service, the decision framework presented here is designed as a decision-aiding tool for CCNWR managers who ultimately make difficult decisions each year with multiple objectives, multiple management units, and the complexity of natural systems. The framework also provides a way to document hypotheses about how the managed system functions. Furthermore, the framework identifies specific monitoring needs and illustrates precisely how monitoring data will be used for decision-aiding and adaptive management.
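
    A toy version of the constrained optimization described above can be written as a linear program: pick one composite action per unit to maximize summed utility under a budget cap and a minimum moist-soil acreage. All coefficients below are invented; the real framework's utilities come from expert elicitation and monitoring data, and the 0/1 action choice is relaxed to fractions here.

```python
import numpy as np
from scipy.optimize import linprog

n_units, n_actions = 17, 3            # moist-soil, intermediate, perennial
rng = np.random.default_rng(2)
utility = rng.uniform(0.2, 1.0, (n_units, n_actions))   # duck+rail utility
cost = rng.uniform(1.0, 4.0, (n_units, n_actions))      # management cost
moist_acres = np.tile([40.0, 20.0, 0.0], (n_units, 1))  # acres per action

c = -utility.ravel()                  # linprog minimizes, so negate
# each unit gets exactly one action (LP relaxation of the 0/1 choice)
A_eq = np.zeros((n_units, n_units * n_actions))
for u in range(n_units):
    A_eq[u, u * n_actions:(u + 1) * n_actions] = 1.0
b_eq = np.ones(n_units)
# budget cap and minimum total moist-soil acres (>= becomes negated <=)
A_ub = np.vstack([cost.ravel(), -moist_acres.ravel()])
b_ub = np.array([60.0, -300.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.status, -res.fun)           # 0 = optimal, total utility achieved
```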

  19. Geopotential Field Anomaly Continuation with Multi-Altitude Observations

    NASA Technical Reports Server (NTRS)

    Kim, Jeong Woo; Kim, Hyung Rae; von Frese, Ralph; Taylor, Patrick; Rangelova, Elena

    2012-01-01

    Conventional gravity and magnetic anomaly continuation invokes the standard Poisson boundary condition of a zero anomaly at an infinite vertical distance from the observation surface. This simple continuation is limited, however, where multiple altitude slices of the anomaly field have been observed. Increasingly, areas are becoming available constrained by multiple boundary conditions from surface, airborne, and satellite surveys. This paper describes the implementation of continuation with multi-altitude boundary conditions in Cartesian and spherical coordinates and investigates the advantages and limitations of these applications. Continuations by EPS (Equivalent Point Source) inversion and the FT (Fourier Transform), as well as by SCHA (Spherical Cap Harmonic Analysis) are considered. These methods were selected because they are especially well suited for analyzing multi-altitude data over finite patches of the earth such as covered by the ADMAP database. In general, continuations constrained by multi-altitude data surfaces are invariably superior to those constrained by a single altitude data surface due to anomaly measurement errors and the non-uniqueness of continuation.
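
    For context, the standard single-surface continuation that the multi-altitude approach generalizes has a compact Fourier-domain form: the observed grid's spectrum is attenuated by exp(-|k| dz). A minimal sketch on a synthetic grid, with assumed spacing and continuation height:

```python
import numpy as np

def upward_continue(grid: np.ndarray, dx: float, dz: float) -> np.ndarray:
    """Continue a 2D anomaly grid upward by dz (same units as dx)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)  # |k| per cell
    return np.fft.ifft2(np.fft.fft2(grid) * np.exp(-kk * dz)).real

anomaly = np.random.default_rng(3).normal(size=(128, 128))
print(upward_continue(anomaly, dx=1000.0, dz=5000.0).std())  # smoother field
```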

  20. Geopotential Field Anomaly Continuation with Multi-Altitude Observations

    NASA Technical Reports Server (NTRS)

    Kim, Jeong Woo; Kim, Hyung Rae; von Frese, Ralph; Taylor, Patrick; Rangelova, Elena

    2011-01-01

    Conventional gravity and magnetic anomaly continuation invokes the standard Poisson boundary condition of a zero anomaly at an infinite vertical distance from the observation surface. This simple continuation is limited, however, where multiple altitude slices of the anomaly field have been observed. Increasingly, areas are becoming available constrained by multiple boundary conditions from surface, airborne, and satellite surveys. This paper describes the implementation of continuation with multi-altitude boundary conditions in Cartesian and spherical coordinates and investigates the advantages and limitations of these applications. Continuations by EPS (Equivalent Point Source) inversion and the FT (Fourier Transform), as well as by SCHA (Spherical Cap Harmonic Analysis) are considered. These methods were selected because they are especially well suited for analyzing multi-altitude data over finite patches of the earth such as covered by the ADMAP database. In general, continuations constrained by multi-altitude data surfaces are invariably superior to those constrained by a single altitude data surface due to anomaly measurement errors and the non-uniqueness of continuation.

  1. Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters

    NASA Astrophysics Data System (ADS)

    Worthy, J. L., III; Holzinger, M. J.

    Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements if many such measurements are generated through the observation of a cluster of satellites, debris from a satellite break-up, or from spurious detections of an object. An optimization based, probabilistic short-arc observation association approach coupled with a Dempster-Shafer based evidential particle filter in a multiple target tracking framework is developed and proposed to address these problems. The optimization based approach is shown in literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systemic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at enabling ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework will use a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generate tracks based on the belief to plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.
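
    The Dempster-Shafer machinery referenced above can be illustrated on a two-hypothesis frame. The mass assignments below are hypothetical; the point is how Dempster's rule of combination preserves ignorance and yields the separate belief and plausibility values that the track framework ratios against each other.

```python
from itertools import product

# frame: A = "observations associated", B = "not associated"
A, B = frozenset("A"), frozenset("B")
THETA = A | B                      # ignorance mass goes to the full frame

m1 = {A: 0.6, B: 0.1, THETA: 0.3}  # sensor 1, partly ignorant
m2 = {A: 0.5, B: 0.2, THETA: 0.3}  # sensor 2, partly ignorant

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: intersect focal sets, renormalize conflict."""
    out, conflict = {}, 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            out[inter] = out.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {s: w / (1.0 - conflict) for s, w in out.items()}

m = combine(m1, m2)
belief = m.get(A, 0.0)                            # fully committed support
plaus = sum(w for s, w in m.items() if s & A)     # mass not contradicting A
print(f"belief={belief:.3f} plausibility={plaus:.3f}")
```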

  2. Evaluation of an Interview Process for Admission Into a School of Pharmacy

    PubMed Central

    Friesner, Daniel L.

    2012-01-01

    Objective. To evaluate the doctor of pharmacy (PharmD) admissions interview process at North Dakota State University (NDSU). Methods. Faculty pairs interviewed candidates using a standardized grading rubric to evaluate qualitative parameters or attributes such as ethics, relevant life and work experience, emotional maturity, commitment to patient care, leadership, and understanding of the pharmacy profession. Total interview scores, individual attribute domain scores, and the consistency and reliability of the interviewers were assessed. Results. The total mean interview score for the candidate pool was 17.4 of 25 points. Mean scores for individual domains ranged from 2.3 to 3.0 on a Likert-scale of 0-4. Nine of the 11 faculty pairs showed no mean differences from their interview partner in total interview scores given. Evaluations by 8 of the 11 faculty pairs produced high interrater reliability. Conclusions. The current interview process is generally consistent and reliable; however, future improvements such as additional interviewer training and adoption of a multiple mini-interview format could be made. PMID:22438594

  3. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step to winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers in a more timely and less costly manner. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model over a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. Simultaneously, the feasibility of the model and the validity of the algorithm are demonstrated in the experiment.

  4. Constraining the ensemble Kalman filter for improved streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
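
    One plausible reading of the two constraint classes, sketched on a single store with assumed capacity and rate limits (the paper's hydrological model and thresholds will differ): flux constraints cap how far a state may move from its forecast in one update, and mass constraints then clip the result to physical bounds.

```python
import numpy as np

CAPACITY = 150.0        # mm, assumed store capacity
MAX_FLUX = 20.0         # mm per step, assumed physical rate limit

def constrain(analysis: np.ndarray, forecast: np.ndarray) -> np.ndarray:
    """Apply flux then mass constraints to an ensemble of states."""
    increment = np.clip(analysis - forecast, -MAX_FLUX, MAX_FLUX)
    return np.clip(forecast + increment, 0.0, CAPACITY)

rng = np.random.default_rng(4)
forecast = rng.uniform(40.0, 120.0, size=50)            # 50 members
analysis = forecast + rng.normal(0.0, 30.0, size=50)    # raw EnKF update
print(constrain(analysis, forecast).max())              # <= 150 by design
```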

  5. A New Target Object for Constraining Annihilating Dark Matter

    NASA Astrophysics Data System (ADS)

    Chan, Man Ho

    2017-07-01

    In the past decade, gamma-ray observations and radio observations of our Milky Way and the Milky Way dwarf spheroidal satellite galaxies put very strong constraints on annihilation cross sections of dark matter. In this paper, we suggest a new target object (NGC 2976) that can be used for constraining annihilating dark matter. The radio and X-ray data of NGC 2976 can put very tight constraints on the leptophilic channels of dark matter annihilation. The lower limits of dark matter mass annihilating via the e+e-, μ+μ-, and τ+τ- channels are 200 GeV, 130 GeV, and 110 GeV, respectively, with the canonical thermal relic cross section. We suggest that this kind of large nearby dwarf galaxy with a relatively high magnetic field can be a good candidate for constraining annihilating dark matter in future analyses.

  6. Final Technical Report for EE0006091: H2Pump Hydrogen Recycling System Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staudt, Rhonda

    The objective of this project is to demonstrate the product readiness and to quantify the benefits and customer value proposition of H2Pump's Hydrogen Recycling System (HRS-100™) by installing and analyzing the operation of multiple prototype 100-kg per day systems in real world customer locations. The data gathered will be used to measure reliability, demonstrate the value proposition to customers, and validate our business model. H2Pump will install, track and report multiple field demonstration systems in industrial heat treating and semi-conductor applications. The customer demonstrations will be used to develop case studies and showcase the benefits of the technology to drive market adoption.

  7. Multi-camera digital image correlation method with distributed fields of view

    NASA Astrophysics Data System (ADS)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects observed from multiple directions, and it provides suitable data accuracy for further advanced structural integrity analysis of such objects.
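
    The stitching step could be realized with a standard Kabsch/SVD fit of a rigid transform from each unit's local marker coordinates to the tracker's global ones; whether the authors use exactly this estimator is an assumption. A self-checking sketch:

```python
import numpy as np

def rigid_transform(local: np.ndarray, world: np.ndarray):
    """Least-squares rotation R and translation t with world ~ R@local + t."""
    cl, cw = local.mean(axis=0), world.mean(axis=0)
    H = (local - cl).T @ (world - cw)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cw - R @ cl

rng = np.random.default_rng(5)
local = rng.uniform(-1.0, 1.0, (5, 3))          # 5 fiducial markers
world = local + np.array([10.0, 2.0, 0.5])      # pure translation demo
R, t = rigid_transform(local, world)
print(np.allclose(world, local @ R.T + t))      # True
```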

  8. Naive Theories of Social Groups

    ERIC Educational Resources Information Center

    Rhodes, Marjorie

    2012-01-01

    Four studies examined children's (ages 3-10, Total N = 235) naive theories of social groups, in particular, their expectations about how group memberships constrain social interactions. After introduction to novel groups of people, preschoolers (ages 3-5) reliably expected agents from one group to harm members of the other group (rather than…

  9. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  10. Reliable Wiring Harness

    NASA Technical Reports Server (NTRS)

    Gaspar, Kenneth C.

    1987-01-01

    A new harness for electrical wiring includes plugs that do not loosen from vibration. Ground braids are prevented from detaching from connectors and are constrained so the braids do not open into swollen "birdcage" sections. A spring of stainless steel encircles each ground braid. A self-locking connector contains a ratchet that not only prevents the connector from opening but also tightens when vibrated.

  11. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space, calculating their motion parameters represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule, considering appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  12. Reliability of Autonomic Responses and Malaise Across Multiple Motion Sickness Stimulation Tests

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Toscano, William B.; Cowings, Patricia S.

    1993-01-01

    There is general agreement that a high degree of variability exists between subjects in their autonomic nervous system responses to motion sickness stimulation. Additionally, a paucity of data exists that examines the variability within an individual across repeated motion sickness tests. Investigators have also examined the relationship of autonomic responses to motion sickness development. These investigations have used analyses at discrete points in time to describe this relationship. This approach fails to address the time course of autonomic responses and malaise development throughout the motion sickness test. Our objectives were to examine the reliability of autonomic responses and malaise using the final minute of the motion sickness test across five testing occasions, to examine the reliability of the change in autonomic responses and the change in malaise across five testing occasions, and to examine the relationship between changes in autonomic responses and changes in malaise level across the entire motion sickness test. Our results indicate that, based on the final minute of testing, the autonomic responses of heart rate, blood volume pulse, and respiration rate are moderately stable across multiple tests. Changes in heart rate, blood volume pulse, respiration rate, and malaise throughout the test duration were less stable across the tests. We attribute this instability to variations in individual susceptibility and the error associated with estimating a measure of autonomic gain.

  13. Battling Arrow's Paradox to Discover Robust Water Management Alternatives

    NASA Astrophysics Data System (ADS)

    Kasprzyk, J. R.; Reed, P. M.; Hadka, D.

    2013-12-01

    This study explores whether or not Arrow's Impossibility Theorem, a result from social choice theory, affects the formulation of water resources systems planning problems. The theorem concerns the aggregation of voters' preferences over three or more alternatives into a single choice for society. The Impossibility Theorem is also called Arrow's Paradox because any aggregation rule satisfying the theorem's fairness conditions ends up letting a single individual's preference dictate the group decision. In the context of water resources planning, our study is motivated by recent theoretical work that has generalized the insights from Arrow's Paradox to the design of complex engineered systems. In this framing of the paradox, states of society are equivalent to water planning or design alternatives, and the voters are equivalent to multiple planning objectives (e.g. minimizing cost or maximizing performance). Seen from this point of view, multi-objective water planning problems are functionally equivalent to the social choice problem described above. Traditional solutions to such multi-objective problems aggregate multiple performance measures into a single mathematical objective. The Theorem implies that a subset of performance concerns will inadvertently dictate the overall design evaluations in unpredictable ways using such an aggregation. We suggest that instead of aggregation, an explicit many-objective approach to water planning can help overcome the challenges posed by Arrow's Paradox. Many-objective planning explicitly disaggregates measures of performance while supporting the discovery of the planning tradeoffs, employing multiobjective evolutionary algorithms (MOEAs) to find solutions. Using MOEA-based search to address Arrow's Paradox requires that the MOEAs perform robustly with increasing problem complexity, such as adding additional objectives and/or decisions. This study uses comprehensive diagnostic evaluation of MOEA search performance across multiple problem formulations (both aggregated and many-objective) to show whether or not aggregating performance measures biases decision making. We explore this hypothesis using an urban water portfolio management case study in the Lower Rio Grande Valley. The diagnostic analysis shows that modern self-adaptive MOEA search is efficient, effective, and reliable for the more complex many-objective LRGV planning formulations. Results indicate that although many classical water systems planning frameworks seek to account for multiple objectives, the common practice of reducing the problem into one or more highly aggregated performance measures can severely and negatively bias planning decisions.
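
    The aggregation bias has a compact numerical illustration: on a nonconvex tradeoff, some non-dominated plans are invisible to every weighted-sum aggregation. The objective values below are invented and unrelated to the LRGV case study:

```python
import numpy as np

# candidate plans, two minimization objectives (e.g., cost, shortage);
# the middle plan sits above the convex hull of the other two
plans = np.array([[1.0, 9.0], [4.0, 7.0], [9.0, 1.0]])

def weighted_sum_choice(w: float) -> int:
    """Index of the plan minimizing w*f1 + (1-w)*f2."""
    return int(np.argmin(plans @ np.array([w, 1.0 - w])))

picks = {weighted_sum_choice(w) for w in np.linspace(0.0, 1.0, 101)}
print(picks)  # {0, 2}: the non-dominated middle plan is never selected
```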

  14. Ten years of multiple data stream assimilation with the ORCHIDEE land surface model to improve regional to global simulated carbon budgets: synthesis and perspectives on directions for the future

    NASA Astrophysics Data System (ADS)

    Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.

    2017-12-01

    Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.

  15. NDE reliability and probability of detection (POD) evolution and paradigm shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Surendra

    2014-02-18

    The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed “Have Cracks – Will Travel”, or in short “Have Cracks”, by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list gaps between “hoped for” versus validated or fielded failed hardware.

  16. Evaluation of tropical Pacific observing systems using NCEP and GFDL ocean data assimilation systems

    NASA Astrophysics Data System (ADS)

    Xue, Yan; Wen, Caihong; Yang, Xiaosong; Behringer, David; Kumar, Arun; Vecchi, Gabriel; Rosati, Anthony; Gudgel, Rich

    2017-08-01

    The TAO/TRITON array is the cornerstone of the tropical Pacific and ENSO observing system. Motivated by the recent rapid decline of the TAO/TRITON array, the potential utility of TAO/TRITON was assessed for ENSO monitoring and prediction. The analysis focused on the period when observations from Argo floats were also available. We coordinated observing system experiments (OSEs) using the global ocean data assimilation system (GODAS) from the National Centers for Environmental Prediction and the ensemble coupled data assimilation (ECDA) from the Geophysical Fluid Dynamics Laboratory for the period 2004-2011. Four OSE simulations were conducted with inclusion of different subsets of in situ profiles: all profiles (XBT, moorings, Argo), all except the moorings, all except the Argo and no profiles. For evaluation of the OSE simulations, we examined the mean bias, standard deviation difference, root-mean-square difference (RMSD) and anomaly correlation against observations and objective analyses. Without assimilation of in situ observations, both GODAS and ECDA had large mean biases and RMSD in all variables. Assimilation of all in situ data significantly reduced mean biases and RMSD in all variables except zonal current at the equator. For GODAS, the mooring data is critical in constraining temperature in the eastern and northwestern tropical Pacific, while for ECDA both the mooring and Argo data is needed in constraining temperature in the western tropical Pacific. The Argo data is critical in constraining temperature in off-equatorial regions for both GODAS and ECDA. For constraining salinity, sea surface height and surface current analysis, the influence of Argo data was more pronounced. In addition, the salinity data from the TRITON buoys played an important role in constraining salinity in the western Pacific. GODAS was more sensitive to withholding Argo data in off-equatorial regions than ECDA because it relied on local observations to correct model biases and there were few XBT profiles in those regions. The results suggest that multiple ocean data assimilation systems should be used to assess sensitivity of ocean analyses to changes in the distribution of ocean observations to get more robust results that can guide the design of future tropical Pacific observing systems.

  17. SU-E-T-551: Monitor Unit Optimization in Stereotactic Body Radiation Therapy for Stage I Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, B-T; Lu, J-Y

    2015-06-15

    Purpose: The study aims to reduce the monitor units (MUs) in stereotactic body radiation therapy (SBRT) treatment for lung cancer by adjusting the optimization parameters. Methods: Fourteen patients with stage I Non-Small Cell Lung Cancer (NSCLC) were enrolled. Three groups of parameters were adjusted to investigate their effects on MU numbers and organs at risk (OARs) sparing: (1) the upper objective of planning target volume (UOPTV); (2) the strength setting in the MU constraining objective; (3) the max MU setting in the MU constraining objective. Results: We found that the parameters in the optimizer influenced the MU numbers in a priority-, strength- and max MU-dependent manner. MU numbers showed a decreasing trend with increasing UOPTV priority. MU numbers with low, medium and high priority for the UOPTV were 428±54, 312±48 and 258±31 MU/Gy, respectively. High priority for the UOPTV also spared the heart, cord and lung while maintaining PTV coverage comparable to the low and medium priority groups. MU numbers tended to decrease with increasing strength and decreasing max MU setting. With maximum strength, the MU numbers reached their minimum while maintaining comparable or improved dose to the normal tissues. It was also found that the MU numbers continued to decline at 85% and 75% max MU settings but decreased no further at 50% and 25%. Combined with high priority for the UOPTV and the MU constraining objectives, the MU numbers can be decreased to as low as 223±26 MU/Gy. Conclusion: The priority of the UOPTV and the MU constraining objective in the optimizer impact the MU numbers in SBRT treatment for lung cancer. Giving high priority to the UOPTV, setting the strength to the maximum value and the max MU to 50% in the MU objective achieves the lowest MU numbers while maintaining comparable or improved OAR sparing.

  18. Reinventing User Applications for Mission Control

    NASA Technical Reports Server (NTRS)

    Trimble, Jay Phillip; Crocker, Alan R.

    2010-01-01

    In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming or for direct support and approval from a platform services organization. Developers must be able to build MCT components using industry standard languages and tools. Each component of MCT must be interoperable with other components, regardless of which organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a meta-data view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks. Users therefore do not have to use multiple applications; the traditional boundaries between heterogeneous applications disappear, leaving open the possibility of new operations concepts that are not constrained by the traditional applications paradigm.

  19. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2015-12-21

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI is likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.

  20. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2015-12-01

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI is likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.
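
    The proposed diagnostic is easy to sketch for a toy system: diagonalize the difference of the two one-particle density matrices and check that every eigenvalue sits near 0 or ±1. The density matrices below are invented stand-ins, not CDFT-CI output:

```python
import numpy as np

def transfer_eigs(p_initial: np.ndarray, p_final: np.ndarray) -> np.ndarray:
    """Eigenvalues of the difference density matrix."""
    return np.linalg.eigvalsh(p_final - p_initial)

def whole_electron_transfer(eigs: np.ndarray, tol: float = 0.1) -> bool:
    """Trustworthy regime: every eigenvalue near 0 or +/-1."""
    return all(min(abs(e), abs(abs(e) - 1.0)) < tol for e in eigs)

occ = lambda *n: np.diag(n)            # idempotent-style toy densities
clean = transfer_eigs(occ(1, 1, 0, 0), occ(1, 0, 1, 0))      # whole electron
partial = transfer_eigs(occ(1, 1, 0, 0), occ(1, 0.6, 0.4, 0))  # fractional
print(whole_electron_transfer(clean), whole_electron_transfer(partial))
```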

  1. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    NASA Astrophysics Data System (ADS)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements from only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.

  2. Constraining the interaction between dark sectors with future HI intensity mapping observations

    NASA Astrophysics Data System (ADS)

    Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda

    2018-04-01

    We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ∼ 1, which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool to constrain the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.

  3. AN EXTENSION OF THE ATHENA++ CODE FRAMEWORK FOR GRMHD BASED ON ADVANCED RIEMANN SOLVERS AND STAGGERED-MESH CONSTRAINED TRANSPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Christopher J.; Stone, James M.; Gammie, Charles F.

    2016-08-01

    We present a new general relativistic magnetohydrodynamics (GRMHD) code integrated into the Athena++ framework. Improving upon the techniques used in most GRMHD codes, ours allows the use of advanced, less diffusive Riemann solvers, in particular HLLC and HLLD. We also employ a staggered-mesh constrained transport algorithm suited for curvilinear coordinate systems in order to maintain the divergence-free constraint of the magnetic field. Our code is designed to work with arbitrary stationary spacetimes in one, two, or three dimensions, and we demonstrate its reliability through a number of tests. We also report on its promising performance and scalability.

  4. New Evidence for a Black Hole in the Compact Binary Cygnus X-3

    NASA Technical Reports Server (NTRS)

    Shrader, Chris R.; Titarchuk, Lev; Shaposhnikov, Nikolai

    2010-01-01

    The bright and highly variable X-ray and radio source known as Cygnus X-3 was among the first X-ray sources discovered, yet it remains in many ways an enigma. It is known to consist of a massive Wolf-Rayet primary in an extremely tight orbit with a compact object. Yet one of the most basic of parameters, the mass of the compact object, is not known, nor is it even clear whether it is a neutron star or a black hole. In this paper we present our analysis of the broad-band high-energy continua covering a substantial range in luminosity and spectral morphology. We apply these results to a recently identified scaling relationship which has been demonstrated to provide reliable estimates of the compact object mass in a number of accretion-powered binaries. This analysis leads us to conclude that the compact object in Cygnus X-3 has a mass greater than 4.2 solar masses, clearly indicative of a black hole and as such resolving a longstanding issue. The full range of uncertainty in our analysis, and from using a range of recently published distance estimates, constrains the compact object mass to lie between 4.2 and 14.4 solar masses. Our favored estimate, based on a 9.0 kpc distance, is approximately 10 solar masses, with an error margin of 3.2 solar masses. This result may thus pose challenges to common-envelope evolutionary models of compact binaries, as well as establishing Cygnus X-3 as the first confirmed accretion-powered galactic gamma-ray source.

  5. Level-Set Topology Optimization with Aeroelastic Constraints

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  6. Design of Launch Vehicle Flight Control Systems Using Ascent Vehicle Stability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Alaniz, Abran; Hall, Robert; Bedossian, Nazareth; Hall, Charles; Jackson, Mark

    2011-01-01

    A launch vehicle represents a complicated flex-body structural environment for flight control system design. The Ascent-vehicle Stability Analysis Tool (ASAT) is developed to address this complexity in the design and analysis of a launch vehicle. The design objective for the flight control system of a launch vehicle is to best follow guidance commands while robustly maintaining system stability. A constrained optimization approach takes advantage of modern computational control techniques to simultaneously design multiple control systems in compliance with the required design specs. "Tower Clearance" and "Load Relief" designs have been achieved for the liftoff and max dynamic pressure flight regions, respectively, in the presence of large wind disturbances. The robustness of the flight control system designs has been verified in frequency-domain Monte Carlo analysis using ASAT.

  7. Electric service reliability cost/worth assessment in a developing country

    NASA Astrophysics Data System (ADS)

    Pandey, Mohan Kumar

    Considerable work has been done in developed countries to optimize the reliability of electric power systems on the basis of reliability cost versus reliability worth. This has yet to be considered in most developing countries, where development plans are still based on traditional deterministic measures. The difficulty with these criteria is that they cannot be used to evaluate the economic impacts of changing reliability levels on the utility and the customers, and therefore cannot lead to an optimum expansion plan for the system. The critical issue today faced by most developing countries is that the demand for electric power is high and growth in supply is constrained by technical, environmental, and most importantly by financial impediments. Many power projects are being canceled or postponed due to a lack of resources. The investment burden associated with the electric power sector has already led some developing countries into serious debt problems. This thesis focuses on power sector issues faced by developing countries and illustrates how a basic reliability cost/worth approach can be used in a developing country to determine appropriate planning criteria and justify future power projects by application to the Nepal Integrated Electric Power System (NPS). A reliability cost/worth based system evaluation framework is proposed in this thesis. Customer surveys conducted throughout Nepal using in-person interviews with approximately 2000 sample customers are presented. The survey results indicate that the interruption cost is dependent on both customer and interruption characteristics, and it varies from one location or region to another. Assessments at both the generation and composite system levels have been performed using the customer cost data and the developed NPS reliability database. The results clearly indicate the implications of service reliability to the electricity consumers of Nepal, and show that the reliability cost/worth evaluation is both possible and practical in a developing country. The average customer interruption costs of Rs 35/kWh at Hierarchical Level I and Rs 26/kWh at Hierarchical Level II evaluated in this research work led to an optimum reserve margin of 7.5%, which is considerably lower than the traditional reserve margin of 15% used in the NPS. A similar conclusion may result in other developing countries facing difficulties in power system expansion planning using the traditional approach. A new framework for system planning is therefore recommended for developing countries which would permit an objective review of the traditional system planning approach, and the evaluation of future power projects using a new approach based on fundamental principles of power system reliability and economics.
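
    The cost/worth logic reduces to minimizing total cost: the annualized cost of reserve capacity plus the customer interruption cost attached to expected energy not served. The toy numbers below are invented for illustration (only the Rs 35/kWh figure comes from the abstract) and are not the NPS data.

        import numpy as np

        margins = np.linspace(0.0, 0.25, 26)          # candidate reserve margins
        peak_mw = 1000.0                              # system peak load (toy)
        cap_cost = 6.0e4 * margins * peak_mw          # annualized reserve capacity cost (Rs/yr, toy)
        eens = 5.0e4 * np.exp(-25.0 * margins)        # expected energy not served (kWh/yr, toy curve)
        interruption_cost = 35.0                      # Rs/kWh at Hierarchical Level I (from the study)
        total = cap_cost + interruption_cost * eens
        print(f"optimum reserve margin ~ {margins[np.argmin(total)]:.0%}")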

  8. A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty

    PubMed Central

    Xu, Ye; Huang, Guohe; Xu, Ling

    2014-01-01

    Abstract In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a case study for demonstration. Compared with traditional fuzzy models, the FRO model improves on them by taking as its objective function the minimization of the weighted sum of the expected objective value, the difference between the two extreme possible objective values, and a penalty for constraint violation, instead of relying purely on minimization of the expected value. This improvement leads to enhanced system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model, including (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced scenario solution was recommended for decision makers, since it respected both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies. PMID:25317037
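
    Schematically, the FRO objective described above combines three terms. The sketch below is a paraphrase with made-up weights and function names, not the authors' formulation:

        def fro_objective(x, f_expected, f_low, f_high, constraints,
                          w_spread=0.5, w_penalty=10.0):
            # fuzzy-robust surrogate objective: expected cost, plus the spread
            # between the optimistic and pessimistic outcomes, plus a penalty
            # on constraint violation (g(x) <= 0 means feasible)
            spread = f_high(x) - f_low(x)
            violation = sum(max(0.0, g(x)) for g in constraints)
            return f_expected(x) + w_spread * spread + w_penalty * violation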

  9. A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty.

    PubMed

    Xu, Ye; Huang, Guohe; Xu, Ling

    2014-10-01

    In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a case study for demonstration. Compared with traditional fuzzy models, the FRO model improves on them by taking as its objective function the minimization of the weighted sum of the expected objective value, the difference between the two extreme possible objective values, and a penalty for constraint violation, instead of relying purely on minimization of the expected value. This improvement leads to enhanced system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model, including (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced scenario solution was recommended for decision makers, since it respected both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies.

  10. Scoring ultrasound synovitis in rheumatoid arthritis: a EULAR-OMERACT ultrasound taskforce-Part 2: reliability and application to multiple joints of a standardised consensus-based scoring system

    PubMed Central

    Terslev, Lene; Naredo, Esperanza; Aegerter, Philippe; Wakefield, Richard J; Backhaus, Marina; Balint, Peter; Bruyn, George A W; Iagnocco, Annamaria; Jousse-Joulin, Sandrine; Schmidt, Wolfgang A; Szkudlarek, Marcin; Conaghan, Philip G; Filippucci, Emilio

    2017-01-01

    Objectives To test the reliability of new ultrasound (US) definitions and quantification of synovial hypertrophy (SH) and power Doppler (PD) signal, separately and in combination, in a range of joints in patients with rheumatoid arthritis (RA) using the European League Against Rheumatism–Outcome Measures in Rheumatology (EULAR-OMERACT) combined score for PD and SH. Methods A stepwise approach was used: (1) scoring static images of metacarpophalangeal (MCP) joints in a web-based exercise and subsequently when scanning patients; (2) scoring static images of wrist, proximal interphalangeal joints, knee and metatarsophalangeal joints in a web-based exercise and subsequently when scanning patients using different acquisitions (standardised vs usual practice). For reliability, kappa coefficients (κ) were used. Results Scoring MCP joints in static images showed substantial intraobserver variability but good to excellent interobserver reliability. In patients, intraobserver reliability was the same for the two acquisition methods. Interobserver reliability for SH (κ=0.87), PD (κ=0.79) and the EULAR-OMERACT combined score (κ=0.86) was better when using a ‘standardised’ scan. For the other joints, the intraobserver reliability was excellent in static images for all scores (κ=0.8–0.97) and the interobserver reliability marginally lower. When using standardised scanning in patients, the intraobserver reliability was good (κ=0.64 for SH and the EULAR-OMERACT combined score, 0.66 for PD) and the interobserver reliability was also good, especially for PD (κ range=0.41–0.92). Conclusion The EULAR-OMERACT score demonstrated moderate to good reliability in MCP joints using a standardised scan and is equally applicable in non-MCP joints. This scoring system should underpin improved reliability and consequently the responsiveness of US in RA clinical trials. PMID:28948984

  11. A framework to determine the locations of the environmental monitoring in an estuary of the Yellow Sea.

    PubMed

    Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong

    2018-06-04

    The characteristics of an estuary are determined by various factors, such as tide, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical to managing environmental quality and pollution, and the locations of monitoring stations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. Using well-validated numerical simulation results, the monitoring locations are determined to capture the changes in water quality and pollutants depending on the variations of tide, current and freshwater discharge. The deployment strategy for finding appropriate monitoring locations is designed with a constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique, objective analysis. Even with a smaller number of monitoring locations, the present method performs as well as an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

    Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criterion. To overcome this limitation, in this work we introduce the application of a multiple grid algorithm...
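
    For reference, the CFL restriction for explicit 1-D shallow-water schemes bounds the time step by the grid spacing over the fastest signal speed. A minimal sketch, with variable names assumed for illustration:

        import numpy as np

        def cfl_dt(u, h, dx, courant=0.9, g=9.81):
            # largest stable explicit time step: dt <= C * dx / max(|u| + sqrt(g h)),
            # where sqrt(g h) is the shallow-water gravity wave speed
            return courant * dx / np.max(np.abs(u) + np.sqrt(g * h))

        u = np.array([0.5, 1.2, 2.0])   # velocities (m/s), toy values
        h = np.array([1.0, 0.8, 0.6])   # flow depths (m), toy values
        print(cfl_dt(u, h, dx=10.0))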

  13. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  14. Detecting objects in radiographs for homeland security

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Snyder, Hans

    2005-05-01

    We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the World Wide Web Consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.
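
    As a rough stand-in for the constrained Delaunay machinery (a true constrained Delaunay triangulation enforces the polygon edges, e.g., via Shewchuk's Triangle), one can triangulate a polygon's vertices and keep only triangles whose centroids fall inside the outline. The L-shaped polygon below is a toy example, not taken from the paper.

        import numpy as np
        from scipy.spatial import Delaunay
        from matplotlib.path import Path

        # polygon outline of a detected object (toy L-shaped part)
        poly = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 3], [0, 3]], float)
        tri = Delaunay(poly)                          # unconstrained Delaunay of the vertices
        centroids = poly[tri.simplices].mean(axis=1)
        inside = Path(poly).contains_points(centroids)
        parts = tri.simplices[inside]                 # triangles decomposing the polygon
        print(parts)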

  15. A Vision for Spaceflight Reliability: NASA's Objectives Based Strategy

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Evans, John; Hall, Tony

    2015-01-01

    In defining the direction for a new Reliability and Maintainability standard, OSMA has extracted the essential objectives that our programs need to undertake a reliable mission. These objectives have been structured to lead mission planning through construction of an objective hierarchy, which defines the critical approaches for achieving high reliability and maintainability (R&M). Creating a hierarchy, as a basis for assurance implementation, is a proven approach; yet it holds the opportunity to enable new directions as NASA moves forward in tackling the challenges of space exploration.

  16. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to thermal hyperspectral data, and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. The highest overall accuracy, 90.99%, was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, pixels are grouped into objects using a watershed algorithm, and a supervised classification algorithm, a support vector machine (SVM), is applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
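
    The SAM classifier at the heart of the best-performing result measures the angle between each pixel spectrum and a class reference spectrum. A minimal sketch, with array shapes assumed for illustration:

        import numpy as np

        def spectral_angle(p, r):
            # angle (radians) between two spectra; small angles mean a better
            # match, insensitive to overall illumination scaling
            cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def sam_classify(cube, refs):
            # cube: (rows, cols, bands) image; refs: (n_classes, bands) references;
            # assign each pixel the class with the smallest spectral angle
            flat = cube.reshape(-1, cube.shape[-1])
            ang = np.array([[spectral_angle(p, r) for r in refs] for p in flat])
            return ang.argmin(axis=1).reshape(cube.shape[:2])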

  17. The Spatial Distribution of Attention within and across Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2012-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a "grouped array"). Using a modified version of the Egly et…

  18. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  19. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    PubMed Central

    Li, Mengmeng; Feng, Qiang; Yang, Dezhen

    2018-01-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time. In addition, degradation experiments on the copper bending pipe were performed, the thickness at each time was obtained, and the response of maximum stress was calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can more conveniently and accurately predict the replacement cycle of copper bending pipe under seawater-active corrosion. PMID:29584695
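
    The stress-strength interference calculation can be sketched as a Monte Carlo estimate of R(t) = P[strength(t) > stress(t)]. The degradation laws and distributions below are invented stand-ins, not the paper's fitted models:

        import numpy as np

        rng = np.random.default_rng(1)

        def reliability(t, n=100_000):
            # R(t) = P[limit strength at time t exceeds maximum stress at time t]
            strength = rng.normal(300.0 - 6.0 * t, 15.0, n)  # MPa, degrading mean (toy law)
            wall = rng.normal(3.0 - 0.01 * t, 0.05, n)       # mm, corroding wall thickness
            stress = 180.0 * (3.0 / wall)                    # stress grows as the wall thins
            return np.mean(strength > stress)

        for t in (0, 5, 10, 15, 20):                         # years in service
            print(t, reliability(t))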

  20. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell the value of its highest-membership cluster, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the extent of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from borehole information.
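
    The clustering step rests on the standard fuzzy c-means updates: membership-weighted centers, then inverse-distance memberships. A self-contained 1-D sketch on synthetic log-resistivity values (all numbers assumed, not the paper's implementation):

        import numpy as np

        def fcm(x, c=3, m=2.0, iters=100, seed=0):
            # fuzzy c-means on 1-D data (e.g., log10 resistivities of model cells);
            # returns memberships u (n, c) and cluster centers v (c,)
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(x))       # random memberships, rows sum to 1
            for _ in range(iters):
                w = u ** m
                v = (w * x[:, None]).sum(0) / w.sum(0)       # membership-weighted centers
                d = np.abs(x[:, None] - v[None, :]) + 1e-12  # cell-to-center distances
                inv = d ** (-2.0 / (m - 1.0))
                u = inv / inv.sum(1, keepdims=True)          # standard FCM membership update
            return u, v

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(mu, 0.1, 50) for mu in (1.0, 2.0, 3.0)])
        u, v = fcm(x)
        print(np.sort(v))   # recovered cluster centers near 1, 2, 3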

  1. How to assess communication, professionalism, collaboration and the other intrinsic CanMEDS roles in orthopedic residents: use of an objective structured clinical examination (OSCE)

    PubMed Central

    Dwyer, Tim; Takahashi, Susan Glover; Hynes, Melissa Kennedy; Herold, Jodi; Wasserstein, David; Nousiainen, Markku; Ferguson, Peter; Wadey, Veronica; Murnaghan, M. Lucas; Leroux, Tim; Semple, John; Hodges, Brian; Ogilvie-Harris, Darrell

    2014-01-01

    Background Assessing residents’ understanding and application of the 6 intrinsic CanMEDS roles (communicator, professional, manager, collaborator, health advocate, scholar) is challenging for postgraduate medical educators. We hypothesized that an objective structured clinical examination (OSCE) designed to assess multiple intrinsic CanMEDS roles would be sufficiently reliable and valid. Methods The OSCE comprised 6 10-minute stations, each testing 2 intrinsic roles using case-based scenarios (with or without the use of standardized patients). Residents were evaluated using 5-point scales and an overall performance rating at each station. Concurrent validity was sought by correlation with in-training evaluation reports (ITERs) from the last 12 months and an ordinal ranking created by program directors (PDs). Results Twenty-five residents from postgraduate years (PGY) 0, 3 and 5 participated. The interstation reliability for total test scores (percent) was 0.87, while reliability for each of the communicator, collaborator, manager and professional roles was greater than 0.8. Total test scores, individual station scores and individual CanMEDS role scores all showed a significant effect by PGY level. Analysis of the PD rankings of intrinsic roles demonstrated a high correlation with the OSCE role scores. A correlation was seen between ITER and OSCE for the communicator role, while the ITER medical expert and total scores highly correlated with the communicator, manager and professional OSCE scores. Conclusion An OSCE designed to assess the intrinsic CanMEDS roles was sufficiently valid and reliable for regular use in an orthopedic residency program. PMID:25078926

  2. Validity and reliability of the multidimensional assessment of fatigue scale in Iranian patients with relapsing-remitting subtype of multiple sclerosis.

    PubMed

    Behrangrad, Shabnam; Kordi Yoosefinejad, Amin

    2018-03-01

    The purpose of this study is to investigate the validity and reliability of the Persian version of the Multidimensional Assessment of Fatigue Scale (MAFS) in an Iranian population with multiple sclerosis. A self-reported survey on fatigue including the MAFS, the Fatigue Impact Scale and demographic measures was completed by 130 patients with multiple sclerosis and 60 healthy persons recruited by convenience sampling. Test-retest reliability and validity were evaluated 3 days apart. Construct validity of the MAFS was assessed against the Fatigue Impact Scale. The MAFS had high internal consistency (Cronbach's alpha >0.9) and 3-day test-retest reliability (intraclass correlation coefficient = 0.99). Correlation between the Fatigue Impact Scale and the MAFS was high (r = 0.99). Correlation between MAFS scores and the Expanded Disability Status Scale was also strong (r = 0.85). Questionnaire items showed acceptable item-scale correlation (0.968-0.993). The Persian version of the MAFS appears to be a valid and reliable questionnaire. It is an appropriate short multidimensional instrument for assessing fatigue in patients with multiple sclerosis in clinical practice and research. Implications for Rehabilitation: The Persian version of the Multidimensional Assessment of Fatigue is a valid and reliable instrument for assessing and monitoring fatigue in Persian-speaking patients with multiple sclerosis. It is easy to administer and time-efficient in comparison with other instruments evaluating fatigue in patients with multiple sclerosis.

  3. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
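
    The ingredients, an RBF surrogate for an expensive constraint plus a small feasibility margin, can be sketched with scipy's generic RBFInterpolator as a stand-in for the paper's own models. The constraint, margin, and sampling below are assumptions for illustration, not CONORBIT itself.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(2)
        g = lambda x: x[:, 0]**2 + x[:, 1]**2 - 1.0   # "expensive" constraint, g <= 0 feasible

        pts = rng.uniform(-2.0, 2.0, size=(30, 2))    # points where g has been evaluated
        model = RBFInterpolator(pts, g(pts), kernel='cubic')   # cheap surrogate for g

        margin = 0.05                                 # small margin biases toward feasibility
        cand = rng.uniform(-2.0, 2.0, size=(1000, 2)) # trial points in the trust region
        feasible = cand[model(cand) <= -margin]       # keep points the surrogate deems safely feasible
        print(len(feasible))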

  4. The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance

    PubMed Central

    Gajewski, Daniel A.; Wallin, Courtney P.; Philbeck, John W.

    2016-01-01

    Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers. PMID:27398065

  5. Testing a Constrained MPC Controller in a Process Control Laboratory

    ERIC Educational Resources Information Center

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by fourth-year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…
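
    The idea the students test can be sketched as a small constrained MPC solved as a quadratic program. The first-order plant, horizon, and actuator limits below are toy assumptions (with cvxpy as a stand-in solver), not the DPHE model from the experiment.

        import numpy as np
        import cvxpy as cp

        # toy first-order plant x+ = a x + b u (e.g., exit temperature deviation)
        a, b, N = 0.9, 0.1, 10
        x0, x_ref = 0.0, 1.0

        x = cp.Variable(N + 1)
        u = cp.Variable(N)
        cost = cp.sum_squares(x[1:] - x_ref) + 0.01 * cp.sum_squares(u)
        cons = [x[0] == x0]
        for k in range(N):
            cons += [x[k + 1] == a * x[k] + b * u[k],
                     cp.abs(u[k]) <= 1.0]   # actuator limits: the "constrained" in MPC
        cp.Problem(cp.Minimize(cost), cons).solve()
        print(u.value[0])                   # apply only the first move, then re-solve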

  6. Reliability, Validity and Utility of a Multiple Intelligences Assessment for Career Planning.

    ERIC Educational Resources Information Center

    Shearer, C. Branton

    "The Multiple Intelligences Developmental Assessment Scales" (MIDAS) is a self- (or other-) completed instrument which is based upon the theory of multiple intelligences. The validity, reliability, and utility data regarding the MIDAS are reported here. The measure consists of 7 main scales and 24 subscales which summarize a person's intellectual…

  7. Comparison of Difficulties and Reliabilities of Math-Completion and Multiple-Choice Item Formats.

    ERIC Educational Resources Information Center

    Oosterhof, Albert C.; Coats, Pamela K.

    Instructors who develop classroom examinations that require students to provide a numerical response to a mathematical problem are often very concerned about the appropriateness of the multiple-choice format. The present study augments previous research relevant to this concern by comparing the difficulty and reliability of multiple-choice and…

  8. Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  9. Production possibility frontiers and socioecological tradeoffs for restoration of fire adapted forests.

    PubMed

    Ager, Alan A; Day, Michelle A; Vogler, Kevin

    2016-07-01

    We used spatial optimization to analyze alternative restoration scenarios and quantify tradeoffs for a large, multifaceted restoration program to restore resiliency to forest landscapes in the western US. We specifically examined tradeoffs between provisional ecosystem services, fire protection, and the amelioration of key ecological stressors. The results revealed that attainment of multiple restoration objectives was constrained due to the joint spatial patterns of ecological conditions and socioeconomic values. We also found that current restoration projects are substantially suboptimal, perhaps the result of compromises in the collaborative planning process used by federal planners, or operational constraints on forest management activities. The juxtaposition of ecological settings with human values generated sharp tradeoffs, especially with respect to community wildfire protection versus generating revenue to support restoration and fire protection activities. The analysis and methods can be leveraged by ongoing restoration programs in many ways including: 1) integrated prioritization of restoration activities at multiple scales on public and adjoining private lands, 2) identification and mapping of conflicts between ecological restoration and socioeconomic objectives, 3) measuring the efficiency of ongoing restoration projects compared to the optimal production possibility frontier, 4) consideration of fire transmission among public and private land parcels as a prioritization metric, and 5) finding socially optimal regions along the production frontier as part of collaborative restoration planning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Radio Observations of Organics in Comets

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Charnley, Steven B.; Kuan, Yi-Jehng; Chuang, Yo-Ling; Villanueva, Geronimo; Coulson, Iain; Remijan, Anthony J.

    2012-01-01

    A major observational challenge in cometary science is to quantify the extent to which chemical compounds can be linked to either interstellar or nebular chemistry. Recently, there have been complementary observations from multiple facilities to try to unravel the chemical complexity of comets and their origins. Incorporating results from various techniques can give further insight into the abundances, production rates, distributions, and formation mechanisms of molecules in these objects [1]. Such studies have provided great detail on molecules with atypical chemistries, such as H2CO [2]. We report multiwavelength spectral observations of comets from two dynamical families, including the Jupiter-family comet 103P/Hartley 2 and the long-period comet C/2009 P1 (Garradd), with the Arizona Radio Observatory's SMT and 12-m telescopes, as well as the NRAO Green Bank Telescope and the James Clerk Maxwell Telescope. Multiple parent volatiles (e.g., HCN, CH3OH, CO) as well as daughter products (e.g., CS and OH) have been detected in these objects. We will present a comparison of molecular abundances in these comets to those observed in others, supporting a long-term effort of building a comet taxonomy based on composition. Previous work has revealed a range of abundances of parent species (from "organics-poor" to "organics-rich") with respect to water among comets [3,4,5]; however, the statistics are not well constrained.

  11. Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.

    PubMed

    Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen

    2017-02-01

    Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would, however, like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next, we show its superiority by comparing its performance against other comparable methods on a medium-sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used to identify knockouts which will force optimal desired behaviors in large and genome-scale metabolic networks. It will become even more useful as larger metabolic models of industrially relevant organisms become available.
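
    The PSO engine underneath such a method follows the standard global-best velocity and position updates. The sketch below is a generic minimizer for illustration; PSOMCS itself couples this search to cMCS calculations, which are omitted here.

        import numpy as np

        rng = np.random.default_rng(3)

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # minimize f over [0, 1]^dim with a basic global-best particle swarm
            x = rng.uniform(0.0, 1.0, (n, dim))
            v = np.zeros((n, dim))
            pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
            gbest = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.uniform(size=(2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, 0.0, 1.0)
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                gbest = pbest[pval.argmin()].copy()
            return gbest, pval.min()

        best, val = pso(lambda z: np.sum((z - 0.3) ** 2), dim=5)
        print(best, val)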

  12. Measuring first-line nurse manager work: instrument development and testing.

    PubMed

    Cadmus, Edna; Wisniewska, Edyta K

    2013-12-01

    The objective of this study was to develop and test a 1st-line nurse manager (FLNM) work instrument to measure categories of work and frequency of activities. First-line nurse managers have been demonstrated to be key contributors in meeting organizational outcomes and patient and nurse satisfaction. Identifying the work of FLNMs is essential to help in the development of prioritization and sequence. The need for an instrument that can measure and categorize the work of FLNMs is indicated. The author-developed instrument was administered as a pilot study to 173 FLNMs in New Jersey. Descriptive statistics were analyzed, and validity and reliability were measured. Content validity was established through 2 focus groups using 10 FLNMs and conducting a survey of 5 chief nursing officers. Reliability was assessed by 13 of 16 FLNM participants using the test/retest method and quantified using percent agreement within a 10-day period. Those items with 70% agreement or more were identified as reliable and retained on the instrument. The content validity of the instrument is strong; further refinement and testing of the tool are indicated to improve the reliability and generalizability across multiple populations of leaders and settings.

  13. Incorporating Objective Function Information Into the Feasibility Rule for Constrained Evolutionary Optimization.

    PubMed

    Wang, Yong; Wang, Bing-Chuan; Li, Han-Xiong; Yen, Gary G

    2016-12-01

    When solving constrained optimization problems by evolutionary algorithms, an important issue is how to balance constraints and objective function. This paper presents a new method to address the above issue. In our method, after generating an offspring for each parent in the population by making use of differential evolution (DE), the well-known feasibility rule is used to compare the offspring and its parent. Since the feasibility rule prefers constraints to objective function, the objective function information has been exploited as follows: if the offspring cannot survive into the next generation and if the objective function value of the offspring is better than that of the parent, then the offspring is stored into a predefined archive. Subsequently, the individuals in the archive are used to replace some individuals in the population according to a replacement mechanism. Moreover, a mutation strategy is proposed to help the population jump out of a local optimum in the infeasible region. Note that, in the replacement mechanism and the mutation strategy, the comparison of individuals is based on objective function. In addition, the information of objective function has also been utilized to generate offspring in DE. By the above processes, this paper achieves an effective balance between constraints and objective function in constrained evolutionary optimization. The performance of our method has been tested on two sets of benchmark test functions, namely, 24 test functions at IEEE CEC2006 and 18 test functions with 10-D and 30-D at IEEE CEC2010. The experimental results have demonstrated that our method shows better or at least competitive performance against other state-of-the-art methods. Furthermore, the advantage of our method increases with the increase of the number of decision variables.
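
    The survival-plus-archive logic described above can be written down directly. The sketch below paraphrases it with assumed conventions (g(x) <= 0 means feasible); it is not the authors' code.

        def violation(x, constraints):
            # total constraint violation; zero means x is feasible
            return sum(max(0.0, g(x)) for g in constraints)

        def survivor(parent, child, f, constraints, archive):
            # Deb's feasibility rule decides survival; a losing child whose
            # objective beats the parent's is stored in the archive
            vp, vc = violation(parent, constraints), violation(child, constraints)
            child_wins = ((vc == 0 and vp == 0 and f(child) <= f(parent))
                          or (vc == 0 and vp > 0)
                          or (vc > 0 and vp > 0 and vc < vp))
            if child_wins:
                return child
            if f(child) < f(parent):
                archive.append(child)   # objective information is not thrown away
            return parent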

  14. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.

  15. Teachers' Epistemic Cognition in the Context of Dialogic Practice: A Question of Calibration?

    ERIC Educational Resources Information Center

    Bråten, Ivar; Muis, Krista R.; Reznitskaya, Alina

    2017-01-01

    In this article, we argue that teachers' epistemic cognition, in particular their thinking about epistemic aims and reliable processes for achieving those aims, may impact students' understanding of complex, controversial issues. This is because teachers' epistemic cognition may facilitate or constrain their implementation of instruction aiming to…

  16. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    ERIC Educational Resources Information Center

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  17. Spatiotemporal requirements of the Hainan gibbon: Does home range constrain recovery of the world's rarest ape?

    PubMed

    Bryant, Jessica V; Zeng, Xingyuan; Hong, Xiaojiang; Chatterjee, Helen J; Turvey, Samuel T

    2017-03-01

    Conservation management requires an evidence-based approach, as uninformed decisions can signify the difference between species recovery and loss. The Hainan gibbon, the world's rarest ape, reportedly exploits the largest home range of any gibbon species, with these apparently large spatial requirements potentially limiting population recovery. However, previous home range assessments rarely reported survey methods, effort, or analytical approaches, hindering critical evaluation of estimate reliability. For extremely rare species where data collection is challenging, it also is unclear what impact such limitations have on estimating home range requirements. We re-evaluated Hainan gibbon spatial ecology using 75 hr of observations from 35 contact days over 93 field-days across dry (November 2010-February 2011) and wet (June 2011-September 2011) seasons. We calculated home range area for three social groups (N = 21 individuals) across the sampling period, seasonal estimates for one group (based on 24 days of observation; 12 days per season), and between-group home range overlap using multiple approaches (Minimum Convex Polygon, Kernel Density Estimation, Local Convex Hull, Brownian Bridge Movement Model), and assessed estimate reliability and representativeness using three approaches (Incremental Area Analysis, spatial concordance, and exclusion of expected holes). We estimated a yearly home range of 1-2 km², with 1.49 km² closest to the median of all estimates. Although Hainan gibbon spatial requirements are relatively large for gibbons, our new estimates are smaller than previous estimates used to explain the species' limited recovery, suggesting that habitat availability may be less important in limiting population growth. We argue that other ecological, genetic, and/or anthropogenic factors are more likely to constrain Hainan gibbon recovery, and conservation attention should focus on elucidating and managing these factors. Re-evaluation reveals Hainan gibbon home range as c. 1-2 km². Hainan gibbon home range is, therefore, similar to other Nomascus gibbons. Limited data for extremely rare species does not necessarily prevent derivation of robust home range estimates. © 2016 Wiley Periodicals, Inc.
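
    Of the estimators listed, the Minimum Convex Polygon is the simplest: the area of the convex hull of the location fixes. A toy sketch with synthetic fixes (note that for a 2-D hull, scipy reports the enclosed area in the .volume attribute):

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(4)
        fixes = rng.normal(0.0, 0.5, size=(120, 2))   # synthetic location fixes (km)
        hull = ConvexHull(fixes)
        print(f"100% MCP home range: {hull.volume:.2f} km^2")  # .volume is area in 2-D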

  18. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and also to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which solves the problem and converges to a stationary point. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
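
    Dinkelbach's algorithm, the last step above, converts a ratio maximization into a sequence of parametric subproblems. Below is a scalar toy version: the rate and power models are invented for illustration, and the grid-search "subproblem solver" stands in for the paper's CCCP-solved beamforming problem.

        import numpy as np

        def dinkelbach(f, g, argmax, lam=0.0, tol=1e-9, iters=100):
            # maximize f(x)/g(x) (with g > 0) by repeatedly solving the parametric
            # subproblem max_x f(x) - lam * g(x), then updating lam = f(x*)/g(x*)
            for _ in range(iters):
                x = argmax(lam)
                if abs(f(x) - lam * g(x)) < tol:   # F(lam) = 0 at the optimum
                    return x, lam
                lam = f(x) / g(x)
            return x, lam

        # toy energy-efficiency ratio: achievable rate over total power consumption
        p = np.linspace(0.01, 10.0, 2000)          # candidate transmit powers
        f = lambda x: np.log2(1 + 2.0 * x)         # rate (bits/s/Hz)
        g = lambda x: 1.0 + x                      # static + transmit power (W)
        argmax = lambda lam: p[np.argmax(f(p) - lam * g(p))]
        print(dinkelbach(f, g, argmax))            # (optimal power, max EE)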

  19. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and also to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which solves the problem and converges to a stationary point. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  20. The L dwarf/T dwarf transition: Multiplicity, magnetic activity and mineral meteorology across the hydrogen burning limit

    NASA Astrophysics Data System (ADS)

    Burgasser, A. J.

    2013-02-01

    The transition between the L dwarf and T dwarf spectral classes is one of the most remarkable along the stellar/brown dwarf main sequence, separating sources with photospheres containing mineral condensate clouds from those containing methane and ammonia gases. Unusual characteristics of this transition include a 1 μm brightening between late L and early T dwarfs observed in both parallax samples and coeval binaries; a spike in the multiplicity fraction; evidence of increased photometric variability, possibly arising from patchy cloud structures; and a delayed transition for young, planetary-mass objects. All of these features can be explained if this transition is governed by the "rapid" (nonequilibrium) rainout of clouds from the photosphere, triggered by temperature, surface gravity, metallicity and (perhaps) rotational effects. While the underlying mechanism of this rainout remains under debate, the transition is now being exploited to discover and precisely characterize tight (<1 AU) very low-mass binaries that can be used to test brown dwarf evolutionary and atmospheric theories, and resolved binaries that further constrain the properties of this remarkable transition.

  1. Theoretical constraints in the design of multivariable control systems

    NASA Technical Reports Server (NTRS)

    Rynaski, E. G.; Mook, D. Joseph; Depena, Juan

    1991-01-01

    The research being performed under NASA Grant NAG1-1361 involves a clearer understanding and definition of the constraints involved in the pole-zero placement or assignment process for multiple-input, multiple-output systems. Complete state feedback to more than a single controller under conditions of complete controllability and observability is redundant if pole placement alone is the design objective. The additional feedback gains, above and beyond those required for pole placement, can be used for eigenvector assignment or zero placement of individual closed-loop transfer functions. Because both poles and zeros of individual closed-loop transfer functions strongly affect the dynamic response to a pilot command input, the pole-zero placement problem is important. When fewer controllers than degrees of freedom of motion are available, complete design freedom is not possible; the transmission zeros constrain the regions of possible pole-zero placement. The effect of transmission zero constraints on the design possibilities, the selection of transmission zeros, and the avoidance of producing non-minimum-phase transfer functions are the subjects of the research being performed under this grant.
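
    The redundancy the grant studies, extra feedback gains beyond what pole placement alone needs in multi-input systems, is visible in standard tools: scipy's place_poles spends the leftover freedom shaping closed-loop eigenvectors for robustness. A small sketch with an assumed two-input system (not the grant's aircraft model):

        import numpy as np
        from scipy.signal import place_poles

        # two inputs, three states: state feedback K has more gains than
        # pole-placement equations, leaving freedom beyond eigenvalue assignment
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [-1.0, -2.0, -3.0]])
        B = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 1.0]])
        res = place_poles(A, B, [-2.0, -3.0, -4.0])
        K = res.gain_matrix                      # A - B K has the requested poles
        print(np.linalg.eigvals(A - B @ K))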

  2. Bayesian Chance-Constrained Hydraulic Barrier Design under Geological Structure Uncertainty.

    PubMed

    Chitsazan, Nima; Pham, Hai V; Tsai, Frank T-C

    2015-01-01

    The groundwater community has widely recognized geological structure uncertainty as a major source of model structure uncertainty. Previous studies in aquifer remediation design, however, rarely discuss the impact of geological structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) as a BMA-CC framework to assess the impact of geological structure uncertainty in remediation design. To pursue this goal, the BMA-CC method is compared with traditional CC programming that only considers model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect public supply wells of the Government St. pump station from salt water intrusion in the "1500-foot" sand and the "1700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address geological structure uncertainty, three groundwater models based on three different hydrostratigraphic architectures are developed. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve a design reliability level above 90%. The total amount of injected water from the connector wells is higher than the total pumpage of the protected public supply wells. While reducing the injection rate can be achieved by reducing the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station may not be economically attractive. © 2014, National Ground Water Association.
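
    At its simplest, the BMA-CC idea weights each structure model's chance-constraint reliability by its posterior model probability, so a design that looks reliable under one favored model can fall short on average. The weights and per-model reliabilities below are invented for illustration:

        import numpy as np

        # posterior probabilities of three hydrostratigraphic models (toy BMA weights)
        p_model = np.array([0.5, 0.3, 0.2])
        # per-model probability that the barrier meets the design target (toy values)
        p_ok = np.array([0.97, 0.88, 0.75])
        print(p_model @ p_ok)   # model-averaged reliability < single-model 0.97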

  3. The Effect of SSM Grading on Reliability When Residual Items Have No Discriminating Power.

    ERIC Educational Resources Information Center

    Kane, Michael T.; Moloney, James M.

    Gilman and Ferry have shown that when the student's score on a multiple choice test is the total number of responses necessary to get all items correct, substantial increases in reliability can occur. In contrast, similar procedures giving partial credit on multiple choice items have resulted in relatively small gains in reliability. The analysis…

  4. Disturbance patterns in a socio-ecological system at multiple scales

    Treesearch

    G. Zurlini; Kurt H. Riitters; N. Zaccarelli; I. Petrosillo; K.B. Jones; L. Rossi

    2006-01-01

    Ecological systems with hierarchical organization and non-equilibrium dynamics require multiple-scale analyses to comprehend how a system is structured and to formulate hypotheses about regulatory mechanisms. Characteristic scales in real landscapes are determined by, or at least reflect, the spatial patterns and scales of constraining human interactions with the...

  5. Neural activity reveals perceptual grouping in working memory.

    PubMed

    Rabbitt, Laura R; Roberts, Daniel M; McDonald, Craig G; Peterson, Matthew S

    2017-03-01

    There is extensive evidence that the contralateral delay activity (CDA), a scalp recorded event-related brain potential, provides a reliable index of the number of objects held in visual working memory. Here we present evidence that the CDA not only indexes visual object working memory, but also the number of locations held in spatial working memory. In addition, we demonstrate that the CDA can be predictably modulated by the type of encoding strategy employed. When individual locations were held in working memory, the pattern of CDA modulation mimicked previous findings for visual object working memory. Specifically, CDA amplitude increased monotonically until working memory capacity was reached. However, when participants were instructed to group individual locations to form a constellation, the CDA was prolonged and reached an asymptote at two locations. This result provides neural evidence for the formation of a unitary representation of multiple spatial locations. Published by Elsevier B.V.

  6. VFS interjudge reliability using a free and directed search.

    PubMed

    Bryant, Karen N; Finnegan, Eileen; Berbaum, Kevin

    2012-03-01

    Reports in the literature suggest that clinicians demonstrate poor reliability in rating videofluoroscopic swallow (VFS) variables. Contemporary perception theories suggest that the methods used in VFS reliability studies constrain subjects to make judgments in an abnormal way. The purpose of this study was to determine whether a directed search or a free search approach to rating swallow studies results in better interjudge reliability. Ten speech pathologists served as judges. Five clinical judges were assigned to the directed search group (use checklist) and five to the free search group (unguided observations). Clinical judges interpreted 20 VFS examinations of swallowing. Interjudge reliability of ratings of dysphagia severity, affected stage of swallow, dysphagia symptoms, and attributes identified by clinical judges using a directed search was compared with that using a free search approach. Interjudge reliability for rating the presence of aspiration and penetration was significantly better using a free search ("substantial" to "almost perfect" agreement) compared to a directed search ("moderate" agreement). Reliability of dysphagia severity ratings ranged from "moderate" to "almost perfect" agreement for both methods of search. Reliability for reporting all other symptoms and attributes of dysphagia was variable and was not significantly different between the groups.
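
    The agreement statistics behind labels like "moderate" or "almost perfect" are typically Cohen's kappa. A minimal implementation for two raters, with toy ratings assumed for illustration:

        import numpy as np

        def cohens_kappa(a, b):
            # chance-corrected agreement between two raters' categorical ratings
            a, b = np.asarray(a), np.asarray(b)
            cats = np.union1d(a, b)
            po = np.mean(a == b)                                       # observed agreement
            pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
            return (po - pe) / (1.0 - pe)

        rater_a = [1, 1, 2, 2, 3, 3, 1, 2]
        rater_b = [1, 1, 2, 3, 3, 3, 2, 2]
        print(cohens_kappa(rater_a, rater_b))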

  7. An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a sequential unconstrained minimization technique (SUMT). Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the further advantage of locating a compromise minimum design without the need to optimize for each objective function separately.
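
    To make the envelope construction concrete, here is a minimal Python sketch of the Kreisselmeir-Steinhauser (KS) idea under stated assumptions: the toy objective, constraint, goal value F_GOAL, and draw-down parameter RHO are illustrative, not taken from the paper. The KS function (1/rho) ln(sum_i exp(rho f_i(x))) is a smooth envelope of the function set, so an unconstrained minimizer applied to it is driven toward the constrained optimum.

    ```python
    # A minimal sketch of Kreisselmeir-Steinhauser (KS) envelope minimization.
    # The toy problem (objective and constraint) is hypothetical, not from the paper.
    import numpy as np
    from scipy.special import logsumexp
    from scipy.optimize import minimize

    RHO = 50.0      # draw-down parameter; larger values hug max() more tightly
    F_GOAL = 0.0    # goal used to reduce the objective to a constraint-like form

    def objective(x):
        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

    def constraint(x):              # feasible when g(x) <= 0
        return x[0] + x[1] - 2.0

    def ks_envelope(x):
        # Smooth, differentiable approximation of max(f - f_goal, g),
        # computed stably via logsumexp.
        values = np.array([objective(x) - F_GOAL, constraint(x)])
        return logsumexp(RHO * values) / RHO

    result = minimize(ks_envelope, x0=np.array([5.0, 5.0]), method="BFGS")
    print(result.x)  # unconstrained search drives toward a constrained minimum
    ```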

  8. Tapered whiskers are required for active tactile sensation.

    PubMed

    Hires, Samuel Andrew; Pammer, Lorenz; Svoboda, Karel; Golomb, David

    2013-11-19

    Many mammals forage and burrow in dark constrained spaces. Touch through facial whiskers is important during these activities, but the close quarters make whisker deployment challenging. The diverse shapes of facial whiskers reflect distinct ecological niches. Rodent whiskers are conical, often with a remarkably linear taper. Here we use theoretical and experimental methods to analyze interactions of mouse whiskers with objects. When pushed into objects, conical whiskers suddenly slip at a critical angle. In contrast, cylindrical whiskers do not slip for biologically plausible movements. Conical whiskers sweep across objects and textures in characteristic sequences of brief sticks and slips, which provide information about the tactile world. In contrast, cylindrical whiskers stick and remain stuck, even when sweeping across fine textures. Thus the conical whisker structure is adaptive for sensor mobility in constrained environments and in feature extraction during active haptic exploration of objects and surfaces. DOI: http://dx.doi.org/10.7554/eLife.01350.001.

  9. Reliability and Clinical Significance of Mobility and Balance Assessments in Multiple Sclerosis

    ERIC Educational Resources Information Center

    Learmonth, Yvonne C.; Paul, Lorna; McFadyen, Angus K.; Mattison, Paul; Miller, Linda

    2012-01-01

    The aim of the study was to establish the test-retest reliability, clinical significance and precision of four mobility and balance measures--the Timed 25-Foot Walk, Six-minute Walk, Timed Up and Go and the Berg Balance Scale--in individuals moderately affected by multiple sclerosis. Twenty four participants with multiple sclerosis (Extended…

  10. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    ERIC Educational Resources Information Center

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g., Corballis, 2009) emphasize the importance of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  11. A clinical test of stepping and change of direction to identify multiple falling older adults.

    PubMed

    Dite, Wayne; Temple, Viviene A

    2002-11-01

    To establish the reliability and validity of a new clinical test of dynamic standing balance, the Four Square Step Test (FSST), to evaluate its sensitivity, specificity, and predictive value in identifying subjects who fall, and to compare it with 3 established balance and mobility tests. A 3-group comparison performed by using 3 validated tests and 1 new test. A rehabilitation center and university medical school in Australia. Eighty-one community-dwelling adults over the age of 65 years. Subjects were age- and gender-matched to form 3 groups: multiple fallers, nonmultiple fallers, and healthy comparisons. Not applicable. Time to complete the FSST and Timed Up and Go test and the number of steps to complete the Step Test and Functional Reach Test distance. High reliability was found for interrater (n=30, intraclass correlation coefficient [ICC]=.99) and retest reliability (n=20, ICC=.98). Evidence for validity was found through correlation with other existing balance tests. Validity was supported, with the FSST showing significantly better performance scores (P<.01) for each of the healthier and less impaired groups. The FSST also revealed a sensitivity of 85%, a specificity of 88% to 100%, and a positive predictive value of 86%. As a clinical test, the FSST is reliable, valid, easy to score, quick to administer, requires little space, and needs no special equipment. It is unique in that it involves stepping over low objects (2.5cm) and movement in 4 directions. The FSST had higher combined sensitivity and specificity for identifying differences between groups in the selected sample population of older adults than the 3 tests with which it was compared. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
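
    For readers unfamiliar with the screening statistics quoted above, the following sketch computes sensitivity, specificity, and positive predictive value from a confusion matrix; the counts are hypothetical, chosen only to land near the reported rates, and are not the study's data.

    ```python
    # Hypothetical confusion-matrix sketch of the screening metrics reported
    # for the FSST; the counts below are illustrative, not the study's data.
    tp, fn = 23, 4      # multiple fallers correctly / incorrectly classified
    tn, fp = 24, 3      # non-fallers correctly / incorrectly classified

    sensitivity = tp / (tp + fn)    # proportion of fallers detected
    specificity = tn / (tn + fp)    # proportion of non-fallers cleared
    ppv = tp / (tp + fp)            # probability a positive test means a faller

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} ppv={ppv:.2f}")
    ```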

  12. New Approaches For Asteroid Spin State and Shape Modeling From Delay-Doppler Radar Images

    NASA Astrophysics Data System (ADS)

    Raissi, Chedy; Lamee, Mehdi; Mosiane, Olorato; Vassallo, Corinne; Busch, Michael W.; Greenberg, Adam; Benner, Lance A. M.; Naidu, Shantanu P.; Duong, Nicholas

    2016-10-01

    Delay-Doppler radar imaging is a powerful technique to characterize the trajectories, shapes, and spin states of near-Earth asteroids; it has yielded detailed models of dozens of objects. Reconstructing objects' shapes and spins from delay-Doppler data is a computationally intensive inversion problem. Since the 1990s, delay-Doppler data have been analyzed using the SHAPE software. SHAPE performs sequential single-parameter fitting, and requires considerable computer runtime and human intervention (Hudson 1993, Magri et al. 2007). Recently, multiple-parameter fitting algorithms have been shown to invert delay-Doppler datasets more efficiently (Greenberg & Margot 2015), decreasing runtime while improving accuracy. However, extensive human oversight of the shape modeling process is still required. We have explored two new techniques to better automate delay-Doppler shape modeling: Bayesian optimization and a machine-learning neural network. One of the most time-intensive steps of the shape modeling process is a grid search to constrain the target's spin state. We have implemented a Bayesian optimization routine that uses SHAPE to autonomously search the space of spin-state parameters. To test the efficacy of this technique, we compared it to results with human-guided SHAPE for asteroids 1992 UY4, 2000 RS11, and 2008 EV5. Bayesian optimization yielded similar spin-state constraints with roughly a factor of 3 less computer runtime. The shape modeling process could be further accelerated using a deep neural network to replace iterative fitting. We have implemented a neural network with a variational autoencoder (VAE), using a subset of known asteroid shapes and a large set of synthetic radar images as inputs to train the network. Conditioning the VAE in this manner allows the user to give the network a set of radar images and get a 3D shape model as an output. Additional development will be required to train a network to reliably render shapes from delay-Doppler images. This work was supported by NASA Ames, NVIDIA, Autodesk and the SETI Institute as part of the NASA Frontier Development Lab program.
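
    As an illustration of the grid-search replacement described above, the sketch below runs a Gaussian-process Bayesian optimization over two spin-state parameters, assuming the scikit-optimize package (one of several BO libraries). The `fit_residual` function is a hypothetical stand-in for invoking SHAPE and scoring the fit, and the parameter ranges are assumptions.

    ```python
    # Sketch of a Bayesian-optimization search over spin-state parameters,
    # standing in for a dense grid search. `fit_residual` is a hypothetical
    # placeholder for running SHAPE and scoring the fit quality.
    from skopt import gp_minimize  # scikit-optimize

    def fit_residual(params):
        period_hr, pole_lat_deg = params
        # Placeholder cost surface with a minimum near (7.5 h, 30 deg).
        return (period_hr - 7.5) ** 2 + 0.01 * (pole_lat_deg - 30.0) ** 2

    result = gp_minimize(
        fit_residual,
        dimensions=[(2.0, 24.0),      # spin period, hours
                    (-90.0, 90.0)],   # pole latitude, degrees
        n_calls=40,                   # far fewer evaluations than a dense grid
        random_state=0,
    )
    print(result.x, result.fun)
    ```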

  13. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a favorable theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm, based on agglomerative (ascendant) hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines. An affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
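
    A minimal sketch of the unsharp-masking enhancement step follows, assuming a simple gain-and-clip nonlinearity; the paper's exact nonlinear operator is not specified here, so the gain and clipping choices are illustrative.

    ```python
    # A minimal unsharp-masking sketch: amplify the high-frequency residual
    # and clip to the valid range (a simple stand-in nonlinearity).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, sigma=2.0, gain=1.5):
        blurred = gaussian_filter(image.astype(float), sigma=sigma)
        detail = image - blurred              # high-frequency residual
        enhanced = image + gain * detail      # amplify fine structure
        return np.clip(enhanced, 0.0, 255.0)  # clip back into display range

    image = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
    print(unsharp_mask(image).shape)
    ```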

  14. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  15. What does music express? Basic emotions and beyond

    PubMed Central

    Juslin, Patrik N.

    2013-01-01

    Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions. PMID:24046758

  16. Compression performance comparison in low delay real-time video for mobile applications

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2012-10-01

    This article compares the performance of several current video coding standards under low-delay real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.

  17. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY

    PubMed Central

    Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.

    2015-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198

  18. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of the employed images, captured from different sensors or under different conditions, through information fusion. Here we specifically address information fusion for remote sensing images using multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, or one more perceptually useful to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
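
    The following is a minimal sketch of wavelet-domain fusion of two co-registered frames in the spirit described above, assuming the PyWavelets package; averaging the approximation band and keeping the larger-magnitude detail coefficients is one common fusion rule, not necessarily the author's.

    ```python
    # A minimal two-image wavelet-fusion sketch: average the coarse band,
    # keep the sharper (larger-magnitude) detail coefficients, and invert.
    import numpy as np
    import pywt

    def fuse_pair(img_a, img_b, wavelet="db2", level=2):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]              # average the approximation
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
            fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
        return pywt.waverec2(fused, wavelet)         # reconstruct fused frame

    rng = np.random.default_rng(1)
    a, b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
    print(fuse_pair(a, b).shape)
    ```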

  19. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work a spur gear reduction unit is considered. A review of the relevant literature in the area of gear design indicates that compact design of a gearbox involves a complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are conflicting objectives. The focus is on developing a design space based on module, pinion teeth and face width by using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints drawn from the recent literature. Attention has been devoted to several novel constraints, such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs from various algorithms, such as the genetic algorithm, fmincon (constrained nonlinear minimization), and NSGA-II, are compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face width for a minimum centre distance and a maximum power transmission for any given material.
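
    As a rough illustration of the design-space idea (a grid over module, pinion teeth and face width, constraint filtering, then multi-objective selection), the sketch below enumerates a toy design space and extracts the Pareto set for minimum centre distance versus maximum transmitted power. The surrogate formulas and the face-width constraint are placeholders, not the paper's gear equations.

    ```python
    # Toy Pareto screening of a (module, pinion teeth, face width) grid.
    # The centre-distance and power formulas are illustrative surrogates.
    from itertools import product

    designs = []
    for m, z, b in product([2, 2.5, 3, 4],    # module, mm
                           range(17, 40),      # pinion teeth
                           [20, 30, 40, 50]):  # face width, mm
        centre_dist = m * z * (1 + 3.0) / 2.0  # toy: gear ratio fixed at 3
        power = 1e-3 * m ** 2 * z * b          # toy surrogate for capacity
        if b / m <= 16:                        # toy face-width constraint
            designs.append((centre_dist, -power, (m, z, b)))

    def pareto(points):
        # Keep points not dominated in both (minimized) objectives.
        front = []
        for p in points:
            dominated = any(
                q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                for q in points
            )
            if not dominated:
                front.append(p)
        return front

    for cd, negp, dv in sorted(pareto(designs))[:5]:
        print(f"centre distance {cd:6.1f} mm, power {-negp:6.2f}, (m,z,b)={dv}")
    ```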

  20. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    NASA Astrophysics Data System (ADS)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  1. Objects, Events and "to Be" Verbs in Spanish--An ERP Study of the Syntax-Semantics Interface

    ERIC Educational Resources Information Center

    Leone-Fernandez, Barbara; Molinaro, Nicola; Carreiras, Manuel; Barber, Horacio A.

    2012-01-01

    In Spanish, objects and events at subject position constrain the selection of different forms of the auxiliary verb "to be": locative predicates about objects require "estar en", while those relating to events require "ser en", both translatable as "to be in". Subjective ratings showed that while the "object + ser + en" is considered as incorrect,…

  2. CLON: Overlay Networks and Gossip Protocols for Cloud Environments

    NASA Astrophysics Data System (ADS)

    Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul

    Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource constrained links as happens in Cloud Computing infrastructures composed by several interconnected data centers across the globe.

  3. Measuring emotion regulation and emotional expression in breast cancer patients: A systematic review.

    PubMed

    Brandão, Tânia; Tavares, Rita; Schulz, Marc S; Matos, Paula Mena

    2016-02-01

    The important role of emotion regulation and expression in adaptation to breast cancer is now widely recognized. Studies have shown that optimal emotion regulation strategies, including less constrained emotional expression, are associated with better adaptation. Our objective was to systematically review measures used to assess the way women with breast cancer regulate their emotions. This systematic review was conducted in accordance with PRISMA guidelines. Nine different databases were searched. Data were independently extracted and assessed by two researchers. English-language articles that used at least one instrument to measure strategies to regulate emotions in women with breast cancer were included. Of 679 abstracts identified, 59 studies were deemed eligible for inclusion. Studies were coded regarding their objectives, methods, and results. We identified 16 instruments used to measure strategies of emotion regulation and expression. The most frequently employed instrument was the Courtauld Emotional Control Scale. Few psychometric properties other than internal consistency were reported for most instruments. Many studies did not include important information regarding descriptive characteristics and psychometric properties of the instruments used. The instruments used tap different aspects of emotion regulation. Specific instruments should be explored further with regard to content, validity, and reliability in the context of breast cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The challenge of mapping between two medical coding systems.

    PubMed

    Wojcik, Barbara E; Stein, Catherine R; Devore, Raymond B; Hassell, L Harrison

    2006-11-01

    Deployable medical systems patient conditions (PCs) designate groups of patients with similar medical conditions and, therefore, similar treatment requirements. PCs are used by the U.S. military to estimate field medical resources needed in combat operations. Information associated with each of the 389 PCs is based on subject matter expert opinion, instead of direct derivation from standard medical codes. Currently, no mechanisms exist to tie current or historical medical data to PCs. Our study objective was to determine whether reliable conversion between PC codes and International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes is possible. Data were analyzed for three professional coders assigning all applicable ICD-9-CM diagnosis codes to each PC code. Inter-rater reliability was measured by using Cohen's kappa statistic and percent agreement. Methods were developed to calculate kappa statistics when multiple responses could be selected from many possible categories. Overall, we found moderate support for the possibility of reliable conversion between PCs and ICD-9-CM diagnoses (mean kappa = 0.61). Current PCs should be modified into a system that is verifiable with real data.
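
    For reference, a minimal sketch of the standard single-label Cohen's kappa computation follows; the study's extension to multiple responses per item is more elaborate, and the two rating vectors below (ICD-9-CM-style codes) are hypothetical.

    ```python
    # Cohen's kappa for two raters assigning one label per item:
    # kappa = (p_observed - p_expected) / (1 - p_expected).
    import numpy as np

    def cohens_kappa(rater_a, rater_b):
        labels = sorted(set(rater_a) | set(rater_b))
        idx = {lab: i for i, lab in enumerate(labels)}
        table = np.zeros((len(labels), len(labels)))
        for a, b in zip(rater_a, rater_b):
            table[idx[a], idx[b]] += 1
        table /= table.sum()
        p_observed = np.trace(table)              # agreement on the diagonal
        p_expected = table.sum(1) @ table.sum(0)  # chance agreement from marginals
        return (p_observed - p_expected) / (1 - p_expected)

    a = ["820.0", "820.0", "823.2", "E884.9", "823.2"]  # hypothetical codes
    b = ["820.0", "823.2", "823.2", "E884.9", "823.2"]
    print(round(cohens_kappa(a, b), 3))
    ```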

  5. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.

  6. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  7. Understanding and Integrating Multiple Science Texts: Summary Tasks Are Sometimes Better than Argument Tasks

    ERIC Educational Resources Information Center

    Gil, Laura; Braten, Ivar; Vidal-Abarca, Eduardo; Stromso, Helge I.

    2010-01-01

    One of the major challenges of a knowledge society is that students as well as other citizens must learn to understand and integrate information from multiple textual sources. Still, task and reader characteristics that may facilitate or constrain such intertextual processes are not well understood by researchers. In this study, we compare the…

  8. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
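
    The following toy sketch illustrates the core idea under stated assumptions: two hidden subpopulations obey the same ODE with different rate constants, and a mixture model fitted to snapshot data recovers the subpopulation structure. The one-state ODE, rates, subpopulation sizes, and noise level are made up; the paper's model and inference scheme are substantially richer.

    ```python
    # Toy ODE-constrained-mixture illustration: simulate two subpopulations
    # with different kinetic rates, then fit a two-component mixture to the
    # snapshot data observed at the final time point.
    import numpy as np
    from scipy.integrate import odeint
    from sklearn.mixture import GaussianMixture

    def phospho(y, t, k):               # toy one-state phosphorylation ODE
        return k * (1.0 - y)

    t = np.linspace(0.0, 2.0, 6)
    rng = np.random.default_rng(0)
    snapshots = []
    for k, n in [(0.3, 300), (1.2, 200)]:             # two hidden subpopulations
        traj = odeint(phospho, y0=0.0, t=t, args=(k,)).ravel()
        noisy = traj[-1] + 0.05 * rng.normal(size=n)  # cells observed at t = 2
        snapshots.append(noisy)
    data = np.concatenate(snapshots).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print(gmm.weights_.round(2), gmm.means_.ravel().round(2))
    ```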

  9. Properties of an eclipsing double white dwarf binary NLTT 11748

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.

    2014-01-10

    We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M⊙) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M⊙) and surface gravity (log(g) = 6.32-6.38; radii are 0.0423-0.0433 R⊙) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to e cos ω = (−4 ± 5) × 10⁻⁵. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).

  10. Robust object matching for persistent tracking with heterogeneous features.

    PubMed

    Guo, Yanlin; Hsu, Steve; Sawhney, Harpreet S; Kumar, Rakesh; Shan, Ying

    2007-05-01

    This paper addresses the problem of matching vehicles across multiple sightings under variations in illumination and camera poses. Since multiple observations of a vehicle are separated by large temporal and/or spatial gaps, prohibiting the use of standard frame-to-frame data association, we employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. Furthermore, since our domain is aerial video tracking, in order to deal with poor image quality and large resolution and quality variations, our approach employs robust alignment and match measures for different stages of vehicle matching. Most notably, we employ a heterogeneous collection of features such as lines, points, and regions in an integrated matching framework. Heterogeneous features are shown to be important. Line and point features provide accurate localization and are employed for robust alignment across disparate views. The challenges of change in pose, aspect, and appearance across two disparate observations are handled by combining a novel feature-based quasi-rigid alignment with flexible matching between two or more sequences. However, since lines and points are relatively sparse, they are not adequate to delineate the object and provide a comprehensive matching set that covers the complete object. Region features provide a high degree of coverage and are employed for continuous frames to provide a delineation of the vehicle region for subsequent generation of a match measure. Our approach reliably delineates objects by representing regions as robust blob features and matching multiple regions to multiple regions using the Earth Mover's Distance (EMD). Extensive experimentation under a variety of real-world scenarios and over hundreds of thousands of Confirmatory Identification (CID) trials has demonstrated about 95 percent accuracy in vehicle reacquisition with both visible and infrared (IR) imaging cameras.
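
    As a simplified illustration of the EMD-based region matching, the sketch below scores two region signatures reduced to 1-D intensity samples using SciPy's Wasserstein distance; the actual method matches multiple 2-D blob features, which this deliberately simplifies, and the signatures here are synthetic.

    ```python
    # EMD-style match score between two region signatures, reduced here to
    # 1-D intensity distributions; smaller distance = better match.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    region_a = rng.normal(loc=120, scale=15, size=500)  # intensities, sighting 1
    region_b = rng.normal(loc=125, scale=18, size=500)  # intensities, sighting 2

    score = wasserstein_distance(region_a, region_b)
    print(f"EMD-style distance: {score:.2f}")
    ```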

  11. Objectives, priorities, reliable knowledge, and science-based management of Missouri River interior least terns and piping plovers

    USGS Publications Warehouse

    Sherfy, Mark; Anteau, Michael J.; Shaffer, Terry; Sovada, Marsha; Stucker, Jennifer

    2011-01-01

    Supporting recovery of federally listed interior least tern (Sternula antillarum athalassos; tern) and piping plover (Charadrius melodus; plover) populations is a desirable goal in management of the Missouri River ecosystem. Many tools are implemented in support of this goal, including habitat management, annual monitoring, directed research, and threat mitigation. Similarly, many types of data can be used to make management decisions, evaluate system responses, and prioritize research and monitoring. The ecological importance of Missouri River recovery and the conservation status of terns and plovers place a premium on efficient and effective resource use. Efficiency is improved when a single data source informs multiple high-priority decisions, whereas effectiveness is improved when decisions are informed by reliable knowledge. Seldom will a single study design be optimal for addressing all data needs, making prioritization of needs essential. Data collection motivated by well-articulated objectives and priorities has many advantages over studies in which questions and priorities are determined retrospectively. Research and monitoring for terns and plovers have generated a wealth of data that can be interpreted in a variety of ways. The validity and strength of conclusions from analyses of these data is dependent on compatibility between the study design and the question being asked. We consider issues related to collection and interpretation of biological data, and discuss their utility for enhancing the role of science in management of Missouri River terns and plovers. A team of USGS scientists at Northern Prairie Wildlife Research Center has been conducting tern and plover research on the Missouri River since 2005. The team has had many discussions about the importance of setting objectives, identifying priorities, and obtaining reliable information to answer pertinent questions about tern and plover management on this river system. The objectives of this presentation are to summarize those conversations and to share insights about concepts that could contribute to rigorous science support for management of this river system.

  12. Visual Search as a Tool for a Quick and Reliable Assessment of Cognitive Functions in Patients with Multiple Sclerosis

    PubMed Central

    Utz, Kathrin S.; Hankeln, Thomas M. A.; Jung, Lena; Lämmer, Alexandra; Waschbisch, Anne; Lee, De-Hyung; Linker, Ralf A.; Schenk, Thomas

    2013-01-01

    Background Despite the high frequency of cognitive impairment in multiple sclerosis, its assessment has not yet gained entrance into clinical routine, due to a lack of time-saving tests suitable for patients with multiple sclerosis. Objective The aim of the study was to compare the paradigm of visual search with neuropsychological standard tests, in order to identify the test that discriminates best between patients with multiple sclerosis and healthy individuals concerning cognitive functions, without being susceptible to practice effects. Methods Patients with relapsing-remitting multiple sclerosis (n = 38) and age- and gender-matched healthy individuals (n = 40) were tested with common neuropsychological tests and a computer-based visual search task, whereby a target stimulus has to be detected amongst distracting stimuli on a touch screen. Twenty-eight of the healthy individuals were re-tested in order to determine potential practice effects. Results Mean reaction time, reflecting visual attention, and movement time, indicating motor execution, in the visual search task discriminated best between healthy individuals and patients with multiple sclerosis, without practice effects. Conclusions Visual search is a promising instrument for the assessment of cognitive functions and potentially cognitive changes in patients with multiple sclerosis, thanks to its good discriminatory power and insusceptibility to practice effects. PMID:24282604

  13. Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.

    PubMed

    Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei

    2016-11-02

    Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing the fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and thereby significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault-tolerant fusion framework provides superior performance over its traditional counterparts.
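
    A minimal sketch of the innovation-based chi-square gating that underlies fault tests of this kind follows; the innovation vector and covariance below are illustrative placeholders, not values from the simulated navigation system.

    ```python
    # Innovation-based chi-square fault gate: compare the normalized
    # innovation squared (NIS) against a chi-square threshold.
    import numpy as np
    from scipy.stats import chi2

    innovation = np.array([1.8, -0.4])          # measurement minus prediction
    S = np.array([[1.0, 0.1],                   # innovation covariance
                  [0.1, 0.8]])

    nis = innovation @ np.linalg.solve(S, innovation)  # normalized innovation sq.
    threshold = chi2.ppf(0.99, df=innovation.size)     # 99% gate, df = dimension

    print(f"NIS={nis:.2f}, threshold={threshold:.2f}, fault={nis > threshold}")
    ```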

  15. Coordinative Voltage Control Strategy with Multiple Resources for Distribution Systems of High PV Penetration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiangqi; Zhang, Yingchen

    This paper presents an optimal voltage control methodology with coordination among different voltage-regulating resources, including controllable loads, distributed energy resources such as energy storage and photovoltaics (PV), and utility voltage-regulating devices such as voltage regulators and capacitors. The proposed methodology could effectively tackle the overvoltage and voltage regulation device distortion problems brought by high penetrations of PV to improve grid operation reliability. A voltage-load sensitivity matrix and voltage-regulator sensitivity matrix are used to deploy the resources along the feeder to achieve the control objectives. Mixed-integer nonlinear programming is used to solve the formulated optimization control problem. The methodology has been tested on the IEEE 123-feeder test system, and the results demonstrate that the proposed approach could actively tackle the voltage problem brought about by high penetrations of PV and improve the reliability of distribution system operation.

  16. Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    PubMed Central

    Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen

    2011-01-01

    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e., weight uncertain cues according to their respective reliabilities, and even do so if reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717
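
    For concreteness, the Bayesian benchmark that the reinforcement learner approaches is reliability-weighted cue combination, sketched below with illustrative numbers: each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone.

    ```python
    # Reliability-weighted cue combination: w_i proportional to 1/sigma_i^2.
    import numpy as np

    cues = np.array([10.0, 12.0])    # e.g., visual and auditory position estimates
    sigmas = np.array([1.0, 2.0])    # cue noise; smaller sigma = more reliable

    weights = (1.0 / sigmas**2) / np.sum(1.0 / sigmas**2)
    estimate = np.sum(weights * cues)
    combined_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas**2))

    print(weights, round(estimate, 2), round(combined_sigma, 2))  # [0.8 0.2] 10.4 0.89
    ```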

  17. Comparisons of a Constrained Least Squares Model versus Human-in-the-Loop for Spectral Unmixing to Determine Material Type of GEO Debris

    NASA Technical Reports Server (NTRS)

    Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan

    2013-01-01

    The constrained linear least-squares model is generally more accurate than the human-in-the-loop approach. However, a human in the loop can remove materials that make no physical sense. The speed of the model in determining a "first cut" at the material ID makes it a viable option for spectral unmixing of debris objects.
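
    A minimal sketch of the constrained least-squares unmixing step follows, assuming a non-negativity constraint on abundances (one standard choice; a sum-to-one constraint is often added as well). The four-band spectral library and material names are made up for illustration.

    ```python
    # Non-negative least-squares spectral unmixing: solve for abundances >= 0
    # given a (hypothetical) library of laboratory spectra.
    import numpy as np
    from scipy.optimize import nnls

    # Columns: made-up lab spectra (aluminum, solar cell, white paint).
    library = np.array([[0.80, 0.20, 0.90],
                        [0.75, 0.35, 0.88],
                        [0.70, 0.55, 0.85],
                        [0.68, 0.60, 0.83]])

    true_mix = np.array([0.6, 0.3, 0.1])
    observed = library @ true_mix + 0.005 * np.random.default_rng(0).normal(size=4)

    abundances, residual = nnls(library, observed)  # non-negativity enforced
    print(abundances.round(3), f"residual={residual:.4f}")  # ~ recovers true_mix
    ```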

  18. Constraining black holes with light boson hair and boson stars using epicyclic frequencies and quasiperiodic oscillations

    NASA Astrophysics Data System (ADS)

    Franchini, Nicola; Pani, Paolo; Maselli, Andrea; Gualtieri, Leonardo; Herdeiro, Carlos A. R.; Radu, Eugen; Ferrari, Valeria

    2017-06-01

    Light bosonic fields are ubiquitous in extensions of the Standard Model. Even when minimally coupled to gravity, these fields might evade the assumptions of the black-hole no-hair theorems and give rise to spinning black holes which can be drastically different from the Kerr metric. Furthermore, they allow for self-gravitating compact solitons, known as (scalar or Proca) boson stars. The quasiperiodic oscillations (QPOs) observed in the x-ray flux emitted by accreting compact objects carry information about the strong-field region, thus providing a powerful tool to constrain deviations from Kerr's geometry and to search for exotic compact objects. By using the relativistic precession model as a proxy to interpret the QPOs in terms of geodesic frequencies, we investigate how the QPO frequencies could be used to test the no-hair theorem and the existence of light bosonic fields near accreting compact objects. We show that a detection of two QPO triplets with current sensitivity can already constrain these models and that the future eXTP mission or a LOFT-like mission can set very stringent constraints on black holes with bosonic hair and on (scalar or Proca) boson stars. The peculiar geodesic structure of compact scalar/Proca boson stars implies that these objects can easily be ruled out as alternative models for x-ray source GRO J1655-40.

  19. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
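
    The key mechanical step can be sketched simply: a chance constraint P(runoff ≤ capacity) ≥ alpha with log-normal runoff reduces to requiring the capacity to exceed the alpha-quantile of the log-normal distribution. The parameters below are illustrative, not the Erhai Lake values.

    ```python
    # Deterministic equivalent of a single log-normal chance constraint:
    # required capacity = alpha-quantile of the log-normal runoff variable.
    import numpy as np
    from scipy.stats import lognorm

    mu, sigma = 2.0, 0.5    # log-space parameters of the runoff variable
    for alpha in (0.90, 0.95, 0.99):
        quantile = lognorm.ppf(alpha, s=sigma, scale=np.exp(mu))
        print(f"alpha={alpha:.2f} -> required capacity >= {quantile:6.2f}")
    # Higher confidence demands more capacity: the economy/reliability trade-off.
    ```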

  20. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    NASA Astrophysics Data System (ADS)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling regional MSW management problem under a fuzzy environment, where waste generation amounts are supposed to be type 2 fuzzy variables and treated capacities of facilities are assumed to be type 1 fuzzy variables. The evaluation and expression of uncertainty overcome the drawbacks in describing fuzzy possibility distributions as oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers to generate reasonable waste-allocation alternatives under uncertainties.

  1. The better way to determine the validity, reliability, objectivity and accuracy of measuring devices.

    PubMed

    Pazira, Parvin; Rostami Haji-Abadi, Mahdi; Zolaktaf, Vahid; Sabahi, Mohammadfarzan; Pazira, Toomaj

    2016-06-08

    In relation to statistical analysis, studies to determine the validity, reliability, objectivity and precision of new measuring devices are usually incomplete, due in part to using only the correlation coefficient and ignoring the data dispersion. The aim of this study was to demonstrate the best way to determine the validity, reliability, objectivity and accuracy of an electro-inclinometer or other measuring devices. Another purpose of this study was to answer the question of whether reliability and objectivity represent the accuracy of measuring devices. The validity of an electro-inclinometer was examined by mechanical and geometric methods. The objectivity and reliability of the device were assessed by calculating Cronbach's alpha for repeated measurements by three raters and for measurements on the same person by a mechanical goniometer and the electro-inclinometer. Measurements were performed on "hip flexion with the extended knee" and "shoulder abduction with the extended elbow." The raters measured every angle three times within an interval of two hours. A three-way ANOVA was used to determine accuracy. The results of the mechanical and geometric analysis showed that the validity of the electro-inclinometer was 1.00 and the level of error was less than one degree. The objectivity and reliability of the electro-inclinometer were 0.999, while the objectivity of the mechanical goniometer was in the range of 0.802 to 0.966 and its reliability was 0.760 to 0.961. For hip flexion, the difference between raters in joint-angle measurement by the electro-inclinometer and the mechanical goniometer was 1.74 and 16.33 degrees (P<0.05), respectively. The differences for shoulder abduction measurement by the electro-inclinometer and goniometer were 0.35 and 4.40 degrees (P<0.05). Although both the objectivity and reliability were acceptable, the results showed that measurement error was very high for the mechanical goniometer. Therefore, it can be concluded that objectivity and reliability alone cannot determine the accuracy of a device, and it is preferable to use other statistical methods to compare and evaluate the accuracy of these two devices.
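
    For reference, a minimal sketch of the Cronbach's alpha computation used for the repeated measurements follows; the rater-by-subject matrix below is synthetic, not the study's data.

    ```python
    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
    import numpy as np

    def cronbach_alpha(scores):             # rows = subjects, cols = raters/items
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(0)
    truth = rng.uniform(80, 120, size=(10, 1))              # true joint angles
    ratings = truth + rng.normal(scale=1.0, size=(10, 3))   # three raters, small error
    print(round(cronbach_alpha(ratings), 3))                # near 1.0 for consistent raters
    ```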

  2. Statistical Learning Is Constrained to Less Abstract Patterns in Complex Sensory Input (but not the Least)

    PubMed Central

    Emberson, Lauren L.; Rubinstein, Dani

    2016-01-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1— dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study is already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779

  3. Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Kwang, Abel

    1994-01-01

    This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, as well as partially and fully parallel, environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.

  4. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show that several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of the population, but only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.

  5. Choosing a reliability inspection plan for interval censored data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lu; Anderson-Cook, Christine Michaela

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show that several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time of a small fraction of the population, but only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation-based approach in addition to the common evaluation based on the asymptotic variance and offers comparisons and recommendations for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for different common scenarios.

  6. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric: the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when multiple metrics are under consideration: a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
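
    The trade-off problem described here can be sketched in a few lines of Python: enumerate candidate weight vectors and keep the non-dominated ones. For illustration only, the sketch assumes the ensemble error under each metric is a weighted combination of member errors, which real evaluation metrics need not satisfy:

        import numpy as np

        rng = np.random.default_rng(0)
        err_a = np.array([0.8, 1.2, 1.0])   # metric-A error of 3 models (toy)
        err_b = np.array([1.1, 0.7, 1.0])   # metric-B error of the same models

        W = rng.dirichlet(np.ones(3), size=2000)      # candidate weightings
        scores = np.c_[W @ err_a, W @ err_b]          # ensemble error per metric

        def pareto_mask(pts):
            """True where no other point is better in all metrics (minimization)."""
            keep = np.ones(len(pts), dtype=bool)
            for i, p in enumerate(pts):
                dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
                keep[i] = not dominated.any()
            return keep

        front = W[pareto_mask(scores)]
        print(f"{len(front)} non-dominated weightings found")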

  7. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
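
    For readers unfamiliar with flux balance analysis, the core computation is a linear program. The Python sketch below solves a four-reaction toy network (not MultiMetEval or SurreyFBA itself); the Pareto front between two cellular objectives can then be traced by fixing one objective at successive levels and re-optimizing the other:

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometry: v0 imports A, v1 converts A->B, v2 makes
        # "biomass" from B, v3 exports B. Steady state requires S v = 0.
        S = np.array([
            [1, -1,  0,  0],    # metabolite A balance
            [0,  1, -1, -1],    # metabolite B balance
        ])
        bounds = [(0, 10)] * 4          # flux bounds (illustrative)
        c = np.array([0, 0, -1.0, 0])   # maximize v2 => minimize -v2

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("max biomass flux:", -res.fun)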

  8. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
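
    The flavor of such neurodynamic optimization can be conveyed with a toy Python sketch: an Euler-discretized subgradient flow for the least absolute deviation problem min ||Ax - b||_1. This is only an analogue; the paper's one-layer model adds a hard-limiting activation and a gain condition that yield finite-time convergence, which this sketch does not reproduce:

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(50, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + rng.laplace(scale=0.1, size=50)

        x, sigma, dt = np.zeros(3), 0.5, 1e-3
        for _ in range(20000):
            # subgradient flow dx/dt = -sigma * A^T sign(Ax - b)
            x -= dt * sigma * A.T @ np.sign(A @ x - b)
        print(np.round(x, 2))   # close to x_true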

  9. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters, by means of a continuation technique, allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
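
    The reweighting idea generalizes beyond MRI; a minimal 1-D Python analogue is shown below (the paper's FNCR works on 2-D images with a Fourier-domain fidelity constraint, so this toy version with a direct denoising fidelity is an assumption for illustration). Each outer iteration freezes weights that penalize small gradients strongly and large gradients (edges) weakly, then solves the resulting weighted quadratic exactly:

        import numpy as np

        rng = np.random.default_rng(2)
        truth = np.repeat([0.0, 1.0, 0.3], 40)           # piecewise-constant
        y = truth + 0.05 * rng.normal(size=truth.size)   # noisy data
        n = y.size
        D = np.diff(np.eye(n), axis=0)                   # finite differences

        u, lam, p, eps = y.copy(), 0.5, 0.5, 1e-3
        for _ in range(30):
            w = ((D @ u) ** 2 + eps) ** (p / 2 - 1)      # IRLS weights, p < 1
            u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
        print("mean abs error:", np.abs(u - truth).mean())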

  10. Constraining movement alters the recruitment of motor processes in mental rotation.

    PubMed

    Moreau, David

    2013-02-01

    Does mental rotation depend on the readiness to act? Recent evidence indicates that the involvement of motor processes in mental rotation is experience-dependent, suggesting that different levels of expertise in sensorimotor interactions lead to different strategies to solve mental rotation problems. Specifically, experts in motor activities perceive spatial material as objects that can be acted upon, triggering covert simulation of rotations. Because action simulation depends on the readiness to act, movement restriction should therefore disrupt mental rotation performance in individuals favoring motor processes. In this experiment, wrestlers and non-athletes judged whether pairs of three-dimensional stimuli were identical or different, with their hands either constrained or unconstrained. Wrestlers showed higher performance than controls in the rotation of geometric stimuli, but this difference disappeared when their hands were constrained. However, movement restriction had similar consequences for both groups in the rotation of hands. These findings suggest that experts' advantage in mental rotation of abstract objects is based on the readiness to act, even when physical manipulation is impossible.

  11. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques

    PubMed Central

    2012-01-01

    Background Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. Methods A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan’s year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. Results ICCs were generated for Taiwan’s year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. Conclusion We recommend using the ICC to annually assess a nation’s year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system. PMID:22587736
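
    The two computations described, control-chart scaling and the ICC, can be sketched in Python on simulated data (the hospital count and 12-month baseline follow the abstract; the data and the one-way ICC formula used here are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(3)
        monthly = rng.normal(100, 10, size=(40, 120))   # 40 hospitals, 10 years

        def yearly_sd_distances(x):
            """Distance of each year's mean from the previous 12 months' mean,
            in units of the previous 12 months' SD (control-chart scaling)."""
            out = []
            for yr in range(1, 10):
                prior = x[(yr - 1) * 12: yr * 12]
                this = x[yr * 12: (yr + 1) * 12]
                out.append((this.mean() - prior.mean()) / prior.std(ddof=1))
            return out

        scores = np.array([yearly_sd_distances(h) for h in monthly])  # 40 x 9

        k = scores.shape[1]
        ms_between = k * scores.mean(axis=1).var(ddof=1)
        ms_within = scores.var(axis=1, ddof=1).mean()
        icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
        print(round(icc, 3))   # near zero here: simulated hospitals are i.i.d.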

  12. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques.

    PubMed

    Chien, Tsair-Wei; Chou, Ming-Ting; Wang, Wen-Chung; Tsai, Li-Shu; Lin, Weir-Sen

    2012-05-15

    Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan's year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. ICCs were generated for Taiwan's year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. We recommend using the ICC to annually assess a nation's year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system.

  13. VIRUS ISOLATION AND MOLECULAR DETECTION OF BLUETONGUE AND EPIZOOTIC HEMORRHAGIC DISEASE VIRUSES FROM NATURALLY INFECTED WHITE-TAILED DEER (ODOCOILEUS VIRGINIANUS).

    PubMed

    Kienzle, Clara; Poulson, Rebecca L; Ruder, Mark G; Stallknecht, David E

    2017-10-01

    Hemorrhagic disease in North America is caused by multiple serotypes of epizootic hemorrhagic disease virus (EHDV) and bluetongue virus (BTV). Diagnostic tests for detection of EHDV and BTV include virus isolation (VI), reverse transcriptase (RT)-PCR, and real-time RT-PCR (rRT-PCR). Our objective was to compare the diagnostic capabilities of three rRT-PCR protocols for detection of EHDV and BTV from naturally infected white-tailed deer (Odocoileus virginianus). We compared the effectiveness of these assays to traditional viral detection methods (e.g., VI) for historic and current clinical cases. Because of the variable nature of tissue collection and storage before diagnostic testing, an evaluation of viral persistence on multiple freeze-thaw events was also conducted. Two of the rRT-PCR assays provided for reliable detection of EHDV and BTV from 100% of clinically affected and VI-confirmed infected animals. Additionally, no significant change in viral titer was observed on multiple freeze-thaw events.

  14. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, Mark W.; George, William A.; Pai, Robert Y.

    1985-01-01

    A technique for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support.

  15. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  16. Analysis of the psychometric properties of the Multiple Sclerosis Impact Scale-29 (MSIS-29) in relapsing–remitting multiple sclerosis using classical and modern test theory

    PubMed Central

    Wyrwich, KW; Phillips, GA; Vollmer, T; Guo, S

    2016-01-01

    Background Investigations using classical test theory support the psychometric properties of the original version of the Multiple Sclerosis Impact Scale (MSIS-29v1), a disease-specific measure of multiple sclerosis (MS) impact (physical and psychological subscales). Later, assessments of the MSIS-29v1 in an MS community-based sample using Rasch analysis led to revisions of the instrument’s response options (MSIS-29v2). Objective The objective of this paper is to evaluate the psychometric properties of the MSIS-29v1 in a clinical trial cohort of relapsing–remitting MS patients (RRMS). Methods Data from 600 patients with RRMS enrolled in the SELECT clinical trial were used. Assessments were performed at baseline and at Weeks 12, 24, and 52. In addition to traditional psychometric analyses, Item Response Theory (IRT) and Rasch analysis were used to evaluate the measurement properties of the MSIS-29v1. Results Both MSIS-29v1 subscales demonstrated strong reliability, construct validity, and responsiveness. The IRT and Rasch analysis showed overall support for response category threshold ordering, person-item fit, and item fit for both subscales. Conclusions Both MSIS-29v1 subscales demonstrated robust measurement properties using classical, IRT, and Rasch techniques. Unlike previous research using a community-based sample, the MSIS-29v1 was found to be psychometrically sound to assess physical and psychological impairments in a clinical trial sample of patients with RRMS. PMID:28607741

  17. System-wide versus component-specific trust using multiple aids.

    PubMed

    Keller, David; Rice, Stephen

    2010-01-01

    Previous research in operator trust toward automated aids has focused primarily on single aids. The current study focuses on how operator trust is affected by the presence of multiple aids. Two competing theories of multiple-trust are presented. A component-specific trust theory predicts that operators will differentially place their trust in automated aids that vary in reliability. A system-wide trust theory predicts that operators will treat multiple imperfect aids as one "system" and merge their trust across aids despite differences in the aids' reliability. A simulated flight task was used to test these theories, whereby operators performed a pursuit tracking task while concurrently monitoring multiple system gauges that were augmented with perfect or imperfect automated aids. The data revealed that a system-wide trust theory best predicted the data; operators merged their trust across both aids, behaving toward a perfectly reliable aid in the same manner as they did towards unreliable aids.

  18. The relationship between organisational communication and perception.

    PubMed

    Marynissen, H M F

    2011-01-01

    Both researchers and managers search for the most appropriate form of organisational communication. The aim of such organisational communication is to influence the receivers' perception so as to confirm, adapt or change behaviour according to the sender's intention. This paper argues that to influence the receivers' perception, a specific form of communication that is embedded in a specific organisational culture is required. It also demands prior knowledge of the existing organisational schemata and the current perception concerning the topic that has to be communicated. The rationale is that three obstacles hinder the objectives of traditional communication strategies to influence perception according to the sender's objectives. The first obstacle is that a receiver of a certain message never garners one single, clearly pronounced message conveyed by one single person; yet few studies are based on multiple messages from various sources. This makes most of the communication strategies in use obsolete. The second is the dual mode of thinking that forms organisational members' perceptions: the heuristic and the cogitative (Taleb, 2010). Most organisational communication theories are based on the paradigm in which receivers of information process this information in a rational way, while research in the field of neurobiology (Lehrer, 2009) indicates that rationality is dominated by emotions. The third is that organisational members cling to well-established, ingrained schemas (Labianca et al., 2000; Balogun and Johnson, 2004). Based on these existing schemas, the scattered information from multiple sources, and the inability to process that information through cognitive reasoning, organisational members construct perceptions that are not in line with the objectives of the sender's communication. This article reviews different communication theories, points out key concepts in the literature on individual and collective perceptions, and suggests directions for further research.

  19. Participation in Decision Making as a Property of Complex Adaptive Systems: Developing and Testing a Measure

    PubMed Central

    Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.

    2013-01-01

    Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771

  20. Participation in decision making as a property of complex adaptive systems: developing and testing a measure.

    PubMed

    Anderson, Ruth A; Plowman, Donde; Corazzini, Kirsten; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R; McDaniel, Reuben R

    2013-01-01

    Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them.

  1. A Comparison of JPDA and Belief Propagation for Data Association in SSA

    NASA Astrophysics Data System (ADS)

    Rutten, M.; Williams, J.; Gordon, N.; Jah, M.; Baldwin, J.; Stauch, J.

    2014-09-01

    The process of initial orbit determination, or catalogue maintenance, using a set of unlabeled observations requires a method of choosing which observation was due to which object. The realities of imperfect sensors mean that the association must be made in the presence of both missed detections and false alarms. Data association is not only essential to processing observations; it can also be one of the most significant computational bottlenecks. The constrained admissible region multiple hypothesis filter (CAR-MHF) is an algorithm for initial orbit determination using short-arc observations of space objects. CAR-MHF has used joint probabilistic data association (JPDA), a well-established approach to multi-target data association. A recent development in the target tracking literature is the use of graphical models to formulate data association problems. Running an approximate inference algorithm, belief propagation (BP), on the graphical model results in an algorithm that is both computationally efficient and accurate. This paper compares CAR-MHF using JPDA and CAR-MHF using BP for the problem of initial orbit determination on a set of deep-space objects. The results of the analysis show that by using the BP algorithm there are significant gains in computational load without any statistically significant loss in the overall performance of the orbit determination.
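
    The JPDA step at the heart of this comparison can be illustrated with a tiny Python sketch: enumerate the joint association events for two targets and three measurements, weight each event by detection and likelihood factors, and sum to obtain marginal association probabilities. All numbers are toy values, and the CAR-MHF machinery and the BP alternative are beyond this sketch:

        import numpy as np
        from scipy.stats import multivariate_normal

        PD, clutter = 0.9, 1e-3   # detection probability, false-alarm density
        targets = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
        meas = [np.array([0.1, 0.2]), np.array([3.8, -0.1]), np.array([9.0, 9.0])]
        lik = np.array([[multivariate_normal(t, np.eye(2)).pdf(m)
                         for m in meas] for t in targets])

        options = [-1, 0, 1, 2]           # -1 means the target was missed
        post = np.zeros_like(lik)
        p_miss = np.zeros(2)
        for a0 in options:
            for a1 in options:
                if a0 == a1 != -1:
                    continue              # one measurement has one source
                w = 1.0
                for t, j in enumerate((a0, a1)):
                    w *= (1 - PD) if j == -1 else PD * lik[t, j] / clutter
                for t, j in enumerate((a0, a1)):
                    if j == -1:
                        p_miss[t] += w
                    else:
                        post[t, j] += w
        norm = post.sum(axis=1) + p_miss
        print(np.round(post / norm[:, None], 3))   # marginal association probs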

  2. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential, and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
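
    For reference, commonly studied parameterizations of these three models take the following forms (a sketch in the conventions most often used in the f(T) literature; exact normalizations of the free parameters vary from paper to paper):

        % power-law, exponential, and square-root-exponential f(T) models
        \begin{align*}
          f(T) &= T + \alpha\,(-T)^{b}, \\
          f(T) &= T + \alpha T_{0}\left(1 - e^{-p\,T/T_{0}}\right), \\
          f(T) &= T + \alpha T_{0}\left(1 - e^{-p\sqrt{T/T_{0}}}\right),
        \end{align*}
        % with torsion scalar T = -6H^2 and T_0 its present-day value; BBN
        % bounds the free parameters by limiting the deviation of H from its
        % general-relativistic value during nucleosynthesis.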

  3. Preparation of titanium oxide ceramic membranes

    DOEpatents

    Anderson, Marc A.; Xu, Qunyin

    1992-01-01

    A procedure is disclosed for the reliable production of either particulate or polymeric titanium ceramic membranes by a highly constrained sol-gel procedure. The critical constraints in the procedure include the choice of alkyl alcohol solvent, the amount of water and its rate of addition, the pH of the solution during hydrolysis, and the limit of sintering temperature applied to the resulting gels.

  4. Can Machine Scoring Deal with Broad and Open Writing Tests as Well as Human Readers?

    ERIC Educational Resources Information Center

    McCurry, Doug

    2010-01-01

    This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…

  5. Brief Report: Telephone Administration of the Autism Diagnostic Interview-Revised--Reliability and Suitability for Use in Research

    ERIC Educational Resources Information Center

    Ward-King, Jessica; Cohen, Ira L.; Penning, Henderika; Holden, Jeanette J. A.

    2010-01-01

    The Autism Diagnostic Interview-Revised is one of the "gold standard" diagnostic tools for autism spectrum disorders. It is traditionally administered face-to-face. Cost and geographical concerns constrain the employment of the ADI-R for large-scale research projects. The telephone interview is a reasonable alternative, but has not yet been…

  6. Preparation of titanium oxide ceramic membranes

    DOEpatents

    Anderson, M.A.; Xu, Q.

    1992-03-17

    A procedure is disclosed for the reliable production of either particulate or polymeric titanium ceramic membranes by a highly constrained sol-gel procedure. The critical constraints in the procedure include the choice of alkyl alcohol solvent, the amount of water and its rate of addition, the pH of the solution during hydrolysis, and the limit of sintering temperature applied to the resulting gels.

  7. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  8. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
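
    The constrained choice of average arrival time can be sketched in Python with a truncated normal transit-time model (the bounds, cost function, and punctuality target below are illustrative assumptions, not the paper's general cost-benefit functions):

        import numpy as np
        from scipy.stats import truncnorm

        lo, hi, sigma = 20.0, 60.0, 5.0       # transit-time support and spread
        deadline, target = 40.0, 0.95         # punctuality requirement

        def punctuality(mu):
            a, b = (lo - mu) / sigma, (hi - mu) / sigma
            return truncnorm(a, b, loc=mu, scale=sigma).cdf(deadline)

        def operating_cost(mu):
            return 1000.0 / mu                # faster service costs more (toy)

        mus = np.linspace(lo + 1, hi - 1, 400)
        feasible = [m for m in mus if punctuality(m) >= target]
        best = min(feasible, key=operating_cost)
        print(round(best, 2), round(punctuality(best), 3))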

  9. Technical note: drifting versus anchored flux chambers for measuring greenhouse gas emissions from running waters

    NASA Astrophysics Data System (ADS)

    Lorke, A.; Bodmer, P.; Noss, C.; Alshboul, Z.; Koschorreck, M.; Somlai-Haase, C.; Bastviken, D.; Flury, S.; McGinnis, D. F.; Maeck, A.; Müller, D.; Premke, K.

    2015-12-01

    Stream networks have recently been discovered to be major but poorly constrained natural greenhouse gas (GHG) sources. A fundamental problem is that several measurement approaches have been used without cross-comparisons. Flux chambers represent a potentially powerful methodological approach if robust and reliable ways to use chambers on running water can be defined. Here we compare the use of anchored and freely drifting chambers on various streams with different flow velocities. The study clearly shows that (1) anchored chambers enhance turbulence under the chambers and thus elevate fluxes, (2) drifting chambers have a very small impact on the water turbulence under the chamber and thus generate more reliable fluxes, (3) the bias of the anchored chambers greatly depends on chamber design and sampling conditions, and (4) there is a promising method to reduce the bias from anchored chambers by using a flexible plastic foil collar to seal the chambers to the water surface, rather than having rigid chamber walls penetrating into the water. Altogether, these results provide novel guidance on how to apply flux chambers in running water, which will have important consequences for measurements to constrain the global GHG balances.

  10. Technical Note: Drifting vs. anchored flux chambers for measuring greenhouse gas emissions from running waters

    NASA Astrophysics Data System (ADS)

    Lorke, A.; Bodmer, P.; Noss, C.; Alshboul, Z.; Koschorreck, M.; Somlai, C.; Bastviken, D.; Flury, S.; McGinnis, D. F.; Maeck, A.; Müller, D.; Premke, K.

    2015-09-01

    Stream networks were recently discovered as major but poorly constrained natural greenhouse gas (GHG) sources. A fundamental problem is that several measurement approaches have been used without cross comparisons. Flux chambers represent a potentially powerful methodological approach if robust and reliable ways to use chambers on running water can be defined. Here we compare the use of anchored and freely drifting chambers on various streams having different flow velocities. The study clearly shows that (1) drifting chambers have a very small impact on the water turbulence under the chamber and thus generate more reliable fluxes, (2) anchored chambers enhance turbulence under the chambers and thus elevate fluxes, (3) the bias of the anchored chambers greatly depends on chamber design and sampling conditions, and (4) there is a promising method to reduce the bias from anchored chambers by using a flexible plastic foil seal to the water surface rather than having rigid chamber walls penetrating into the water. Altogether, these results provide novel guidance on how to apply flux chambers in running water, which will have important consequences for measurements to constrain the global GHG balances.

  11. An English language interface for constrained domains

    NASA Technical Reports Server (NTRS)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.
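
    In miniature, the second (rule-based) phase amounts to mapping parsed requests onto directives. The Python sketch below compresses both phases into pattern rules; the directive strings are invented placeholders rather than actual MAE STOL syntax, and a real ATN parser is far more permissive than these regular expressions:

        import re

        RULES = [
            (r"sign (?:the )?crt (\d+) on",          r"CRT \1 ON"),
            (r"sign (?:the )?crt (\d+) off",         r"CRT \1 OFF"),
            (r"call up (?:the )?display page (\w+)", r"PAGE \1"),
            (r"start (?:the )?procedure (\w+)",      r"START \1"),
        ]

        def translate(sentence):
            s = sentence.lower().strip(".?! ")
            for pattern, directive in RULES:
                m = re.fullmatch(pattern, s)
                if m:
                    return m.expand(directive)
            return None   # outside the constrained domain

        print(translate("Sign the CRT 3 on."))   # -> CRT 3 ON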

  12. The Validity and Test-Retest Reliability of the Leeds Multiple Sclerosis Quality of Life Scale in Turkish Patients

    ERIC Educational Resources Information Center

    Akbiyik, Derya Iren; Sumbuloglu, Vildan; Guney, Zafer; Armutlu, Kadriye; Korkmaz, Nilufer; Keser, Ilke; Yuksel, Muazzez Merve; Karabudak, Rana

    2009-01-01

    The aim of the study was to translate and test the reliability and validity of the Leeds Multiple Sclerosis Quality of Life Scale (LMSQoL) in Turkish patients with multiple sclerosis (MS). Demographic data of MS patients who had a registration in and followed up by a university hospital were recorded. The LMSQoL and Turkish Quality of Life…

  13. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
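
    A minimal Python sketch of the constrained idea: estimate the density of states of a 1-D Ising ring within the fixed-magnetization (M = 0) subspace by combining Wang-Landau updates with spin-exchange (Kawasaki) moves that conserve M. The lattice size, modification-factor schedule, and the 1-D model itself are illustrative simplifications of the paper's spin-crossover Hamiltonian:

        import numpy as np

        rng = np.random.default_rng(4)
        N = 16
        spins = np.array([1, -1] * (N // 2))     # magnetization M = 0 exactly
        rng.shuffle(spins)

        def energy(s):
            return -np.sum(s * np.roll(s, 1))    # nearest-neighbour ring, J = 1

        log_g, f = {}, 1.0                       # log density of states, ln(f)
        E = energy(spins)
        for sweep in range(200000):
            i, j = rng.integers(N, size=2)
            if spins[i] != spins[j]:             # exchange conserves M
                new = spins.copy()
                new[i], new[j] = new[j], new[i]
                E_new = energy(new)
                if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                    spins, E = new, E_new
            log_g[E] = log_g.get(E, 0.0) + f
            if sweep % 20000 == 19999:
                f /= 2                # crude schedule; real WL checks histogram
        print(sorted(log_g))          # energies visited in the M = 0 subspace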

  14. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    NASA Astrophysics Data System (ADS)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important and pressing public safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
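
    The credibility measure underlying such chance constraints has a simple closed form for triangular fuzzy variables, as the Python sketch below shows (the route numbers are invented; the formula is the standard Cr = (possibility + necessity) / 2 construction of credibility theory):

        def credibility_le(x, a, b, c):
            """Cr{xi <= x} for a triangular fuzzy variable xi = (a, b, c)."""
            if x <= a:
                return 0.0
            if x <= b:
                return (x - a) / (2 * (b - a))
            if x <= c:
                return (x + c - 2 * b) / (2 * (c - b))
            return 1.0

        # Chance constraint: accept a route only if its fuzzy travel time
        # satisfies Cr{time <= 60} >= 0.9 (illustrative numbers).
        route_time = (40, 50, 62)                 # triangular (a, b, c)
        print(credibility_le(60, *route_time))    # ~0.917 -> constraint holds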

  15. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    PubMed

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
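
    A toy instance of this kind of formulation fits in a short Python sketch using scipy's MILP interface: continuous electrode currents x, binary on/off indicators z, a bound on non-target activation, and a cardinality limit on active electrodes. All gain vectors and limits are invented placeholders, not a real electrode model:

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        a_tgt = np.array([0.9, 0.4, 0.1, 0.3])   # target-region gains (toy)
        a_non = np.array([0.2, 0.7, 0.8, 0.1])   # non-target gains (toy)
        Imax, eps, K, n = 2.0, 0.5, 2, 4

        # variables: [x (currents), z (binary indicators)]
        c = np.r_[-a_tgt, np.zeros(n)]           # maximize a_tgt @ x
        integrality = np.r_[np.zeros(n), np.ones(n)]
        bounds = Bounds(np.r_[-Imax * np.ones(n), np.zeros(n)],
                        np.r_[ Imax * np.ones(n), np.ones(n)])

        I = np.eye(n)
        cons = [
            LinearConstraint(np.r_[a_non, np.zeros(n)][None, :], -eps, eps),
            LinearConstraint(np.hstack([ I, -Imax * I]), -np.inf, 0),  #  x <= Imax z
            LinearConstraint(np.hstack([-I, -Imax * I]), -np.inf, 0),  # -x <= Imax z
            LinearConstraint(np.r_[np.zeros(n), np.ones(n)][None, :], 0, K),
        ]
        res = milp(c=c, integrality=integrality, bounds=bounds, constraints=cons)
        print(np.round(res.x[:n], 2), "objective:", round(-res.fun, 3))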

  16. Topographic Response to the Yakutat Block Collision

    NASA Technical Reports Server (NTRS)

    Stock, Joann M.

    2000-01-01

    The principal objective of this grant and this research was to investigate the topographic development of an active glaciated orogenic belt in southern Alaska as that development relates to patterns of erosion and crustal deformation. A specific objective of the research was to investigate feedbacks between mountain building, orographic effects on climate, and patterns of exhumation and rock uplift. To that end, an orogen-scale analysis of topography was conducted with the aid of digital elevation models, magnitudes and patterns of crustal deformation were compiled from the existing literature, present and past climate patterns were constrained using the modern and past distribution of glaciers, and the styles, magnitudes, and extent of erosion were constrained with observations from the 1998 field season.

  17. [Multiple mini interviews before the occupation of main training posts in paediatrics].

    PubMed

    Hertel, Niels Thomas; Bjerager, Mia; Boas, Malene; Boisen, Kirsten A; Børch, Klaus; Frederiksen, Marianne Sjølin; Holm, Kirsten; Grum-Nymann, Anette; Johnsen, Martin M; Whitehouse, Stine; Balslev, Thomas

    2013-09-09

    Interviews are mandatory in Denmark when selecting doctors for training positions. We used multiple mini interviews (MMI) at four recruitment rounds for the main training posts in paediatrics. In total, 125 candidates were evaluated and assessed by CV and MMI (4-5 stations). Reliability for individual stations in MMI assessed by Cronbach's alpha was adequate (0.63-0.92). The overall reliability assessed by G-theory was lower, suggesting that different skills were tested. The acceptability was high. Our experiences with MMI suggest good feasibility and reliability. An increasing number of stations may improve the overall reliability.
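
    Cronbach's alpha for a set of MMI stations is worth spelling out; the Python sketch below uses made-up ratings for six candidates across four stations:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: candidates x items (e.g., MMI station ratings)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        ratings = [[4, 5, 4, 5], [2, 3, 2, 3], [5, 5, 4, 4],
                   [3, 3, 3, 2], [4, 4, 5, 5], [2, 2, 3, 3]]
        print(round(cronbach_alpha(ratings), 2))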

  18. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread for high-reliability execution, allocating threads that indicate a need for hardware checking to the corresponding paired processor cores. The method also configures the selective pairing facility to provide multiple independent cores, allocating threads that indicate inherent resilience to the corresponding unpaired processor cores.

  19. Chromatic illumination discrimination ability reveals that human colour constancy is optimised for blue daylight illuminations.

    PubMed

    Pearce, Bradley; Crichton, Stuart; Mackiewicz, Michal; Finlayson, Graham D; Hurlbert, Anya

    2014-01-01

    The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.

  20. Characterizing Abundances of Volatiles in Comets Through Multiwavelength Observations

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Charnley, Steven B.; Kuan, Yi-Jehng; Chuang, Yo-Ling; DiSanti, Michael A.; Bonev, Boncho P.; Remijan, Anthony J.; Coulson, Iain; Haynes, Lillian; Stenborg, Maria

    2012-01-01

    Recently, there have been complementary observations from multiple facilities to try to unravel the chemical complexity of comets. Incorporating results from various techniques, including single-dish millimeter wavelength observations, interferometers, and/or IR spectroscopy, one can gain further insight into the abundances, production rates, distributions, and formation mechanisms of molecules in these objects [1]. Such studies have provided great detail on molecules with atypical chemistries, such as H2CO [2]. We report spectral observations of C/2007 N3 (Lulin), C/2009 R1 (McNaught), 103P/Hartley 2, and C/2009 P1 (Garradd) with the Arizona Radio Observatory's SMT and 12-m telescopes, as well as the NRAO Green Bank telescope and IRTF-CSHELL. Multiple parent volatiles (HCN, CH3OH, CO, CH4, C2H6, and H2O) as well as a number of daughter products (CS and OH) have been detected in these objects. We will present a comparison of molecular abundances in these comets to those observed in others, supporting a long-term effort of building a comet taxonomy based on composition. Previous work has revealed a range of abundances of parent species (from "organics-poor" to "organics-rich") with respect to water among comets [3,4,5]; however, the statistics are still poorly constrained and interpretations of the observed compositional diversity are uncertain. We gratefully acknowledge support from the NSF Astronomy and Astrophysics Program, the NASA Planetary Astronomy Program, NASA Planetary Atmospheres Program, and the NASA Astrobiology Program.

  1. Influence of Stellar Multiplicity On Planet Formation. III. Adaptive Optics Imaging of Kepler Stars With Gas Giant Planets

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Fischer, Debra A.; Horch, Elliott P.; Xie, Ji-Wei

    2015-06-01

    As hundreds of gas giant planets have been discovered, we study how these planets form and evolve in different stellar environments, specifically in multiple stellar systems. In such systems, stellar companions may have a profound influence on gas giant planet formation and evolution via several dynamical effects such as truncation and perturbation. We select 84 Kepler Objects of Interest (KOIs) with gas giant planet candidates. We obtain high-angular resolution images using telescopes with adaptive optics (AO) systems. Together with the AO data, we use archival radial velocity data and dynamical analysis to constrain the presence of stellar companions. We detect 59 stellar companions around 40 KOIs for which we develop methods of testing their physical association. These methods are based on color information and galactic stellar population statistics. We find evidence of suppressive planet formation within 20 AU by comparing stellar multiplicity. The stellar multiplicity rate (MR) for planet host stars is 0% (+5%/-0%) within 20 AU. In comparison, the stellar MR is 18% ± 2% for the control sample, i.e., field stars in the solar neighborhood. The stellar MR for planet host stars is 34% ± 8% for separations between 20 and 200 AU, which is higher than the control sample at 12% ± 2%. Beyond 200 AU, stellar MRs are comparable between planet host stars and the control sample. We discuss the implications of the results on gas giant planet formation and evolution.

  2. Fly Eye radar: detection through high scattered media

    NASA Astrophysics Data System (ADS)

    Molchanov, Pavlo; Gorwara, Ashok

    2017-05-01

    Longer radio frequency waves penetrate high-scattering media better than millimeter waves, but imaging resolution is limited by diffraction at longer wavelengths. At the same time, the frequencies and amplitudes of diffracted waves (frequency-domain measurements) provide information about the object. The phase shift of diffracted waves (the phase front in the time domain) carries information about the shape of the object and can be applied to reconstruct the object's shape, or even an image, by recording a multi-frequency digital hologram. The spectral signature of refracted waves allows the object's content to be identified. Application of the monopulse method with overlapping, closely spaced antenna patterns provides high-accuracy measurement of the amplitude, phase, and direction to a signal source. Digitizing the received signals separately in each antenna, relative to processor time, provides phase/frequency independence. The fly eye non-scanning multi-frequency radar system provides simultaneous continuous observation of multiple targets and wide possibilities for stepped-frequency, simultaneous-frequency, chaotic frequency sweeping (CFS), and polarization-modulation waveforms for reliable object detection. The proposed C-band fly eye radar demonstrated human detection through a 40 cm concrete brick wall, with spectrum signatures for the human and the wall material, and can be applied to through-wall human detection; landmine and improvised explosive device detection; and underground or camouflaged object imaging.

  3. Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues.

    PubMed

    Gepperth, Alexander R T; Rebhan, Sven; Hasler, Stephan; Fritsch, Jannik

    2011-03-01

    In this contribution, we present a large-scale hierarchical system for object detection fusing bottom-up (signal-driven) processing results with top-down (model or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions which we show to be applicable to different object detection mechanisms. In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than 1 h of driving, we can show strong increases in performance and generalization when compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach especially when processing resources are constrained.
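
    The early-coupling idea, biasing hypotheses before competitive selection rather than filtering them afterwards, can be reduced to a few lines of Python (the score map, prior, and gain are invented placeholders for the system's learned attentional signals):

        import numpy as np

        rng = np.random.default_rng(5)
        bottom_up = rng.random((8, 8))            # toy hypothesis confidences
        prior = np.zeros((8, 8))
        prior[4:, :] = 1.0                        # e.g., "cars appear low"

        gain = 0.8
        modulated = bottom_up * (1.0 + gain * prior)   # top-down facilitation
        winner = np.unravel_index(np.argmax(modulated), modulated.shape)
        print("selected hypothesis at", winner)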

  4. Constrained multi-objective optimization of storage ring lattices

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2018-03-01

    The storage ring lattice optimization is a class of constrained multi-objective optimization problem where, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. The convergence and computation times are of great concern for the optimization algorithms, as various objectives are to be optimized and a number of accelerator parameters must be varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: the genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front in reasonable computation time between two conflicting objectives: beam emittance and the dispersion function in the straight section. Differential evolution was also used extensively for the optimization of the linear and nonlinear lattices of Indus-2, exploring various operational modes within the magnet power supply capabilities.
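
    A Python sketch of the approach, with scipy's differential evolution tracing a Pareto front via an epsilon-constraint penalty; the quadratic toy objectives stand in for emittance and dispersion, which in practice come from an accelerator optics code:

        import numpy as np
        from scipy.optimize import differential_evolution

        def emittance(q):                      # toy surrogate objectives
            return (q[0] - 1) ** 2 + q[1] ** 2 + 0.5

        def dispersion(q):
            return q[0] ** 2 + (q[1] - 1) ** 2

        front = []
        for eps in np.linspace(0.2, 2.0, 8):   # epsilon-constraint sweep
            res = differential_evolution(
                lambda q: emittance(q) + 1e3 * max(0.0, dispersion(q) - eps),
                bounds=[(-2, 2), (-2, 2)], seed=0, tol=1e-8)
            front.append((dispersion(res.x), emittance(res.x)))
        print(np.round(front, 3))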

  5. Cosmology from galaxy clusters as observed by Planck

    NASA Astrophysics Data System (ADS)

    Pierpaoli, Elena

    We propose to use current all-sky data on galaxy clusters in the radio/infrared bands in order to constrain cosmology. This will be achieved by performing parameter estimation with number counts and power spectra for galaxy clusters detected by Planck through their Sunyaev-Zel'dovich signature. The ultimate goal of this proposal is to use clusters as tracers of matter density in order to provide information about fundamental properties of our Universe, such as the law of gravity on large scales, early Universe phenomena, structure formation, and the nature of dark matter and dark energy. We will leverage the availability of a larger and deeper cluster catalog from the latest Planck data release in order to include, for the first time, the cluster power spectrum in the cosmological parameter determination analysis. Furthermore, we will extend the clusters' analysis to cosmological models not yet investigated by the Planck collaboration. These aims require a diverse set of activities, ranging from the characterization of the clusters' selection function, the choice of the cosmological cluster sample to be used for parameter estimation, and the construction of mock samples in the various cosmological models with correct correlation properties in order to produce reliable selection functions and noise covariance matrices, to the construction of the appropriate likelihood for number counts and power spectra. We plan to make the final code available to the community and compatible with the most widely used cosmological parameter estimation code. This research makes use of data from the NASA satellites Planck and, less directly, Chandra, in order to constrain cosmology, and therefore fits the NASA objectives and the specifications of this solicitation perfectly.

  6. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including ineffectiveness at handling non-Gaussian noise and sensitivity to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system in the presence of either Gaussian or non-Gaussian noise. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case-scenario approach to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
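
    As a hedged sketch of the mean-variance idea (not the paper's LS-SVM formulation), the toy below fits a linear model by penalizing both the squared mean and the variance of the residuals, plus a ridge term; the data, the weights lam and mu, and the model class are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    # toy data with heavy-tailed (non-Gaussian, outlier-prone) noise
    X = rng.uniform(-1, 1, (200, 3))
    w_true = np.array([1.5, -2.0, 0.5])
    y = X @ w_true + rng.standard_t(df=2, size=200)

    def objective(w, lam=0.1, mu=1.0):
        e = y - X @ w
        # penalize both the mean and the variance of the modeling error,
        # instead of forcing the mean to zero as in the standard LS-SVM
        return mu * np.mean(e) ** 2 + np.var(e) + lam * w @ w

    w_hat = minimize(objective, np.zeros(3)).x
    print(w_hat)
    ```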

  7. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    PubMed

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales.

  8. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions

    PubMed Central

    Hahn, Beth A.; Dreitz, Victoria J.; George, T. Luke

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer’s sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer’s sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales. PMID:29065128

  9. Self-Rated Estimates of Multiple Intelligences Based on Approaches to Learning

    ERIC Educational Resources Information Center

    Bowles, Terry

    2008-01-01

    To date, questionnaires that measure Multiple Intelligences (MIs) have typically not been systematically developed, have poor psychometric properties, and offer relatively low reliability. The aim of this research was to define the factor structure and reliability of nine talents which are the behavioural outcomes of MIs, using items representing…

  10. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching

    PubMed Central

    Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja

    2017-01-01

    Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information. PMID:28450826

  11. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching.

    PubMed

    Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja

    2017-01-01

    Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information.

  12. Complex organic molecules during low-mass star formation: Pilot survey results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öberg, Karin I.; Graninger, Dawn; Lauck, Trish, E-mail: koberg@cfa.harvard.edu

    Complex organic molecules (COMs) are known to be abundant toward some low-mass young stellar objects (YSOs), but how these detections relate to typical COM abundances is not yet understood. We aim to constrain the frequency distribution of COMs during low-mass star formation, beginning with this pilot survey of COM lines toward six embedded YSOs using the IRAM 30 m Telescope. The sample was selected from the Spitzer c2d ice sample and covers a range of ice abundances. We detect multiple COMs, including CH3CN, toward two of the YSOs, and tentatively toward a third. Abundances with respect to CH3OH vary between 0.7% and 10%. This sample is combined with previous COM observations and upper limits to obtain frequency distributions of CH3CN, HCOOCH3, CH3OCH3, and CH3CHO. We find that for all molecules more than 50% of the sample have detections or upper limits of 1%-10% with respect to CH3OH. Moderate abundances of COMs thus appear common during the early stages of low-mass star formation. A larger sample is required, however, to quantify the COM distributions, as well as to constrain the origins of the observed variations across the sample.

  13. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
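
    A minimal sketch of the SROM idea under simplifying assumptions: approximate a continuous random variable (here a lognormal) by m support points with probabilities chosen to match the mean, variance, and a few CDF values. The error weighting and optimizer below are arbitrary illustrative choices, not the authors' construction.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    target = lognorm(s=0.4)              # continuous random element to be reduced
    m = 5                                # SROM size: m samples with probabilities

    def unpack(z):
        x = z[:m]                        # support points
        p = np.exp(z[m:]); p /= p.sum()  # softmax keeps probabilities on the simplex
        return x, p

    def mismatch(z):
        x, p = unpack(z)
        mean_err = (p @ x - target.mean()) ** 2
        var_err = (p @ x**2 - (p @ x) ** 2 - target.var()) ** 2
        probes = target.ppf([0.1, 0.3, 0.5, 0.7, 0.9])   # match the CDF here
        cdf_err = sum((np.sum(p[x <= t]) - target.cdf(t)) ** 2 for t in probes)
        return mean_err + var_err + cdf_err

    z0 = np.concatenate([target.ppf(np.linspace(0.1, 0.9, m)), np.zeros(m)])
    res = minimize(mismatch, z0, method="Nelder-Mead", options={"maxiter": 5000})
    x_opt, p_opt = unpack(res.x)
    print(np.sort(x_opt), p_opt)
    ```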

  14. The rise and fall of redundancy in decoherence and quantum Darwinism

    NASA Astrophysics Data System (ADS)

    Jess Riedel, C.; Zurek, Wojciech H.; Zwolak, Michael

    2012-08-01

    A state selected at random from the Hilbert space of a many-body system is overwhelmingly likely to exhibit highly non-classical correlations. For these typical states, half of the environment must be measured by an observer to determine the state of a given subsystem. The objectivity of classical reality—the fact that multiple observers can agree on the state of a subsystem after measuring just a small fraction of its environment—implies that the correlations found in nature between macroscopic systems and their environments are exceptional. Building on previous studies of quantum Darwinism showing that highly redundant branching states are produced ubiquitously during pure decoherence, we examine the conditions needed for the creation of branching states and study their demise through many-body interactions. We show that even constrained dynamics can suppress redundancy to the values typical of random states on relaxation timescales, and prove that these results hold exactly in the thermodynamic limit.

  15. NELIOTA: First temperature measurement of lunar impact flashes

    NASA Astrophysics Data System (ADS)

    Bonanos, A. Z.; Avdellidou, C.; Liakos, A.; Xilouris, E. M.; Dapergolas, A.; Koschny, D.; Bellas-Velidis, I.; Boumis, P.; Charmandaris, V.; Fytsilis, A.; Maroussis, A.

    2018-04-01

    We report the first scientific results from the NELIOTA (NEO Lunar Impacts and Optical TrAnsients) project, which has recently begun lunar monitoring observations with the 1.2-m Kryoneri telescope. NELIOTA aims to detect faint impact flashes produced by near-Earth meteoroids and asteroids and thereby help constrain the size-frequency distribution of near-Earth objects in the decimeter to meter range. The NELIOTA setup, consisting of two fast-frame cameras observing simultaneously in the R and I bands, enables - for the first time - direct analytical calculation of the flash temperatures. We present the first ten flashes detected, for which we find temperatures in the range 1600 to 3100 K, in agreement with theoretical values. Two of these flashes were detected on multiple frames in both filters and therefore yield the first measurements of the temperature drop for lunar flashes. In addition, we compute the impactor masses, which range between 100 g and 50 kg.
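
    The temperature calculation reduces to inverting the ratio of Planck functions evaluated in the two bands. The sketch below treats each band as a single effective wavelength (the 641/798 nm values and the flux ratio are assumptions, not NELIOTA's calibration) and solves for T with a root finder.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

    def planck(lam, T):
        """Spectral radiance of a blackbody at wavelength lam (m), temperature T (K)."""
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

    lam_R, lam_I = 641e-9, 798e-9   # assumed effective band wavelengths

    def flash_temperature(ratio_RI):
        """Invert the R/I flux ratio of a blackbody for its temperature."""
        f = lambda T: planck(lam_R, T) / planck(lam_I, T) - ratio_RI
        return brentq(f, 500.0, 10000.0)   # ratio is monotonic in T on this bracket

    print(flash_temperature(0.6))   # a ratio < 1 implies a relatively cool flash
    ```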

  16. Teen smoking cessation help via the Internet: a survey of search engines.

    PubMed

    Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I

    2003-07-01

    The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate on-line smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly. Smoking cessation Web sites need to be picked up on multiple search engine searches. Further evaluation of smoking cessation Web sites need to be conducted to identify the most effective help for teens.

  17. FRAMES-2.0 Software System: Providing Password Protection and Limited Access to Models and Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Pelton, Mitch A.

    2007-08-09

    One of the most important concerns for regulatory agencies is the concept of reproducibility (i.e., reproducibility means credibility) of an assessment. One aspect of reproducibility deals with tampering of the assessment. In other words, when multiple groups are engaged in an assessment, it is important to lock down the problem that is to be solved and/or to restrict the models that are to be used to solve the problem. The objective of this effort is to provide the U.S. Nuclear Regulatory Commission (NRC) with a means to limit user access to models and to provide a mechanism to constrain themore » conceptual site models (CSMs) when appropriate. The purpose is to provide the user (i.e., NRC) with the ability to “lock down” the CSM (i.e., picture containing linked icons), restrict access to certain models, or both.« less

  18. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
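
    As an illustration of the multiple-model diagnosis step described above (a generic sketch, not the dissertation's algorithm), fault-mode probabilities can be updated with Bayes' rule from the innovations of mode-matched filters; the modes, residual offsets, and noise levels below are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    def update_mode_probabilities(prior, residuals, sigmas):
        """One Bayesian step of a multiple-model diagnoser.

        prior     -- current probability of each fault mode
        residuals -- innovation produced by each mode-matched filter
        sigmas    -- expected innovation standard deviation under each mode
        """
        likelihood = norm.pdf(residuals, scale=sigmas)
        posterior = prior * likelihood
        return posterior / posterior.sum()

    # three hypotheses: normal operation, line fault, protection misoperation
    p = np.array([0.8, 0.1, 0.1])
    for r in [0.4, 1.9, 2.2, 2.0]:            # measured innovations over time
        p = update_mode_probabilities(p, np.array([r, r - 2.0, r - 0.5]),
                                      np.array([0.5, 0.5, 1.0]))
    print(p)   # probability mass shifts toward the best-matching mode
    ```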

  19. Sediment budgeting and restoration planning in a heterogeneous landscape, the Root River watershed, southeastern Minnesota.

    NASA Astrophysics Data System (ADS)

    Hemmis, J. M.; Souffront, M.; Stout, J. C.; Belmont, P.

    2014-12-01

    Excessive sedimentation in streams and rivers is one of the top water quality concerns in the U.S. and globally. While sediment is a natural constituent of stream ecosystems, excessive amounts cause high levels of turbidity which can reduce primary and secondary production, reduce nutrient retention, and have negative impacts on fish reproduction and physiology. Fine sediment particles adsorb pollutants such as mercury, metals, polychlorinated biphenyl compounds and bacteria. Key questions remain regarding the origin of excessive sediment as well as the transport pathways of sediment through the landscape and channel network of the 4,300 km2 Root River watershed in southeastern Minnesota. To answer these questions, we are developing a sediment budget to account for inputs, outputs, and changes in sediment storage reservoirs within the system. Because watershed sediment fluxes are determined as the sum of many small changes (erosion and deposition) across a vast area, multiple, redundant techniques are required to adequately constrain all parts of the sediment budget. Specifically, this budget utilizes four years of field research and surveys, an extensive set of sediment fingerprinting data, watershed-wide measurements of channel widening and meander migration, and watershed modeling, all evaluated and extrapolated in a geomorphically sensitive manner. Analyses of sediment deposition within channel cutoffs throughout the watershed help constrain sediment storage. These overlapping methods, reconciled within the hard constraint of direct measurements of water and sediment fluxes, improve the reliability of the budget. The sediment budget highlights important sources and sinks and identifies locations that are likely to be more, or less, sensitive to changes in land and water management to support watershed-wide prioritization of conservation and restoration actions.

  20. Framework for evaluating disease severity measures in older adults with comorbidity.

    PubMed

    Boyd, Cynthia M; Weiss, Carlos O; Halter, Jeff; Han, K Carol; Ershler, William B; Fried, Linda P

    2007-03-01

    Accounting for the influence of concurrent conditions on health and functional status, for both research and clinical decision-making purposes, is especially important in older adults. Although approaches to classifying severity of individual diseases and conditions have been developed, the utility of these classification systems has not been evaluated in the presence of multiple conditions. We present a framework for evaluating severity classification systems for common chronic diseases. The framework evaluates: (a) the goal or purpose of the classification system; (b) the physiological and/or functional criteria for severity graduation; and (c) the potential reliability and validity of the system, balanced against the burden and costs associated with classification. Approaches to severity classification of individual diseases were not originally conceived for the study of comorbidity. Therefore, they vary greatly in terms of objectives, physiological systems covered, level of severity characterization, reliability and validity, and costs and burdens. Using different severity classification systems to account for differing levels of disease severity in a patient with multiple diseases, or assessing global disease burden, may be challenging. Most approaches to severity classification are not adequate to address comorbidity. Nevertheless, thoughtful use of some existing approaches and refinement of others may advance the study of comorbidity and diagnostic and therapeutic approaches to patients with multimorbidity.

  1. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
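
    A minimal sketch of how an ODE-constrained mixture likelihood can be assembled, under assumptions that simplify away the paper's pathway model: one toy ODE per subpopulation, Gaussian measurement noise, and fixed weights. Only likelihood evaluation is shown, not parameter estimation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.stats import norm

    def erk_response(t, k):
        """Toy one-state ODE, dx/dt = k*(1 - x), standing in for the pathway model."""
        sol = solve_ivp(lambda _, x: k * (1.0 - x), (0.0, t[-1]), [0.0], t_eval=t)
        return sol.y[0]

    def mixture_loglik(y, weights, rates, t_snap=1.0, sigma=0.1):
        """Snapshot log-likelihood: p(y_i) = sum_k w_k N(y_i | x_k(t_snap), sigma^2),
        where each subpopulation k obeys the ODE with its own rate constant."""
        means = np.array([erk_response(np.array([t_snap]), r)[0] for r in rates])
        dens = weights[:, None] * norm.pdf(y[None, :], loc=means[:, None], scale=sigma)
        return np.log(dens.sum(axis=0)).sum()

    rng = np.random.default_rng(2)
    n = 200
    fast = rng.random(n) < 0.7                      # 70/30 mixture of responders
    x_fast = erk_response(np.array([1.0]), 2.0)[0]
    x_slow = erk_response(np.array([1.0]), 0.3)[0]
    y = np.where(fast, x_fast, x_slow) + rng.normal(0, 0.1, n)
    print(mixture_loglik(y, np.array([0.7, 0.3]), [2.0, 0.3]))
    ```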

  2. Measuring ability to assess claims about treatment effects: a latent trait analysis of items from the ‘Claim Evaluation Tools’ database using Rasch modelling

    PubMed Central

    Austvoll-Dahlgren, Astrid; Guttersrud, Øystein; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D

    2017-01-01

    Background The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure to be used in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on Item Response Theory. It is a dynamic way of developing outcome measures that are valid and reliable. Objectives To assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. Participants We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package. We used SPSS to perform distractor analysis. Results Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Conclusion Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims. PMID:28550019
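
    For readers unfamiliar with Rasch modelling, the dichotomous model reduces to a logistic function of the difference between person ability and item difficulty. The sketch below simulates responses and recovers item difficulties by maximum likelihood, treating abilities as known for brevity (a real analysis, e.g. in RUMM2030, estimates both); all numbers are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rasch_p(theta, b):
        """P(correct) for ability theta on an item of difficulty b."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    rng = np.random.default_rng(3)
    theta = rng.normal(0, 1, 500)          # simulated person abilities
    b_true = np.array([-1.0, 0.0, 1.5])    # three item difficulties
    resp = (rng.random((500, 3)) < rasch_p(theta[:, None], b_true)).astype(int)

    def neg_loglik(b):
        p = rasch_p(theta[:, None], b)     # abilities treated as known, for brevity
        return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

    print(minimize(neg_loglik, np.zeros(3)).x)   # recovers difficulties approximately
    ```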

  3. Integrated Approach To Design And Analysis Of Systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Iverson, David L.

    1993-01-01

    Object-oriented fault-tree representation unifies evaluation of reliability and diagnosis of faults. Programming/fault tree described more fully in "Object-Oriented Algorithm For Evaluation Of Fault Trees" (ARC-12731). Augmented fault tree object contains more information than fault tree object used in quantitative analysis of reliability. Additional information needed to diagnose faults in system represented by fault tree.

  4. A SEARCH FOR MULTI-PLANET SYSTEMS USING THE HOBBY-EBERLY TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittenmyer, Robert A.; Endl, Michael; Cochran, William D.

    Extrasolar multiple-planet systems provide valuable opportunities for testing theories of planet formation and evolution. The architectures of the known multiple-planet systems demonstrate a fascinating level of diversity, which motivates the search for additional examples of such systems in order to better constrain their formation and dynamical histories. Here we describe a comprehensive investigation of 22 planetary systems in an effort to answer three questions: (1) are there additional planets? (2) where could additional planets reside in stable orbits? and (3) what limits can these observations place on such objects? We find no evidence for additional bodies in any of these systems; indeed, these new data do not support three previously announced planets (HD 20367 b: Udry et al.; HD 74156 d: Bean et al.; and 47 UMa c: Fischer et al.). The dynamical simulations show that nearly all of the 22 systems have large regions in which additional planets could exist in stable orbits. The detection-limit computations indicate that this study is sensitive to close-in Neptune-mass planets for most of the systems targeted. We conclude with a discussion on the implications of these nondetections.

  5. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, M.W.; George, W.A.; Pai, R.Y.

    1985-08-13

    A technique is disclosed for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support. 6 figs.

  6. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  7. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method simulated annealing (SA) was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs within on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
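
    A minimal simulated-annealing loop of the kind adapted here is easy to state. In the sketch below the Rastrigin function stands in for a rough, multi-minima design objective; the cooling schedule and step size are arbitrary choices, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def rastrigin(x):
        """Rough, multi-minima objective standing in for a CFD/MDO cost."""
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def simulated_annealing(f, x0, T0=5.0, cooling=0.99, steps=2000, scale=0.5):
        x = np.array(x0, dtype=float)
        fx, T = f(x), T0
        best, fbest = x.copy(), fx
        for _ in range(steps):
            cand = x + rng.normal(0, scale, x.shape)
            fc = f(cand)
            # accept downhill moves always; uphill moves with Boltzmann probability
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x.copy(), fx
            T *= cooling
        return best, fbest

    print(simulated_annealing(rastrigin, [3.0, -2.0]))
    ```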

  8. [Using projective tests in forensic psychiatry may lead to wrong conclusions. Only empirically tested tests should be used].

    PubMed

    Trygg, L; Dåderman, A M; Wiklund, N; Meurling, A W; Lindgren, M; Lidberg, L; Levander, S

    2001-06-27

    The use of projective and psychometric psychological tests at the Department of Forensic Psychiatry in Stockholm (Huddinge), Sweden, was studied for a population of 60 men, including many patients with neuropsychological disabilities and multiple psychiatric disorders. The results showed that the use of projective tests like the Rorschach, the Object Relations Test, and House-Tree-Person was more frequent than the use of objective psychometric tests. Neuropsychological test batteries like the Halstead-Reitan Neuropsychological Test Battery or the Luria-Nebraska Neuropsychological Battery were not used. The majority of patients were, however, assessed with intelligence scales like the WAIS-R. The questionable reliability and validity of the projective tests, and the risk of subjective interpretations, raise a problem when these tests are used in a forensic setting, since the courts' decisions about a sentence to prison or psychiatric care are based on the forensic psychiatric assessment. The use of objective psychometric neuropsychological tests and personality tests is recommended.

  9. Lack of Free Choice Reveals the Cost of Having to Search for More Than One Object

    PubMed Central

    Ort, Eduard; Fahrenfort, Johannes J.; Olivers, Christian N. L.

    2017-01-01

    It is debated whether people can actively search for more than one object or whether this results in switch costs. Using a gaze-contingent eye-tracking paradigm, we revealed a crucial role for cognitive control in multiple-target search. We instructed participants to simultaneously search for two target objects presented among distractors. In one condition, both targets were available, which gave the observer free choice of what to search for and allowed for proactive control. In the other condition, only one of the two targets was available, so that the choice was imposed, and a reactive mechanism would be required. No switch costs emerged when target choice was free, but switch costs emerged reliably when targets were imposed. Bridging contradictory findings, the results are consistent with models of visual selection in which only one attentional template actively drives selection and in which the efficiency of switching targets depends on the type of cognitive control allowed for by the environment. PMID:28661761

  10. Energy-constrained two-way assisted private and quantum capacities of quantum channels

    NASA Astrophysics Data System (ADS)

    Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.

    2018-06-01

    With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement, along with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.

  11. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
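
    The ICC variant appropriate when multiple readers measure the same limbs is ICC(2,1) (two-way random effects, absolute agreement, single measurement). A sketch of its computation from ANOVA mean squares follows; the example angles are invented, and the abstract does not state which ICC form the study used.

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        ratings -- (n_subjects, k_raters) array
        """
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
        sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                        # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # hypothetical alignment angles (degrees) read by three readers
    angles = np.array([[178.2, 178.5, 178.1],
                       [181.0, 180.6, 181.2],
                       [175.9, 176.3, 176.0],
                       [179.4, 179.1, 179.6]])
    print(icc_2_1(angles))
    ```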

  12. Test-retest reliability and construct validity of the ENERGY-parent questionnaire on parenting practices, energy balance-related behaviours and their potential behavioural determinants: the ENERGY-project.

    PubMed

    Singh, Amika S; Chinapaw, Mai J M; Uijtdewilligen, Léonie; Vik, Froydis N; van Lippevelde, Wendy; Fernández-Alvira, Juan M; Stomfai, Sarolta; Manios, Yannis; van der Sluijs, Maria; Terwee, Caroline; Brug, Johannes

    2012-08-13

    Insight into parental energy balance-related behaviours, their determinants and parenting practices is important to inform childhood obesity prevention. Therefore, reliable and valid tools to measure these variables in large-scale population research are needed. The objective of the current study was to examine the test-retest reliability and construct validity of the parent questionnaire used in the ENERGY-project, assessing parental energy balance-related behaviours, their determinants, and parenting practices among parents of 10-12 year old children. We collected data among parents (n = 316 in the test-retest reliability study; n = 109 in the construct validity study) of 10-12 year-old children in six European countries, i.e. Belgium, Greece, Hungary, the Netherlands, Norway, and Spain. Test-retest reliability was assessed using the intra-class correlation coefficient (ICC) and percentage agreement, comparing scores from two measurements administered one week apart. To assess construct validity, the agreement between questionnaire responses and a subsequent interview was assessed using ICC and percentage agreement. All but one item showed good to excellent test-retest reliability as indicated by ICCs > .60 or percentage agreement ≥ 75%. Construct validity appeared to be good to excellent for 92 out of 121 items, as indicated by ICCs > .60 or percentage agreement ≥ 75%. Of the other 29 items, construct validity was moderate for 24 and poor for 5 items. The reliability and construct validity of the items of the ENERGY-parent questionnaire on multiple energy balance-related behaviours, their potential determinants, and parenting practices thus appear to be good. Based on the results of the validity study, we strongly recommend adapting parts of the ENERGY-parent questionnaire if used in future research.

  13. Joint mobilization forces and therapist reliability in subjects with knee osteoarthritis

    PubMed Central

    Tragord, Bradley S; Gill, Norman W; Silvernail, Jason L; Teyhen, Deydre S; Allison, Stephen C

    2013-01-01

    Objectives: This study determined biomechanical force parameters and reliability among clinicians performing knee joint mobilizations. Methods: Sixteen subjects with knee osteoarthritis and six therapists participated in the study. Forces were recorded using a capacitive-based pressure mat for three techniques at two grades of mobilization, each with two trials of 15 seconds. Dosage (force–time integral), amplitude, and frequency were also calculated. Analysis of variance was used to analyze grade differences, intraclass correlation coefficients determined reliability, and correlations assessed force associations with subject and rater variables. Results: Grade IV mobilizations produced higher mean forces (P<0.001) and higher dosage (P<0.001), while grade III produced higher maximum forces (P = 0.001). Grade III forces (Newtons) by technique (mean, maximum) were: extension 48, 81; flexion 41, 68; and medial glide 21, 34. Grade IV forces (Newtons) by technique (mean, maximum) were: extension 58, 78; flexion 44, 60; and medial glide 22, 30. Frequency (Hertz) ranged between 0.9–1.1 (grade III) and 1.4–1.6 (grade IV). Intra-clinician reliability was excellent (>0.90). Inter-clinician reliability was moderate for force and dosage, and poor for amplitude and frequency. Discussion: Force measurements were consistent with previously reported ranges and clinical constructs. Grade III and grade IV mobilizations can be distinguished from each other with differences for force and frequency being small, and dosage and amplitude being large. Intra-clinician reliability was excellent for all biomechanical parameters and inter-clinician reliability for dosage, the main variable of clinical interest, was moderate. This study quantified the applied forces among multiple clinicians, which may help determine optimal dosage and standardize care. PMID:24421632

  14. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual load-bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals involved. Therefore, a dual neural network method is proposed in this paper for calculating such multiple integrals. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original (integral) function. Using the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
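
    The direct-integration definition that the paper starts from is Pf = ∫ I[g(x) ≤ 0] f(x) dx. The sketch below evaluates it for a toy two-variable resistance-load problem by grid quadrature and compares against Monte Carlo and the closed form; it illustrates the integral the dual-network method is designed to approximate, not the network itself, and all distribution parameters are assumed.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.stats import norm

    # limit state g(r, s) = r - s: failure when load s exceeds resistance r
    mu_r, sd_r, mu_s, sd_s = 5.0, 0.8, 3.0, 1.0

    def pf_direct(n=2000):
        """Direct integration of the reliability integral on a grid."""
        r = np.linspace(mu_r - 6 * sd_r, mu_r + 6 * sd_r, n)
        s = np.linspace(mu_s - 6 * sd_s, mu_s + 6 * sd_s, n)
        R, S = np.meshgrid(r, s)
        joint = norm.pdf(R, mu_r, sd_r) * norm.pdf(S, mu_s, sd_s)
        fail = (R - S) <= 0.0                      # indicator of the failure domain
        return trapezoid(trapezoid(joint * fail, s, axis=0), r)

    def pf_monte_carlo(n=1_000_000):
        rng = np.random.default_rng(5)
        g = rng.normal(mu_r, sd_r, n) - rng.normal(mu_s, sd_s, n)
        return np.mean(g <= 0.0)

    # closed form: R - S is normal with mean mu_r - mu_s, sd sqrt(sd_r^2 + sd_s^2)
    exact = norm.cdf(0.0, loc=mu_r - mu_s, scale=np.hypot(sd_r, sd_s))
    print(pf_direct(), pf_monte_carlo(), exact)
    ```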

  15. Reliability study of high-brightness multiple single emitter diode lasers

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Yang, Thomas; Zhang, Cuipeng; Lang, Chao; Jiang, Xiaochen; Liu, Rui; Gao, Yanyan; Guo, Weirong; Jiang, Yuhua; Liu, Yang; Zhang, Luyan; Chen, Louisa

    2015-03-01

    In this study, the chip bonding processes for various chips from various chip suppliers around the world were optimized to achieve reliable chip-on-sub-mount assemblies for high performance. These chip-on-sub-mount (COS) assemblies include, for example, three types of bonding, reported below: 8xx nm 1.2 W/10.0 W indium-bonded lasers, 9xx nm 10-20 W AuSn-bonded lasers, and 1470 nm 6 W indium-bonded lasers. The MTTF@25 of the 9xx nm COS is calculated to be more than 203,896 hours. These chips from various suppliers are packaged into multiple single emitter laser modules, using similar packaging techniques, with 2 to 7 emitters per module. A reliability study including aging tests was performed on those multiple single emitter laser modules. With the research team's 12 years of experience in packaging design and techniques, precise optical and fiber alignment processes, and superior chip bonding capability, we have achieved a total MTTF exceeding 177,710 hours of lifetime with a 60% confidence level for those multiple single emitter laser modules. Furthermore, a separate reliability study on wavelength-stabilized laser modules has shown that this wavelength-stabilized module packaging process is reliable as well.

  16. Lightcurve Photometry Opportunities: 2018 April-June

    NASA Astrophysics Data System (ADS)

    Warner, Brian D.; Harris, Alan W.; Durech, Josef; Benner, Lance A. M.

    2018-04-01

    We present lists of asteroid photometry opportunities for objects reaching a favorable apparition and having either none or poorly-defined lightcurve parameters. Additional data on these objects will help with shape and spin axis modeling via lightcurve inversion. We also include lists of objects that will be the target of radar observations. Lightcurves for these objects can help constrain pole solutions and/or remove rotation period ambiguities that might not come from using radar data alone.

  17. Lightcurve Photometry Opportunities: 2018 July-September

    NASA Astrophysics Data System (ADS)

    Warner, Brian D.; Harris, Alan W.; Durech, Josef; Benner, Lance A. M.

    2018-07-01

    We present lists of asteroid photometry opportunities for objects reaching a favorable apparition and having either none or poorly-defined lightcurve parameters. Additional data on these objects will help with shape and spin axis modeling via lightcurve inversion. We also include lists of objects that will be the target of radar observations. Lightcurves for these objects can help constrain pole solutions and/or remove rotation period ambiguities that might not come from using radar data alone.

  18. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    PubMed Central

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-01-01

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the effective location-aware applications above. In this paper, the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint, exploiting this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimation over the IMM-EKF without constraint. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraint. PMID:25390408
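
    For the linear case, the constrained estimate has the standard projection form x_c = x - P D^T (D P D^T)^{-1} (D x - d), which is presumably the kind of analytical expression the paper derives. A sketch with an invented two-node geometry constraint:

    ```python
    import numpy as np

    def project_estimate(x, P, D, d):
        """Project an unconstrained estimate onto the linear constraint D x = d.

        x -- state estimate, P -- its covariance. The projection
        x_c = x - P D^T (D P D^T)^{-1} (D x - d)
        is the minimum-variance estimate that satisfies the constraint exactly.
        """
        K = P @ D.T @ np.linalg.inv(D @ P @ D.T)
        x_c = x - K @ (D @ x - d)
        P_c = P - K @ D @ P
        return x_c, P_c

    # two on-body nodes whose x-coordinates must stay 0.4 m apart (assumed geometry)
    x = np.array([0.00, 0.35])          # estimated x-positions of the two nodes
    P = np.diag([0.02, 0.02])
    D = np.array([[-1.0, 1.0]])         # constraint: x2 - x1 = 0.4
    d = np.array([0.4])
    print(project_estimate(x, P, D, d))
    ```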

  19. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080
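
    The problem that explicit MPC solves offline over all parameter values is a finite-horizon QP. As a hedged sketch (a generic lane-keeping QP written with cvxpy, not the paper's vehicle model or its mp-QP solution), with toy matrices A, B, Q, R:

    ```python
    import numpy as np
    import cvxpy as cp

    # toy lateral dynamics: x = [lateral error, heading error], input = steering
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    N = 10                              # prediction horizon
    Q = np.diag([10.0, 1.0])            # weighting matrices, chosen by hand here
    R = np.array([[0.1]])

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    x0 = np.array([0.5, -0.1])          # current deviation from the desired path

    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= 0.5]      # steering limit (rad)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u.value[:, 0])                # only the first move is applied each step
    ```

    Explicit MPC precomputes the solution of this QP as a piecewise-affine function of x0 via mp-QP, so that the online step reduces to a region lookup instead of an optimization.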

  20. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, the Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.

  1. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance constraints on the optimal solution and total cost.
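
    For a normally distributed travel time, a chance constraint of the form P(time ≤ T) ≥ α has the standard deterministic equivalent μ + z_α σ ≤ T, which is the kind of transformation such models rely on. The sketch below checks feasibility for assumed route statistics (the hours and confidence level are illustrative, not from the paper).

    ```python
    from scipy.stats import norm

    def deterministic_deadline(mu, sigma, alpha):
        """Deterministic equivalent of the chance constraint
        P(travel_time <= T) >= alpha for normally distributed travel time:
        the constraint holds iff mu + z_alpha * sigma <= T."""
        return mu + norm.ppf(alpha) * sigma

    # is a route feasible for a 72 h delivery window at 95% confidence?
    mu, sigma = 60.0, 6.0       # hours; assumed route statistics
    required = deterministic_deadline(mu, sigma, 0.95)
    print(required, required <= 72.0)
    ```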

  2. A chance-constrained stochastic approach to intermodal container routing problems

    PubMed Central

    Zhao, Yi; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance constraints on the optimal solution and total cost. PMID:29438389

  3. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
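
    For readers unfamiliar with the analyses named in this and the following easyCBM records, here is a minimal sketch of split-half reliability with the Spearman-Brown correction; the easyCBM data are not public here, so the item responses are simulated from a hypothetical ability model.

        import numpy as np

        # Split-half reliability on hypothetical dichotomous item data:
        # rows are students, columns are items (1 = correct, 0 = incorrect).
        rng = np.random.default_rng(0)
        ability = rng.normal(size=(200, 1))
        difficulty = rng.normal(size=(1, 20))
        p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
        items = (rng.random((200, 20)) < p_correct).astype(int)

        odd = items[:, 0::2].sum(axis=1)    # score on odd-numbered items
        even = items[:, 1::2].sum(axis=1)   # score on even-numbered items
        r_half = np.corrcoef(odd, even)[0, 1]

        # Spearman-Brown steps the half-test correlation up to full length.
        r_full = 2 * r_half / (1 + r_half)
        print(f"half-test r = {r_half:.3f}, split-half reliability = {r_full:.3f}")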

  4. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  5. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  6. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  7. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  8. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  9. Intra-rater reliability of the multiple single-leg hop-stabilization test and relationships with age, leg dominance and training.

    PubMed

    Sawle, Leanne; Freeman, Jennifer; Marsden, Jonathan

    2017-04-01

    Balance is a complex construct, affected by multiple components such as strength and co-ordination. However, whilst assessing an athlete's dynamic balance is an important part of clinical examination, there is no gold standard measure. The multiple single-leg hop-stabilization test is a functional test which may offer a method of evaluating the dynamic attributes of balance, but it needs to show adequate intra-tester reliability. The purpose of this study was to assess the intra-rater reliability of a dynamic balance test, the multiple single-leg hop-stabilization test, on the dominant and non-dominant legs. Intra-rater reliability study. Fifteen active participants were tested twice with a 10-minute break between tests. The outcome measure was the multiple single-leg hop-stabilization test score, based on a clinically assessed numerical scoring system. Results were analysed using an intraclass correlation coefficient (ICC(2,1)) and Bland-Altman plots. Regression analyses explored relationships between test scores, leg dominance, age and training (an alpha level of p = 0.05 was selected). ICCs for intra-rater reliability were 0.85 for both the dominant and non-dominant legs (confidence intervals = 0.62-0.95 and 0.61-0.95, respectively). Bland-Altman plots showed scores within two standard deviations. A significant correlation was observed between the dominant and non-dominant leg on balance scores (R² = 0.49, p < 0.05); better balance was associated with younger age for the non-dominant leg (R² = 0.28, p < 0.05) and the dominant leg (R² = 0.39, p < 0.05), and with a higher number of hours spent training for the non-dominant leg (R² = 0.37, p < 0.05). The multiple single-leg hop-stabilization test demonstrated strong intra-tester reliability with active participants. Younger participants who trained more had better balance scores. This test may be a useful measure for evaluating the dynamic attributes of balance. Level of evidence: 3.
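
    Since the record reports ICC(2,1) without giving the computation, here is a hedged sketch of the Shrout-Fleiss two-way random-effects, absolute-agreement, single-measure formula on hypothetical test-retest data (not the study's scores).

        import numpy as np

        def icc_2_1(scores):
            """ICC(2,1): two-way random effects, absolute agreement, single
            measure. scores: n subjects (rows) x k sessions (columns)."""
            n, k = scores.shape
            grand = scores.mean()
            rows = scores.mean(axis=1)
            cols = scores.mean(axis=0)
            msr = k * ((rows - grand) ** 2).sum() / (n - 1)   # subjects MS
            msc = n * ((cols - grand) ** 2).sum() / (k - 1)   # sessions MS
            sse = ((scores - rows[:, None] - cols[None, :] + grand) ** 2).sum()
            mse = sse / ((n - 1) * (k - 1))                   # residual MS
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        rng = np.random.default_rng(1)
        true_score = rng.normal(50, 10, size=(15, 1))         # 15 participants
        scores = true_score + rng.normal(0, 4, size=(15, 2))  # two sessions
        print(f"ICC(2,1) = {icc_2_1(scores):.2f}")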

  10. 20 CFR 220.14 - Weighing of evidence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... capacity evaluation is based upon functional objective tests with high validity and reliability; (2) The... consists of objective findings of exams that have poor reliability or validity; (7) The evidence consists...

  11. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Jennings, Guy; Lee, Peter L.

    1995-02-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format (HDF) datafile library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: multiple inheritance allows the code to be decomposed into independent software components, and exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  12. Stick-slip chaos in a mechanical oscillator with dry friction

    NASA Astrophysics Data System (ADS)

    Kousaka, Takuji; Asahara, Hiroyuki; Inaba, Naohiko

    2018-03-01

    This study analyzes a forced mechanical dynamical system with dry friction that can generate chaotic stick-slip vibrations. We find that the dynamics proposed by Yoshitake et al. [Trans. Jpn. Soc. Mech. Eng. C 61, 768 (1995)] can be expressed as a nonautonomous constraint differential equation owing to the static friction force. The object is constrained to the surface of a moving belt by a static friction force from when it sticks to the surface until the force on the object exceeds the maximal static friction force. We derive a 1D Poincaré return map from the constrained mechanical system, and prove numerically that this 1D map has an absolutely continuous invariant measure and a positive Lyapunov exponent, providing strong evidence for chaos.
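
    The positive Lyapunov exponent cited above is the standard numerical evidence for chaos in a 1D return map. The paper's map is not reproduced in this record, so the sketch below estimates the exponent for a stand-in 1D map (the logistic map, whose exact value ln 2 is known) using the usual orbit average of log|f'(x)|.

        import numpy as np

        # Lyapunov exponent of a 1D map as the orbit average of log|f'(x)|.
        # The logistic map at r = 4 stands in for the paper's Poincare map.
        def f(x):  return 4.0 * x * (1.0 - x)
        def df(x): return 4.0 - 8.0 * x

        x = 0.3
        for _ in range(1_000):              # discard the transient
            x = f(x)

        n, acc = 100_000, 0.0
        for _ in range(n):
            acc += np.log(abs(df(x)))
            x = f(x)
        print(f"Lyapunov exponent ~ {acc / n:.4f} (exact: ln 2 ~ 0.6931)")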

  13. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    PubMed

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
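
    As a hedged, real-valued simplification of the covariance-fitting model described above (the paper's arrays, grids, and solver are not reproduced), the sample covariance is vectorized and fit as vec(R) ~ (A ⊙ A) p, with ⊙ the column-wise Khatri-Rao product and p the nonnegative source powers; nonnegative least squares stands in here for the paper's sparse-constrained reconstruction.

        import numpy as np
        from scipy.optimize import nnls

        m, grid = 8, 40                       # sensors and power-grid bins
        pos = np.arange(m, dtype=float)
        steer = np.linspace(0.1, 3.0, grid)   # toy steering parameters
        A = np.cos(np.outer(pos, steer))      # toy real "steering" dictionary

        p_true = np.zeros(grid)
        p_true[[10, 28]] = [1.0, 0.6]         # two hypothetical sources
        R = A @ np.diag(p_true) @ A.T + 0.01 * np.eye(m)  # covariance stand-in

        # Column-wise Khatri-Rao product: column k is the outer product
        # of A[:, k] with itself, flattened.
        kr = np.einsum('ik,jk->ijk', A, A).reshape(m * m, grid)
        p_hat, _ = nnls(kr, R.reshape(-1))    # nonnegativity promotes sparsity
        print(np.flatnonzero(p_hat > 0.1 * p_hat.max()))   # recovered bins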

  14. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
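
    The LISP/Flavors code itself is not part of this record, so the following is a small Python stand-in for the object-oriented fault-tree idea it describes: each node object stores its own reliability data and evaluates itself, with gate probabilities computed under an independence assumption. The example tree is hypothetical.

        # Object-oriented fault-tree evaluation (illustrative sketch).
        class BasicEvent:
            def __init__(self, name, prob):
                self.name, self.prob = name, prob
            def probability(self):
                return self.prob

        class AndGate:                       # fails only if all children fail
            def __init__(self, name, children):
                self.name, self.children = name, children
            def probability(self):
                p = 1.0
                for c in self.children:
                    p *= c.probability()
                return p

        class OrGate:                        # fails if any child fails
            def __init__(self, name, children):
                self.name, self.children = name, children
            def probability(self):
                q = 1.0
                for c in self.children:
                    q *= 1.0 - c.probability()
                return 1.0 - q

        # Hypothetical top event: both pumps fail, or the valve fails.
        tree = OrGate("system", [
            AndGate("pumps", [BasicEvent("pump_a", 0.01),
                              BasicEvent("pump_b", 0.01)]),
            BasicEvent("valve", 0.001),
        ])
        print(f"top-event probability = {tree.probability():.6f}")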

  15. Adaptive Precoded MIMO for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Nabilla, A. F.; Tiong, T. C.

    2015-04-01

    Long-Term Evolution (LTE) and Long-Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates over high spectrum bandwidth and high spectral efficiencies. Technically, precoding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-plus-noise ratio (SINR) across the multiple receiver terminals. However, fixed precoding is not reliable enough to fully utilize the information transfer rate under varying channel conditions and bandwidth sizes. Thus, adaptive precoding is proposed. It applies precoding matrix indicator (PMI) channel-state feedback, making it possible to change the precoding codebook accordingly, thus achieving a higher data rate than fixed precoding.

  16. Evaluation of Fear Using Nonintrusive Measurement of Multimodal Sensors

    PubMed Central

    Choi, Jong-Suk; Bang, Jae Won; Heo, Hwan; Park, Kang Ryoung

    2015-01-01

    Most previous research into emotion recognition used either a single modality or multiple modalities of physiological signal. However, the former method allows for limited enhancement of accuracy, and the latter has the disadvantages that its performance can be affected by head or body movements. Further, the latter causes inconvenience to the user due to the sensors attached to the body. Among various emotions, the accurate evaluation of fear is crucial in many applications, such as criminal psychology, intelligent surveillance systems and the objective evaluation of horror movies. Therefore, we propose a new method for evaluating fear based on nonintrusive measurements obtained using multiple sensors. Experimental results based on the t-test, the effect size and the sum of all of the correlation values with other modalities showed that facial temperature and subjective evaluation are more reliable than electroencephalogram (EEG) and eye blinking rate for the evaluation of fear. PMID:26205268

  17. VizieR Online Data Catalog: Massive stars in 30 Dor (Schneider+, 2018)

    NASA Astrophysics Data System (ADS)

    Schneider, F. R. N.; Sana, H.; Evans, C. J.; Bestenlehner, J. M.; Castro, N.; Fossati, L.; Grafener, G.; Langer, N.; Ramirez-Agudelo, O. H.; Sabin-Sanjulian, C.; Simon-Diaz, S.; Tramper, F.; Crowther, P. A.; de Koter, A.; de Mink, S. E.; Dufton, P. L.; Garcia, M.; Gieles, M.; Henault-Brunet, V.; Herrero, A.; Izzard, R. G.; Kalari, V.; Lennon, D. J.; Apellaniz, J. M.; Markova, N.; Najarro, F.; Podsiadlowski, P.; Puls, J.; Taylor, W. D.; van Loon, J. T.; Vink, J. S.; Norman, C.

    2018-02-01

    Through the use of the Fibre Large Array Multi Element Spectrograph (FLAMES) on the Very Large Telescope (VLT), the VLT-FLAMES Tarantula Survey (VFTS) has obtained optical spectra of ~800 massive stars in 30 Dor, avoiding the core region of the dense star cluster R136 because of difficulties with crowding. Repeated observations at multiple epochs allow determination of the orbital motion of potentially binary objects. For a sample of 452 apparently single stars, robust stellar parameters, such as effective temperatures, luminosities, surface gravities, and projected rotational velocities, are determined by modeling the observed spectra. Composite spectra of visual multiple systems and spectroscopic binaries are not considered here because their parameters cannot be reliably inferred from the VFTS data. To match the derived atmospheric parameters of the apparently single VFTS stars to stellar evolutionary models, we use the Bayesian code Bonnsai. (2 data files).

  18. Thermodynamics of quantum systems with multiple conserved quantities

    PubMed Central

    Guryanova, Yelena; Popescu, Sandu; Short, Anthony J.; Silva, Ralph; Skrzypczyk, Paul

    2016-01-01

    Recently, there has been much progress in understanding the thermodynamics of quantum systems, even for small individual systems. Most of this work has focused on the standard case where energy is the only conserved quantity. Here we consider a generalization of this work to deal with multiple conserved quantities. Each conserved quantity, which, importantly, need not commute with the rest, can be extracted and stored in its own battery. Unlike the standard case, in which the amount of extractable energy is constrained, here there is no limit on how much of any individual conserved quantity can be extracted. However, other conserved quantities must be supplied, and the second law constrains the combination of extractable quantities and the trade-offs between them. We present explicit protocols that allow us to perform arbitrarily good trade-offs and extract arbitrarily good combinations of conserved quantities from individual quantum systems. PMID:27384384

  19. Neural Bases Of Food Perception: Coordinate-Based Meta-Analyses Of Neuroimaging Studies In Multiple Modalities

    PubMed Central

    Huerta, Claudia I; Sarkar, Pooja R; Duong, Timothy Q.; Laird, Angela R; Fox, Peter T

    2013-01-01

    Objective: The purpose of this study was to compare the results of the three food-cue paradigms most commonly used for functional neuroimaging studies to determine: (i) commonalities and differences in the neural response patterns by paradigm; and (ii) the relative robustness and reliability of responses to each paradigm. Design and Methods: Functional magnetic resonance imaging (fMRI) studies using standardized stereotactic coordinates to report brain responses to food cues were identified using on-line databases. Studies were grouped by food-cue modality as: (i) tastes (8 studies); (ii) odors (8 studies); and (iii) images (11 studies). Activation likelihood estimation (ALE) was used to identify statistically reliable regional responses within each stimulation paradigm. Results: Brain response distributions were distinctly different for the three stimulation modalities, corresponding to known differences in the location of the respective primary and associative cortices. Visual stimulation induced the most robust and extensive responses. The left anterior insula was the only brain region reliably responding to all three stimulus categories. Conclusions: These findings suggest the visual food-cue paradigm as a promising candidate for imaging studies addressing the neural substrate of therapeutic interventions. PMID:24174404

  20. Design optimization of a fuzzy distributed generation (DG) system with multiple renewable energy sources

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2012-09-01

    The global rise in energy demand brings major obstacles to many energy organizations in providing adequate energy supply. Hence, many techniques to generate cost-effective, reliable and environmentally friendly alternative energy sources are being explored. One such method is the integration of photovoltaic cells, wind turbine generators and fuel-based generators, together with storage batteries. This sort of power system is known as a distributed generation (DG) power system. However, the application of DG power systems raises certain issues such as cost effectiveness, environmental impact and reliability. The modelling as well as the optimization of this DG power system was successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions and maximize reliability (a multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system which are dependent on weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). Analysis of the optimized results was then carried out.

  1. Development of microcomputer-based mental acuity tests for repeated-measures studies

    NASA Technical Reports Server (NTRS)

    Kennedy, R. S.; Wilkes, R. L.; Baltzley, D. R.; Fowlkes, J. E.

    1990-01-01

    The purpose of this report is to detail the development of the Automated Performance Test System (APTS), a computer battery of mental acuity tests that can be used to assess human performance in the presence of toxic elements and environmental stressors. There were four objectives in the development of APTS. First, the technical requirements for developing APTS followed the tenets of the classical theory of mental tests, which requires that tests meet set criteria such as stability and reliability (the lack of which constitutes insensitivity). To be employed in the study of the exotic conditions of protracted space flight, a battery with multiple parallel forms is required. The second criterion was for the battery to have factorial multidimensionality, and the third was for the battery to be sensitive to factors known to compromise performance. A fourth objective was for the tests to converge on the abilities entailed in mission specialist tasks. A series of studies is reported in which candidate APTS tests were subjected to an examination of their psychometric properties for repeated-measures testing. From this work, tests were selected that possessed the requisite metric properties of stability, reliability, and factor richness. In addition, studies are reported which demonstrate the predictive validity of the tests against holistic measures of intelligence.

  2. Automated single-trial assessment of laser-evoked potentials as an objective functional diagnostic tool for the nociceptive system.

    PubMed

    Hatem, S M; Hu, L; Ragé, M; Gierasimowicz, A; Plaghki, L; Bouhassira, D; Attal, N; Iannetti, G D; Mouraux, A

    2012-12-01

    To assess the clinical usefulness of an automated analysis of event-related potentials (ERPs), nociceptive laser-evoked potentials (LEPs) and non-nociceptive somatosensory electrically-evoked potentials (SEPs) were recorded in 37 patients with syringomyelia and 21 controls. LEP and SEP peak amplitudes and latencies were estimated using a single-trial automated approach based on time-frequency wavelet filtering and multiple linear regression, as well as a conventional approach based on visual inspection. The amplitudes and latencies of normal and abnormal LEP and SEP peaks were identified reliably using both approaches, with similar sensitivity and specificity. Because the automated approach provided an unbiased solution to account for average waveforms where no ERP could be identified visually, it revealed significant differences between patients and controls that were not revealed using the visual approach. The automated analysis reliably and objectively characterized LEP and SEP waveforms in patients, and the automated single-trial analysis can be used to characterize normal and abnormal ERPs with a similar sensitivity and specificity as visual inspection. While this does not justify its use in a routine clinical setting, the technique could be useful to avoid observer-dependent biases in clinical research.

  3. Landscape Simplification Constrains Adult Size in a Native Ground-Nesting Bee

    PubMed Central

    Renauld, Miles; Hutchinson, Alena; Loeb, Gregory; Poveda, Katja; Connelly, Heather

    2016-01-01

    Bees provide critical pollination services to 87% of angiosperm plants; however, the reliability of these services may become threatened as bee populations decline. Agricultural intensification, resulting in the simplification of environments at the landscape scale, greatly changes the quality and quantity of resources available for female bees to provision their offspring. These changes may alter or constrain the tradeoffs in maternal investment allocation between offspring size, number and sex required to maximize fitness. Here we investigate the relationship between landscape scale agricultural intensification and the size and number of individuals within a wild ground nesting bee species, Andrena nasonii. We show that agricultural intensification at the landscape scale was associated with a reduction in the average size of field collected A. nasonii adults in highly agricultural landscapes but not with the number of individuals collected. Small females carried significantly smaller (40%) pollen loads than large females, which is likely to have consequences for subsequent offspring production and fitness. Thus, landscape simplification is likely to constrain allocation of resources to offspring through a reduction in the overall quantity, quality and distribution of resources. PMID:26943127

  4. Landscape Simplification Constrains Adult Size in a Native Ground-Nesting Bee.

    PubMed

    Renauld, Miles; Hutchinson, Alena; Loeb, Gregory; Poveda, Katja; Connelly, Heather

    2016-01-01

    Bees provide critical pollination services to 87% of angiosperm plants; however, the reliability of these services may become threatened as bee populations decline. Agricultural intensification, resulting in the simplification of environments at the landscape scale, greatly changes the quality and quantity of resources available for female bees to provision their offspring. These changes may alter or constrain the tradeoffs in maternal investment allocation between offspring size, number and sex required to maximize fitness. Here we investigate the relationship between landscape scale agricultural intensification and the size and number of individuals within a wild ground nesting bee species, Andrena nasonii. We show that agricultural intensification at the landscape scale was associated with a reduction in the average size of field collected A. nasonii adults in highly agricultural landscapes but not with the number of individuals collected. Small females carried significantly smaller (40%) pollen loads than large females, which is likely to have consequences for subsequent offspring production and fitness. Thus, landscape simplification is likely to constrain allocation of resources to offspring through a reduction in the overall quantity, quality and distribution of resources.

  5. An Innovative Excel Application to Improve Exam Reliability in Marketing Courses

    ERIC Educational Resources Information Center

    Keller, Christopher M.; Kros, John F.

    2011-01-01

    Measures of survey reliability are commonly addressed in marketing courses. One statistic of reliability is "Cronbach's alpha." This paper presents an application of survey reliability as a reflexive application of multiple-choice exam validation. The application provides an interactive decision support system that incorporates survey item…
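
    The record names Cronbach's alpha without showing the computation, so here is a minimal sketch of the statistic the Excel application automates, run on hypothetical 0/1 exam responses (rows are students, columns are items).

        import numpy as np

        def cronbach_alpha(items):
            """alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(2)
        ability = rng.normal(size=(150, 1))
        difficulty = rng.normal(size=(1, 25))
        p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
        exam = (rng.random((150, 25)) < p_correct).astype(int)
        print(f"Cronbach's alpha = {cronbach_alpha(exam):.3f}")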

  6. Meta-Analysis of Scale Reliability Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2013-01-01

    A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…

  7. Imaging reconstruction for infrared interferometry: first images of YSOs environment

    NASA Astrophysics Data System (ADS)

    Renard, S.; Malbet, F.; Thiébaut, E.; Berger, J.-P.

    2008-07-01

    The study of protoplanetary disks, where planets are believed to form, will certainly allow the formation of our Solar System to be understood. To conduct observations of these objects at the milli-arcsecond scale, infrared interferometry provides the right performance for T Tauri, FU Ori or Herbig Ae/Be stars. However, the only information obtained so far consists of scarce visibility measurements which are directly tested against models. With the advent of recent interferometers, one can foresee obtaining images reconstructed independently of the models. In fact, several interferometers, including IOTA and AMBER on the VLTI, already provide the possibility of recombining three telescopes at once and thus of obtaining the data necessary to reconstruct images. In this paper, we describe the use of MIRA, an image reconstruction algorithm developed for optical interferometry data (squared visibilities and closure phases) by E. Thiébaut. We also foresee using the spectral information given by AMBER data to constrain the reconstructed images even better. We describe the use of MIRA to reconstruct images of young stellar objects from actual data, in particular the multiple system GW Orionis (IOTA, 2004), and discuss the difficulties encountered.

  8. LED induced autofluorescence (LIAF) imager with eight multi-filters for oral cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Wei; Cheng, Nai-Lun; Tsai, Ming-Hsui; Chiou, Jin-Chern; Mang, Ou-Yang

    2016-03-01

    Oral cancer is a serious and growing problem in many developing and developed countries. Simple oral visual screening by clinicians could prevent 37,000 oral cancer deaths annually worldwide. However, the conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis, and it may delay hospital treatment for oral cancer patients or allow the cancer to progress out of control to a late stage. Therefore, a device for oral cancer detection was developed for early diagnosis and treatment. A portable LED-induced autofluorescence (LIAF) imager was developed by our group. It contains multiple wavelengths of LED excitation light and a rotary filter ring of eight channels to capture ex-vivo oral tissue autofluorescence images. The advantages of the LIAF imager compared to other devices for oral cancer diagnosis are that it has an L-shaped probe for fixing the object distance, blocking the effect of ambient light, and observing blind spots in the deep part between the gums (gingiva) and the lining of the mouth. Besides, the multiple LED excitation light sources can induce multiple autofluorescence signals, and the LIAF imager with its eight-channel rotary filter ring can detect spectral images of multiple narrow bands. The prototype of the portable LIAF imager has been applied in clinical trials for some cases in Taiwan, and the clinical trial images with specific excitation show significant differences between normal and cancerous oral tissue in these cases.

  9. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  10. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
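
    A hedged sketch of the modeling step described above (the states, counts, and horizon are hypothetical, not the experiment's): transition probabilities of an error-state Markov chain are estimated from observed counts, and the chain is then iterated to get the probability of reaching the failure state within a mission of N control frames.

        import numpy as np

        # States: 0 = versions agree, 1 = one version in error,
        # 2 = system failure (absorbing). Counts are hypothetical
        # observations from simulated process-control testing.
        counts = np.array([[9900.0,  99.0, 1.0],
                           [ 900.0,  99.0, 1.0],
                           [   0.0,   0.0, 1.0]])
        P = counts / counts.sum(axis=1, keepdims=True)  # estimated transitions

        N = 1_000                        # mission length in control frames
        p = np.array([1.0, 0.0, 0.0])    # start with all versions agreeing
        for _ in range(N):
            p = p @ P
        print(f"P(failure within {N} frames) = {p[2]:.4f}")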

  11. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    NASA Astrophysics Data System (ADS)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers the question of populating the relevant SIEM nodes based on calculations of objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. This technique is also intended for establishing real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events will be realized and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.

  12. Estimating the Reliability of Electronic Parts in High Radiation Fields

    NASA Technical Reports Server (NTRS)

    Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd

    2008-01-01

    Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design and reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.

  13. To Pass or Not to Pass: Modeling the Movement and Affordance Dynamics of a Pick and Place Task

    PubMed Central

    Lamb, Maurice; Kallen, Rachel W.; Harrison, Steven J.; Di Bernardo, Mario; Minai, Ali; Richardson, Michael J.

    2017-01-01

    Humans commonly engage in tasks that require or are made more efficient by coordinating with other humans. In this paper we introduce a task dynamics approach for modeling multi-agent interaction and decision making in a pick and place task where an agent must move an object from one location to another and decide whether to act alone or with a partner. Our aims were to identify and model (1) the affordance related dynamics that define an actor's choice to move an object alone or to pass it to their co-actor and (2) the trajectory dynamics of an actor's hand movements when moving to grasp, relocate, or pass the object. Using a virtual reality pick and place task, we demonstrate that both the decision to pass or not pass an object and the movement trajectories of the participants can be characterized in terms of a behavioral dynamics model. Simulations suggest that the proposed behavioral dynamics model exhibits features observed in human participants including hysteresis in decision making, non-straight line trajectories, and non-constant velocity profiles. The proposed model highlights how the same low-dimensional behavioral dynamics can operate to constrain multiple (and often nested) levels of human activity and suggests that knowledge of what, when, where and how to move or act during pick and place behavior may be defined by these low dimensional task dynamics and, thus, can emerge spontaneously and in real-time with little a priori planning. PMID:28701975

  14. Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory

    PubMed Central

    Hollingworth, Andrew; Beck, Valerie M.

    2016-01-01

    Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681

  15. The in situ transverse lamina strength of composite laminates

    NASA Technical Reports Server (NTRS)

    Flaggs, D. L.

    1983-01-01

    The objective of the work reported in this presentation is to determine the in situ transverse strength of a lamina within a composite laminate. From a fracture mechanics standpoint, in situ strength may be viewed as constrained cracking that has been shown to be a function of both lamina thickness and the stiffness of adjacent plies that serve to constrain the cracking process. From an engineering point of view, however, constrained cracking can be perceived as an apparent increase in lamina strength. With the growing need to design more highly loaded composite structures, the concept of in situ strength may prove to be a viable means of increasing the design allowables of current and future composite material systems. A simplified one-dimensional analytical model is presented that is used to predict the strain at the onset of transverse cracking. While it is accurate only for the most constrained cases, the model is important in that the predicted failure strain is seen to be a function of a lamina's thickness d and of the extensional stiffness bE_θ of the adjacent laminae that constrain crack propagation in the 90 deg laminae.

  16. Generating Compliant Motion of Objects with an Articulated Hand

    DTIC Science & Technology

    1985-06-01

    We consider the motions of rigid objects as the solutions to a constraint problem. We will examine the task of manipulation in the context of...describe the motion of a rigid object is equivalent to specifying sufficient constraint equations on these unknowns such that they are uniquely...assumption of rigidity. When a rigid object is constrained by a set of contacts, its motion must be consistent with those of the contacts, i.e. its

  17. A Sequential Phase 2b Trial Design for Evaluating Vaccine Efficacy and Immune Correlates for Multiple HIV Vaccine Regimens

    PubMed Central

    Gilbert, Peter B.; Grove, Douglas; Gabriel, Erin; Huang, Ying; Gray, Glenda; Hammer, Scott M.; Buchbinder, Susan P.; Kublin, James; Corey, Lawrence; Self, Steven G.

    2012-01-01

    Five preventative HIV vaccine efficacy trials have been conducted over the last 12 years, all of which evaluated vaccine efficacy (VE) to prevent HIV infection for a single vaccine regimen versus placebo. Now that one of these trials has supported partial VE of a prime-boost vaccine regimen, there is interest in conducting efficacy trials that simultaneously evaluate multiple prime-boost vaccine regimens against a shared placebo group in the same geographic region, for accelerating the pace of vaccine development. This article proposes such a design, which has main objectives (1) to evaluate VE of each regimen versus placebo against HIV exposures occurring near the time of the immunizations; (2) to evaluate durability of VE for each vaccine regimen showing reliable evidence for positive VE; (3) to expeditiously evaluate the immune correlates of protection if any vaccine regimen shows reliable evidence for positive VE; and (4) to compare VE among the vaccine regimens. The design uses sequential monitoring for the events of vaccine harm, non-efficacy, and high efficacy, selected to weed out poor vaccines as rapidly as possible while guarding against prematurely weeding out a vaccine that does not confer efficacy until most of the immunizations are received. The evaluation of the design shows that testing multiple vaccine regimens is important for providing a well-powered assessment of the correlation of vaccine-induced immune responses with HIV infection, and is critically important for providing a reasonably powered assessment of the value of identified correlates as surrogate endpoints for HIV infection. PMID:23181167

  18. Clinical instruments: reliability and validity critical appraisal.

    PubMed

    Brink, Yolandi; Louw, Quinette A

    2012-12-01

    RATIONALE, AIM AND OBJECTIVES: There is a lack of health care practitioners using objective clinical tools with sound psychometric properties. There is also a need for researchers to improve their reporting of the validity and reliability results of these clinical tools. Therefore, to promote the use of valid and reliable tools or tests for clinical evaluation, this paper reports on the development of a critical appraisal tool to assess the psychometric properties of objective clinical tools. A five-step process was followed to develop the new critical appraisal tool: (1) preliminary conceptual decisions; (2) defining key concepts; (3) item generation; (4) assessment of face validity; and (5) formulation of the final tool. The new critical appraisal tool consists of 13 items, of which five items relate to both validity and reliability studies, four items to validity studies only and four items to reliability studies. The 13 items could be scored as 'yes', 'no' or 'not applicable'. This critical appraisal tool will aid both the health care practitioner to critically appraise the relevant literature and researchers to improve the quality of reporting of the validity and reliability of objective clinical tools.

  19. Comparison in the quality of distractors in three and four options type of multiple choice questions

    PubMed Central

    Rahma, Nourelhouda A A; Shamad, Mahdi M A; Idris, Muawia E A; Elfaki, Omer Abdelgadir; Elfakey, Walyedldin E M; Salih, Karimeldin M A

    2017-01-01

    Introduction: The number of distractors needed for high-quality multiple choice questions (MCQs) is determined by many factors. These include, firstly, whether English is the examinees' mother tongue or a foreign language; secondly, whether the instructors who construct the questions are experts or not; and thirdly, the time spent on constructing the options. It has been observed by Tarrant et al. that more time is often spent on constructing questions than on tailoring sound, reliable, and valid distractors. Objectives: Firstly, to investigate the effects of reducing the number of options on the psychometric properties of the item. Secondly, to determine the frequency of functioning distractors among three- and four-option MCQs in the examination of the dermatology course at the University of Bahri, College of Medicine. Materials and methods: This is an experimental study performed by means of a dermatology exam of MCQ type. Forty MCQs, each with one correct answer, were constructed. Two sets of this exam paper were prepared: in the first, four options were given, including one key answer and three distractors; in the second set, one of the three distractors was deleted at random, and the sequence of the questions was kept in the same order. Any distractor chosen by less than 5% of the students was regarded as non-functioning. The Kuder-Richardson Formula 20 (KR-20) measures the internal consistency and reliability of an examination, with an acceptable range of 0.8–1.0. A chi-square test was used to compare the distractors in the two exams. Results: A significant difference was observed in the discrimination and difficulty indexes for the two sets of MCQs. More distractors were non-functional in set one (four options), but it was slightly more reliable. The reliability (KR-20) was slightly higher for set one (four options). The average marks for the three- and four-option versions were 34.163 and 33.140, respectively. Conclusion: Compared to set 1 (four options), set 2 (three options) was more discriminating and associated with a lower difficulty index, but its reliability was lower. PMID:28442942
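
    To make the two statistics above concrete, here is a hedged sketch (with hypothetical responses, not the study's data) of the 5% distractor-functioning rule and the KR-20 internal-consistency coefficient.

        import numpy as np

        # Distractor functioning on one item: options chosen by 120 students.
        choices = np.array(list("A" * 70 + "B" * 30 + "C" * 16 + "D" * 4))
        for option in "ABCD":
            share = (choices == option).mean()
            flag = "  <- non-functioning (< 5%)" if share < 0.05 else ""
            print(f"option {option}: {share:.1%}{flag}")

        def kr20(scores):
            """KR-20 = k/(k-1) * (1 - sum(p*q) / total-score variance)."""
            k = scores.shape[1]
            p = scores.mean(axis=0)          # proportion correct per item
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

        rng = np.random.default_rng(3)
        theta = rng.normal(size=(120, 1))
        beta = rng.normal(size=(1, 40))
        scores = (rng.random((120, 40)) < 1 / (1 + np.exp(-(theta - beta)))).astype(int)
        print(f"KR-20 = {kr20(scores):.3f}")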

  20. Language matters: thirteen-month-olds understand that the language a speaker uses constrains conventionality.

    PubMed

    Scott, Jessica C; Henderson, Annette M E

    2013-11-01

    Object labels are valuable communicative tools because their meanings are shared among the members of a particular linguistic community. The current research was conducted to investigate whether 13-month-old infants appreciate that object labels should not be generalized across individuals who have been shown to speak different languages. Using a visual habituation paradigm, Experiment 1 tested whether infants would generalize a new object label that was taught to them by a speaker of a foreign language to a speaker from the infant's own linguistic group. The results suggest that infants do not expect 2 individuals who have been shown to speak different languages to use the same label to refer to the same object. The results of Experiment 2 reveal that infants do not generalize a new object label that was taught to them by a speaker of their native language to an individual who had been shown to speak a foreign language. These findings offer the first evidence that by the end of the 1st year of life, infants are sensitive to the fact that the conventional nature of language is constrained by the language that a person has been shown to speak.
