Sample records for solving functional reliability

  1. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we consider the problem of constrained redundancy allocation for a series system with interval-valued component reliabilities. To maximize overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients via a penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation, and elitism. As a special case, the corresponding problem in which the lower and upper bounds of the interval-valued component reliabilities coincide has also been solved. The model is illustrated with numerical examples, and the results of the series redundancy allocation problem with fixed component reliabilities are compared with existing results from the literature. Finally, sensitivity analyses are shown graphically to study the stability of the developed GA with respect to different GA parameters.
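
The penalty-plus-GA formulation can be sketched on a toy series system. The component reliabilities, costs, and budget below are illustrative assumptions (this is the fixed-reliability special case the abstract mentions, not the interval-valued formulation), with the GA operators the abstract names: tournament selection, uniform crossover, uniform mutation, and elitism.

```python
import random

# Hypothetical 4-stage series system: component reliabilities r,
# per-unit costs c, and a single cost budget (all numbers illustrative).
r = [0.80, 0.85, 0.90, 0.75]
c = [3.0, 4.0, 2.0, 5.0]
BUDGET = 40.0

def system_reliability(x):
    prod = 1.0
    for ri, xi in zip(r, x):
        prod *= 1.0 - (1.0 - ri) ** xi   # xi redundant units in parallel per stage
    return prod

def fitness(x):
    # Exterior penalty: subtract a large multiple of the constraint violation,
    # turning the constrained problem into an unconstrained one.
    violation = max(0.0, sum(ci * xi for ci, xi in zip(c, x)) - BUDGET)
    return system_reliability(x) - 10.0 * violation

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

def evolve(generations=200, pop_size=30, pm=0.1):
    pop = [[random.randint(1, 5) for _ in r] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)                      # elitism
        nxt = [best[:]]
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            child = [g1 if random.random() < 0.5 else g2  # uniform crossover
                     for g1, g2 in zip(p1, p2)]
            child = [random.randint(1, 5) if random.random() < pm else g
                     for g in child]                      # uniform mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

Because any constraint violation costs far more fitness than reliability can supply, the surviving elite is driven into the feasible region.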

  2. Solving Nonlinear Fractional Differential Equation by Generalized Mittag-Leffler Function Method

    NASA Astrophysics Data System (ADS)

    Arafa, A. A. M.; Rida, S. Z.; Mohammadein, A. A.; Ali, H. M.

    2013-06-01

    In this paper, we use the Mittag-Leffler function method to solve some nonlinear fractional differential equations. A new solution is constructed as a power series. The fractional derivatives are described in the Caputo sense. To illustrate the reliability of the method, some examples are provided.

  3. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
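
The exterior penalty idea underlying the chosen algorithm can be sketched in a few lines: the constrained problem is replaced by a sequence of unconstrained minimizations with a growing penalty weight. The toy problem and the plain gradient descent below are illustrative assumptions, not BIGDOT's actual line search or storage scheme.

```python
# Toy problem: minimize f(x) = sum((x_i - 1)^2) subject to sum(x_i) <= 1.
# By symmetry the constrained optimum is x_i = 1/3 for N = 3.
N = 3

def penalty_grad(x, rp):
    """Gradient of f(x) + rp * max(0, sum(x) - 1)^2 (exterior quadratic penalty)."""
    violation = max(0.0, sum(x) - 1.0)
    return [2.0 * (xi - 1.0) + 2.0 * rp * violation for xi in x]

x = [0.0] * N
for rp in (1.0, 10.0, 100.0, 1000.0):       # increasing penalty weights
    step = 1.0 / (2.0 + 2.0 * rp * N)       # stable step for this quadratic
    for _ in range(2000):                   # unconstrained inner minimization
        g = penalty_grad(x, rp)
        x = [xi - step * gi for xi, gi in zip(x, g)]
```

Each inner loop is an unconstrained problem; memory stays minimal because only the current iterate and gradient are held, which is the property the report highlights.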

  4. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration, in which case the RSTF is retrieved and interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it improbable to solve for both STFs. To solve for the larger STF we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
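
A minimal numerical sketch of the time-domain matrix deconvolution described above: the small-event seismogram is arranged into a convolution (Toeplitz) matrix and the RSTF is recovered by least squares. The waveforms are synthetic and illustrative; because the large-event seismogram here is exactly the small-event seismogram convolved with a finite RSTF, the recovery is exact.

```python
import numpy as np

def conv_matrix(h, n_cols):
    """Full-convolution (Toeplitz) matrix A so that A @ x == np.convolve(h, x)."""
    n_rows = len(h) + n_cols - 1
    A = np.zeros((n_rows, n_cols))
    for j in range(n_cols):
        A[j:j + len(h), j] = h
    return A

# Illustrative Green's function and source time functions (not real data).
green = np.exp(-0.1 * np.arange(80)) * np.sin(0.7 * np.arange(80))
stf_small = np.array([0.5, 1.0, 0.5])            # finite-duration small-event STF
rstf_true = np.array([0.0, 1.0, 2.0, 1.0, 0.0])  # relative STF we hope to recover

# Synthetic seismograms: large event = small event convolved with the RSTF.
seis_small = np.convolve(green, stf_small)
seis_large = np.convolve(seis_small, rstf_true)

# Time-domain matrix deconvolution: least-squares solve of A @ rstf = seis_large.
A = conv_matrix(seis_small, len(rstf_true))
rstf, *_ = np.linalg.lstsq(A, seis_large, rcond=None)
```

When the small event's STF is not what the construction assumes, the same least-squares step produces the spiky, non-physical RSTFs the abstract describes.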

  5. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method can account for the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to handle conflicting and multiple design criteria. Design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while retaining the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  6. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M. H.; Hooshmandasl, M. R.

    2014-08-01

    In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. A new stochastic operational matrix for generalized hat functions on the finite interval [0, T] is derived. Using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations, which can be solved directly by forward substitution. The rate of convergence of the proposed method is analyzed and shown to be O(1/n^2). Further, to demonstrate the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
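
The computational payoff claimed above, that the operational-matrix approach reduces the integral equation to a lower triangular linear system solvable by forward substitution, can be sketched as follows. The matrix entries here are arbitrary stand-ins, not an actual hat-function operational matrix.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L by forward substitution, O(n^2)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

# Illustrative lower-triangular system of the kind the method produces
# (entries arbitrary; the diagonal shift keeps it well-conditioned).
L = np.tril(np.arange(1.0, 17.0).reshape(4, 4)) + 4.0 * np.eye(4)
b = np.array([1.0, 2.0, 3.0, 4.0])
x = forward_substitution(L, b)
```

No factorization or iteration is needed: each unknown falls out of one row, which is why the triangular structure matters.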

  7. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method is introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian-free Newton-generalized minimum residual (JFNGMRes) method with an adaptive preconditioner. A nonlinear PDE is converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions, without any linearization, discretization, or recourse to other methods. Finally, JFNGMRes yields the solution of the nonlinear algebraic system. To illustrate the reliability and efficiency of the proposed method, we solve several examples of the famous Fisher equation and compare our results with other methods.

  8. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo-Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  9. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computer numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the reliability allocation problem for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established to overcome these limitations. Properties of the new transformed functions are discussed by considering failure severity and failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
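
The allocation idea can be sketched abstractly: FMEA severity and occurrence scores pass through a transform function whose output sets each subsystem's share of the system failure rate. The cubic transform, the scores, and the system failure rate below are all hypothetical placeholders; the paper's actual cubic transformed functions and comparison criteria are more elaborate.

```python
# Hypothetical FMEA-style allocation sketch. Each subsystem has severity and
# occurrence scores on a 1-10 scale; a cubic transform (steeper than linear,
# gentler than exponential) maps the combined score to an allocation weight,
# and the system failure-rate budget is split in proportion.
SYSTEM_FAILURE_RATE = 1e-4   # failures per hour (illustrative)

subsystems = {
    "spindle": {"severity": 8, "occurrence": 6},
    "feed":    {"severity": 5, "occurrence": 7},
    "turret":  {"severity": 4, "occurrence": 3},
}

def cubic_transform(score, max_score=10.0):
    # Hypothetical cubic transform: higher severity/occurrence yields a
    # smaller failure-rate budget (a tighter reliability requirement).
    return (1.0 - score / max_score) ** 3

weights = {name: cubic_transform((s["severity"] + s["occurrence"]) / 2.0)
           for name, s in subsystems.items()}
total = sum(weights.values())
allocated = {name: SYSTEM_FAILURE_RATE * w / total
             for name, w in weights.items()}
```

The allocated rates sum to the system budget by construction, and the severe, frequently failing spindle receives the tightest budget.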

  10. The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff

    2004-01-01

    The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instrument assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…

  11. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions are usually implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based second-moment method, which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method that is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computation than the second-moment method but is highly efficient relative to other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
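
The contrast drawn above can be sketched numerically: the mean-based second-moment (mean value) step linearizes the response at the mean of the inputs to get the first two moments, which is exactly the point where AMV adds its correction. The response function and input distributions below are illustrative, and a Monte Carlo run serves as the reference.

```python
import math
import random

# Illustrative response function standing in for an implicit FE response:
# g(x1, x2) = x1**2 + 3*x2, with independent normal inputs (mu, sigma).
mu = (3.0, 1.0)
sigma = (0.1, 0.2)

def g(x1, x2):
    return x1 ** 2 + 3.0 * x2

# Mean value (second-moment) method: first-order Taylor expansion at the mean,
# with the gradient obtained by central finite differences.
eps = 1e-6
grad = ((g(mu[0] + eps, mu[1]) - g(mu[0] - eps, mu[1])) / (2 * eps),
        (g(mu[0], mu[1] + eps) - g(mu[0], mu[1] - eps)) / (2 * eps))
mv_mean = g(*mu)
mv_std = math.sqrt(sum((gi * si) ** 2 for gi, si in zip(grad, sigma)))

# Monte Carlo reference for comparison.
random.seed(0)
samples = [g(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_std = math.sqrt(sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1))
```

For this mildly nonlinear response the two moments agree closely; it is for strongly non-monotonic responses that the plain mean value estimate degrades and the AMV correction becomes necessary.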

  12. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo-Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  13. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failures can also be modeled.

  14. The functional implications of motor, cognitive, psychiatric, and social problem-solving states in Huntington's disease.

    PubMed

    Van Liew, Charles; Gluhm, Shea; Goldstein, Jody; Cronan, Terry A; Corey-Bloom, Jody

    2013-01-01

    Huntington's disease (HD) is a genetic, neurodegenerative disorder characterized by motor, cognitive, and psychiatric dysfunction. In HD, the inability to solve problems successfully affects not only disease coping, but also interpersonal relationships, judgment, and independent living. The aim of the present study was to examine social problem-solving (SPS) in well-characterized HD and at-risk (AR) individuals and to examine its unique and conjoint effects with motor, cognitive, and psychiatric states on functional ratings. Sixty-three participants, 31 HD and 32 gene-positive AR, were included in the study. Participants completed the Social Problem-Solving Inventory-Revised: Long (SPSI-R:L), a 52-item, reliable, standardized measure of SPS. Items are aggregated under five scales (Positive, Negative, and Rational Problem-Solving; Impulsivity/Carelessness and Avoidance Styles). Participants also completed the Unified Huntington's Disease Rating Scale functional, behavioral, and cognitive assessments, as well as additional neuropsychological examinations and the Symptom Checklist-90-Revised (SCL-90R). A structural equation model was used to examine the effects of motor, cognitive, psychiatric, and SPS states on functionality. The multifactor structural model fit well descriptively. Cognitive and motor states uniquely and significantly predicted function in HD; however, neither psychiatric nor SPS states did. SPS was, however, significantly related to motor, cognitive, and psychiatric states, suggesting that it may bridge the correlative gap between psychiatric and cognitive states in HD. SPS may be worth assessing in conjunction with the standard gamut of clinical assessments in HD. Suggestions for future research and implications for patients, families, caregivers, and clinicians are discussed.

  15. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    NASA Astrophysics Data System (ADS)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, Zhanqing

    2018-01-01

    Lightning protection of power systems focuses on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect of lightning on power grid reliability. As a result, the lightning protection design of general transmission lines is over-provisioned while that of key lines is insufficient. To solve this problem, an analysis method for lightning strikes on transmission lines aimed at power grid reliability is given. Full-wave process theory is used to analyze lightning back striking; the leader propagation model is used to describe the shielding failure process of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.

  16. Reliability and Validity of a Procedure to Measure Diagnostic Reasoning and Problem-Solving Skills Taught in Predoctoral Orthodontic Education.

    ERIC Educational Resources Information Center

    Albanese, Mark A.; Jacobs, Richard M.

    1990-01-01

    The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dentistry education…

  17. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    PubMed

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, namely log-sigmoid and Bernstein polynomial with unknown parameters, is used to solve nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in close agreement with, both the exact solution and the solution obtained by the Haar wavelet-quasilinearization technique, which attests to the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.
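
The core transformation described above, from differential equation to global error minimization over a parameterized trial solution, can be sketched on a toy linear ODE. For y' = -y with y(0) = 1, the trial solution below builds in the initial condition; because the residual is linear in the parameters here, the minimization reduces to least squares. The cited scheme needs GA plus an interior point algorithm precisely because its heat-transfer equations are nonlinear, so this is a simplified stand-in.

```python
import numpy as np

# Trial solution y(x) = 1 + x*(w0 + w1*x + w2*x^2): satisfies y(0) = 1 by
# construction. Minimize the squared ODE residual y' + y at collocation points.
xs = np.linspace(0.0, 1.0, 25)

# Residual y' + y = 1 + w0*(1 + x) + w1*(2x + x^2) + w2*(3x^2 + x^3):
# assemble the linear least-squares system A @ w = -1.
A = np.column_stack([1.0 + xs, 2.0 * xs + xs**2, 3.0 * xs**2 + xs**3])
rhs = -np.ones_like(xs)
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)

y = 1.0 + xs * (w[0] + w[1] * xs + w[2] * xs**2)
err = np.max(np.abs(y - np.exp(-xs)))   # compare with the exact solution e^{-x}
```

Driving the residual to zero at the collocation points drives the trial solution toward the true solution, which is the "global error minimization" reformulation in a nutshell.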

  18. UP TO 100,000 RELIABLE STRONG GRAVITATIONAL LENSES IN FUTURE DARK ENERGY EXPERIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serjeant, S.

    2014-09-20

    The Euclid space telescope will observe ~10^5 strong galaxy-galaxy gravitational lens events in its wide-field imaging survey over around half the sky, but identifying the gravitational lenses from their observed morphologies requires solving the difficult problem of reliably separating the lensed sources from contaminant populations, such as tidal tails, as well as presenting challenges for spectroscopic follow-up redshift campaigns. Here I present alternative selection techniques for strong gravitational lenses in both Euclid and the Square Kilometre Array, exploiting the strong magnification bias present in the steep end of the Hα luminosity function and the H I mass function. Around 10^3 strong lensing events are detectable with this method in the Euclid wide survey. While only ~1% of the total haul of Euclid lenses, this sample has ~100% reliability, known source redshifts, high signal-to-noise, and a magnification-based selection independent of assumptions of lens morphology. With the proposed Square Kilometre Array dark energy survey, the numbers of reliable strong gravitational lenses with source redshifts can reach 10^5.

  19. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  20. Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.

    PubMed

    Mall, Susmita; Chakraverty, S

    2016-08-01

    A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, in which the hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used to modify the network parameters and minimize the computed error function. The Van der Pol-Duffing and Duffing oscillator equations may not be solvable exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
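
The expansion-block idea can be sketched concretely: inputs are mapped through Hermite polynomial features and only the linear output weights are learned. Here a plain least-squares fit to sin(pi x) stands in for the paper's unsupervised backpropagation on the ODE residual; the target function and polynomial degree are illustrative choices.

```python
import numpy as np

# Functional-link expansion: replace a hidden layer with Hermite-polynomial
# features H_0(x)..H_5(x); only the output-layer weights are then learned.
xs = np.linspace(-1.0, 1.0, 200)
features = np.polynomial.hermite.hermvander(xs, 5)   # shape (200, 6)

target = np.sin(np.pi * xs)                          # illustrative target
weights, *_ = np.linalg.lstsq(features, target, rcond=None)
prediction = features @ weights
max_err = np.max(np.abs(prediction - target))
```

Because the only trainable part is a linear map on fixed features, the "network" is cheap to train and, once fitted, can be evaluated at any point in the domain, which is the black-box property the abstract highlights.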

  1. A new statistical framework to assess structural alignment quality using information compression

    PubMed Central

    Collier, James H.; Allison, Lloyd; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.

    2014-01-01

    Motivation: Progress in protein biology depends on the reliability of results from a handful of computational techniques, structural alignments being one. Recent reviews have highlighted substantial inconsistencies and differences between alignment results generated by the ever-growing stock of structural alignment programs. The lack of consensus on how the quality of structural alignments must be assessed has been identified as the main cause for the observed differences. Current methods assess structural alignment quality by constructing a scoring function that attempts to balance conflicting criteria, mainly alignment coverage and fidelity of structures under superposition. This traditional approach to measuring alignment quality, the subject of considerable literature, has failed to solve the problem. Further development along the same lines is unlikely to rectify the current deficiencies in the field. Results: This paper proposes a new statistical framework to assess structural alignment quality and significance based on lossless information compression. This is a radical departure from the traditional approach of formulating scoring functions. It links the structural alignment problem to the general class of statistical inductive inference problems, solved using the information-theoretic criterion of minimum message length. Based on this, we developed an efficient and reliable measure of structural alignment quality, I-value. The performance of I-value is demonstrated in comparison with a number of popular scoring functions, on a large collection of competing alignments. Our analysis shows that I-value provides a rigorous and reliable quantification of structural alignment quality, addressing a major gap in the field. Availability: http://lcb.infotech.monash.edu.au/I-value Contact: arun.konagurthu@monash.edu Supplementary information: Online supplementary data are available at http://lcb.infotech.monash.edu.au/I-value/suppl.html PMID:25161241

  2. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing it as one of integrating a system of stiff differential equations, utilizing concepts from singular perturbation theory. This paper evaluates the robustness and reliability of such a singular perturbation based SUMT algorithm on two structural optimization problems of widely different scale. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large-scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
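
The reformulation above, treating minimization of an ill-conditioned penalty function as integration of a stiff ODE, can be sketched on a one-dimensional toy: the gradient flow dx/dt = -P'(x) becomes stiff as the penalty weight grows, so an implicit (backward Euler) step, the simplest stiff integrator, is used. The toy problem and step size are illustrative; the paper's algorithm draws on singular perturbation theory rather than plain backward Euler.

```python
# Toy ill-conditioned exterior-penalty function from SUMT:
# minimize f(x) = (x - 2)^2 subject to x <= 1, penalized as
# P(x) = (x - 2)^2 + RP * max(0, x - 1)^2 with a large RP (stiffness source).
RP = 1000.0

def grad(x):
    return 2.0 * (x - 2.0) + 2.0 * RP * max(0.0, x - 1.0)

def hess(x):
    return 2.0 + (2.0 * RP if x > 1.0 else 0.0)

def implicit_euler_flow(x0, h=1.0, steps=50):
    """Integrate dx/dt = -P'(x) with backward Euler. Each step solves
    x_new = x_old - h*grad(x_new) by Newton's method; the implicit scheme
    stays stable at large h despite the stiff penalty term."""
    x = x0
    for _ in range(steps):
        x_new = x
        for _ in range(50):                  # Newton on the implicit equation
            r = x_new - x + h * grad(x_new)
            x_new -= r / (1.0 + h * hess(x_new))
        x = x_new
    return x

x_star = (2.0 + RP) / (1.0 + RP)   # analytic minimizer of P
x = implicit_euler_flow(0.0)
```

An explicit Euler step of the same size h = 1 would diverge violently here (the local curvature is about 2000), which is exactly the ill-conditioning the stiff-integration viewpoint tames.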

  3. Clinical characterization of 2D pressure field in human left ventricles

    NASA Astrophysics Data System (ADS)

    Borja, Maria; Rossini, Lorenzo; Martinez-Legazpi, Pablo; Benito, Yolanda; Alhama, Marta; Yotti, Raquel; Perez Del Villar, Candelas; Gonzalez-Mansilla, Ana; Barrio, Alicia; Fernandez-Aviles, Francisco; Bermejo, Javier; Khan, Andrew; Del Alamo, Juan Carlos

    2014-11-01

    The evaluation of left ventricle (LV) function in the clinical setting remains a challenge. The pressure gradient is a reliable and reproducible indicator of LV function. We obtain the 2D relative pressure field in the LV using in-vivo measurements obtained by processing Doppler-echocardiography images of healthy and dilated hearts. Exploiting mass conservation, we solve the Poisson pressure equation (PPE), dropping the time derivatives and viscous terms. The flow acceleration appears only in the boundary conditions, making our method weakly sensitive to the time resolution of in-vivo acquisitions. To ensure continuity with respect to the discrete operator and grid used, a potential flow correction is applied beforehand, which gives another Poisson equation. The new incompressible velocity field ensures that the compatibility condition for the PPE is satisfied. Both Poisson equations are efficiently solved on a Cartesian grid using a multi-grid method and an immersed boundary for the LV wall. The whole process is computationally inexpensive and could play a diagnostic role in the clinical assessment of LV function.
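
The pressure Poisson step can be sketched with a manufactured solution on a small Cartesian grid. Jacobi iteration on the 5-point Laplacian stands in for the multigrid solver and immersed boundary treatment used in the study; the grid size and iteration count are illustrative.

```python
import numpy as np

# Manufactured test: p(x, y) = sin(pi x) sin(pi y) on the unit square, so the
# source term is f = -2 pi^2 sin(pi x) sin(pi y) and p = 0 on the boundary.
n = 21
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
p_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)

# Jacobi iteration for the 5-point discrete Laplacian (multigrid converges far
# faster; Jacobi keeps the sketch short). Boundary values stay at zero.
p = np.zeros((n, n))
for _ in range(5000):
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                            p[1:-1, 2:] + p[1:-1, :-2] -
                            h**2 * f[1:-1, 1:-1])
```

The converged iterate matches the exact field to the O(h^2) accuracy of the 5-point stencil; in the clinical pipeline the right-hand side f comes from the Doppler-derived velocity field instead of a manufactured function.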

  4. System principles, mathematical models and methods to ensure high reliability of safety systems

    NASA Astrophysics Data System (ADS)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  5. A globally convergent LCL method for nonlinear optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedlander, M. P.; Saunders, M. A.; Mathematics and Computer Science

    2005-01-01

    For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form 'minimize an augmented Lagrangian function subject to linearized constraints.' Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
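
    For intuition about the augmented Lagrangian machinery these methods build on, here is a bare-bones augmented-Lagrangian loop on a toy equality-constrained problem. This is the plain method, not the linearly-constrained-subproblem structure of MINOS/SNOPT; the test problem, penalty parameter, step size, and iteration counts are illustrative assumptions.

```python
import numpy as np

def aug_lagrangian(grad_f, c, grad_c, x0, rho=10.0, outer=30, inner=500, lr=0.02):
    """Minimize f subject to c(x) = 0 via the augmented Lagrangian
    L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2, with gradient-descent inner solves."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = grad_f(x) + (lam + rho * c(x)) * grad_c(x)  # gradient of L_A
            x = x - lr * g
        lam += rho * c(x)  # first-order multiplier update
    return x, lam

# Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1 (solution (0.5, 0.5))
x, lam = aug_lagrangian(
    grad_f=lambda x: 2.0 * x,
    c=lambda x: x[0] + x[1] - 1.0,
    grad_c=lambda x: np.array([1.0, 1.0]),
    x0=[0.0, 0.0],
)
```

    The multiplier converges to lam = -1, the Lagrange multiplier of the toy problem; LCL methods replace the inner unconstrained solve with a linearly constrained one, which is what gives them fast local convergence.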

  6. Screening of Cognitive Impairment in Schizophrenia: Reliability, Sensitivity, and Specificity of the Repeatable Battery for the Assessment of Neuropsychological Status in a Spanish Sample.

    PubMed

    De la Torre, Gabriel G; Perez, Maria J; Ramallo, Miguel A; Randolph, Christopher; González-Villegas, Macarena Bernal

    2016-04-01

    In recent years, a number of studies focusing on the evaluation of neuropsychological deficits in individuals with schizophrenia have shown deficits that include several cognitive functions. Attention deficits as well as memory or executive function deficits are common in this kind of disorder together with sustained attention problems, working memory deficiencies, and problem-solving difficulties, among many others. Currently, the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is gaining special importance in the evaluation of the cognitive deficits associated with schizophrenia. In this article, we describe an RBANS screening in a sample of 88 Spanish patients diagnosed with schizophrenia. We also aimed to check the battery's reliability, sensitivity, and specificity in the studied sample. We performed a comparative study with 88 healthy participants. The results showed a reliability index value of α = .795 and an item value of α = .762. For total test reliability, we obtained an index value of α = .761 and an item value of α = .762. Sensitivity score was 87.5% and specificity 86.4%. RBANS obtained good reliability, sensitivity, and specificity scores and represents a good screening tool in detecting cognitive deficits associated with schizophrenia. © The Author(s) 2015.

  7. Reliability and Validity of a Procedure To Measure Diagnostic Reasoning and Problem-Solving Skills Taught in Predoctoral Orthodontic Education.

    ERIC Educational Resources Information Center

    Albanese, Mark A.; Jacobs, Richard M.

    Preliminary psychometric data assessing the reliability and validity of a method used to measure the diagnostic reasoning and problem-solving skills of predoctoral students in orthodontia are described. The measurement approach consisted of sets of patient demographic data and dental photos and x-rays, accompanied by a set of 33 multiple-choice…

  8. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs via an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem a hybrid algorithm, named HS-GA, is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Power System Reliability Assessment by Analysing Voltage Dips on the Blue Horizon Bay 22KV Overhead Line in the Nelson Mandela Bay Municipality

    NASA Astrophysics Data System (ADS)

    Lamour, B. G.; Harris, R. T.; Roberts, A. G.

    2010-06-01

    Power system reliability problems are very difficult to solve because power systems are complex, geographically widely distributed and influenced by numerous unexpected events. It is therefore imperative to employ the most efficient optimization methods in solving problems relating to the reliability of the power system. This paper presents a reliability analysis and study of the power interruptions resulting from severe power outages in the Nelson Mandela Bay Municipality (NMBM), South Africa, and includes an overview of the important factors influencing reliability and methods to improve it. The Blue Horizon Bay 22 kV overhead line, supplying a 6.6 kV residential sector, has been selected. It has been established that 70% of the outages recorded at the source originate on this feeder.

  10. FAST: a framework for simulation and analysis of large-scale protein-silicon biosensor circuits.

    PubMed

    Gu, Ming; Chakrabartty, Shantanu

    2013-08-01

    This paper presents a computer-aided design (CAD) framework for verification and reliability analysis of protein-silicon hybrid circuits used in biosensors. It is envisioned that, similar to integrated circuit (IC) CAD design tools, the proposed framework will be useful for system-level optimization of biosensors and for discovery of new sensing modalities without resorting to laborious fabrication and experimental procedures. The framework, referred to as FAST, analyzes protein-based circuits by solving inverse problems involving stochastic functional elements that admit non-linear relationships between different circuit variables. In this regard, FAST uses a factor-graph netlist as a user interface, and solving the inverse problem entails passing messages/signals between the internal nodes of the netlist. Stochastic analysis techniques like density evolution are used to understand the dynamics of the circuit and estimate the reliability of the solution. As an example, we present a complete design flow using FAST for synthesis, analysis and verification of our previously reported conductometric immunoassay that uses antibody-based circuits to implement forward error correction (FEC).

  11. Advances in Homology Protein Structure Modeling

    PubMed Central

    Xiang, Zhexin

    2007-01-01

    Homology modeling plays a central role in determining protein structure in the structural genomics project. The importance of homology modeling has been steadily increasing because of the large gap that exists between the overwhelming number of available protein sequences and experimentally solved protein structures, and also, more importantly, because of the increasing reliability and accuracy of the method. In fact, a protein sequence with over 30% identity to a known structure can often be predicted with an accuracy equivalent to a low-resolution X-ray structure. The recent advances in homology modeling, especially in detecting distant homologues, aligning sequences with template structures, modeling of loops and side chains, as well as detecting errors in a model, have contributed to reliable prediction of protein structure, which was not possible even several years ago. The ongoing efforts in solving protein structures, which can be time-consuming and often difficult, will continue to spur the development of a host of new computational methods that can fill in the gap and further contribute to understanding the relationship between protein structure and function. PMID:16787261

  12. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    NASA Astrophysics Data System (ADS)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which four configurations are possible. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the most reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the most reliable configuration depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that under different conditions, different configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
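
    The basic MTTF arithmetic behind such comparisons, under the usual constant-failure-rate (exponential) assumption, can be sketched as follows. The failure rate is an illustrative assumption, and the article's temperature-dependent failure rates are not modelled here.

```python
def mttf_series(rates):
    """Series system fails at the first component failure: MTTF = 1 / sum(lambda_i)."""
    return 1.0 / sum(rates)

def mttf_parallel_identical(rate, n):
    """Hot-standby parallel system of n identical units: MTTF = sum_{k=1..n} 1/(k*lambda)."""
    return sum(1.0 / (k * rate) for k in range(1, n + 1))

lam = 1e-4                                   # assumed failure rate, failures/hour
single = 1.0 / lam                           # 10,000 h for one switch
series2 = mttf_series([lam, lam])            #  5,000 h: series redundancy hurts MTTF
parallel2 = mttf_parallel_identical(lam, 2)  # 15,000 h: parallel redundancy helps
```

    The comparison flips once open-circuit versus short-circuit fault modes are considered, since a series pair survives a short while a parallel pair survives an open, which is exactly the trade-off the article quantifies.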

  13. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with those of the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
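
    The original Jaya update (move toward the best solution and away from the worst, with no algorithm-specific parameters) is compact enough to sketch directly. The TVAC and learning-phase extensions of LJaya-TVAC are omitted, and the population size, iteration budget, and test function below are assumptions.

```python
import numpy as np

def jaya(f, lo, hi, pop=20, iters=300, seed=0):
    """Minimize f over the box [lo, hi]^d with the basic Jaya update:
    x' = x + r1*(best - |x|) - r2*(worst - |x|), keeping x' only if it improves."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        best, worst = X[F.argmin()], X[F.argmax()]
        r1 = rng.random((pop, d))
        r2 = rng.random((pop, d))
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lo, hi)
        Fn = np.array([f(x) for x in Xn])
        better = Fn < F                    # greedy selection: never accept a worse point
        X[better], F[better] = Xn[better], Fn[better]
    return X[F.argmin()], F.min()

# Sphere function in 2-D: global minimum 0 at the origin
lo = np.array([-5.0, -5.0]); hi = np.array([5.0, 5.0])
xbest, fbest = jaya(lambda x: float(np.sum(x * x)), lo, hi)
```

    RRAPs add mixed-integer redundancy counts and reliability constraints on top of this loop, which is where the hybrid's TVAC coefficients and learning phase earn their keep.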

  14. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    PubMed

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We have experimentally identified the effect of image features (intensity and high-frequency content) on the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that yield reliable PRNU while giving comparatively lower weights to regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
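
    As a minimal illustration of the underlying Markov machinery (far smaller than the flight-control models HARP targets), the transient state probabilities of a two-component parallel system can be obtained by integrating the Kolmogorov forward equations. The failure rate, mission time, and step count are illustrative assumptions.

```python
import numpy as np

def transient_probs(Q, p0, t, steps=20000):
    """Explicit-Euler integration of dp/dt = p Q for a continuous-time Markov chain."""
    p = np.asarray(p0, float)
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p

lam = 1e-3                       # assumed per-hour failure rate of each component
# States: 0 = both up, 1 = one up, 2 = system failed (absorbing)
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [0.0,      -lam,    lam],
              [0.0,       0.0,    0.0]])
p = transient_probs(Q, [1.0, 0.0, 0.0], t=1000.0)
reliability = p[0] + p[1]        # probability the system has not failed by time t
# Closed form for this chain: R(t) = 2*exp(-lam*t) - exp(-2*lam*t)
```

    State truncation and behavioral decomposition become essential once the generator Q has millions of states rather than three.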

  16. Physical activity problem-solving inventory for adolescents: Development and initial validation

    USDA-ARS?s Scientific Manuscript database

    Youth encounter physical activity barriers, often called problems. The purpose of problem-solving is to generate solutions to overcome the barriers. Enhancing problem-solving ability may enable youth to be more physically active. Therefore, a method for reliably assessing physical activity problem-solving…

  17. Psychometric properties of the Social Problem Solving Inventory-Revised Short-Form in a South African population.

    PubMed

    Sorsdahl, Katherine; Stein, Dan J; Myers, Bronwyn

    2017-04-01

    The Social Problem Solving Inventory-Revised Short-Form (SPSI-R:SF) has been used in several countries to identify problem-solving deficits among clinical and general populations in order to guide cognitive-behavioural interventions. Yet, very few studies have evaluated its psychometric properties. Three language versions of the questionnaire were administered to a general population sample comprising 1000 participants (771 English-, 178 Afrikaans- and 101 Xhosa-speakers). Of these participants, 210 were randomly selected to establish test-retest reliability (70 in each language). Principal component analysis was performed to examine the applicability of the factor structure of the original questionnaire to the South African data. Supplementary psychometric analyses were performed, including internal consistency and test-retest reliability. Collectively, results provide initial evidence of the reliability and validity of the SPSI-R:SF for the assessment of problem solving deficits in South Africa. Further studies that explore how the Afrikaans language version of the SPSI-R:SF can be improved and that establish the predictive validity of scores on the SPSI-R:SF are needed. © 2015 International Union of Psychological Science.

  18. On the Effectiveness of Nature-Inspired Metaheuristic Algorithms for Performing Phase Equilibrium Thermodynamic Calculations

    PubMed Central

    Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430

  19. On the effectiveness of nature-inspired metaheuristic algorithms for performing phase equilibrium thermodynamic calculations.

    PubMed

    Fateen, Seif-Eddeen K; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design.

  20. Reliable Multi Method Assessment of Metacognition Use in Chemistry Problem Solving

    ERIC Educational Resources Information Center

    Cooper, Melanie M.; Sandi-Urena, Santiago; Stevens, Ron

    2008-01-01

    Metacognition is fundamental in achieving understanding of chemistry and developing of problem solving skills. This paper describes an across-method-and-time instrument designed to assess the use of metacognition in chemistry problem solving. This multi method instrument combines a self report, namely the Metacognitive Activities Inventory…

  1. Solving Quantum Ground-State Problems with Nuclear Magnetic Resonance

    PubMed Central

    Li, Zhaokai; Yung, Man-Hong; Chen, Hongwei; Lu, Dawei; Whitfield, James D.; Peng, Xinhua; Aspuru-Guzik, Alán; Du, Jiangfeng

    2011-01-01

    Quantum ground-state problems are computationally hard problems for general many-body Hamiltonians; there is no classical or quantum algorithm known to be able to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer could employ this trial wavefunction to project the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields a high accuracy for the eigenenergies (to the 10^-5 decimal digit). The ground-state fidelity was distilled to be more than 80%, and the singlet-to-triplet switching near the critical field is reliably captured. This result shows that quantum simulators can better leverage classical trial wavefunctions than classical computers. PMID:22355607

  2. Biology doesn't waste energy: that's really smart

    NASA Astrophysics Data System (ADS)

    Vincent, Julian F. V.; Bogatyreva, Olga; Bogatyrev, Nikolaj

    2006-03-01

    Biology presents us with answers to design problems that we suspect would be very useful if only we could implement them successfully. We use the Russian theory of problem solving - TRIZ - in a novel way to provide a system for analysis and technology transfer. The analysis shows that whereas technology uses energy as the main means of solving technical problems, biology uses information and structure. Biology is also strongly hierarchical. The suggestion is that smart technology in hierarchical structures can help us to design much more efficient technology. TRIZ also suggests that biological design is autonomous and can be defined by the prefix "self-" with any function. This autonomy extends to the control system, so that the sensor is commonly also the actuator, resulting in simpler systems and greater reliability.

  3. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371

  4. Comprehensive drought characteristics analysis based on a nonlinear multivariate drought index

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Chang, Jianxia; Wang, Yimin; Li, Yunyun; Hu, Hui; Chen, Yutong; Huang, Qiang; Yao, Jun

    2018-02-01

    It is vital to identify drought events and to evaluate multivariate drought characteristics based on a composite drought index for better drought risk assessment and sustainable development of water resources. However, most composite drought indices are constructed by linear combination, principal component analysis or the entropy weight method, assuming a linear relationship among the different drought indices. In this study, a multidimensional copula function was applied to construct a nonlinear multivariate drought index (NMDI) to capture the complicated and nonlinear relationships, owing to its dependence structure and flexibility. The NMDI was constructed by combining meteorological, hydrological, and agricultural variables (precipitation, runoff, and soil moisture) to better reflect the multivariate variables simultaneously. Based on the constructed NMDI and runs theory, drought events for a particular area were identified with respect to three drought characteristics: duration, peak, and severity. Finally, multivariate drought risk was analyzed as a tool for providing reliable support in drought decision-making. The results indicate that: (1) multidimensional copulas can effectively capture the complicated and nonlinear relationships among multivariate variables; (2) compared with single and other composite drought indices, the NMDI is slightly more sensitive in capturing recorded drought events; and (3) drought risk shows spatial variation; of the five partitions studied, the Jing River Basin as well as the upstream and midstream of the Wei River Basin are characterized by a higher multivariate drought risk. In general, multidimensional copulas provide a reliable way to handle the nonlinear relationships involved when constructing a comprehensive drought index and evaluating multivariate drought characteristics.
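
    To make the copula idea concrete: a bivariate Clayton copula (one standard Archimedean choice, used here purely as an example; the paper's NMDI is a higher-dimensional construction over precipitation, runoff and soil moisture) couples two marginal non-exceedance probabilities into a joint probability.

```python
def clayton_copula(u, v, theta):
    """Bivariate Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta),
    theta > 0; larger theta gives stronger lower-tail dependence (joint droughts)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

# Joint probability that two drought indices are at or below their
# 30th and 40th percentiles, with an assumed dependence parameter theta = 2
p_joint = clayton_copula(0.3, 0.4, theta=2.0)
```

    Because Clayton concentrates dependence in the lower tail, p_joint exceeds the independence value 0.3 * 0.4 = 0.12, which is precisely the nonlinear co-occurrence of dry states that linear combinations of indices miss.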

  5. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  6. The development and testing of a qualitative instrument designed to assess critical thinking

    NASA Astrophysics Data System (ADS)

    Clauson, Cynthia Louisa

    This study examined a qualitative approach to assess critical thinking. An instrument was developed that incorporates an assessment process based on Dewey's (1933) concepts of self-reflection and critical thinking as problem solving. The study was designed to pilot test the critical thinking assessment process with writing samples collected from a heterogeneous group of students. The pilot test included two phases. Phase 1 was designed to determine the validity and inter-rater reliability of the instrument using two experts in critical thinking, problem solving, and literacy development. Validity of the instrument was addressed by requesting both experts to respond to ten questions in an interview. The inter-rater reliability was assessed by analyzing the consistency of the two experts' scorings of the 20 writing samples to each other, as well as to my scoring of the same 20 writing samples. Statistical analyses included the Spearman Rho and the Kuder-Richardson (Formula 20). Phase 2 was designed to determine the validity and reliability of the critical thinking assessment process with seven science teachers. Validity was addressed by requesting the teachers to respond to ten questions in a survey and interview. Inter-rater reliability was addressed by comparing the seven teachers' scoring of five writing samples with my scoring of the same five writing samples. Again, the Spearman Rho and the Kuder-Richardson (Formula 20) were used to determine the inter-rater reliability. The validity results suggest that the instrument is helpful as a guide for instruction and provides a systematic method to teach and assess critical thinking while problem solving with students in the classroom. The reliability results show the critical thinking assessment instrument to possess fairly high reliability when used by the experts, but weak reliability when used by classroom teachers. 
A major conclusion was drawn that teachers, as well as students, would need to receive instruction in critical thinking and in how to use the assessment process in order to gain more consistent interpretations of the six problem-solving steps. Specific changes needing to be made in the instrument to improve the quality are included.

  7. Gauging the gaps in student problem-solving skills: assessment of individual and group use of problem-solving strategies using online discussions.

    PubMed

    Anderson, William L; Mitchell, Steven M; Osgood, Marcy P

    2008-01-01

    For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted help to groups and to individual students. This method of assessing performance has a high interrater reliability, and senior students, with training, can serve as reliable graders. We have been able to measure improvements in many students' problem-solving strategies, but, not unexpectedly, there is a population of students who consistently apply the same failing strategy when there is no faculty intervention. This new methodology provides an effective tool to direct faculty to constructively intercede in this area of student development.

  8. An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems

    NASA Astrophysics Data System (ADS)

    Davey, A.

    1983-08-01

    A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10°.
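
    The mechanism the method automates, keeping independently integrated solution vectors from collapsing onto the fastest-growing mode, can be demonstrated with periodic QR re-orthonormalization on a deliberately stiff linear system. This is a toy 2x2 problem with assumed rates and step sizes, not the Orr-Sommerfeld operator.

```python
import numpy as np

A = np.array([[10.0,  0.0],    # growth rates +10 and -10: a stiff pair of modes
              [ 0.0, -10.0]])
Y = np.array([[0.6, -0.8],     # two orthonormal initial solution vectors (columns)
              [0.8,  0.6]])
Z = Y.copy()
dt, n_steps = 1e-3, 4000
for step in range(n_steps):
    Y = Y + dt * (A @ Y)       # naive integration: columns align with the fast mode
    Z = Z + dt * (A @ Z)
    if (step + 1) % 100 == 0:  # periodic re-orthonormalization (the method's key step)
        Z = np.linalg.qr(Z)[0]

# Cosine of the angle between the two integrated columns in each case
cos_naive = abs(Y[:, 0] @ Y[:, 1]) / (np.linalg.norm(Y[:, 0]) * np.linalg.norm(Y[:, 1]))
cos_ortho = abs(Z[:, 0] @ Z[:, 1])
```

    Without the QR step the two columns become numerically parallel (cosine near 1) and all boundary-condition information in the slow mode is lost; with it, the frame stays orthonormal throughout the integration.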

  9. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  10. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for lack of system reliability. The viral system algorithm utilized in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behaviour and to account for the stochastic nature of the renewable energy components, aiming to increase their power output and the reliability of these systems.
    The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power-generation requirements over time. Moreover, the approach evaluates the impact of the wake effect of the wind turbines upon one another to describe and evaluate the reduction in the system's power-production capacity depending on the layout distribution of the wind turbines.
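
    The penalty-function formulation described above can be sketched in a few lines: a layout's score is its cost per unit of energy plus a penalty whenever system reliability falls short of a target. Every formula below (the Mosetti-style cost curve, the power and reliability proxies, the penalty weight) is an illustrative stand-in, not the paper's actual model:

```python
import math

def penalized_cost(layout, wind_speed=12.0, target_reliability=0.95):
    """Toy WFLO objective: cost per unit of energy plus a penalty for
    missing a reliability target. The cost curve follows the classic
    Mosetti form; power and reliability are crude illustrative proxies."""
    n = sum(layout)                          # turbines placed on the grid
    if n == 0:
        return float("inf")                  # no turbines, no energy
    cost = n * (2.0 / 3.0 + (1.0 / 3.0) * math.exp(-0.00174 * n * n))
    energy = n * 0.3 * wind_speed ** 3       # ideal output, no wake losses
    reliability = 1.0 - 0.98 ** n            # redundancy raises reliability
    penalty = 10.0 * max(0.0, target_reliability - reliability)
    return cost / energy + penalty
```

A GA or viral system algorithm would then simply minimize `penalized_cost` over candidate layouts; the penalty term steers the search away from configurations with too little redundancy.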

  11. The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Schmuck, Frank Bernhard

    1988-01-01

    Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault-tolerance and delivery-ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides. It is, therefore, desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strong an ordering a protocol must provide to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function, which maps partially ordered sets of events to totally ordered histories, is introduced. It is shown how to construct an asynchronous implementation that solves a given problem when a linearization function for it can be found. It is proved that, in general, the question of whether a problem has an asynchronous solution is undecidable; hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties is considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.
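
    The commutativity idea at the end of the abstract can be illustrated with a trivial sketch: when every operation commutes with every other, replicas may apply a batch of broadcast operations in any delivery order and still converge, so no costly total ordering is needed. The per-key increment operations below are a hypothetical example, not code from the dissertation:

```python
import itertools

def replay(ops):
    """Apply a batch of commuting update operations (per-key increments)
    to an empty replica state. Because increments commute, every delivery
    order produces the same final state -- the property that lets such
    problems run over a cheap, unordered (asynchronous) broadcast rather
    than a totally ordered one."""
    state = {}
    for key, delta in ops:
        state[key] = state.get(key, 0) + delta
    return state

def same_under_all_orders(ops):
    """Check that every permutation (delivery order) of ops yields the
    same replica state."""
    states = [replay(p) for p in itertools.permutations(ops)]
    return all(s == states[0] for s in states)
```

A non-commuting operation set (e.g. a blind overwrite mixed with increments) would fail `same_under_all_orders`, signalling that a stronger ordering guarantee is required.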

  12. Development and validation of a physics problem-solving assessment rubric

    NASA Astrophysics Data System (ADS)

    Docktor, Jennifer Lynn

    Problem solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving throughout the educational system, there is no standard way to evaluate written problem solving that is valid, reliable, and easy to use. Most tests of problem solving performance given in the classroom focus on the correctness of the end result or partial results rather than the quality of the procedures and reasoning leading to the result, which gives an inadequate description of a student's skills. A more detailed and meaningful measure is necessary if different curricular materials or pedagogies are to be compared. This measurement tool could also allow instructors to diagnose student difficulties and focus their coaching. It is important that the instrument be applicable to any problem solving format used by a student and to a range of problem types and topics typically used by instructors. Typically complex processes such as problem solving are assessed by using a rubric, which divides a skill into multiple quasi-independent categories and defines criteria to attain a score in each. This dissertation describes the development of a problem solving rubric for the purpose of assessing written solutions to physics problems and presents evidence for the validity, reliability, and utility of score interpretations on the instrument.

  13. Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network

    NASA Astrophysics Data System (ADS)

    Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser

    2018-02-01

    The interaction among the variables that affect surface wettability is too complex to predict the contact angles and sliding angles of liquid drops directly. In this paper, to address this complexity, artificial neural networks were used to develop reliable models for predicting the angles of liquid drops. The experimental data were divided into training data and testing data. Using the training data, a feed-forward structure for the neural network, and particle swarm optimization to train the neural-network-based models, the optimum models were developed. The obtained results showed that the regression indices of the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. These values are close to unity, which indicates the reliable performance of the models. It can also be inferred from the results that the proposed models perform more reliably than multi-layer perceptron and radial basis function based models.
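
    The training strategy described above, PSO in place of gradient-based learning, can be sketched on a toy model. The snippet fits a simple linear model by particle swarm search over its two weights; the architecture, data, and swarm constants are all assumptions for illustration, not the authors' network or measurements:

```python
import random

def pso_fit(xs, ys, iters=200, n_particles=20, seed=0):
    """Fit y ~ w*x + b by particle swarm optimization instead of
    gradient descent. A toy stand-in for a PSO-trained feed-forward
    network: particles are weight vectors, the fitness is mean
    squared error, and the swarm constants (0.7 inertia, 1.5/1.5
    attraction) are common illustrative defaults."""
    rng = random.Random(seed)

    def loss(p):
        w, b = p
        return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    pos = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=loss)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=loss)
    return gbest, loss(gbest)
```

The same loop extends to a multi-layer network by letting each particle carry the full flattened weight vector instead of just `[w, b]`.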

  14. Preliminary study of fusion reactor: Solution of the Grad-Shafranov equation

    NASA Astrophysics Data System (ADS)

    Setiawan, Y.; Fermi, N.; Su'ud, Z.

    2012-06-01

    Nuclear fusion is a prospective energy source for the future due to the abundance of its fuel, and it can be categorized as a clean energy source. The problem is how to contain very hot plasma, at temperatures of a few hundred million degrees, safely and reliably. Tokamak-type fusion reactors are considered the most prospective concept. To analyze the plasma confinement process and its movement, the Grad-Shafranov equation must be solved. This paper discusses the solution of the Grad-Shafranov equation using the Whittaker function. The formulation is then applied to the ITER design as an example.

  15. Analytical studies on the Benney-Luke equation in mathematical physics

    NASA Astrophysics Data System (ADS)

    Islam, S. M. Rayhanul; Khan, Kamruzzaman; Woadud, K. M. Abdul Al

    2018-04-01

    The enhanced (G‧/G)-expansion method presents wide applicability to handling nonlinear wave equations. In this article, we find new exact traveling wave solutions of the Benney-Luke equation by using the enhanced (G‧/G)-expansion method. This method is a useful, reliable, and concise way to easily solve nonlinear evolution equations (NLEEs). The traveling wave solutions are expressed in terms of hyperbolic and trigonometric functions. We have also plotted the 2D and 3D graphics of some analytical solutions obtained in this paper.

  16. The meshless local Petrov-Galerkin method based on moving Kriging interpolation for solving the time fractional Navier-Stokes equations.

    PubMed

    Thamareerat, N; Luadsong, A; Aschariyaphotha, N

    2016-01-01

    In this paper, we present a numerical scheme used to solve the nonlinear time fractional Navier-Stokes equations in two dimensions. We first employ the meshless local Petrov-Galerkin (MLPG) method based on a local weak formulation to form the system of discretized equations and then we will approximate the time fractional derivative interpreted in the sense of Caputo by a simple quadrature formula. The moving Kriging interpolation which possesses the Kronecker delta property is applied to construct shape functions. This research aims to extend and develop further the applicability of the truly MLPG method to the generalized incompressible Navier-Stokes equations. Two numerical examples are provided to illustrate the accuracy and efficiency of the proposed algorithm. Very good agreement between the numerically and analytically computed solutions can be observed in the verification. The present MLPG method has proved its efficiency and reliability for solving the two-dimensional time fractional Navier-Stokes equations arising in fluid dynamics as well as several other problems in science and engineering.
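
    The time-stepping ingredient mentioned above, approximating the Caputo fractional derivative by a simple quadrature formula, can be illustrated with the classical L1 scheme (the paper's exact quadrature may differ):

```python
import math

def caputo_l1(f, t, alpha, n=200):
    """Approximate the Caputo fractional derivative of order
    0 < alpha < 1 of f at time t using the classical L1 quadrature:
    a weighted sum of first differences of f on a uniform grid.
    This is a generic textbook formula, not the paper's scheme."""
    h = t / n
    coef = h ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        # L1 weights: b_k = (k+1)^(1-alpha) - k^(1-alpha)
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
        total += b_k * (f(t - k * h) - f(t - (k + 1) * h))
    return coef * total
```

For f(t) = t the L1 rule is exact: the result equals t^(1-α)/Γ(2-α), which gives a convenient sanity check.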

  17. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic and real audio-magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
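
    An objective function of the kind discussed, data misfit plus an Occam-like roughness regularization over a layered resistivity model, might look as follows; the functional form, the identity forward model used in the example, and the weighting `lam` are illustrative assumptions, not the authors' implementation:

```python
def occam_objective(model, data, forward, lam=1.0):
    """Objective for a layered-model inversion: squared data misfit
    plus an Occam-style roughness penalty (squared first differences
    between adjacent layers). `forward` maps a model to predicted
    data; `lam` trades data fit against model smoothness."""
    pred = forward(model)
    misfit = sum((d - p) ** 2 for d, p in zip(data, pred))
    rough = sum((model[i + 1] - model[i]) ** 2 for i in range(len(model) - 1))
    return misfit + lam * rough
```

A PSO run would then minimize `occam_objective` over candidate layer resistivities; larger `lam` biases the swarm toward smoother (more regularized) models.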

  18. General theory for calculating disorder-averaged Green's function correlators within the coherent potential approximation

    NASA Astrophysics Data System (ADS)

    Zhou, Chenyi; Guo, Hong

    2017-01-01

    We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.

  19. Psychometric Evaluation of the Social Problem-Solving Inventory-Revised among Overweight or Obese Adults

    ERIC Educational Resources Information Center

    Wang, Jing; Matthews, Judith T.; Sereika, Susan M.; Chasens, Eileen R.; Ewing, Linda J.; Burke, Lora E.

    2013-01-01

    Problem solving is a key component of weight loss programs. The Social Problem-Solving Inventory-Revised (SPSI-R) has not been evaluated in weight loss studies. The purpose of this study was to evaluate the psychometrics of the SPSI-R. Cronbach's α (0.95 for total score; 0.67-0.92 for subscales) confirmed internal consistency reliability. The…
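
    Cronbach's α, the internal-consistency statistic reported above, is straightforward to compute from item-score columns; this is the standard formula, not code from the study:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, where each inner list
    holds one item's scores across respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(i) for i in items) / var(totals))
```

Values near 1 (like the 0.95 total-score figure cited) indicate that the items vary together, i.e. high internal consistency.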

  20. Measurement of phase function of aerosol at different altitudes by CCD Lidar

    NASA Astrophysics Data System (ADS)

    Sun, Peiyu; Yuan, Ke'e.; Yang, Jie; Hu, Shunxing

    2018-02-01

    Aerosols near the ground are closely related to human health and climate change, and their study therefore has important significance. Aerosols are inhomogeneous at different altitudes, and their phase functions differ accordingly. To simplify the retrieval algorithm, it is usually assumed that the aerosol is uniform at different altitudes, which introduces measurement error. In this work, an experimental approach is demonstrated to measure the scattering phase function of atmospheric aerosol particles at different heights with a CCD lidar system, which avoids the uniform-phase-function assumption of the traditional CCD lidar system. The phase functions obtained by the new experimental approach are used to retrieve the aerosol extinction coefficient profiles. By comparing the aerosol extinction coefficients retrieved by a Mie-scattering aerosol lidar and the CCD lidar at night, the reliability of the new experimental approach is verified.

  1. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve the general form of Fredholm-Volterra integro-differential equations (IDEs) of order two with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and outperforms the others in many cases. The general theory of existence of the solution is also discussed.
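
    The kernel-integral step relies on Gauss-Legendre quadrature, which can be sketched with hard-coded 5-point nodes and weights (the order and rule used in the paper may differ):

```python
def gauss_legendre_5(f, a, b):
    """Integrate f over [a, b] with the 5-point Gauss-Legendre rule.
    Nodes and weights are the standard values on [-1, 1], mapped to
    [a, b]; the rule is exact for polynomials up to degree 9."""
    nodes = [0.0,
             0.5384693101056831, -0.5384693101056831,
             0.9061798459386640, -0.9061798459386640]
    weights = [0.5688888888888889,
               0.4786286704993665, 0.4786286704993665,
               0.2369268850561891, 0.2369268850561891]
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    return half * sum(w * f(mid + half * x) for w, x in zip(weights, nodes))
```

In a collocation scheme of the kind described, this rule would be applied to each kernel integral, with the unknown function expanded in a truncated Legendre series.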

  2. The interplay between screening properties and colloid anisotropy: towards a reliable pair potential for disc-like charged particles.

    PubMed

    Agra, R; Trizac, E; Bocquet, L

    2004-12-01

    The electrostatic potential of a highly charged disc (clay platelet) in an electrolyte is investigated in detail. The corresponding non-linear Poisson-Boltzmann (PB) equation is solved numerically, and we show that the far-field behaviour (relevant for colloidal interactions in dilute suspensions) is exactly that obtained within linearized PB theory, with the surface boundary condition of a uniform potential. The latter linear problem is solved by a new semi-analytical procedure, and both the potential amplitude (quantified by an effective charge) and the potential anisotropy coincide closely within PB and linearized PB, provided the disc bare charge is high enough. This anisotropy remains at all scales; it is encoded in a function that may vary over several orders of magnitude depending on the azimuthal angle under which the disc is seen. The results allow us to construct a pair potential for disc interactions that is strongly orientation dependent.

  3. Problem Solving with Guided Repeated Oral Reading Instruction

    ERIC Educational Resources Information Center

    Conderman, Greg; Strobel, Debra

    2006-01-01

    Many students with disabilities require specialized instructional interventions and frequent progress monitoring in reading. The guided repeated oral reading technique promotes oral reading fluency while providing a reliable data-based monitoring system. This article emphasizes the importance of problem-solving when using this reading approach.

  4. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising from the work of engineers, economists, modellers, industry, computing, and scientists are mostly nonlinear in nature. Numerical solution of such systems is widely applied in those areas of mathematics. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed unfortunately have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ Rn, a derivative-free method via the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by the scaled identity θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. The output is based on the number of iterations and CPU time; different initial starting points were used to solve 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces the CPU time and number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update found the proposed method to be the top performer, outperforming the existing method in almost all cases. In terms of number of iterations, out of the 40 problems, the proposed method solved 38 successfully (95%), while the classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) successfully, whereas the classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations, and accurate in terms of CPU time, and thus achieves the stated objective.
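
    The core idea, replacing the inverse Hessian by a scalar multiple of the identity so that no derivatives are needed, can be sketched as a residual-decreasing iteration. The step rule and backtracking below are a toy version under stated assumptions, not the authors' exact update:

```python
def solve_theta_step(F, x0, theta=0.5, tol=1e-8, max_iter=500):
    """Jacobian-free iteration x_{k+1} = x_k - theta_k * F(x_k):
    the inverse Jacobian is replaced by a scalar multiple of the
    identity, and theta_k is halved (backtracking) until the residual
    norm ||F|| decreases. A toy illustration of the scaled-identity
    idea, not the paper's DFP-based method."""
    def norm2(v):
        return sum(t * t for t in v) ** 0.5

    x = list(x0)
    for _ in range(max_iter):
        Fx = F(x)
        if norm2(Fx) < tol:
            return x
        t = theta
        while True:
            trial = [xi - t * fi for xi, fi in zip(x, Fx)]
            if norm2(F(trial)) < norm2(Fx) or t < 1e-12:
                break
            t *= 0.5  # backtrack until the residual shrinks
        x = trial
    return x
```

On a small symmetric linear test system the iteration contracts the residual at every step, mirroring the derivative-free line search described in the abstract.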

  5. Problem Variables that Promote Incubation Effects

    ERIC Educational Resources Information Center

    Penney, Catherine G.; Godsell, Annette; Scott, Annette; Balsom, Rod

    2004-01-01

    Three studies sought to determine whether incubation effects could be reliably generated in a problem-solving task. Experimental variables manipulated were the duration of the interval between two problem-solving opportunities and the activity performed by the problem solvers during the interval. A multisolution anagram task was used which…

  6. Problem Solving in Biology: A Methodology

    ERIC Educational Resources Information Center

    Wisehart, Gary; Mandell, Mark

    2008-01-01

    A methodology is described that teaches science process by combining informal logic and a heuristic for rating factual reliability. This system facilitates student hypothesis formation, testing, and evaluation of results. After problem solving with this scheme, students are asked to examine and evaluate arguments for the underlying principles of…

  7. The puzzle box as a simple and efficient behavioral test for exploring impairments of general cognition and executive functions in mouse models of schizophrenia.

    PubMed

    Ben Abdallah, Nada M-B; Fuss, Johannes; Trusel, Massimo; Galsworthy, Michael J; Bobsin, Kristin; Colacicco, Giovanni; Deacon, Robert M J; Riva, Marco A; Kellendonk, Christoph; Sprengel, Rolf; Lipp, Hans-Peter; Gass, Peter

    2011-01-01

    Deficits in executive functions are key features of schizophrenia. Rodent behavioral paradigms used so far to find animal correlates of such deficits require extensive effort and time. The puzzle box is a problem-solving test in which mice are required to complete escape tasks of increasing difficulty within a limited amount of time. Previous data have indicated that it is a quick but highly reliable test of higher-order cognitive functioning. We evaluated the use of the puzzle box to explore executive functioning in five different mouse models of schizophrenia: mice with prefrontal cortex and hippocampus lesions, mice treated sub-chronically with the NMDA-receptor antagonist MK-801, mice constitutively lacking the GluA1 subunit of AMPA-receptors, and mice over-expressing dopamine D2 receptors in the striatum. All mice displayed altered executive functions in the puzzle box, although the nature and extent of the deficits varied between the different models. Deficits were strongest in hippocampus-lesioned and GluA1 knockout mice, while more subtle deficits but specific to problem solving were found in the medial prefrontal-lesioned mice, MK-801-treated mice, and in mice with striatal overexpression of D2 receptors. Data from this study demonstrate the utility of the puzzle box as an effective screening tool for executive functions in general and for schizophrenia mouse models in particular. Published by Elsevier Inc.

  8. The psychometric validation of the Social Problem-Solving Inventory--Revised with UK incarcerated sexual offenders.

    PubMed

    Wakeling, Helen C

    2007-09-01

    This study examined the reliability and validity of the Social Problem-Solving Inventory--Revised (SPSI-R; D'Zurilla, Nezu, & Maydeu-Olivares, 2002) with a population of incarcerated sexual offenders. An availability sample of 499 adult male sexual offenders was used. The SPSI-R had good reliability measured by internal consistency and test-retest reliability, and adequate validity. Construct validity was determined via factor analysis. An exploratory factor analysis extracted a two-factor model. This model was then tested against the theory-driven five-factor model using confirmatory factor analysis. The five-factor model was selected as the better fitting of the two, and confirmed the model according to social problem-solving theory (D'Zurilla & Nezu, 1982). The SPSI-R had good convergent validity; significant correlations were found between SPSI-R subscales and measures of self-esteem, impulsivity, and locus of control. SPSI-R subscales were however found to significantly correlate with a measure of socially desirable responding. This finding is discussed in relation to recent research suggesting that impression management may not invalidate self-report measures (e.g. Mills & Kroner, 2005). The SPSI-R was sensitive to sexual offender intervention, with problem-solving improving pre to post-treatment in both rapists and child molesters. The study concludes that the SPSI-R is a reasonably internally valid and appropriate tool to assess problem-solving in sexual offenders. However future research should cross-validate the SPSI-R with other behavioural outcomes to examine the external validity of the measure. Furthermore, future research should utilise a control group to determine treatment impact.

  9. Development of a taxonomy of behaviour change techniques used in individual behavioural support for smoking cessation.

    PubMed

    Michie, Susan; Hyder, Natasha; Walia, Asha; West, Robert

    2011-04-01

    Individual behavioural support for smoking cessation is effective but little is known about the 'active ingredients'. As a first step to establishing this, it is essential to have a consistent terminology for specifying intervention content. This study aimed to develop for the first time a reliable taxonomy of behaviour change techniques (BCTs) used within individual behavioural support for smoking cessation. Two source documents describing recommended practice were identified and analysed by two coders into component BCTs. The resulting taxonomy of BCTs was applied to 43 treatment manuals obtained from the English Stop Smoking Services (SSSs). In the first 28 of these, pairs of coders applied the taxonomy independently and inter-coder reliability was assessed. The BCTs were also categorised by two coders according to their main function and inter-coder reliability for this was assessed. Forty-three BCTs were identified which could be classified into four functions: 1) directly addressing motivation e.g. providing rewards contingent on abstinence, 2) maximising self-regulatory capacity or skills e.g. facilitating barrier identification and problem solving, 3) promoting adjuvant activities e.g. advising on stop-smoking medication, and 4) supporting other BCTs e.g. building general rapport. Percentage agreement in identifying BCTs and of categorising BCTs into their functions ranged from 86% to 95% and discrepancies were readily resolved through discussion. It is possible to develop a reliable taxonomy of BCTs used in behavioural support for smoking cessation which can provide a starting point for investigating the association between intervention content and outcome and can form a basis for determining competences required to undertake the role of stop smoking specialist. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of fiber modulus (design parameter) in the longitudinal direction.

  11. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    NASA Astrophysics Data System (ADS)

    Zhu, Z. W.; Zhang, W. D.; Xu, J.

    2014-03-01

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and a non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed via the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in the engineering applications of GMF.

  12. A Survey on Data Quality for Dependable Monitoring in Wireless Sensor Networks.

    PubMed

    Jesus, Gonçalo; Casimiro, António; Oliveira, Anabela

    2017-09-02

    Wireless sensor networks are being increasingly used in several application areas, particularly to collect data and monitor physical processes. Non-functional requirements, like reliability, security or availability, are often important and must be accounted for in the application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provide a good basis to understand how to satisfy these non-functional requirements of WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is of particular importance to ensure that data are reliable or, more generically, that they have the necessary quality. In this survey, we look into the problem of ensuring the desired quality of data for dependable monitoring using WSNs. We take a dependability-oriented perspective, reviewing the possible impairments to dependability and the prominent existing solutions to solve or mitigate these impairments. Despite the variety of components that may form a WSN-based monitoring system, we give particular attention to understanding which faults can affect sensors, how they can affect the quality of the information and how this quality can be improved and quantified.

  13. Cross-cultural adaptation of a pre-school screening instrument: comparison of Korean and US populations.

    PubMed

    Heo, K H; Squires, J; Yovanoff, P

    2008-03-01

    Accurate and efficient developmental screening measures are critical for early identification of developmental problems; however, few reliable and valid tests are available in Korea as well as other countries outside the USA. The Ages and Stages Questionnaires (ASQ) was chosen for study with young children in Korea. The ASQ was translated into Korean and necessary cross-cultural adaptations were made. The translated version was then distributed and completed by 3220 parents of young children between the ages of 4 months and 5 years. Reliability was studied including domain correlations, internal consistency, and performance of identification cut-off scores for the Korean population. Rasch analyses including tests of Differential Item Functioning, contrasting Korean and US samples were also performed. In general, internal consistency of the Korean ASQ was high, with overall correlations 0.75 for communication, 0.85 for gross motor, 0.74 for fine motor, 0.72 for problem solving, and 0.65 for personal-social. Validity, including concurrent validity, also had strong evidence. Mean scores of children on the Korean translation of the ASQ and the US normative sample were generally similar. Rasch analyses indicated the majority of items functioned similarly across the Korean sample. In general, the ASQ was translated with cultural appropriateness in mind and functioned as a valid and reliable parent-completed screening test to assist in early identification of young children with developmental delays. Further research is needed to confirm these results with a larger and more diverse Korean sample.

  14. Aptitude-treatment interactions revisited: effect of metacognitive intervention on subtypes of written expression in elementary school students.

    PubMed

    Hooper, Stephen R; Wakely, Melissa B; de Kruif, Renee E L; Swartz, Carl W

    2006-01-01

    We examined the effectiveness of a metacognitive intervention for written language performance, based on the Hayes model of written expression, for 73 fourth-grade (n = 38) and fifth-grade (n = 35) students. The intervention consisted of twenty 45-min writing lessons designed to improve their awareness of writing as a problem-solving process. Each of the lessons addressed some aspect of planning, translating, and reflecting on written products; their self-regulation of these processes; and actual writing practice. All instruction was conducted in intact classrooms. Prior to the intervention, all students received a battery of neurocognitive tests measuring executive functions, attention, and language. In addition, preintervention writing samples were obtained and analyzed holistically and for errors in syntax, semantics, and spelling. Following the intervention, the writing tasks were readministered and cluster analysis of the neurocognitive data was conducted. Cluster analytic procedures yielded 7 reliable clusters: 4 normal variants, 1 Problem Solving weakness, 1 Problem Solving Language weaknesses, and 1 Problem Solving strength. The response to the single treatment by these various subtypes revealed positive but modest findings. Significant group differences were noted for improvement in syntax errors and spelling, with only spelling showing differential improvement for the Problem Solving Language subtype. In addition, there was a marginally significant group effect for holistic ratings. These findings provide initial evidence that Writing Aptitude (subtype) x Single Treatment interactions exist in writing, but further research is needed with other classification schemes and interventions.

  15. New solitary wave solutions of (3 + 1)-dimensional nonlinear extended Zakharov-Kuznetsov and modified KdV-Zakharov-Kuznetsov equations and their applications

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, A. R.; Arshad, M.; Wang, Jun

    In this paper, new exact solitary wave, soliton and elliptic function solutions are constructed in various forms for three-dimensional nonlinear partial differential equations (PDEs) in mathematical physics by utilizing the modified extended direct algebraic method. Soliton solutions in different forms, such as bell and anti-bell periodic, dark soliton, bright soliton, and bright and dark solitary waves in periodic form, are obtained; these have wide applications in different branches of physics and other areas of the applied sciences. The obtained solutions are also presented graphically. Furthermore, many other nonlinear evolution equations arising in mathematical physics and engineering can also be solved by this powerful, reliable and capable method. The nonlinear three-dimensional extended Zakharov-Kuznetsov dynamical equation and the (3 + 1)-dimensional modified KdV-Zakharov-Kuznetsov equation are selected to show the reliability and effectiveness of the current method.

  16. Reliability and validity of a videotape method to describe expressive behavior in persons with Parkinson's disease.

    PubMed

    Lyons, Kathleen Doyle; Tickle-Degnen, Linda

    2005-01-01

    The ability to effectively communicate thoughts, feelings, and identity to others is an important aspect of occupational performance. The symptoms of Parkinson's disease can impair a person's ability to verbally and non-verbally communicate with others. In order to better understand issues of communication functioning for this population, research tools to describe expressive and communicative behavior during occupation and social interaction are needed. In this study, six persons with Parkinson's disease participated in individual, videotaped interviews focused on problem solving during daily activities. Three trained graduate students viewed edited clips from the videotapes and completed a rating scale of expressive behavior designed by the authors. Data support the reliability and construct validity of the behavioral rating scale, suggesting that measures of expressive behavior of persons with Parkinson's disease can be effectively derived using short segments of videotaped activity.

  17. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts the interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results fall into the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  18. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.

  19. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. By utilizing a problem, which is typical to designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed, in terms of optimality and nominal reliability. This study demonstrates that the simple ordering of a given number of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save up to more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
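
    The "stack ordering" idea described above can be sketched in a few lines: evaluate realizations in their current order, promote the failing (critical) ones to the front, and stop as soon as the reliability target is provably missed. The toy capacity model and names below are illustrative assumptions, not the authors' implementation:

```python
def stack_ordered_check(design, stack, simulate, max_failures):
    """Evaluate `design` against an ordered stack of realizations.

    `simulate(design, r)` returns True if the design meets its target
    under realization r.  Failing ("critical") realizations are promoted
    to the front of the stack, so later candidate designs fail fast and
    most model runs are saved.  Returns (feasible, model_runs).
    Realizations are assumed distinct (list.remove matches by value).
    """
    failures, runs = 0, 0
    for r in list(stack):               # iterate over a snapshot
        runs += 1
        if not simulate(design, r):
            failures += 1
            stack.remove(r)             # promote the critical realization
            stack.insert(0, r)
            if failures > max_failures: # reliability target already missed
                return False, runs
    return True, runs
```

    With `max_failures` chosen as `(1 - nominal_reliability) * len(stack)`, an infeasible candidate is rejected after only as many runs as it takes to accumulate that many failures, which is where the reported savings in model runs come from.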

  20. Exploiting replication in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, T. A.

    1989-01-01

    Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.

  1. Free vibration of functionally graded beams and frameworks using the dynamic stiffness method

    NASA Astrophysics Data System (ADS)

    Banerjee, J. R.; Ananthapuvirajah, A.

    2018-05-01

    The free vibration analysis of functionally graded beams (FGBs) and frameworks containing FGBs is carried out by applying the dynamic stiffness method and deriving the elements of the dynamic stiffness matrix in explicit algebraic form. The usually adopted rule that the material properties of the FGB vary continuously through the thickness according to a power law forms the fundamental basis of the governing differential equations of motion in free vibration. The differential equations are solved in closed analytical form when the free vibratory motion is harmonic. The dynamic stiffness matrix is then formulated by relating the amplitudes of forces to those of the displacements at the two ends of the beam. Next, the explicit algebraic expressions for the dynamic stiffness elements are derived with the help of symbolic computation. Finally, the Wittrick-Williams algorithm is applied as the solution technique to solve the free vibration problems of FGBs with uniform cross-section, stepped FGBs and frameworks consisting of FGBs. Some numerical results are validated against published results, but in the absence of published results for frameworks containing FGBs, consistency checks on the reliability of results are performed. The paper closes with a discussion of results and conclusions.

  2. The Relationship between Functional Status and Judgment/Problem Solving Among Individuals with Dementia

    PubMed Central

    Mayo, Ann M.; Wallhagen, Margaret; Cooper, Bruce A.; Mehta, Kala; Ross, Leslie; Miller, Bruce

    2012-01-01

    Objective To determine the relationship between functional status (independent activities of daily living) and judgment/problem solving and the extent to which select demographic characteristics such as dementia subtype and cognitive measures may moderate that relationship in older adult individuals with dementia. Methods The National Alzheimer’s Coordinating Center Universal Data Set was accessed for a study sample of 3,855 individuals diagnosed with dementia. Primary variables included functional status, judgment/problem solving, and cognition. Results Functional status was related to judgment/problem solving (r = 0.66, p < .0005). Functional status and cognition jointly predicted 56% of the variance in judgment/problem solving (R-squared = .56, p < .0005). As cognition decreased, the prediction of poorer judgment/problem solving by functional status became stronger. Conclusions Among individuals with a diagnosis of dementia, declining functional status as well as declining cognition should raise concerns about judgment/problem solving. PMID:22786576

  3. Human intelligence and brain networks

    PubMed Central

    Colom, Roberto; Karama, Sherif; Jung, Rex E.; Haier, Richard J.

    2010-01-01

    Intelligence can be defined as a general mental ability for reasoning, problem solving, and learning. Because of its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language, or planning. On the basis of this definition, intelligence can be reliably measured by standardized tests with obtained scores predicting several broad social outcomes such as educational achievement, job performance, health, and longevity. A detailed understanding of the brain mechanisms underlying this general mental ability could provide significant individual and societal benefits. Structural and functional neuroimaging studies have generally supported a frontoparietal network relevant for intelligence. This same network has also been found to underlie cognitive functions related to perception, short-term memory storage, and language. The distributed nature of this network and its involvement in a wide range of cognitive functions fits well with the integrative nature of intelligence. A new key phase of research is beginning to investigate how functional networks relate to structural networks, with emphasis on how distributed brain areas communicate with each other. PMID:21319494

  4. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    NASA Astrophysics Data System (ADS)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  5. The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations.

    PubMed

    Ghanbari, Behzad

    2014-01-01

    We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the obtained results HAM with exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.

  6. Qualification and Reliability for MEMS and IC Packages

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2004-01-01

    Advanced IC electronic packages are moving toward miniaturization through two key approaches, front-end and back-end processes, each with its own challenges. Successful use of more of the back-end processes at the front end, e.g., microelectromechanical systems (MEMS) Wafer Level Packages (WLPs), enables reduced size and cost. Use of direct flip chip die is the most efficient approach if and when the issues of known good die and board/assembly are resolved. Wafer level packages solve the issue of known good die by enabling package test, but they have their own limitations, e.g., I/O limitations, additional cost, and reliability. From the back-end approach, system-in-a-package (SIAP/SIP) development is a response to an increasing demand for package and die integration of different functions into one unit to reduce size and cost and improve functionality. MEMS add another challenging dimension to electronic packaging since they include moving mechanical elements. Conventional qualification and reliability procedures need to be modified and expanded in most cases in order to detect new, unknown failures. This paper reviews four standards, already released or being developed, that specifically address the qualification and reliability of assembled packages. Exposures to thermal cycles, monotonic bend testing, mechanical shock, and drop are covered in these specifications. Finally, mechanical and thermal cycle qualification data generated for a MEMS accelerometer are presented. The MEMS device was an element of an inertial measurement unit (IMU) qualified for the NASA Mars Exploration Rovers (MERs), Spirit and Opportunity, which are currently roving the Martian surface.

  7. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1991-01-01

    A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.

  8. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    PubMed

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time of intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems for the optimization of weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined concerning the intelligent vehicle driving characteristics, the distance between intelligent vehicle and obstacle, and distance of intelligent vehicle and target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by particle swarm optimization algorithms. Finally, the simulation results show that the optimization method and its fitness function can improve the perturbations of the vehicle planning path and real-time and reliability.
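
    The optimization step described above is standard particle swarm optimization over a small weight vector. A minimal, self-contained PSO sketch is shown below; the quadratic fitness in the test stands in for the paper's behavior-dynamics fitness (which depends on vehicle, obstacle, and target distances) and is purely a hypothetical placeholder:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=0.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over [lo, hi]^dim with a basic particle swarm.

    w is the inertia weight; c1/c2 weight the pulls toward each
    particle's personal best and the swarm's global best.
    Returns (best_position, best_value).
    """
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = fitness(X[i])
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < gbest:
                    gbest, G = f, X[i][:]
    return G, gbest
```

    In the paper's setting, `dim` would be 2 (the weight coefficients of heading angle and path velocity) and the fitness would encode driving characteristics and the distances to obstacle and target.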

  9. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.
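
    The absorbing-boundary idea can be illustrated in one dimension: propagate a wavepacket with Crank-Nicolson and add a complex absorbing potential (CAP) at the grid edges so that outgoing flux is removed rather than reflected. This is a minimal 1-D sketch of the concept (hbar = m = 1, quadratic CAP), not the authors' real-space 3-D Kohn-Sham code:

```python
import cmath
import math

def thomas(diag, off, rhs):
    """Solve a tridiagonal system with constant off-diagonal `off`."""
    n = len(diag)
    c, d = [0j] * n, [0j] * n
    c[0] = off / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * c[i - 1]
        c[i] = off / m
        d[i] = (rhs[i] - off * d[i - 1]) / m
    x = [0j] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def propagate_with_cap(n=200, dx=0.1, dt=0.01, steps=400,
                       eta=5.0, k0=5.0, cap_cells=20):
    """Crank-Nicolson propagation of a free wavepacket with a quadratic
    complex absorbing potential at both edges.  Returns the norm history,
    which decays as the packet is absorbed instead of reflecting."""
    def vcap(i):
        if i < cap_cells:
            return -1j * eta * ((cap_cells - i) / cap_cells) ** 2
        if i >= n - cap_cells:
            return -1j * eta * ((i - (n - 1 - cap_cells)) / cap_cells) ** 2
        return 0.0

    # Gaussian wavepacket at the grid centre, moving to the right
    x0, s = 0.5 * n * dx, 1.0
    psi = [cmath.exp(-((i * dx - x0) ** 2) / (2 * s * s) + 1j * k0 * i * dx)
           for i in range(n)]
    norm = math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)
    psi = [p / norm for p in psi]

    off = -1.0 / (2 * dx * dx)                   # kinetic off-diagonal of H
    hdiag = [1.0 / (dx * dx) + vcap(i) for i in range(n)]
    a_diag = [1 + 0.5j * dt * h for h in hdiag]  # I + i dt/2 H
    b_diag = [1 - 0.5j * dt * h for h in hdiag]  # I - i dt/2 H
    a_off, b_off = 0.5j * dt * off, -0.5j * dt * off

    norms = [1.0]
    for _ in range(steps):
        rhs = [b_diag[i] * psi[i]
               + b_off * ((psi[i - 1] if i > 0 else 0)
                          + (psi[i + 1] if i < n - 1 else 0))
               for i in range(n)]
        psi = thomas(a_diag, a_off, rhs)
        norms.append(sum(abs(p) ** 2 for p in psi) * dx)
    return norms
```

    Without the CAP, Crank-Nicolson conserves the norm exactly; with it, the norm stays near 1 until the packet reaches the edge region and then drops, which is exactly the behavior one wants at a source or drain boundary.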

  10. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm

    PubMed Central

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to the shock problem of the intelligent vehicle navigation path, because of the simultaneous occurrence of the time-variant target behavior and obstacle avoidance behavior. Considering the safety and real-time of intelligent vehicle, the particle swarm optimization (PSO) algorithm is proposed to solve these problems for the optimization of weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined concerning the intelligent vehicle driving characteristics, the distance between intelligent vehicle and obstacle, and distance of intelligent vehicle and target. Secondly, behavior coordination parameters that minimize the fitness function are obtained by particle swarm optimization algorithms. Finally, the simulation results show that the optimization method and its fitness function can improve the perturbations of the vehicle planning path and real-time and reliability. PMID:26880881

  11. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical praxis is increasing nowadays. In this study, different approaches using thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. Data obtained are included in a bio-heat equation, solved via a numerical finite difference approach. Optothermal properties are considered for the model to be accurate and reliable. Thermal distribution is calculated as a function of optical source parameters, mainly optical irradiance, wavelength and exposition time. Two thermal damage models, the cumulative equivalent minutes (CEM) 43 °C approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature. The latter is adequate to predict thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.
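
    Both damage measures named above have standard closed forms: CEM43 accumulates Δt · R^(43−T) over the temperature history, and the Arrhenius model accumulates Ω = A ∫ exp(−Ea/(R·T)) dt. A sketch of both, using commonly cited skin coefficients for the Arrhenius defaults (an assumption here, not values taken from this study):

```python
import math

def cem43(temps_C, dt_min):
    """Cumulative equivalent minutes at 43 degrees C for a sampled
    temperature history (deg C); R = 0.5 above 43 and 0.25 below,
    a common convention."""
    cem = 0.0
    for T in temps_C:
        R = 0.5 if T >= 43.0 else 0.25
        cem += dt_min * R ** (43.0 - T)
    return cem

def arrhenius_damage(temps_K, dt_s, A=3.1e98, Ea=6.28e5):
    """Arrhenius damage integral Omega = A * sum(exp(-Ea/(R*T)) * dt).
    Defaults are frequently quoted skin coefficients (1/s and J/mol);
    Omega >= 1 is usually read as irreversible damage."""
    Rgas = 8.314  # J/(mol*K)
    return A * sum(math.exp(-Ea / (Rgas * T)) for T in temps_K) * dt_s
```

    The CEM43 form is convenient for dosimetry at roughly constant temperature (10 min at 44 degrees C counts as 20 equivalent minutes at 43), while the Arrhenius integral handles an arbitrary temperature-time profile, mirroring the comparison made in the abstract.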

  12. CAS2D: FORTRAN program for nonrotating blade-to-blade, steady, potential transonic cascade flows

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    An exact, full-potential-equation (FPE) model for the steady, irrotational, homentropic and homoenergetic flow of a compressible, homocompositional, inviscid fluid through two-dimensional planar cascades of airfoils was derived, together with its appropriate boundary conditions. A computer program, CAS2D, was developed that numerically solves an artificially time-dependent form of the actual FPE. The governing equation was discretized by using type-dependent, rotated finite differencing and the finite area technique. The flow field was discretized by providing a boundary-fitted, nonuniform computational mesh. The mesh was generated by using a sequence of conformal mapping, nonorthogonal coordinate stretching, and local, isoparametric, bilinear mapping functions. The discretized form of the FPE was solved iteratively by using successive line overrelaxation. Possible isentropic shocks were correctly captured by explicitly adding an artificial viscosity in conservative form. In addition, a three-level consecutive mesh refinement feature makes CAS2D a reliable and fast algorithm for the analysis of transonic, two-dimensional cascade flows.
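
    Successive line overrelaxation (SLOR), the iterative scheme named above, sweeps the grid one line at a time, solves a tridiagonal system along each line, and overrelaxes the update. A minimal sketch on the Laplace equation (a stand-in for the full-potential equation, which CAS2D treats with rotated differencing):

```python
def thomas(diag, off, rhs):
    """Tridiagonal solve with constant off-diagonal `off`."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = off / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * c[i - 1]
        c[i] = off / m
        d[i] = (rhs[i] - off * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def slor_laplace(nx=20, ny=20, omega=1.7, iters=200):
    """Successive line overrelaxation for Laplace's equation on a square.
    Boundary data u = x is harmonic, so the converged interior solution
    is the same linear function."""
    u = [[j / (nx - 1) for j in range(nx)] for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            u[i][j] = 0.0                      # rough interior initial guess
    for _ in range(iters):
        for i in range(1, ny - 1):             # sweep row by row
            m = nx - 2
            rhs = [u[i - 1][j + 1] + u[i + 1][j + 1] for j in range(m)]
            rhs[0] += u[i][0]                  # fold fixed end values into rhs
            rhs[-1] += u[i][nx - 1]
            line = thomas([4.0] * m, -1.0, rhs)
            for j in range(m):                 # overrelax toward line solution
                u[i][j + 1] += omega * (line[j] - u[i][j + 1])
    return u
```

    Solving a whole line implicitly propagates boundary information across the grid in one sweep, which is why line relaxation converges in far fewer iterations than pointwise relaxation on stretched cascade meshes.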

  13. Exact solutions to the time-fractional differential equations via local fractional derivatives

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan; Bekir, Ahmet

    2018-01-01

    This article utilizes the local fractional derivative and the exp-function method to construct the exact solutions of nonlinear time-fractional differential equations (FDEs). For illustrating the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional-generalized fifth-order KdV equation. Moreover, the exact solutions are obtained for the equations which are formed by different parameter values related to the time-fractional-generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs and it can be applied to other nonlinear FDEs.

  14. Complex Problem Solving: What It Is and What It Is Not

    PubMed Central

    Dörner, Dietrich; Funke, Joachim

    2017-01-01

    Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242

  15. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes

    PubMed Central

    Wade, Shari L.; Cassedy, Amy E.; Fulks, Lauren E.; Taylor, H. Gerry; Stancin, Terry; Kirkwood, Michael W.; Yeates, Keith O.; Kurowski, Brad G.

    2017-01-01

    Objective To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Design Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Setting Four children’s hospitals and 1 general hospital, with level 1 trauma units. Participants Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Main Outcome Measures Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. Results The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23–.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Conclusions Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. PMID:28389109

  16. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes.

    PubMed

    Wade, Shari L; Cassedy, Amy E; Fulks, Lauren E; Taylor, H Gerry; Stancin, Terry; Kirkwood, Michael W; Yeates, Keith O; Kurowski, Brad G

    2017-08-01

    To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Four children's hospitals and 1 general hospital, with level 1 trauma units. Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23-.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. A Survey on Data Quality for Dependable Monitoring in Wireless Sensor Networks

    PubMed Central

    Oliveira, Anabela

    2017-01-01

    Wireless sensor networks are being increasingly used in several application areas, particularly to collect data and monitor physical processes. Non-functional requirements, like reliability, security or availability, are often important and must be accounted for in the application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provide a good basis to understand how to satisfy these non-functional requirements of WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is of particular importance to ensure that data are reliable or, more generically, that they have the necessary quality. In this survey, we look into the problem of ensuring the desired quality of data for dependable monitoring using WSNs. We take a dependability-oriented perspective, reviewing the possible impairments to dependability and the prominent existing solutions to solve or mitigate these impairments. Despite the variety of components that may form a WSN-based monitoring system, we give particular attention to understanding which faults can affect sensors, how they can affect the quality of the information and how this quality can be improved and quantified. PMID:28869505

  18. Non-linear dynamic characteristics and optimal control of giant magnetostrictive film subjected to in-plane stochastic excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com

    2014-03-15

    The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and the noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; and the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful in the engineering applications of GMF.

  19. An Investigation of Taiwanese Early Adolescents' Self-Evaluations Concerning the Big 6 Information Problem-Solving Approach

    ERIC Educational Resources Information Center

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B61PS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the…

  20. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  1. Method of Testing and Predicting Failures of Electronic Mechanical Systems

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, Frances A.

    1996-01-01

    A method is disclosed that employs a knowledge base of human expertise comprising a reliability model analysis implemented for diagnostic routines. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting the system operation. The reliability analysis contains a wealth of human expertise information that is used to build automatic diagnostic routines and which provides a knowledge base that can be used to solve other artificial intelligence problems.
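
    The failure-propagation idea behind such digraph models can be read as a reachability question: which target events become reachable once a set of basic failures occurs. A minimal sketch follows; all event names are hypothetical, and real digraph reliability models also include AND-gate logic that this OR-only propagation omits.

```python
# Sketch of a digraph reliability model: nodes are basic failure events
# (hardware faults, human actions) and directed edges show how a failure
# propagates toward system-level target events. All names are hypothetical.

def reachable_targets(digraph, failed, targets):
    """Return the target events reachable from an initial set of failures."""
    seen = set(failed)
    stack = list(failed)
    while stack:
        node = stack.pop()
        for nxt in digraph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen & set(targets)

# Failure propagation: a pump fault or an operator error disables cooling,
# and loss of cooling or a bad sensor reading leads to the "shutdown" target.
digraph = {
    "pump_fault": ["cooling_loss"],
    "operator_error": ["cooling_loss"],
    "cooling_loss": ["shutdown"],
    "sensor_fault": ["bad_reading"],
    "bad_reading": ["shutdown"],
}

print(reachable_targets(digraph, {"pump_fault"}, {"shutdown", "bad_reading"}))
# -> {'shutdown'}
```

    A diagnostic routine can run the same traversal backwards: given an observed target event, enumerate the basic failures from which it is reachable.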

  2. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    NASA Astrophysics Data System (ADS)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

    A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a required condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes (RI) of industrial heat supply is based on the concept of a technological safety margin for technological processes. The rationed RI values for the heat supply of communal consumers are defined on the basis of the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for providing reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimization of schemes and parameters of combined heat and power plants and systems, as well as to determine the efficiency of various redundancy methods to ensure the specified reliability of power supply.

  3. Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis

    NASA Astrophysics Data System (ADS)

    Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John

    2015-11-01

    The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.

  4. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. This technique depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
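
    The collocation step reduces the PDE to a nonlinear algebraic system F(x) = 0, which is then solved by Newton iteration. A minimal sketch of that final Newton step on a small illustrative system (not the actual collocation equations):

```python
import numpy as np

# Newton's iteration for a nonlinear algebraic system F(x) = 0.
# Illustrative system (one root is x0 = x1 = 1):
#   x0^2 + x1 - 2 = 0
#   x0 + x1^2 - 2 = 0

def F(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

def J(x):
    # Jacobian of F, needed for each Newton update
    return np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])

def newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # solve J dx = -F
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

root = newton(np.array([2.0, 0.5]))
print(root)   # -> [1. 1.]
```

    In the actual method, F would collect the collocation residuals of the discretised fractional diffusion equation and J its Jacobian with respect to the unknown Chebyshev coefficients.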

  5. Analogy as a strategy for supporting complex problem solving under uncertainty.

    PubMed

    Chan, Joel; Paletz, Susannah B F; Schunn, Christian D

    2012-11-01

    Complex problem solving in naturalistic environments is fraught with uncertainty, which has significant impacts on problem-solving behavior. Thus, theories of human problem solving should include accounts of the cognitive strategies people bring to bear to deal with uncertainty during problem solving. In this article, we present evidence that analogy is one such strategy. Using statistical analyses of the temporal dynamics between analogy and expressed uncertainty in the naturalistic problem-solving conversations among scientists on the Mars Rover Mission, we show that spikes in expressed uncertainty reliably predict analogy use (Study 1) and that expressed uncertainty reduces to baseline levels following analogy use (Study 2). In addition, in Study 3, we show with qualitative analyses that this relationship between uncertainty and analogy is not due to miscommunication-related uncertainty but, rather, is primarily concentrated on substantive problem-solving issues. Finally, we discuss a hypothesis about how analogy might serve as an uncertainty reduction strategy in naturalistic complex problem solving.

  6. Key management and encryption under the bounded storage model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.

  7. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star and tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, however, a division technique is applied because their complexity is higher than that of the other topologies: a mesh network is partitioned into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in the parallel systems increases, while reliability drops quickly as the numbers of edges and nodes increase for all three networks. Greater resource usage, network complexity and more complex object relationships further reduce network reliability.
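
    The series and parallel RBD evaluations used for the star and tree cases reduce to two one-line formulas; a minimal sketch with illustrative component reliabilities (not values from the paper):

```python
from math import prod

# Series-parallel reliability block diagram (RBD) evaluation.
# Component reliabilities below are illustrative.

def series(reliabilities):
    """A series system works only if every block works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """A parallel system fails only if every block fails."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Two redundant radio links (parallel) in series with a coordinator node:
links = parallel([0.90, 0.90])       # 1 - 0.1*0.1 = 0.99
system = series([links, 0.95])       # 0.99 * 0.95 = 0.9405
print(round(system, 4))              # -> 0.9405
```

    The division technique for mesh networks amounts to repeatedly collapsing such series and parallel groups until a single equivalent block remains.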

  8. Functional classification of protein structures by local structure matching in graph representation.

    PubMed

    Mills, Caitlyn L; Garg, Rohan; Lee, Joslynn S; Tian, Liang; Suciu, Alexandru; Cooperman, Gene; Beuning, Penny J; Ondrechen, Mary Jo

    2018-03-31

    As a result of high-throughput protein structure initiatives, over 14,400 protein structures have been solved by structural genomics (SG) centers and participating research groups. While the totality of SG data represents a tremendous contribution to genomics and structural biology, reliable functional information for these proteins is generally lacking. Better functional predictions for SG proteins will add substantial value to the structural information already obtained. Our method described herein, Graph Representation of Active Sites for Prediction of Function (GRASP-Func), predicts quickly and accurately the biochemical function of proteins by representing residues at the predicted local active site as graphs rather than in Cartesian coordinates. We compare the GRASP-Func method to our previously reported method, structurally aligned local sites of activity (SALSA), using the ribulose phosphate binding barrel (RPBB), 6-hairpin glycosidase (6-HG), and Concanavalin A-like Lectins/Glucanase (CAL/G) superfamilies as test cases. In each of the superfamilies, SALSA and the much faster method GRASP-Func yield similar correct classification of previously characterized proteins, providing a validated benchmark for the new method. In addition, we analyzed SG proteins using our SALSA and GRASP-Func methods to predict function. Forty-one SG proteins in the RPBB superfamily, nine SG proteins in the 6-HG superfamily, and one SG protein in the CAL/G superfamily were successfully classified into one of the functional families in their respective superfamily by both methods. This improved, faster, validated computational method can yield more reliable predictions of function that can be used for a wide variety of applications by the community. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  9. A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi

    2016-09-01

    Dispersed multiphase turbulent flows are present in many industrial and commercial applications like internal combustion engines, turbofans, dispersion of contaminants, steam turbines, etc. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulation (LES) offers good accuracy and reliable results together with reasonable computational requirements, making it an attractive basis for numerical tools for particle-laden turbulent flows. Nonetheless, in dispersed multiphase flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. In order to solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work a new model for the reconstruction of the subgrid-scale effects on the dispersed phase is presented and assessed. This methodology is based on the reconstruction of statistics via Probability Density Functions (PDFs).

  10. Reliability of engineering methods of assessment the critical buckling load of steel beams

    NASA Astrophysics Data System (ADS)

    Rzeszut, Katarzyna; Folta, Wiktor; Garstecki, Andrzej

    2018-01-01

    In this paper the reliability assessment of the buckling resistance of steel beams is presented. A number of parameters are considered, such as the boundary conditions, the section height-to-width ratio, the thickness and the span. The examples are solved using FEM procedures and formulas proposed in the literature and standards. In the case of the numerical models, the following parameters are investigated: support conditions, mesh size, load conditions and steel grade. The numerical results are compared with approximate solutions calculated according to the standard formulas. It was observed that for high-slenderness sections the deformation of the cross-section had to be described by the following modes: longitudinal and transverse displacement, warping, rotation and distortion of the cross-section shape. In this case we face an interactive buckling problem. Unfortunately, neither the EN Standards nor the subject literature give closed-form formulas to solve these problems. For this reason the reliability of the critical bending moment calculations is discussed.
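
    As a point of reference for such comparisons, the textbook closed-form critical moment for lateral-torsional buckling of a simply supported I-beam under uniform moment (the C1 = 1 case) can be sketched as below. The section properties are illustrative, roughly IPE 300-like, and are not taken from the paper.

```python
import math

# Elastic critical moment for lateral-torsional buckling, C1 = 1 case:
#   M_cr = (pi^2*E*Iz/L^2) * sqrt(Iw/Iz + L^2*G*It/(pi^2*E*Iz))
# E, G in Pa; Iz, It in m^4; warping constant Iw in m^6; span L in m.

def critical_moment(E, G, Iz, It, Iw, L):
    k = math.pi**2 * E * Iz / L**2
    return k * math.sqrt(Iw / Iz + L**2 * G * It / (math.pi**2 * E * Iz))

# Illustrative, roughly IPE 300-like section over a 6 m span:
Mcr = critical_moment(E=210e9, G=81e9, Iz=6.04e-6, It=2.01e-7,
                      Iw=1.26e-7, L=6.0)
print(f"M_cr ~ {Mcr/1e3:.1f} kNm")
```

    The interactive buckling cases discussed in the paper are precisely those where this single-mode formula stops being adequate and FEM results diverge from it.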

  11. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.

  12. Is It Really Possible to Test All Educationally Significant Achievements with High Levels of Reliability?

    ERIC Educational Resources Information Center

    Davis, Andrew

    2015-01-01

    PISA claims that it can extend its reach from its current core subjects of Reading, Science, Maths and problem-solving. Yet given the requirement for high levels of reliability for PISA, especially in the light of its current high stakes character, proposed widening of its subject coverage cannot embrace some important aspects of the social and…

  13. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

    In this dissertation, several studies of electric power system reliability and economy assessment methods are presented. More precisely, several algorithms for evaluating power system reliability and economy are studied. Furthermore, two novel algorithms are applied to this field and their simulation results are compared with conventional results. As electrical power systems develop towards extra-high voltage, long transmission distances, large capacity and regional networking, new technical equipment and electricity market mechanisms have gradually been introduced, and the consequences of power cuts have become more and more serious. The electrical power system needs the highest possible reliability due to its complexity and security requirements. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined, while maintaining the easy readability of conventional methods. This method has been applied to evaluate the IEEE reliability test system. The simulation results obtained are close to the IEEE experimental data, which means that the method could be used for future study of system reliability. Besides reliability, a modern power system is expected to be more economic. This dissertation presents a novel evolutionary algorithm, named the quantum evolutionary membrane algorithm (QEPS), which combines the concepts and theory of the quantum-inspired evolutionary algorithm and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests. Another conventional evolutionary algorithm is also used to solve the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, i.e. the minimum cost of the electricity supplied by the wind farm system.
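
    The underlying economic dispatch problem (though not the QEPS solver itself) can be sketched with the classical equal-incremental-cost solution for quadratic generator costs; the coefficients and demand below are illustrative:

```python
# Classical economic dispatch with quadratic costs a + b*P + c*P^2,
# subject to total output meeting demand (no generator limits).
# Generator coefficients below are illustrative.

def dispatch(gens, demand):
    """gens: list of (b, c) cost coefficients; returns per-unit outputs."""
    # At the unconstrained optimum every unit runs at the same
    # incremental cost: b_i + 2*c_i*P_i = lam, so P_i = (lam - b_i)/(2*c_i)
    # and lam follows from the demand balance sum(P_i) = demand.
    num = demand + sum(b / (2 * c) for b, c in gens)
    den = sum(1.0 / (2 * c) for b, c in gens)
    lam = num / den
    return [(lam - b) / (2 * c) for b, c in gens]

gens = [(8.0, 0.02), (10.0, 0.025)]   # (b, c) for two units
P = dispatch(gens, demand=500.0)
print([round(p, 2) for p in P], round(sum(P), 2))   # -> [300.0, 200.0] 500.0
```

    Metaheuristics such as QEPS become necessary once generator limits, valve-point effects or stochastic wind output make this closed-form balance inapplicable.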

  14. Probabilistic finite elements for fracture and fatigue analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Lawrence, M.; Besterfield, G. H.

    1989-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for probabilistic fracture mechanics (PFM) is presented. A comprehensive method for determining the probability of fatigue failure for curved crack growth was developed. The criterion for failure, or performance function, is stated as: the fatigue life of a component must exceed the service life of the component; otherwise failure will occur. An enriched element that has the near-crack-tip singular strain field embedded in the element is used to formulate the equilibrium equation and solve for the stress intensity factors at the crack tip. Performance and accuracy of the method are demonstrated on a classical mode I fatigue problem.
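
    The stated performance function, g = fatigue life − service life with failure when g < 0, can be sketched with a crude Monte Carlo estimate. The lognormal fatigue-life model and its parameters below are assumptions for illustration; the paper's PFEM approach is far more sophisticated.

```python
import random

# Monte Carlo estimate of P(failure) for the performance function
#   g = N_fatigue - N_service,  failure when g < 0.
# Assumed lognormal fatigue life; parameters are illustrative only.

def failure_probability(n_service, mu, sigma, samples=200_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(samples):
        n_fatigue = rng.lognormvariate(mu, sigma)
        if n_fatigue - n_service < 0.0:   # performance function g < 0
            failures += 1
    return failures / samples

pf = failure_probability(n_service=1e5, mu=12.0, sigma=0.5)
print(f"estimated P(failure) = {pf:.4f}")
```

    PFEM replaces such brute-force sampling with perturbation of the finite element equations, which is what makes the reliability computation affordable for crack-growth models.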

  15. High resolution X-ray CT for advanced electronics packaging

    NASA Astrophysics Data System (ADS)

    Oppermann, M.; Zerna, T.

    2017-02-01

    Advanced electronics packaging is a challenge for non-destructive testing (NDT). More numerous, smaller and mostly hidden interconnects dominate modern electronic components and systems. To meet customer demands for products with high functionality at low volume, weight and price (e.g. mobile phones, personal medical monitoring systems), designers often use System-in-Package (SiP) solutions. The non-destructive testing of such devices is a big challenge. This paper therefore presents fundamentals and applications for the non-destructive evaluation of the inner structures of electronics packaging for quality assurance and reliability investigations, with a focus on X-ray methods, especially high-resolution X-ray computed tomography (CT).

  16. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    NASA Astrophysics Data System (ADS)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data are not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally, soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L, however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function, the soil drainage rate, the ratio of drainage (D) over storage (S), decreases exponentially with relative water content. This function is controlled by three parameters: the soil water storage at field capacity (S_FC), the drainage fraction at field capacity (K_FC) and a drainage function exponent (β):

        D/S = K_FC · exp(−β · (1 − S/S_FC))

    To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results.
This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non-linear Richards' equation for each time-step. The spatial distribution of long term recharge and baseflow obtained with a 30 year simulation of historic data using this parameterisation, corresponds well with the spatial patterns of groundwater recharge inferred from field measurements.
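
    The drainage function and the multi-day draining experiment can be sketched directly; the parameter values below are illustrative, not fitted Atlas values:

```python
import math

# AWRA-L-style simplified drainage: the drainage fraction D/S decays
# exponentially with relative water content,
#   D/S = K_FC * exp(-beta * (1 - S/S_FC)).
# Parameter values here are illustrative, not fitted Atlas values.

def drainage_fraction(S, S_fc, K_fc, beta):
    return K_fc * math.exp(-beta * (1.0 - S / S_fc))

def drain(S0, S_fc, K_fc, beta, days):
    """Daily explicit update of storage for an initially wet, freely draining soil."""
    S = S0
    for _ in range(days):
        S -= drainage_fraction(S, S_fc, K_fc, beta) * S
    return S

# 10 days of drainage starting above field capacity:
S10 = drain(S0=150.0, S_fc=100.0, K_fc=0.1, beta=3.0, days=10)
print(round(S10, 2))
```

    Above field capacity (S > S_FC) the drainage fraction exceeds K_FC and the soil drains quickly; once storage falls below S_FC, drainage decays exponentially, which is the behaviour the HYDRUS-1D fits reproduce.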

  17. Enhancing micro-seismic P-phase arrival picking: EMD-cosine function-based denoising with an application to the AIC picker

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun

    2018-03-01

    Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high-frequency noises and power-frequency noises (50 Hz), which can considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low-pass filters, which are ineffective when seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low-pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results have shown that the EMD-cosine function denoising can effectively estimate high-frequency and power-frequency noises and can easily be adapted to signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the AIC picker, and the comparisons also show more reliable source localization results when the ECD-AIC picker is applied, thus demonstrating the potential of this combined P-phase picking technique.
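
    The final AIC picking stage (without the EMD-based denoising) can be sketched on a synthetic trace: the two-window AIC curve is minimised where the variance statistics change, i.e. at the onset. The synthetic signal below stands in for a denoised micro-seismic waveform.

```python
import numpy as np

# Two-window AIC onset picker:
#   AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:]))
# The minimiser of AIC(k) estimates the arrival sample.

def aic_pick(x):
    """Return the sample index minimising the two-window AIC curve."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(200)          # background noise
signal[100:] += np.sin(0.3 * np.arange(100))      # arrival at sample 100
print(aic_pick(signal))                           # close to 100
```

    In the full ECD-AIC picker this statistic is evaluated on the EMD-cosine-denoised trace, which is what keeps the variance contrast sharp when noise and signal bandwidths overlap.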

  18. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. Two position-updating strategies are designed for the foraging behavior, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
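
    The RAP objective that each candidate solution evaluates is typically the reliability of a series system with parallel redundancy in each subsystem; a minimal sketch with illustrative numbers (not a benchmark instance from the paper):

```python
from math import prod

# Reliability-redundancy allocation objective: a series system of
# subsystems, each with n_i redundant components of reliability r_i.
#   R_sys = prod_i (1 - (1 - r_i)^n_i)
# Numbers below are illustrative.

def system_reliability(r, n):
    return prod(1.0 - (1.0 - ri) ** ni for ri, ni in zip(r, n))

r = [0.80, 0.70, 0.75]      # component reliabilities per subsystem
n = [2, 3, 2]               # redundancy levels per subsystem
print(round(system_reliability(r, n), 4))   # -> 0.8757
```

    A metaheuristic such as NAFSA searches over the integer redundancy levels n (and, in the mixed formulation, the component reliabilities r) to maximise this objective under cost, weight and volume constraints.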

  19. New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods

    NASA Astrophysics Data System (ADS)

    Saha Ray, S.

    2016-04-01

    In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.

  20. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem for the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to solve directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
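
    The daisy-chain principle, in which a primary effector group takes the commanded moment via a pseudo-inverse and the residual left by saturated primaries cascades to the next group, can be sketched as follows; the effectiveness matrices and position limits are illustrative:

```python
import numpy as np

# Daisy-chain control allocation sketch: primary group first via
# pseudo-inverse, residual moment passed to the secondary group.
# Effectiveness matrices B1, B2 and limits are illustrative.

def daisy_chain(m_des, B1, B2, u1_max, u2_max):
    u1 = np.clip(np.linalg.pinv(B1) @ m_des, -u1_max, u1_max)
    residual = m_des - B1 @ u1            # what the saturated primaries miss
    u2 = np.clip(np.linalg.pinv(B2) @ residual, -u2_max, u2_max)
    return u1, u2

B1 = np.array([[1.0, 0.5]])               # primary surfaces -> roll moment
B2 = np.array([[0.8]])                    # secondary surface -> roll moment
u1, u2 = daisy_chain(np.array([2.0]), B1, B2, 1.0, 1.0)
produced = B1 @ u1 + B2 @ u2
print(u1, u2, produced)                   # residual fully recovered: 2.0
```

    The saturation-induced hand-off is exactly where the non-linearity comes from: the attainable moment set of this cascade is the non-convex polygon whose area the paper's "micro-element" algorithm computes.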

  1. Mineral inversion for element capture spectroscopy logging based on optimization theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning

    2017-12-01

    Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the converting relationship between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
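
    The inversion core, solving element-to-mineral response equations in a least-squares sense, can be sketched as follows. The response matrix and element yields are invented for illustration, and the measurement-uncertainty weighting plus non-negativity and unit-sum constraints of the real method are omitted:

```python
import numpy as np

# Elemental dry weights e relate to mineral dry weights x through a
# response matrix A (elemental fraction per mineral): minimise ||A x - e||.
# Matrix values below are illustrative only.

# Rows: elements (Si, Ca, Fe); columns: minerals (quartz, calcite, pyrite)
A = np.array([
    [0.47, 0.00, 0.00],    # Si fraction contributed by each mineral
    [0.00, 0.40, 0.00],    # Ca
    [0.00, 0.00, 0.465],   # Fe
])
x_true = np.array([0.6, 0.3, 0.1])        # mineral dry weights
e = A @ x_true                            # "measured" elemental logs

x_est, *_ = np.linalg.lstsq(A, e, rcond=None)
print(np.round(x_est, 3))                 # recovers [0.6, 0.3, 0.1]
```

    Reconstructing the elemental logs as A @ x_est and comparing them with the measured logs is the consistency check the paper uses to judge the robustness of the inversion.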

  2. (The feeling of) meaning-as-information.

    PubMed

    Heintzelman, Samantha J; King, Laura A

    2014-05-01

    The desire for meaning is recognized as a central human motive. Yet, knowing that people want meaning does not explain its function. What adaptive problem does this experience solve? Drawing on the feelings-as-information hypothesis, we propose that the feeling of meaning provides information about the presence of reliable patterns and coherence in the environment, information that is not provided by affect. We review research demonstrating that manipulations of stimulus coherence influence subjective reports of meaning in life but not affect. We demonstrate that manipulations that foster an associative mindset enhance meaning. The meaning-as-information perspective embeds meaning in a network of foundational functions including associative learning, perception, cognition, and neural processing. This approach challenges assumptions about meaning, including its motivational appeal, the roles of expectancies and novelty in this experience, and the notion that meaning is inherently constructed. Implications for constructed meaning and existential meanings are discussed.

  3. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and can help obtain a reliable initial alignment. In this paper, we put forward a point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is proposed to make graph matching more suitable for partially overlapping point clouds. The objective function is optimized by simulated annealing to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which transforms the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost than other point cloud registration methods.
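The first stage, initial correspondences from per-point feature descriptors, can be sketched minimally. Random vectors stand in for the 33-bin FPFH histograms, and the mutual nearest-neighbour rule used here is one common filtering choice, not necessarily the authors' exact scheme:

```python
import numpy as np

def initial_correspondences(feat_src, feat_tgt):
    """Mutual nearest-neighbour matching of per-point feature
    histograms (stand-ins for 33-bin FPFH descriptors)."""
    # Pairwise distance matrix between source and target descriptors
    d = np.linalg.norm(feat_src[:, None, :] - feat_tgt[None, :, :], axis=2)
    nn_st = d.argmin(axis=1)          # best target for each source point
    nn_ts = d.argmin(axis=0)          # best source for each target point
    # Keep only pairs that agree in both directions
    return [(i, j) for i, j in enumerate(nn_st) if nn_ts[j] == i]

rng = np.random.default_rng(0)
feat_tgt = rng.random((20, 33))
perm = rng.permutation(20)
feat_src = feat_tgt[perm] + 0.01 * rng.random((20, 33))  # noisy shuffled copy
pairs = initial_correspondences(feat_src, feat_tgt)
print(len(pairs))
```

In the full pipeline these candidate pairs would then feed the graph-matching objective optimized by simulated annealing.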

  4. RBind: computational network method to predict RNA binding sites.

    PubMed

    Wang, Kaili; Jian, Yiren; Wang, Huiwen; Zeng, Chen; Zhao, Yunjie

    2018-04-26

    Non-coding RNA molecules play essential roles by interacting with other molecules to perform various biological functions. However, it is difficult to determine RNA structures due to their flexibility. At present, the number of experimentally solved RNA-ligand and RNA-protein structures is still insufficient. Therefore, binding site prediction for non-coding RNA is required to understand their functions. Current RNA binding site prediction algorithms produce many false positive nucleotides that are distant from the binding sites. Here, we present a network approach, RBind, to predict RNA binding sites. We benchmarked RBind on RNA-ligand and RNA-protein datasets. The average accuracy of 0.82 on RNA-ligand and 0.63 on RNA-protein testing showed that this network strategy has reliable accuracy for binding site prediction. The codes and datasets are available at https://zhaolab.com.cn/RBind. yjzhaowh@mail.ccnu.edu.cn. Supplementary data are available at Bioinformatics online.

  5. Multiple-scanning-probe tunneling microscope with nanoscale positional recognition function.

    PubMed

    Higuchi, Seiji; Kuramochi, Hiromi; Laurent, Olivier; Komatsubara, Takashi; Machida, Shinichi; Aono, Masakazu; Obori, Kenichi; Nakayama, Tomonobu

    2010-07-01

    Over the past decade, multiple-scanning-probe microscope systems with independently controlled probes have been developed for nanoscale electrical measurements. We developed a quadruple-scanning-probe tunneling microscope (QSPTM) that can determine and control the probe positions through scanning-probe imaging. The difficulty of operating multiple probes with submicrometer precision increases drastically with the number of probes. To solve problems such as determining the relative positions of the probes and avoiding contact between them, we adopted sample-scanning methods to obtain four images simultaneously and developed an original control system for QSPTM operation with a function of automatic positional recognition. These improvements make the QSPTM a more practical and useful instrument, since four images can now be reliably produced and the positioning of the four probes becomes easier owing to the reduced chance of accidental contact between them.

  6. On some methods of discrete systems behaviour simulation

    NASA Astrophysics Data System (ADS)

    Sytnik, Alexander A.; Posohina, Natalia I.

    1998-07-01

    The project solves one of the fundamental problems of mathematical cybernetics and discrete mathematics, the one connected with the synthesis and analysis of control systems through the study of their functional capabilities and reliable behaviour. This work deals with the case of finite-state machine behaviour restoration when structural redundancy is not available and direct updating of the current behaviour is impossible. The method described below uses number theory to build a special model of a finite-state machine: it simulates the transitions between the states of the machine using specially defined functions of exponential type. With the help of several methods of number theory and algebra, it is easy to determine whether the behaviour can be restored by this method in a given case, and also to derive the class of finite-state machines admitting such restoration.

  7. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters could be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method yields quick and accurate estimates of the aquifer parameters, and that it reliably identifies them from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
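Regression-based estimation of aquifer parameters from drawdown data can be illustrated with the classical Cooper-Jacob straight-line approximation to the Theis solution (a standard textbook method, not the authors' fitting function). The pumping-test numbers below are synthetic:

```python
import numpy as np
from scipy.special import exp1

# Synthetic pumping test from the Theis solution: s = Q/(4*pi*T) * W(u),
# with u = r^2 S / (4 T t) and W(u) the exponential integral.
# Units: Q in m^3/d, T in m^2/d, r in m, t in days.
Q, T_true, S_true, r = 500.0, 250.0, 2e-4, 60.0
t = np.logspace(-2, 1, 40)
u = r**2 * S_true / (4 * T_true * t)
s = Q / (4 * np.pi * T_true) * exp1(u)

# Cooper-Jacob: for small u, s is linear in log10(t). Fit the late-time
# data (u < 0.01) with a straight line and read T and S off the fit.
late = u < 0.01
b, a = np.polyfit(np.log10(t[late]), s[late], 1)   # slope, intercept
T_est = 2.303 * Q / (4 * np.pi * b)
t0 = 10 ** (-a / b)                # time where the fitted line hits s = 0
S_est = 2.25 * T_est * t0 / r**2
print(round(T_est, 1), f"{S_est:.2e}")
```

The proposed method in the abstract plays a similar role but regresses against a fitted form of the full Theis well function, so it is not restricted to late-time data.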

  8. Planning and problem-solving training for patients with schizophrenia: a randomized controlled trial

    PubMed Central

    2011-01-01

    Background The purpose of this study was to assess whether planning and problem-solving training is more effective in improving functional capacity in patients with schizophrenia than a training program addressing basic cognitive functions. Methods Eighty-nine patients with schizophrenia were randomly assigned either to a computer assisted training of planning and problem-solving or a training of basic cognition. Outcome variables included planning and problem-solving ability as well as functional capacity, which represents a proxy measure for functional outcome. Results Planning and problem-solving training improved one measure of planning and problem-solving more strongly than basic cognition training, while two other measures of planning did not show a differential effect. Participants in both groups improved over time in functional capacity. There was no differential effect of the interventions on functional capacity. Conclusion A differential effect of targeting specific cognitive functions on functional capacity could not be established. Small differences on cognitive outcome variables indicate a potential for differential effects. This will have to be addressed in further research including longer treatment programs and other settings. Trial registration ClinicalTrials.gov NCT00507988 PMID:21527028

  9. Optimal maintenance of a multi-unit system under dependencies

    NASA Astrophysics Data System (ADS)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide the required system availability at the minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not possess closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. 
This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced to the aircraft industry in an attempt to bridge the gap between the empirically-driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on the criticality and influence would be combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on the combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to represent the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of the numerical optimization technique to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability. The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to a Time Limited Dispatch (TLD) of Full Authority Digital Engine Control (FADEC) system. 
In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem, whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on a goodness-of-fit measure to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
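The MC-plus-response-surface workflow described above can be sketched in miniature: simulate a reliability measure at a few DOE points, regress it as a smooth surrogate, then optimize on the surrogate. The Weibull lifetimes, cost weights, and horizon are invented toy numbers, not values from the FADEC case:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_failures(tau, horizon=100.0, n_runs=400):
    """Monte Carlo estimate of the expected number of failures over the
    horizon when a component (Weibull lifetimes, shape 2, i.e. wear-out)
    is preventively renewed every tau hours."""
    total = 0
    for _ in range(n_runs):
        t = 0.0
        while t < horizon:
            life = rng.weibull(2.0) * 30.0        # scale 30 h
            if life < tau:                        # fails before the next PM
                total += 1
                t += life                         # corrective renewal
            else:
                t += tau                          # preventive renewal
    return total / n_runs

# DOE over candidate PM intervals, then a quadratic response surface (RSE)
taus = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
fails = np.array([mc_failures(t) for t in taus])
rse = np.poly1d(np.polyfit(taus, fails, 2))

# Total cost = PM events + weighted corrective failures, minimised on the
# smooth RSE instead of the noisy simulator
cost = lambda tau: 1.0 * (100.0 / tau) + 10.0 * rse(tau)
grid = np.linspace(5, 80, 200)
tau_opt = grid[np.argmin(cost(grid))]
print(round(tau_opt, 1))
```

The point of the surrogate is exactly as in the thesis: once the reliability measure is a continuous function of the decision variables, ordinary numerical optimization applies without closed-form renewal formulas.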

  10. Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.

    2003-01-01

    The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.

  11. Differential contributions of executive and episodic memory functions to problem solving in younger and older adults.

    PubMed

    Vandermorris, Susan; Sheldon, Signy; Winocur, Gordon; Moscovitch, Morris

    2013-11-01

    The relationship of higher order problem solving to basic neuropsychological processes likely depends on the type of problems to be solved. Well-defined problems (e.g., completing a series of errands) may rely primarily on executive functions. Conversely, ill-defined problems (e.g., navigating socially awkward situations) may, in addition, rely on medial temporal lobe (MTL) mediated episodic memory processes. Healthy young (N = 18; M = 19; SD = 1.3) and old (N = 18; M = 73; SD = 5.0) adults completed a battery of neuropsychological tests of executive and episodic memory function, and experimental tests of problem solving. Correlation analyses and age group comparisons demonstrated differential contributions of executive and autobiographical episodic memory function to well-defined and ill-defined problem solving and evidence for an episodic simulation mechanism underlying ill-defined problem solving efficacy. Findings are consistent with the emerging idea that MTL-mediated episodic simulation processes support the effective solution of ill-defined problems, over and above the contribution of frontally mediated executive functions. Implications for the development of intervention strategies that target preservation of functional independence in older adults are discussed.

  12. A general strategy to solve the phase problem in RNA crystallography

    PubMed Central

    Keel, Amanda Y.; Rambo, Robert P.; Batey, Robert T.; Kieft, Jeffrey S.

    2007-01-01

    X-ray crystallography of biologically important RNA molecules has been hampered by technical challenges, including finding a heavy-atom derivative to obtain high-quality experimental phase information. Existing techniques have drawbacks, severely limiting the rate at which important new structures are solved. To address this need, we have developed a reliable means to localize heavy atoms specifically to virtually any RNA. By solving the crystal structures of thirteen variants of the G·U wobble pair cation binding motif we have identified an optimal version that, when inserted into an RNA helix, introduces a high-occupancy cation binding site suitable for phasing. This “directed soaking” strategy can be integrated fully into existing RNA and crystallography methods, potentially increasing the rate at which important structures are solved and facilitating routine solving of structures using Cu-Kα radiation. The method has already proven successful, having been used to solve several novel crystal structures. PMID:17637337

  13. Laboratory Scale Electrodeposition. Practice and Applications.

    ERIC Educational Resources Information Center

    Bruno, Thomas J.

    1986-01-01

    Discusses some aspects of electrodeposition and electroplating. Emphasizes the materials, techniques, and safety precautions necessary to make electrodeposition work reliably in the chemistry laboratory. Describes some problem-solving applications of this process. (TW)

  14. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. With the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of the transactional composition Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
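Skyline computation over QoS vectors amounts to a Pareto dominance filter: a service survives unless some other service is at least as good on every dimension and strictly better on one. A block-nested-loop sketch with made-up (response time, cost) pairs, both lower-is-better:

```python
def skyline(services):
    """Return the services not dominated on any QoS dimension
    (block-nested-loop skyline; lower is better on all dimensions)."""
    def dominates(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [s for s in services
            if not any(dominates(t, s) for t in services)]

# Hypothetical QoS of functionally equivalent candidate services,
# as (response_time_ms, cost) pairs
qos = [(120, 0.9), (100, 1.2), (150, 0.8), (110, 1.0), (130, 1.1)]
print(skyline(qos))  # (130, 1.1) is dominated by (120, 0.9) and dropped
```

Unlike a weighted overall score, the skyline keeps every trade-off point, which is the information-loss argument the abstract makes.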

  15. SABRE: a bio-inspired fault-tolerant electronic architecture.

    PubMed

    Bremner, P; Liu, Y; Samie, M; Dragffy, G; Pipe, A G; Tempesti, G; Timmis, J; Tyrrell, A M

    2013-03-01

    As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration for biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial-dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.

  16. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. With the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of the transactional composition Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.

  17. Construct Validation of the Physics Metacognition Inventory

    NASA Astrophysics Data System (ADS)

    Taasoobshirazi, Gita; Farley, John

    2013-02-01

    The 24-item Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. Items were classified into eight subcomponents subsumed under two broader components: knowledge of cognition and regulation of cognition. The students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. An exploratory factor analysis provided evidence of construct validity, revealing six components of students' metacognition when solving physics problems including: knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. Although women and men differed on the components, they had equivalent overall metacognition for problem solving. The implications of these findings for future research are discussed.

  18. Cognitive functioning and social problem-solving skills in schizophrenia.

    PubMed

    Hatashita-Wong, Michi; Smith, Thomas E; Silverstein, Steven M; Hull, James W; Willson, Deborah F

    2002-05-01

    This study examined the relationships between symptoms, cognitive functioning, and social skill deficits in schizophrenia. Few studies have incorporated measures of cognitive functioning and symptoms in predictive models for social problem solving. For our study, 44 participants were recruited from consecutive outpatient admissions. Neuropsychological tests were given to assess cognitive function, and social problem solving was assessed using structured vignettes designed to evoke the participant's ability to generate, evaluate, and apply solutions to social problems. A sequential model-fitting method of analysis was used to incorporate social problem solving, symptom presentation, and cognitive impairment into linear regression models. Predictor variables were drawn from demographic, cognitive, and symptom domains. Because this method of analysis was exploratory and not intended as hierarchical modelling, no a priori hypotheses were proposed. Participants with higher scores on tests of cognitive flexibility were better able to generate accurate, appropriate, and relevant responses to the social problem-solving vignettes. The results suggest that cognitive flexibility is a potentially important mediating factor in social problem-solving competence. While other factors are related to social problem-solving skill, this study supports the importance of cognition and understanding how it relates to the complex and multifaceted nature of social functioning.

  19. Enhancing insight in scientific problem solving by highlighting the functional features of prototypes: an fMRI study.

    PubMed

    Hao, Xin; Cui, Shuai; Li, Wenfu; Yang, Wenjing; Qiu, Jiang; Zhang, Qinglin

    2013-10-09

    Insight can be the first step toward creating a groundbreaking product. As evident in anecdotes and major inventions in history, heuristic events (heuristic prototypes) prompted inventors to acquire insight when solving problems. Bionic imitation in scientific innovation is an example of this kind of problem solving. In particular, heuristic prototypes (e.g., the lotus effect; the very high water repellence exhibited by lotus leaves) help solve insight problems (e.g., non-stick surfaces). We speculated that the biological functional feature of prototypes is a critical factor in inducing insightful scientific problem solving. In this functional magnetic resonance imaging (fMRI) study, we selected scientific innovation problems and utilized "learning prototypes-solving problems" two-phase paradigm to test the supposition. We also explored its neural mechanisms. Functional MRI data showed that the activation of the middle temporal gyrus (MTG, BA 37) and the middle occipital gyrus (MOG, BA 19) were associated with the highlighted functional feature condition. fMRI data also indicated that the MTG (BA 37) could be responsible for the semantic processing of functional features and for the formation of novel associations based on related functions. In addition, the MOG (BA 19) could be involved in the visual imagery of formation and application of function association between the heuristic prototype and problem. Our findings suggest that both semantic processing and visual imagery could be crucial components underlying scientific problem solving. © 2013 Elsevier B.V. All rights reserved.

  20. HiRel - Reliability/availability integrated workstation tool

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Dugan, Joanne B.

    1992-01-01

    The HiRel software tool is described and demonstrated by application to the mission avionics subsystem of the Advanced System Integration Demonstrations (ASID) system that utilizes the PAVE PILLAR approach. HiRel marks another accomplishment toward the goal of producing a totally integrated computer-aided design (CAD) workstation design capability. Since a reliability engineer generally represents a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The graphical postprocessor module HARPO makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes. The addition of several powerful HARP modeling engines provides the user with a reliability/availability modeling capability for a wide range of system applications all integrated under a common interactive graphical input-output capability.

  1. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.
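Interval-coefficient programs of this family are commonly handled by solving two bounding submodels: an optimistic one built from the favourable interval bounds and a pessimistic one from the unfavourable bounds, reporting the objective as an interval. A toy sketch with illustrative numbers, far simpler than SI-IFTMILP (no fuzziness, two stages, or integers):

```python
from scipy.optimize import linprog

# Interval-coefficient LP: maximise c x  s.t.  a x <= b, x >= 0,
# where c, a, b are known only as intervals [lo, hi].
c_lo, c_hi = [3.0, 2.0], [4.0, 3.0]           # objective coefficients
a_lo, a_hi = [[1.0, 1.0]], [[1.5, 1.2]]        # resource use per unit
b_lo, b_hi = [80.0], [100.0]                   # resource availability

def submodel(c, a, b):
    # linprog minimises, so negate the objective to maximise
    res = linprog([-ci for ci in c], A_ub=a, b_ub=b, bounds=[(0, None)] * 2)
    return -res.fun

f_best = submodel(c_hi, a_lo, b_hi)   # optimistic: high profit, loose limits
f_worst = submodel(c_lo, a_hi, b_lo)  # pessimistic: low profit, tight limits
print(f_worst, f_best)                # objective value as an interval
```

The SI-IFTMILP model layers fuzzy boundary intervals, two-stage recourse, and integer expansion decisions on top of this basic best/worst decomposition.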

  2. Enterprise Management Network Architecture Distributed Knowledge Base Support

    DTIC Science & Technology

    1990-11-01

    Advantages: Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: first, it can be more reliable...does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition...problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse

  3. [Construction and application of an onboard absorption analyzer device for CDOM].

    PubMed

    Lin, Jun-Fang; Sun, Zhao-Hua; Cao, Wen-Xi; Hu, Shui-Bo; Xu, Zhan-Tang

    2013-04-01

    Colored dissolved organic matter (CDOM) plays an important role in marine ecosystems. To solve current problems in the measurement of CDOM absorption, an automated onboard analyzer based on liquid core waveguides (Teflon AF LWCC/LCW) was constructed. The analyzer has notable characteristics including an adjustable optical pathlength, a wide measurement range, and high sensitivity. The filtration-and-injection module implements automated filtration, sample injection, and LWCC cleaning. A LabVIEW software platform controls the running state of the analyzer and acquires real-time data including light absorption spectra, GPS data, and CTW data. Comparison experiments and shipboard measurements proved the analyzer to be reliable and robust.
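The sensitivity advantage of a long liquid-core waveguide follows from the Beer-Lambert law: absorbance scales with pathlength, so a 1 m waveguide multiplies the signal of weakly absorbing CDOM relative to a 10 cm cuvette. A sketch of the intensity-to-absorption conversion, with assumed intensities and an assumed 1 m path (not values from the instrument in the abstract):

```python
import numpy as np

# Beer-Lambert conversion from raw intensities to a CDOM absorption
# coefficient: A = log10(I_ref / I_sample), a = 2.303 * A / L  (units 1/m)
L = 1.0                                      # optical path length, metres
I_ref = np.array([1000.0, 980.0, 950.0])     # blank (pure water) counts
I_sam = np.array([890.0, 905.0, 910.0])      # sample counts per wavelength
absorbance = np.log10(I_ref / I_sam)
a_cdom = 2.303 * absorbance / L
print(np.round(a_cdom, 3))
```

The same arithmetic with L = 0.1 m would yield absorbance values ten times smaller for the same water, which is why the waveguide pathlength matters for detecting low CDOM concentrations.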

  4. Distributed computation: the new wave of synthetic biology devices.

    PubMed

    Macía, Javier; Posas, Francesc; Solé, Ricard V

    2012-06-01

    Synthetic biology (SB) offers a unique opportunity for designing complex molecular circuits able to perform predefined functions. But the goal of achieving a flexible toolbox of reusable molecular components has been shown to be limited due to circuit unpredictability, incompatible parts or random fluctuations. Many of these problems arise from the challenges posed by engineering the molecular circuitry: multiple wires are usually difficult to implement reliably within one cell and the resulting systems cannot be reused in other modules. These problems are solved by means of a nonstandard approach to single cell devices, using cell consortia and allowing the output signal to be distributed among different cell types, which can be combined in multiple, reusable and scalable ways. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Acquiring an understanding of design: evidence from children's insight problem solving.

    PubMed

    Defeyter, Margaret Anne; German, Tim P

    2003-09-01

    The human ability to make tools and use them to solve problems may not be zoologically unique, but it is certainly extraordinary. Yet little is known about the conceptual machinery that makes humans so competent at making and using tools. Do adults and children have concepts specialized for understanding human-made artifacts? If so, are these concepts deployed in attempts to solve novel problems? Here we present new data, derived from problem-solving experiments, which support the following. (i) The structure of the child's concept of artifact function changes profoundly between ages 5 and 7. At age 5, the child's conceptual machinery defines the function of an artifact as any goal a user might have; by age 7, its function is defined by the artifact's typical or intended use. (ii) This conceptual shift has a striking effect on problem-solving performance, i.e. the child's concept of artifact function appears to be deployed in problem solving. (iii) This effect on problem solving is not caused by differences in the amount of knowledge that children have about the typical use of a particular tool; it is mediated by the structure of the child's artifact concept (which organizes and deploys the child's knowledge). In two studies, children between 5 and 7 years of age were matched for their knowledge of what a particular artifact "is for", and then given a problem that can only be solved if that tool is used for an atypical purpose. All children performed well in a baseline condition. But when they were primed by a demonstration of the artifact's typical function, 5-year-old children solved the problem much faster than 6-7-year-old children. Because all children knew what the tools were for, differences in knowledge alone cannot explain the results. 
We argue that the older children were slower to solve the problem when the typical function was primed because (i) their artifact concept plays a role in problem solving, and (ii) intended purpose is central to their concept of artifact function, but not to that of the younger children.

  6. Mathematical modeling and fuzzy availability analysis for serial processes in the crystallization system of a sugar plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram

    2017-03-01

    The binary-state (success or failure) assumptions used in conventional reliability analysis are inappropriate for complex industrial systems because sufficient probabilistic information is lacking. For large complex systems, the uncertainty of each individual parameter compounds the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability is used for reliability analysis of the system, and the effect of the coverage factor and of subsystem failure and repair rates on the fuzzy availability of a fault-tolerant crystallization system of a sugar plant is analyzed. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations. These governing differential equations are solved with the Runge-Kutta fourth-order method.
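    The two-state case gives a compact illustration of this approach. The sketch below (Python; the rates, step size, and single-unit structure are illustrative assumptions, not the paper's crystallization model) integrates the Chapman-Kolmogorov equations of one repairable unit with the classical Runge-Kutta fourth-order method and compares the result with the known steady-state availability:

```python
import numpy as np

# Two-state Markov availability model: state 0 = working, state 1 = failed,
# with assumed failure rate lam and repair rate mu (illustrative values).
lam, mu = 0.02, 0.5           # per-hour rates, hypothetical
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])  # generator matrix (rows sum to zero)

def rk4_step(P, h):
    """One classical RK4 step for dP/dt = P @ Q."""
    f = lambda P: P @ Q
    k1 = f(P)
    k2 = f(P + 0.5 * h * k1)
    k3 = f(P + 0.5 * h * k2)
    k4 = f(P + h * k3)
    return P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

P = np.array([1.0, 0.0])      # system starts in the working state
h = 0.1
for _ in range(5000):         # integrate to t = 500 h
    P = rk4_step(P, h)

steady = mu / (lam + mu)      # analytical steady-state availability
print(P[0], steady)           # numerical availability approaches mu/(lam+mu)
```

    The same stepping routine applies unchanged to larger generator matrices, which is why RK4 is a common workhorse for the coupled state equations of multi-subsystem models like the one in the paper.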

  7. Reliability approach to rotating-component design. [fatigue life and stress concentration

    NASA Technical Reports Server (NTRS)

    Kececioglu, D. B.; Lalli, V. R.

    1975-01-01

    A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.

  8. Stochastic modelling of the hydrologic operation of rainwater harvesting systems

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Guo, Yiping

    2018-07-01

    Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
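    As a rough cross-check of such analytical expressions, the water supply reliability of an RWH system can also be estimated by continuous simulation. The sketch below (Python; the parameter values and the exponential storm model are illustrative assumptions, not the paper's fitted data) draws storms from a marked Poisson process and runs a daily yield-after-spillage mass balance on the tank:

```python
import random

# Continuous-simulation sketch: exponential interevent times and storm depths
# approximate a marked Poisson rainfall process; all numbers are hypothetical.
random.seed(1)
area, runoff_coef = 100.0, 0.9     # roof area (m^2), runoff coefficient
storage_cap, demand = 5.0, 0.3     # tank size (m^3), daily water use (m^3)
mean_gap, mean_depth = 3.0, 8.0    # days between storms, storm depth (mm)

days, storage, met = 3650, 0.0, 0
next_storm = random.expovariate(1.0 / mean_gap)
for day in range(days):
    inflow = 0.0
    if day >= next_storm:
        depth = random.expovariate(1.0 / mean_depth)       # storm depth, mm
        inflow = runoff_coef * area * depth / 1000.0       # runoff volume, m^3
        next_storm = day + random.expovariate(1.0 / mean_gap)
    storage = min(storage + inflow, storage_cap)           # spill the excess
    supplied = min(storage, demand)                        # yield after spillage
    storage -= supplied
    if supplied >= demand:
        met += 1

reliability = met / days     # fraction of days on which demand is fully met
print(round(reliability, 3))
```

    Sweeping `storage_cap` in such a simulation traces the storage-reliability curve that the paper's closed-form sizing equation expresses directly.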

  9. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. Because a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving overall accuracy. The magnetometer-GPS configuration ultimately enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention provides black-box spacecraft navigation, producing attitude, orbit, and rate estimates with high accuracy and reliability without any ground input.

  10. An investigation of Taiwanese early adolescents' self-evaluations concerning the Big 6 information problem-solving approach.

    PubMed

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B6IPS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the adolescents' perceptions about the Big 6 information problem-solving approach. In addition, the adolescents had quite different responses toward different subscales of the approach. Moreover, females tended to have higher quality information-searching skills than their male counterparts. The adolescents of different grades also displayed varying views toward the approach. Other results are also provided.

  11. Cross-syndrome comparison of real-world executive functioning and problem solving using a new problem-solving questionnaire.

    PubMed

    Camp, Joanne S; Karmiloff-Smith, Annette; Thomas, Michael S C; Farran, Emily K

    2016-12-01

    Individuals with neurodevelopmental disorders like Williams syndrome and Down syndrome exhibit executive function impairments on experimental tasks (Lanfranchi, Jerman, Dal Pont, Alberti, & Vianello, 2010; Menghini, Addona, Costanzo, & Vicari, 2010), but the way that they use executive functioning for problem solving in everyday life has not hitherto been explored. The study aim is to understand cross-syndrome characteristics of everyday executive functioning and problem solving. Parents/carers of individuals with Williams syndrome (n=47) or Down syndrome (n=31) of a similar chronological age (m=17 years 4 months and 18 years respectively) as well as those of a group of younger typically developing children (n=34; m=8 years 3 months) completed two questionnaires: the Behavior Rating Inventory of Executive Function (BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000) and a novel Problem-Solving Questionnaire. The rated likelihood of reaching a solution in a problem solving situation was lower for both syndromic groups than the typical group, and lower still for the Williams syndrome group than the Down syndrome group. The proportion of group members meeting the criterion for clinical significance on the BRIEF was also highest for the Williams syndrome group. While changing response, avoiding losing focus and maintaining perseverance were important for problem-solving success in all groups, asking for help and avoiding becoming emotional were also important for the Down syndrome and Williams syndrome groups respectively. Keeping possessions in order was a relative strength amongst BRIEF scales for the Down syndrome group. Results suggest that individuals with Down syndrome tend to use compensatory strategies for problem solving (asking for help and potentially, keeping items well ordered), while for individuals with Williams syndrome, emotional reactions disrupt their problem-solving skills. 
This paper highlights the importance of identifying syndrome-specific problem-solving strengths and difficulties to improve effective functioning in everyday life. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Smart Operations in Distributed Energy Resources System

    NASA Astrophysics Data System (ADS)

    Wei, Li; Jie, Shu; Zhang-XianYong; Qing, Zhou

    Smart grid capabilities are being proposed to help solve the challenges of system operations: trade-offs between energy and environmental needs must be constantly negotiated, while rising threats of disruption demand even greater assurance of a reliable electricity supply. This paper explores models for the components of a distributed energy resources system (DG, storage, and load), and reviews the evolving nature of electricity markets to deal with this complexity, with an emphasis on how signals from these markets affect power system control. Smart grid capabilities will also impact reliable operations, while cyber security must be addressed as a culture change that influences all system design, implementation, and maintenance. Lastly, the paper identifies significant questions for further research and the need for a simulation environment that supports such investigation and informs deployments to mitigate operational issues as they arise.

  13. Resources in Technology: Problem-Solving.

    ERIC Educational Resources Information Center

    Technology Teacher, 1986

    1986-01-01

    This instructional module examines a key function of science and technology: problem solving. It studies the meaning of problem solving, looks at techniques for problem solving, examines case studies that exemplify the problem-solving approach, presents problems for the reader to solve, and provides a student self-quiz. (Author/CT)

  14. Efficient algorithms for analyzing the singularly perturbed boundary value problems of fractional order

    NASA Astrophysics Data System (ADS)

    Sayevand, K.; Pichaghchi, K.

    2018-04-01

    In this paper, we are concerned with the description of singularly perturbed boundary value problems in the scope of fractional calculus. One of the main methods used to solve these problems in classical calculus is the so-called matched asymptotic expansion method. This is not achievable, however, via the existing classical definitions of the fractional derivative, because they do not obey the chain rule, which is one of the key elements of the matched asymptotic expansion method. In order to accommodate this method to fractional derivatives, we employ a relatively new derivative, the so-called local fractional derivative. Using the properties of the local fractional derivative, we extend the matched asymptotic expansion method to the scope of fractional calculus and introduce a reliable new algorithm for developing approximate solutions of singularly perturbed boundary value problems of fractional order. In the new method, the original problem is partitioned into inner and outer solution equations. The reduced equation is solved with suitable boundary conditions, which provide the terminal boundary conditions for the boundary layer correction. The inner solution problem is next solved as a solvable boundary value problem. The width of the boundary layer is approximated using an appropriate resemblance function. Some theoretical results are established and proved. Illustrative examples are solved, and the results are compared with those of the matched asymptotic expansion method and the homotopy analysis method to demonstrate the accuracy and efficiency of the method. It can be observed that the proposed method approximates the exact solution very well, not only in the boundary layer but also away from it.

  15. A problem-solving task specialized for functional neuroimaging: validation of the Scarborough adaptation of the Tower of London (S-TOL) using near-infrared spectroscopy

    PubMed Central

    Ruocco, Anthony C.; Rodrigo, Achala H.; Lam, Jaeger; Di Domenico, Stefano I.; Graves, Bryanna; Ayaz, Hasan

    2014-01-01

    Problem-solving is an executive function subserved by a network of neural structures of which the dorsolateral prefrontal cortex (DLPFC) is central. Whereas several studies have evaluated the role of the DLPFC in problem-solving, few standardized tasks have been developed specifically for use with functional neuroimaging. The current study adapted a measure with established validity for the assessment of problem-solving abilities to design a test more suitable for functional neuroimaging protocols. The Scarborough adaptation of the Tower of London (S-TOL) was administered to 38 healthy adults while hemodynamic oxygenation of the PFC was measured using 16-channel continuous-wave functional near-infrared spectroscopy (fNIRS). Compared to a baseline condition, problems that required two or three steps to achieve a goal configuration were associated with higher activation in the left DLPFC and deactivation in the medial PFC. Individuals scoring higher in trait deliberation showed consistently higher activation in the left DLPFC regardless of task difficulty, whereas individuals lower in this trait displayed less activation when solving simple problems. Based on these results, the S-TOL may serve as a standardized task to evaluate problem-solving abilities in functional neuroimaging studies. PMID:24734017

  16. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  17. Physical activity problem-solving inventory for adolescents: development and initial validation.

    PubMed

    Thompson, Debbe; Bhatt, Riddhi; Watson, Kathy

    2013-08-01

    Youth encounter physical activity barriers, often called problems. The purpose of problem solving is to generate solutions to overcome the barriers. Enhancing problem-solving ability may enable youth to be more physically active. Therefore, a method for reliably assessing physical activity problem-solving ability is needed. The purpose of this research was to report the development and initial validation of the physical activity problem-solving inventory for adolescents (PAPSIA). Qualitative and quantitative procedures were used. The social problem-solving inventory for adolescents guided the development of the PAPSIA scale. Youth (14- to 17-year-olds) were recruited using standard procedures, such as distributing flyers in the community and to organizations likely to be attended by adolescents. Cognitive interviews were conducted in person. Adolescents completed pen and paper versions of the questionnaire and/or scales assessing social desirability, self-reported physical activity, and physical activity self-efficacy. An expert panel review, cognitive interviews, and a pilot study (n = 129) established content validity. Construct, concurrent, and predictive validity were also established (n = 520 youth). PAPSIA is a promising measure for assessing youth physical activity problem-solving ability. Future research will assess its validity with objectively measured physical activity.

  18. Assessing student written problem solutions: A problem-solving rubric with application to introductory physics

    NASA Astrophysics Data System (ADS)

    Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie

    2016-06-01

    Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic classroom work. It is also useful if such tools can be employed by instructors to guide their pedagogy. We describe the design, development, and testing of a simple rubric to assess written solutions to problems given in undergraduate introductory physics courses. In particular, we present evidence for the validity, reliability, and utility of the instrument. The rubric identifies five general problem-solving processes and defines the criteria to attain a score in each: organizing problem information into a Useful Description, selecting appropriate principles (Physics Approach), applying those principles to the specific conditions in the problem (Specific Application of Physics), using Mathematical Procedures appropriately, and displaying evidence of an organized reasoning pattern (Logical Progression).

  19. The politics of insight

    PubMed Central

    Salvi, Carola; Cristofori, Irene; Grafman, Jordan; Beeman, Mark

    2016-01-01

    Previous studies showed that liberals and conservatives differ in cognitive style. Liberals are more flexible and tolerant of complexity and novelty, whereas conservatives are more rigid, are more resistant to change, and prefer clear answers. We administered a set of compound remote associate problems, a task extensively used to differentiate problem-solving styles (via insight or analysis). Using this task, several studies have shown that self-reports distinguishing insight from analytic problem-solving are reliable and are associated with two different neural circuits. In our research we found that participants self-identifying with distinct political orientations demonstrated differences in problem-solving strategy. Liberals solved significantly more problems via insight instead of in a step-by-step analytic fashion. Our findings extend previous observations that self-identified political orientations reflect differences in cognitive styles. More specifically, we show that type of political orientation is associated with problem-solving strategy. The data converge with previous neurobehavioural and cognitive studies indicating a link between cognitive style and the psychological mechanisms that mediate political beliefs. PMID:26810954

  1. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of probability density functions and moments, optimization problems that pursue performance-, robustness-, and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control illustrate the relevance of incorporating uncertainty in the early stages of design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem, and a meaningful probabilistic description of its solution, is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  2. Apparatus for localizing disturbances in pressurized water reactors (PWR)

    DOEpatents

    Sykora, Dalibor

    1989-01-01

    The invention according to CS-PS 177386, entitled ''Apparatus for increasing the efficiency and passivity of the functioning of a bubbling-vacuum system for localizing disturbances in nuclear power plants with a pressurized water reactor'', concerns an important area of nuclear power engineering under development in the RGW member countries. The invention addresses the problem of increasing the reliability and intensifying the operation of this critical system for guaranteeing the safety of standard nuclear power plants of Soviet design. The essence of the invention is the installation of a simple, passively operating supplementary apparatus. As a result, the system exhibits: first, an improvement and a simultaneous increase in the reliability of its function during the critical transition period that follows the filling of the second space with air from the first space; secondly, elimination of the hitherto unavoidable initiating role of the active sprinkler-condensation device; thirdly, more effective performance and disintegration of the water flowing from the bubbling condenser into the first space; and fourthly, enhanced utilization of the heat-conducting capacity of the water reservoir of the bubbling condenser. Representative supplementary apparatus are autonomous, local secondary sprinkler-sprayer systems without an insert, which spray the water under the effect of gravity. 1 fig.

  3. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guruswamy, L.D.; Palmer, G.W.R. Sir; Weston, B.H.

    A litany of dismal happenings - global warming, ozone layer depletion, desertification, destruction of biodiversity, acid rain, and nuclear and water accidents - are but some of the subjects covered by this book, a problem-solving casebook authored by three educators. This new book makes the obvious but important point that environmental issues are not limited by national boundaries. The book is divided into three parts. The first three chapters of part I discuss the basic principles of traditional international law without any reference to environmental issues. Part II, comprising seven chapters, deals with hypothetical problems that affect various aspects of the environment vis-a-vis the norms, institutions, and procedures through which the international legal system operates. The book concludes with two chapters dealing with future environmental concerns. The book focuses on issue-spotting, problem-solving, and synthesis over the assimilation and comprehension of raw, disembodied knowledge. It aims to help manage our common future on this planet, for which we will need a new global regime based essentially on the extension into international life of the rule of law, together with reliable mechanisms for accountability and enforcement that provide the basis for the effective functioning of national societies.

  5. Measurement of the True Dynamic and Static Pressures in Flight

    NASA Technical Reports Server (NTRS)

    Kiel, Georg

    1939-01-01

    In this report, two reliable methods are presented, with the aid of which the undisturbed flight dynamic pressure and the true static pressure may be determined without error. These problems were solved chiefly through practical flight tests.

  6. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of quantum-dot cellular automata (QCA), a novel field-coupled nanoelectronic device. It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices for different input signals. The binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. With the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of its components. This method thus contributes to the construction of reliable QCA circuits.
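    The importance-value idea can be illustrated with a toy structure function. The sketch below (Python; the three-component series-parallel system and its component reliabilities are hypothetical, not the QCA XOR-gate models of the paper) evaluates the top event exactly by state enumeration and ranks components by their Birnbaum importance values:

```python
from itertools import product

def system_works(a, b, c):
    # Illustrative structure function: the system works if component a works
    # AND at least one of the redundant pair b, c works.
    return a and (b or c)

def reliability(p):
    """Exact system reliability by enumerating all component states."""
    r = 0.0
    for states in product([0, 1], repeat=3):
        prob = 1.0
        for s, pi in zip(states, p):
            prob *= pi if s else (1 - pi)
        if system_works(*states):
            r += prob
    return r

p = [0.95, 0.9, 0.9]                 # hypothetical component reliabilities

def birnbaum(i):
    # Birnbaum importance: dR/dp_i = R(p_i = 1) - R(p_i = 0)
    up, dn = p.copy(), p.copy()
    up[i], dn[i] = 1.0, 0.0
    return reliability(up) - reliability(dn)

ivs = [birnbaum(i) for i in range(3)]
print(reliability(p), ivs)           # the series component dominates the IVs
```

    BDD packages compute the same quantities without the exponential enumeration, which is what makes the approach scale to full circuits.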

  7. Towards Engineering Biological Systems in a Broader Context.

    PubMed

    Venturelli, Ophelia S; Egbert, Robert G; Arkin, Adam P

    2016-02-27

    Significant advances have been made in synthetic biology to program information processing capabilities in cells. While these designs can function predictably in controlled laboratory environments, the reliability of these devices in complex, temporally changing environments has not yet been characterized. As human society faces global challenges in agriculture, human health and energy, synthetic biology should develop predictive design principles for biological systems operating in complex environments. Natural biological systems have evolved mechanisms to overcome innumerable and diverse environmental challenges. Evolutionary design rules should be extracted and adapted to engineer stable and predictable ecological function. We highlight examples of natural biological responses spanning the cellular, population and microbial community levels that show promise in synthetic biology contexts. We argue that synthetic circuits embedded in host organisms or designed ecologies informed by suitable measurement of biotic and abiotic environmental parameters could be used as engineering substrates to achieve target functions in complex environments. Successful implementation of these methods will broaden the context in which synthetic biological systems can be applied to solve important problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

    Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA on eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and the standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing participants than in participants with CP. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates and allows additional musculoskeletal analyses of surgically adjustable parameters, e.g. muscle-tendon lengths; it is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.

  9. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  10. The Role of the Updating Function in Solving Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Mori, Kanetaka; Okamoto, Masahiko

    2017-01-01

    We investigated how the updating function supports the integration process in solving arithmetic word problems. In Experiment 1, we measured reading time, that is, translation and integration times, when undergraduate and graduate students (n = 78) were asked to solve 2 types of problems: those containing only necessary information and those…

  11. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
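The Monte Carlo branch of such a reliability analysis can be sketched compactly. The distributions and numbers below are illustrative assumptions, not the report's data; only the limit-state formulation g = R − P (buckling strength minus applied axial load) and the conversion from failure probability to a reliability index follow the standard approach the report builds on.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 200_000

# Illustrative assumptions (not the report's data): applied axial load P
# and buckling strength R of the cylinder, both lognormally distributed.
load = rng.lognormal(mean=np.log(100.0), sigma=0.10, size=n)
strength = rng.lognormal(mean=np.log(140.0), sigma=0.08, size=n)

# Limit state g = R - P; the cylinder fails when g < 0.
g = strength - load
pf = float(np.mean(g < 0.0))

# Generalized reliability index corresponding to the failure probability.
beta = -NormalDist().inv_cdf(pf)
print(f"P_f ~ {pf:.4e}, beta ~ {beta:.2f}")
```

For rare failure events, plain Monte Carlo needs very many samples to resolve P_f, which is one reason such studies pair it with the First Order Reliability Method.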

  12. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    NASA Astrophysics Data System (ADS)

    Zang, Y.; Yang, B.

    2018-04-01

3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the Just-Noticeable-Difference perception metric to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  13. Visual modeling in an analysis of multidimensional data

    NASA Astrophysics Data System (ADS)

    Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.

    2018-01-01

The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. As a promising direction for the development of visual analysis tools for multidimensional and voluminous data, the active use of factors of subjective perception and dynamic visualization is suggested. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes of obtaining silicon carbide by an electric arc method. Solving this problem yielded several results: first, an idea of the possibilities of determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject; and third, a picture of how the areas of attention of researchers have changed over time.

  14. A study of fuzzy logic ensemble system performance on face recognition problem

    NASA Astrophysics Data System (ADS)

    Polyakova, A.; Lipinskiy, L.

    2017-02-01

Some problems are difficult to solve by using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination allows increasing the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since it draws on the diversity of its components. A new method of intelligent information technology ensemble design is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: artificial neural network, support vector machine and decision trees. These algorithms and their ensemble have been tested by solving face recognition problems. Principal component analysis (PCA) is used for feature selection.

  15. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures.
We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, so we cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications of which signatures are more appropriate to represent the information content of the hydrograph.
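The ABC step can be illustrated with a toy rejection sampler. Everything below is a stand-in assumption for illustration: a one-parameter linear rainfall-runoff model replaces the hydrological model, and the mean flow replaces an FDC-type signature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "rainfall-runoff" model (assumption: stands in for the real model):
# streamflow = k * rain + noise; the signature is the sample mean flow.
rain = rng.gamma(shape=2.0, scale=5.0, size=365)

def simulate(k):
    noise = rng.normal(0.0, 1.0, size=rain.size)
    return k * rain + noise

def signature(q):
    return q.mean()

# Synthetic "observed" signature generated from a known parameter k = 0.7.
obs_sig = signature(simulate(0.7))

# ABC rejection: keep prior draws whose simulated signature falls within
# tolerance eps of the observed signature.
prior = rng.uniform(0.0, 2.0, size=20_000)
eps = 0.2
accepted = [k for k in prior if abs(signature(simulate(k)) - obs_sig) < eps]

posterior = np.array(accepted)
print(posterior.mean(), posterior.size)
```

Shrinking the tolerance eps trades acceptance rate for posterior accuracy, which is the central tuning decision in ABC.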

  16. The Next Generation of Interoperability Agents in Healthcare

    PubMed Central

    Cardoso, Luciana; Marins, Fernando; Portela, Filipe; Santos, Manuel; Abelha, António; Machado, José

    2014-01-01

    Interoperability in health information systems is increasingly a requirement rather than an option. Standards and technologies, such as multi-agent systems, have proven to be powerful tools in interoperability issues. In the last few years, the authors have worked on developing the Agency for Integration, Diffusion and Archive of Medical Information (AIDA), which is an intelligent, agent-based platform to ensure interoperability in healthcare units. It is increasingly important to ensure the high availability and reliability of systems. The functions provided by the systems that treat interoperability cannot fail. This paper shows the importance of monitoring and controlling intelligent agents as a tool to anticipate problems in health information systems. The interaction between humans and agents through an interface that allows the user to create new agents easily and to monitor their activities in real time is also an important feature, as health systems evolve by adopting more features and solving new problems. A module was installed in Centro Hospitalar do Porto, increasing the functionality and the overall usability of AIDA. PMID:24840351

  17. Experiment and density functional theory analyses of GdTaO4 single crystal

    NASA Astrophysics Data System (ADS)

    Ding, Shoujun; Kinross, Ashlie; Wang, Xiaofei; Yang, Huajun; Zhang, Qingli; Liu, Wenpeng; Sun, Dunlu

    2018-05-01

GdTaO4 is an excellent material that can be used as a scintillator, laser matrix and self-activated phosphor, and it has generated significant interest. However, its band structure, electronic structure and optical properties still need elucidation. To address this problem, a high-quality GdTaO4 single crystal (M-type) was grown successfully using the Czochralski method, and its structure and optical properties were determined experimentally. Moreover, systematic theoretical calculations based on density functional theory were performed on M-type and M‧-type GdTaO4, and their band structures, densities of states and optical properties were obtained. The calculated results agree with the experimental results, confirming their reliability. Hence, the calculated results obtained in this work provide a deeper understanding of the GdTaO4 material, which is also useful for further investigations of GdTaO4.

  18. Improved ATLAS HammerCloud Monitoring for Local Site Administration

    NASA Astrophysics Data System (ADS)

    Böhler, M.; Elmsheuser, J.; Hönig, F.; Legger, F.; Mancinelli, V.; Sciacca, G.

    2015-12-01

Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimizations of the HammerCloud web interface and gives an overview of typical use cases.

  19. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry.

    PubMed

    Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A

    2017-03-21

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  20. Analytic functions for potential energy curves, dipole moments, and transition dipole moments of LiRb molecule.

    PubMed

    You, Yang; Yang, Chuan-Lu; Wang, Mei-Shan; Ma, Xiao-Guang; Liu, Wen-Wang; Wang, Li-Zhi

    2016-01-15

The analytic potential energy functions (APEFs) of the X(1)Σ(+), 2(1)Σ(+), a(3)Σ(+), and 2(3)Σ(+) states of the LiRb molecule are obtained using a Morse/long-range potential energy function with a damping function and the nonlinear least-squares method. These calculations were based on the potential energy curves (PECs) calculated using the multi-reference configuration interaction (MRCI) method. The reliability of the APEFs is confirmed using the curves of their first and second derivatives. Using the obtained APEFs, the rotational and vibrational energy levels of the states are determined by solving the Schrödinger equation of nuclear motion. The spectroscopic parameters, which are deduced using the Dunham expansion, and the obtained rotational and vibrational levels are compared with the reported theoretical and experimental values. The correlation effect of the inner-shell electrons remarkably improves the agreement with the experimental spectroscopic parameters. For the first time, the APEFs for the dipole moments and transition dipole moments of the states have been determined based on the curves obtained from the MRCI calculations. Copyright © 2015 Elsevier B.V. All rights reserved.
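The fitting step behind an APEF can be sketched with a plain Morse function; the paper's form additionally includes long-range and damping terms, and the parameter values below are invented for illustration, not LiRb data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Plain Morse potential (a simplified analytic PEF).
def morse(r, De, a, re):
    return De * (1.0 - np.exp(-a * (r - re)))**2

# Synthetic "ab initio" points from known parameters plus tiny noise,
# standing in for MRCI potential energy curve data.
rng = np.random.default_rng(4)
r = np.linspace(2.0, 8.0, 40)
true_params = (0.05, 1.2, 3.5)   # De, a, re (made-up values)
v = morse(r, *true_params) + rng.normal(0.0, 1e-5, size=r.size)

# Nonlinear least-squares fit of the analytic form to the curve.
popt, _ = curve_fit(morse, r, v, p0=(0.04, 1.0, 3.0))
print(popt)
```

The quality of such a fit is typically checked, as in the paper, by inspecting the smoothness of the first and second derivatives of the fitted function.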

  1. Numerical Optimization Algorithms and Software for Systems Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, Michael

    2013-02-02

The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
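Optimization over a stoichiometric matrix is typified by flux balance analysis, which can be sketched as a linear program. The three-reaction toy network below is an illustrative assumption, not one of the project's networks.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: internal metabolites A, B;
# columns: reactions v1 uptake A, v2 converts A->B, v3 exports B).
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])

# Steady state S v = 0; maximize the export flux v3 (linprog minimizes,
# so the objective is negated). Flux bounds 0..10 on every reaction.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
              bounds=[(0, 10)] * 3, method="highs")
print(res.x)
```

Real metabolic models have the same structure but with thousands of reactions, which is why reliable large-scale LP algorithms are the stated aim.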

  2. Social problem solving in chronic pain: An integrative model of coping predicts mental health in chronic pain patients.

    PubMed

    Suso-Ribera, Carlos; Camacho-Guerrero, Laura; McCracken, Lance M; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2016-06-01

Although several models of coping have been proposed in chronic pain, research is not integrative and has not yet identified a reliable set of beneficial coping strategies. We intend to offer a comprehensive view of coping using the social problem-solving model. Participants were 369 chronic pain patients (63.78% women; mean age 58.89 years; standard deviation = 15.12 years). Correlation analyses and the structural equation model for mental health revealed potentially beneficial and harmful problem-solving components. This integrative perspective on general coping could be used to promote changes in the way patients deal with stressful conditions other than pain. © The Author(s) 2014.

  3. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
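The residual-driven column selection idea can be sketched as a greedy procedure. This is a simplified stand-in for the residual-based pivoting concept, not the MRQR algorithm itself, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression data: only columns 0 and 3 truly matter.
n, p = 100, 6
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0.0, 0.1, size=n)

def greedy_subset(X, y, k):
    """Greedily add the column that most reduces the least-squares
    residual (a simplified stand-in for residual-based pivoting)."""
    chosen = []
    for _ in range(k):
        best, best_res = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            # QR-based least squares on the candidate subset.
            q, _ = np.linalg.qr(X[:, cols])
            res = np.linalg.norm(y - q @ (q.T @ y))
            if res < best_res:
                best, best_res = j, res
        chosen.append(best)
    return chosen

print(greedy_subset(X, y, 2))
```

Like stepwise regression, this greedy scheme trades the exhaustive search over all subsets for one least-squares solve per candidate column at each step.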

  4. Dementia

    MedlinePlus

... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  5. Self-Monitoring Checklists for Inquiry Problem-Solving: Functional Problem-Solving Methods for Students with Intellectual Disability

    ERIC Educational Resources Information Center

    Miller, Bridget; Taber-Doughty, Teresa

    2014-01-01

    Three students with mild to moderate intellectual and multiple disability, enrolled in a self-contained functional curriculum class were taught to use a self-monitoring checklist and science notebook to increase independence in inquiry problem-solving skills. Using a single-subject multiple-probe design, all students acquired inquiry…

  6. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

Computer communication network design is well known to be complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  7. Associations between conceptual reasoning, problem solving, and adaptive ability in high-functioning autism.

    PubMed

    Williams, Diane L; Mazefsky, Carla A; Walker, Jon D; Minshew, Nancy J; Goldstein, Gerald

    2014-11-01

Abstract thinking is generally highly correlated with problem-solving ability, which is predictive of better adaptive functioning. Measures of conceptual reasoning, an ecologically valid laboratory measure of problem-solving, and a report measure of adaptive functioning in the natural environment were administered to children and adults with and without autism. The individuals with autism had weaker conceptual reasoning ability than individuals with typical development of similar age and cognitive ability. For the autism group, flexible thinking scores were significantly correlated with laboratory measures of strategy formation and rule shifting and with reported overall adaptive behavior, but not with socialization scores. Therefore, in autism, flexibility of thought is potentially more important for adaptive functioning in the natural environment than conceptual reasoning or problem-solving.

  8. Modeling of the spatial state of the ionosphere using regular definitions of the VTEC identifier at the network of continuously operating GNSS stations of Ukraine

    NASA Astrophysics Data System (ADS)

    Yankiv-Vitkovska, Liubov; Dzhuman, Bogdan

    2017-04-01

Due to the wide application of global navigation satellite systems (GNSS), the development of the modern GNSS infrastructure has moved the monitoring of the Earth's ionosphere to a new methodological and technological level. The peculiarity of such monitoring is that it allows conducting different experimental studies, including studies of the ionosphere, directly while using the existing networks of reference GNSS stations intended for solving other problems. The application of the modern GNSS infrastructure is another innovative step in ionospheric studies, as such networks allow measurements to be conducted continuously over time in any place. This is used during the monitoring of the ionosphere and allows studying global and regional phenomena in the ionosphere in real time. Application of a network of continuously operating reference stations to determine numerical characteristics of the Earth's ionosphere allows creating an effective technology to monitor the ionosphere regionally. This technology is intended to solve both scientific problems concerning space weather and practical tasks such as providing coordinates of geodetic-level accuracy. Continuously operating reference GNSS stations yield regular determinations of the ionization identifier TEC (Total Electron Content). On the one hand, these data reflect the state of the ionosphere during the observation; on the other hand, they are a substantial tool for accuracy improvement and reliable determination of the coordinates of the observation place. Thus, it was decided to solve the problem of restoring the spatial state of the ionosphere, or its ionization field, from the regular determinations of the TEC identifier, i.e. VTEC (Vertical TEC). The description below shows one of the possible solutions, which is based on the spherical cap harmonic analysis method for modeling the VTEC parameter. 
This method involves transformation of the initial data to a spherical cap and construction of a model using associated Legendre functions of integer order but not necessarily of integer degree. Such functions form two orthogonal systems of functions on the spherical cap. The method was tested on the ZAKPOS network of permanent stations.

  9. Ability to solve riddles in patients with speech and language impairments after stroke.

    PubMed

    Savić, Goran

    2016-01-01

    Successful riddle solving requires recognition of the meaning of words, attention, concentration, memory, connectivity and analysis of riddle content, and sufficiently developed associative thinking. The aim of the study was to determine the ability to solve riddles in stroke patients who do or do not have speech and language disorders (SLDs), to determine the presence of SLDs in relation to the lesion localization, as well as to define the relationship between riddle-solving and functional impairment of a body side. The sample consisted of 88 patients. The data used included age, sex, educational level, time of stroke onset, presence of an SLD, lesion localization, and functional damage of the body side. The patients were presented with a task of solving 10 riddles. A significant SLD was present in 38.60% of the patients. Brain lesions were found distributed at 46 different brain sites. Patients with different lesion localization had different success in solving riddles. Patients with perisylvian cortex brain lesions, or patients with Wernicke and global aphasia, had the poorest results. The group with SLDs had an average success of solved riddles of 26.76% (p = 0.000). The group with right-sided functional impairments had average success of 37.14%, and the group with functional impairments of the left side of the body 56.88% (p = 0.002). Most patients with SLDs had a low ability of solving riddles. Most of the patients with left brain lesions and perisylvian cortex damage demonstrated lower ability in solving riddles in relation to patients with right hemisphere lesions.

  10. The student resilience survey: psychometric validation and associations with mental health.

    PubMed

    Lereya, Suzet Tanya; Humphrey, Neil; Patalay, Praveetha; Wolpert, Miranda; Böhnke, Jan R; Macdougall, Amy; Deighton, Jessica

    2016-01-01

Policies designed to promote resilience, and research to understand the determinants and correlates of resilience, require reliable and valid measures to ensure data quality. The student resilience survey (SRS) covers a range of external supports and internal characteristics which can potentially be viewed as protective factors and can be crucial in exploring the mechanisms between protective factors and risk factors, and in designing intervention and prevention strategies. This study examines the validity of the SRS. 7663 children (aged 11-15 years) from 12 local areas across England completed the SRS and questionnaires regarding mental and physical health. Psychometric properties of 10 subscales of the SRS (family connection, school connection, community connection, participation in home and school life, participation in community life, peer support, self-esteem, empathy, problem solving, and goals and aspirations) were investigated by confirmatory factor analysis (CFA), differential item functioning (DIF), differential test functioning (DTF), Cronbach's α and McDonald's ω. The associations between the SRS scales and mental and physical health outcomes were examined. The results supported the construct validity of the 10 factors of the scale and provided evidence for acceptable reliability of all the subscales. Our DIF analysis indicated differences between boys and girls, between primary and secondary school children, between children with or without special educational needs (SEN) and between children with or without English as an additional language (EAL) in terms of how they answered the peer support subscale of the SRS. Analyses did not indicate any DIF based on free school meals (FSM) eligibility. All subscales except the peer support subscale showed small DTF, whereas the peer support subscale showed moderate DTF. 
Correlations showed that all the student resilience subscales were negatively associated with mental health difficulties, global subjective distress and impact on health. Random effects linear regression models showed that family connection, self-esteem, problem solving and peer support were negatively associated with all the mental health outcomes. The findings suggest that the SRS is a valid measure assessing these relevant protective factors, thereby serving as a valuable tool in resilience and mental health research.

  11. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors aim to maximize network reliability by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
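A GA-only sketch of the search component is given below. The paper hybridizes GA with tabu search and computes reliability exactly via minimal paths and sums of disjoint products; here a made-up five-arc series network with a product-form reliability stands in for both, so everything numeric is an illustrative assumption.

```python
import random

random.seed(3)

# Toy setting: assign one of three resource types to each of 5 arcs;
# "network reliability" here is simply the product of the assigned arc
# reliabilities (series network), not the paper's SFN model.
ARC_COUNT = 5
TYPE_RELIABILITY = [0.90, 0.95, 0.99]

def reliability(assign):
    r = 1.0
    for t in assign:
        r *= TYPE_RELIABILITY[t]
    return r

def ga(pop_size=30, generations=40):
    pop = [[random.randrange(3) for _ in range(ARC_COUNT)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reliability, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, ARC_COUNT)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(ARC_COUNT)] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return max(pop, key=reliability)

best = ga()
print(best, reliability(best))
```

With no resource costs the optimum is trivially the most reliable type on every arc; the paper's problem is harder precisely because resources are limited and failures are correlated.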

  12. Encrypted Objects and Decryption Processes: Problem-Solving with Functions in a Learning Environment Based on Cryptography

    ERIC Educational Resources Information Center

    White, Tobin

    2009-01-01

    This paper introduces an applied problem-solving task, set in the context of cryptography and embedded in a network of computer-based tools. This designed learning environment engaged students in a series of collaborative problem-solving activities intended to introduce the topic of functions through a set of linked representations. In a…

  13. A Structural Model Related to the Understanding of the Concept of Function: Definition and Problem Solving

    ERIC Educational Resources Information Center

    Panaoura, Areti; Michael-Chrysanthou, Paraskevi; Gagatsis, Athanasios; Elia, Iliada; Philippou, Andreas

    2017-01-01

    This article focuses on exploring students' understanding of the concept of function concerning three main aspects: secondary students' ability to (1) define the concept of function and present examples of functions, (2) solve tasks which asked them to recognize and interpret the concept of function presented in different forms of representation,…

  14. Tribological advancements for reliable wind turbine performance.

    PubMed

    Kotzalas, Michael N; Doll, Gary L

    2010-10-28

Wind turbines have had various limitations to their mechanical system reliability owing to tribological problems over the past few decades. While several studies show that turbines are becoming more reliable, reliability is still not at a level acceptable to operators based on their current business models. Data show that the electrical components are the most problematic; however, the parts are small, and thus easy and inexpensive to replace in the nacelle, on top of the tower. It is the tribological issues that receive the most attention, as they have higher costs associated with repair or replacement. These include the blade pitch systems, nacelle yaw systems, main shaft bearings, gearboxes and generator bearings, which are the focus of this review paper. The major tribological issues in wind turbines and the technological developments to understand and solve them are discussed within. The study starts with an overview of fretting corrosion, rolling contact fatigue, and frictional torque of the blade pitch and nacelle yaw bearings, with references to some of the recent design approaches applied to solve them. Also included is a brief overview of lubricant contamination issues in the gearbox and of electric current discharge or arcing damage of the generator bearings. The primary focus of this review is the detailed examination of main shaft spherical roller bearing micropitting and of gearbox bearing scuffing, micropitting and the newer phenomenon of white-etch area flaking. The main shaft and gearbox are integrally related and are the most commonly cited items involving expensive repair costs and downtime. As such, the latest research and developments related to the causes of the wear and damage modes and the technologies used or proposed to solve them are presented.

  15. [Problem solving abilities of nursing students: the experience of the bachelor degree course in nursing at the University of Udine].

    PubMed

    Bulfone, Giampiera; Galletti, Caterina; Vellone, Ercole; Zanini, Antonietta; Quattrin, Rosanna

    2008-01-01

    The process nurses adopt to solve patients' problems is known in the literature as "Problem Solving". Problem Solving Abilities include Diagnostic Reasoning, Prognostic Judgment and Decision Making. Nursing students apply Problem Solving to the Nursing Process, the mental and operative approach that nurses use to plan nursing care. The purpose of the present study is to examine whether there is a positive relationship between the number of Educational Tutorial Strategies (Briefing, Debriefing and Discussion according to the Objective Structured Clinical Examination Methodology) used with nursing students and their learning of Problem Solving Abilities (Diagnostic Reasoning, Prognostic Judgment and Decision Making). The study design was retrospective, descriptive and comparative. The Problem Solving Instrument, specifically developed for this study and tested for reliability and validity, was used to collect data from a sample of 106 nursing care plans elaborated by second-year students of the Bachelor Degree in Nursing of the University of Udine. The nursing care plans were elaborated at three consecutive time points, after the students had participated in different Educational Tutorial Strategies. Results showed that students who took part in a greater number of Educational Tutorial Strategies significantly increased their Problem Solving Abilities. The results demonstrate the importance of using Educational Tutorial Strategies in nursing education to teach these skills.

  16. A system methodology for optimization design of the structural crashworthiness of a vehicle subjected to a high-speed frontal crash

    NASA Astrophysics Data System (ADS)

    Xia, Liang; Liu, Weiguo; Lv, Xiaojiang; Gu, Xianguang

    2018-04-01

    The structural crashworthiness design of vehicles has become an important research direction to ensure the safety of the occupants. To effectively improve the structural safety of a vehicle in a frontal crash, a system methodology is presented in this study. The surrogate model of Online support vector regression (Online-SVR) is adopted to approximate crashworthiness criteria and different kernel functions are selected to enhance the accuracy of the model. The Online-SVR model is demonstrated to have the advantages of solving highly nonlinear problems and saving training costs, and can effectively be applied for vehicle structural crashworthiness design. By combining the non-dominated sorting genetic algorithm II and Monte Carlo simulation, both deterministic optimization and reliability-based design optimization (RBDO) are conducted. The optimization solutions are further validated by finite element analysis, which shows the effectiveness of the RBDO solution in the structural crashworthiness design process. The results demonstrate the advantages of using RBDO, resulting in not only increased energy absorption and decreased structural weight from a baseline design, but also a significant improvement in the reliability of the design.
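
    The reliability-based design optimization described above couples an optimizer with Monte Carlo simulation to estimate failure probabilities. A minimal sketch of the Monte Carlo reliability step follows; the limit-state function `g` and the normal distributions below are illustrative assumptions, not taken from the paper.

```python
import random

def monte_carlo_reliability(limit_state, sample, n=100_000, seed=0):
    """Estimate P(failure) = P(g(X) <= 0) by crude Monte Carlo sampling.

    limit_state: g(x); a value <= 0 means the design fails.
    sample: draws one random input vector x from the assumed distributions.
    """
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
    return failures / n

# Hypothetical limit state: capacity R minus demand S, both normal.
def g(x):
    r, s = x
    return r - s

def draw(rng):
    return (rng.gauss(5.0, 1.0), rng.gauss(2.0, 1.0))

pf = monte_carlo_reliability(g, draw, n=50_000)
```

    In an RBDO loop, an estimate like `pf` would be evaluated for each candidate design and constrained to stay below a target failure probability.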

  17. [Problem-solving strategies and marital satisfaction].

    PubMed

    Kriegelewicz, Olga

    2006-01-01

    This study investigated the relation between problem-solving strategies in marital conflict and marital satisfaction. Four problem-solving strategies (Dialogue, Loyalty, Escalation of conflict and Withdrawal) were measured by the Problem-Solving Strategies Inventory, in two versions: self-report and report of the partner's perceived behaviour. This measure refers to the concept of Rusbult, Johnson and Morrow, and meets high standards of reliability (Cronbach's alpha from 0.78 to 0.94) and validity. Marital satisfaction was measured by the Marriage Success Scale. The sample was composed of 147 married couples. The study revealed that satisfied couples, in comparison with non-satisfied couples, tend to use constructive problem-solving strategies (Dialogue and Loyalty) and rarely use destructive strategies such as Escalation of conflict or Withdrawal. Dialogue is the strategy most positively connected with satisfaction. One's own Loyalty is also a significant positive predictor of male satisfaction. The study shows that constructive attitudes are the most significant predictors of marital satisfaction; it is therefore worth concentrating mostly on them in the psychotherapeutic process instead of eliminating destructive attitudes. These findings may provide important guidelines for couples' psychotherapy.

  18. An overview of the phase-modular fault tree approach to phased mission system analysis

    NASA Technical Reports Server (NTRS)

    Meshkat, L.; Xing, L.; Donohue, S. K.; Ou, Y.

    2003-01-01

    We look at how fault tree analysis (FTA), a primary means of performing reliability analysis of PMS, can meet this challenge in this paper by presenting an overview of the modular approach to solving fault trees that represent PMS.

  19. Early Intervention To Prevent Violence.

    ERIC Educational Resources Information Center

    Lumsden, Linda

    2000-01-01

    This publication summarizes five works exploring the key role schools can play in dealing with emotionally disturbed students, in part because teachers are more reliable sources of information about troubled youths. The importance of interpersonal cognitive problem-solving (ICPS) skills is analyzed in "Preventing Violence the Problem Solving…

  20. A Differential Evolution Algorithm Based on Nikaido-Isoda Function for Solving Nash Equilibrium in Nonlinear Continuous Games

    PubMed Central

    He, Feng; Zhang, Wei; Zhang, Guoqiang

    2016-01-01

    A differential evolution algorithm for solving Nash equilibrium in nonlinear continuous games, called NIDE (Nikaido-Isoda differential evolution), is presented in this paper. At each generation, parent and child strategy profiles are compared one by one in a pairwise fashion, with the Nikaido-Isoda function adopted as the fitness function. In practice, the NE of a nonlinear game model with cubic cost functions and quadratic demand functions is solved, and the method can also be applied to non-concave payoff functions. Moreover, NIDE is compared with the existing Nash Domination Evolutionary Multiplayer Optimization (NDEMO); the results show that NIDE was significantly better than NDEMO, requiring fewer iterations and shorter running time. These numerical examples suggest that the NIDE method is potentially useful. PMID:27589229
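
    The pairwise parent-versus-child selection used by NIDE can be sketched with a basic differential evolution loop. In this illustrative sketch a plain objective function stands in for the Nikaido-Isoda fitness (which would require the full game model); the scheme is standard DE/rand/1/bin.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimize f over box bounds with a basic DE/rand/1/bin scheme.

    Each generation, every parent is compared one-to-one with its child,
    mirroring the pairwise selection in NIDE (where the Nikaido-Isoda
    function would supply the fitness instead of a plain objective).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            child = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                child.append(min(max(v, lo), hi))
            fc = f(child)
            if fc <= fit[i]:          # pairwise parent-vs-child selection
                pop[i], fit[i] = child, fc
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy usage: minimize the sphere function in three dimensions.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        bounds=[(-5, 5)] * 3)
```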

  1. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Numerical simulation of hull curved plate forming by electromagnetic force assisted line heating

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Wang, Shun; Liu, Yujun; Li, Rui; Liu, xiao

    2017-11-01

    Line heating is a common method in shipyards for forming hull curved plates, and aluminum alloy plates are widely used in shipbuilding. To solve the problem of forming thick aluminum alloy plates with complex curved surfaces, a new technology named electromagnetic force assisted line heating (EFALH) is proposed in this paper. An FEM model of EFALH was established and the effect of electromagnetic-force-assisted forming was verified with self-developed equipment. Firstly, the solution approach for the numerical simulation of EFALH is illustrated. Then, a coupled multi-physics numerical simulation model is established. Lastly, the reliability of the numerical simulation model is verified by comparison with experimental data. This paper lays a foundation for solving the forming problems of thick aluminum alloy curved plates in shipbuilding.

  3. Using the concrete-representational-abstract approach to support students with intellectual disability to solve change-making problems.

    PubMed

    Bouck, Emily; Park, Jiyoon; Nickell, Barb

    2017-01-01

    The Concrete-Representational-Abstract (CRA) instructional approach supports students with disabilities in mathematics. Yet, no research explores the use of the CRA approach to teach functional-based mathematics for this population, and limited research explores the CRA approach for students who have a disability other than a learning disability, such as an intellectual disability. This study investigated the effects of using the CRA approach to teach middle school students in a self-contained mathematics class focused on functional-based mathematics to solve making-change problems. Researchers used a multiple probe across participants design to determine whether a functional relation existed between the CRA strategy and students' ability to solve making-change problems. The study consisted of five to eight baseline sessions, nine to eleven intervention sessions, and two maintenance sessions for each student. Data were collected on the percentage of making-change problems students solved correctly. The CRA instructional strategy was effective in teaching all four participants to correctly solve the problems; a functional relation between the CRA approach and solving making change with coins problems was found across all participants. The CRA instructional approach can be used to support students with mild intellectual disability or severe learning disabilities in learning functional-based mathematics, such as purchasing skills (i.e., making change). Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Cavitating Propeller Performance in Inclined Shaft Conditions with OpenFOAM: PPTC 2015 Test Case

    NASA Astrophysics Data System (ADS)

    Gaggero, Stefano; Villa, Diego

    2018-05-01

    In this paper, we present our analysis of the non-cavitating and cavitating unsteady performances of the Potsdam Propeller Test Case (PPTC) in oblique flow. For our calculations, we used the Reynolds-averaged Navier-Stokes equation (RANSE) solver from the open-source OpenFOAM libraries. We selected the homogeneous mixture approach to solve for multiphase flow with phase change, using the volume of fluid (VoF) approach to solve the multiphase flow and modeling the mass transfer between vapor and water with the Schnerr-Sauer model. Comparing the model results with the experimental measurements collected during the Second Workshop on Cavitation and Propeller Performance - SMP'15 enabled our assessment of the reliability of the open-source calculations. Comparisons with the numerical data collected during the workshop enabled further analysis of the reliability of different flow solvers from which we produced an overview of recommended guidelines (mesh arrangements and solver setups) for accurate numerical prediction even in off-design conditions. Lastly, we propose a number of calculations using the boundary element method developed at the University of Genoa for assessing the reliability of this dated but still widely adopted approach for design and optimization in the preliminary stages of very demanding test cases.

  5. An overview of 5G network slicing architecture

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Wang, Xiaolei; Lv, Yingying

    2018-05-01

    With the development of mobile communication technology, the traditional single network model has been unable to meet the needs of users, and the demand for differentiated services is increasing. To solve this problem, the fifth generation of mobile communication technology (5G) came into being. As one of the key technologies of 5G, the network slice is a core technology of network virtualization and software defined networking, enabling network slices to flexibly provide one or more network services according to users' needs [1]. Each slice can independently tailor its network functions according to the requirements of the business scenario and the traffic model, and manage the layout of the corresponding network resources, in order to improve the flexibility of network services and the utilization of resources, and to enhance the robustness and reliability of the whole network [2].

  6. The Anti-RFI Design of Intelligent Electric Energy Meters with UHF RFID

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

    In order to solve the slow manual meter reading and inventory problems that are still common in the watt-hour meter industry, ultra-high-frequency radio frequency identification (UHF RFID) technology is deeply integrated with the intelligent watt-hour meter. UHF RFID offers one-time reading of multiple tags, long identification distance, high transmission rate and high reliability, while the original asset management functions are retained. In order to ensure that UHF RFID has minimal impact on the operation of the intelligent watt-hour meter, and to improve the stability of the meter system during operation, this paper designs the radio frequency interference resistance of the UHF RFID intelligent watt-hour meter, puts forward a design approach for improving the electromagnetic compatibility of the intelligent watt-hour meter, and introduces the design of its power supply and printed circuit board hardware circuits.

  7. [Application of CWT to extract characteristic monitoring parameters during spine surgery].

    PubMed

    Chen, Penghui; Wu, Baoming; Hu, Yong

    2005-10-01

    It is necessary to monitor intraoperative spinal function in order to prevent spinal neurological deficits during spine surgery. This study aims to extract characteristic electrophysiological monitoring parameters during the surgical treatment of scoliosis, and to address the problem that monitoring parameters in the time domain are highly variable and sensitive to noise. By using the continuous wavelet transform (CWT) to analyze the intraoperative cortical somatosensory evoked potential (CSEP), three new characteristic monitoring parameters in the time-frequency domain (TFD) are extracted. The results indicate that the variability of the CSEP characteristic parameters in the TFD is lower than that of the parameters in the time domain. Therefore, the TFD characteristic monitoring parameters are more stable and reliable than the latency and amplitude parameters in the time domain. The application of TFD monitoring parameters during spine surgery may help avoid spinal injury effectively.

  8. Increasing the reliability of labor of railroad engineers

    NASA Technical Reports Server (NTRS)

    Genes, V. S.; Madiyevskiy, Y. M.

    1975-01-01

    It has been shown that the group of problems related to temporary overloads still requires serious development with respect to further automating the basic control operation: programmed selection of speed and braking. The problem of systems for warning the engineer about the condition of unseen track segments remains a very serious one. Systems of hygienic support of the engineer also require constructive development. The problems of ensuring the reliability of the work of engineers in periods of low information load, requiring motor acts, can basically be considered theoretically solved.

  9. Inductive System for Reliable Magnesium Level Detection in a Titanium Reduction Reactor

    NASA Astrophysics Data System (ADS)

    Krauter, Nico; Eckert, Sven; Gundrum, Thomas; Stefani, Frank; Wondrak, Thomas; Frick, Peter; Khalilov, Ruslan; Teimurazov, Andrei

    2018-05-01

    The determination of the magnesium level in a titanium reduction retort by inductive methods is often hampered by the formation of titanium sponge rings, which disturb the propagation of electromagnetic signals between the excitation and receiver coils. We present a new method for the reliable identification of the magnesium level which explicitly takes into account the presence of sponge rings with unknown geometry and conductivity. The inverse problem is solved by a look-up-table method, based on the solution of the inductive forward problem for several tens of thousands of parameter combinations.
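
    The look-up-table inversion can be illustrated with a toy forward model: precompute the forward response on a grid of (level, conductivity) pairs, then invert a measurement by nearest-neighbour search over the table. The smooth `forward_model` below is a made-up stand-in, not the electromagnetic forward solver used in the paper.

```python
import math

def forward_model(level, ring_conductivity):
    """Hypothetical forward model: coil signal as a smooth function of the
    Mg level and the sponge-ring conductivity (illustrative only)."""
    return math.exp(-0.5 * level) * (1.0 + 0.1 * ring_conductivity)

def build_table(levels, conductivities):
    """Tabulate the forward model over all parameter combinations."""
    return [(forward_model(L, c), L, c)
            for L in levels for c in conductivities]

def invert(measured, table):
    """Pick the (level, conductivity) pair whose predicted signal is
    closest to the measurement -- the look-up-table inverse step."""
    _, level, cond = min(table, key=lambda row: abs(row[0] - measured))
    return level, cond

levels = [i * 0.01 for i in range(101)]   # 0..1 m in 1 cm steps
conds = [i * 0.5 for i in range(11)]      # 0..5 (arbitrary units)
table = build_table(levels, conds)
level_est, cond_est = invert(forward_model(0.42, 2.0), table)
```

    With a realistic forward solver the table entries would come from the precomputed solutions of the inductive forward problem, and several coil signals would be matched jointly to resolve ambiguities.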

  10. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
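
    The path-iteration idea behind Picard methods can be shown on the simplest possible case, y' = y with y(0) = 1, where each sweep over the whole trajectory adds one Taylor term of e^t. Modified Chebyshev-Picard iteration performs the same kind of path-wise fixed-point sweep, but on Chebyshev nodes with an orthogonal-polynomial representation; this sketch uses raw monomial coefficients instead.

```python
def picard_exponential(iterations):
    """Picard iteration y_{k+1}(t) = 1 + integral_0^t y_k(s) ds for the
    IVP y' = y, y(0) = 1, carried out on polynomial coefficients.
    Each sweep appends one Taylor term of e^t."""
    coeffs = [1.0]                     # y_0(t) = 1
    for _ in range(iterations):
        integral = [c / (i + 1) for i, c in enumerate(coeffs)]
        coeffs = [1.0] + integral      # 1 + integral of y_k
    return coeffs

def poly_eval(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

y1 = poly_eval(picard_exponential(15), 1.0)   # should approach e
```

    Note how the iterates converge as whole functions of t rather than step by step, which is what lets Chebyshev-Picard methods use a cheap, low-fidelity force model for most of the path and still finish at machine precision.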

  11. Methodological difficulties of conducting agroecological studies from a statistical perspective

    USDA-ARS?s Scientific Manuscript database

    Statistical methods for analysing agroecological data might not be able to help agroecologists to solve all of the current problems concerning crop and animal husbandry, but such methods could well help agroecologists to assess, tackle, and resolve several agroecological issues in a more reliable an...

  12. Modeling and optimization of actively Q-switched Nd-doped quasi-three-level laser

    NASA Astrophysics Data System (ADS)

    Yan, Renpeng; Yu, Xin; Li, Xudong; Chen, Deying; Gao, Jing

    2013-09-01

    The energy transfer upconversion and the ground state absorption are considered in solving the rate equations for an actively Q-switched quasi-three-level laser. The dependence of the output pulse characteristics on the laser parameters is investigated by solving the rate equations, and the influence of energy transfer upconversion on the pulsed laser performance is illustrated and discussed. With this model, optimal parameters can be obtained for arbitrary quasi-three-level Q-switched lasers. An acousto-optically Q-switched Nd:YAG 946 nm laser is constructed and the reliability of the theoretical model is demonstrated.

  13. Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes

    NASA Astrophysics Data System (ADS)

    Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico

    2017-12-01

    Chemical compositional simulation of enhanced oil recovery and surfactant enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of a newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows a sequential solution approach in which the governing equations are solved separately and implicitly. Through its application to a numerical experiment using a wettability alteration model, and through comparisons with the numerical results of an existing chemical compositional model, the new model has proven to be practical, reliable and stable.

  14. Some Solved Problems with the SLAC PEP-II B-Factory Beam-Position Monitor System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Ronald G.

    2000-05-05

    The Beam-Position Monitor (BPM) system for the SLAC PEP-II B-Factory has been in operation for over two years. Although the BPM system has met all of its specifications, several problems with the system have been identified and solved. The problems include errors and limitations in both the hardware and software. Solutions of such problems have led to improved performance and reliability. In this paper the authors report on this experience. The process of identifying problems is not at an end and they expect continued improvement of the BPM system.

  15. Numerical solution of the nonlinear Schrodinger equation by feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Shirvany, Yazdan; Hayati, Mohsen; Moradian, Rostam

    2008-12-01

    We present a method to solve boundary value problems using artificial neural networks (ANN). A trial solution of the differential equation is written as a feed-forward neural network containing adjustable parameters (the weights and biases). From the differential equation and its boundary conditions we prepare the energy function, which is used in the back-propagation method with a momentum term to update the network parameters. We improved the energy function of the ANN, which is derived from the Schrodinger equation and the boundary conditions. With this improved energy function we can use an unsupervised training method in the ANN for solving the equation; unsupervised training aims to minimize a non-negative energy function. We used the ANN method to solve the Schrodinger equation for a few quantum systems, calculating eigenfunctions and energy eigenvalues. Our numerical results are in agreement with the corresponding analytical solutions and show the efficiency of the ANN method for solving eigenvalue problems.
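
    The trial-solution idea can be sketched on a first-order problem, y' = -y with y(0) = 1, standing in for the Schrodinger equation (which needs more machinery). The trial form 1 + x*N(x) satisfies the boundary condition by construction, and the energy is the mean-square residual of the ODE on a grid. Here a crude finite-difference gradient descent with step-halving replaces back-propagation; everything below is an illustrative assumption, not the authors' network.

```python
import math, random

def net(params, x):
    """One-hidden-layer tanh network with 3 units; params holds 9 floats."""
    w, b, v = params[0:3], params[3:6], params[6:9]
    return sum(v[i] * math.tanh(w[i] * x + b[i]) for i in range(3))

def trial(params, x):
    return 1.0 + x * net(params, x)        # enforces y(0) = 1 exactly

def energy(params, xs, h=1e-4):
    """Mean-square residual of y' + y = 0 over the grid xs."""
    total = 0.0
    for x in xs:
        dy = (trial(params, x + h) - trial(params, x - h)) / (2 * h)
        total += (dy + trial(params, x)) ** 2
    return total / len(xs)

rng = random.Random(0)
params = [rng.uniform(-0.5, 0.5) for _ in range(9)]
xs = [i / 10 for i in range(11)]
loss0 = energy(params, xs)
lr, eps = 0.05, 1e-6
for _ in range(300):                       # finite-difference gradient descent
    base = energy(params, xs)
    grad = []
    for j in range(len(params)):
        p = params[:]
        p[j] += eps
        grad.append((energy(p, xs) - base) / eps)
    candidate = [p - lr * g for p, g in zip(params, grad)]
    if energy(candidate, xs) < base:
        params = candidate
    else:
        lr *= 0.5                          # backtrack instead of diverging
loss1 = energy(params, xs)
```

    Minimizing this non-negative energy is exactly the unsupervised training described in the abstract: no target values are needed, only the residual of the equation itself.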

  16. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  17. On the problem of solving the optimization for continuous space based on information distribution function of ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Min, Huang; Na, Cai

    2017-06-01

    In recent years, the ant colony algorithm has been widely used for optimization over discrete spaces, while research on optimization over continuous spaces has been relatively scarce. Building on the original approach to continuous-space optimization, this article proposes an improved ant colony algorithm for solving continuous-space optimization problems, so as to overcome the ant colony algorithm's disadvantage of long search times in continuous spaces. The article improves the way the total amount of information in each interval and the corresponding number of ants are computed, and introduces a function that changes with the number of iterations in order to enhance the convergence rate of the improved algorithm. The simulation results show that, compared with the result in literature [5], the suggested improved ant colony algorithm based on the information distribution function has better convergence performance. The article thus provides a new, feasible and effective way for the ant colony algorithm to solve this kind of problem.
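
    A minimal sketch of ant-colony-style optimization over a continuous interval, in the spirit of the ACO-for-continuous-domains family: a solution archive plays the role of the pheromone "information distribution", each ant samples a Gaussian centred on an archive member, and the sampling spread shrinks as the archive concentrates. The details below are illustrative, not the algorithm of the paper.

```python
import random

def aco_continuous(f, lo, hi, archive_size=10, ants=10,
                   iterations=100, seed=2):
    """Minimize f on [lo, hi]: each ant samples a Gaussian centred on an
    archive solution, and the archive keeps the best solutions found."""
    rng = random.Random(seed)
    archive = sorted((rng.uniform(lo, hi) for _ in range(archive_size)), key=f)
    for _ in range(iterations):
        spread = max(archive) - min(archive) + 1e-12
        for _ in range(ants):
            centre = rng.choice(archive)   # rank-weighted choice in full ACO_R
            x = min(max(rng.gauss(centre, 0.3 * spread), lo), hi)
            archive.append(x)
        archive = sorted(archive, key=f)[:archive_size]
    return archive[0]

# Toy usage: minimize (x - 3)^2 on [-10, 10].
best = aco_continuous(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

    Because the sampling spread is tied to the archive scatter, the search narrows automatically around promising regions, which is what shortens the long search times mentioned in the abstract.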

  18. A Naturalistic Study of Executive Function and Mathematical Problem-Solving

    ERIC Educational Resources Information Center

    Kotsopoulos, Donna; Lee, Joanne

    2012-01-01

    Our goal in this research was to understand the specific challenges middle-school students face when engaging in mathematical problem-solving by using executive function (i.e., shifting, updating, and inhibiting) of working memory as a functional construct for the analysis. Using modified talk-aloud protocols, real-time naturalistic analysis of…

  19. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE), arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedure transforms the TFE into a system of nonlinear equations. A fitness function is constructed from the residual error of the constituent equations in the mean square sense and formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of the scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices over a sufficiently large number of independent runs.
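
    The hybrid global-search-plus-local-refinement pattern (GA for exploration, then a fast local method for polishing) can be sketched on a small nonlinear system. Here a toy two-equation system stands in for the discretized TFE, and a simple coordinate pattern search stands in for SQP; both substitutions are illustrative assumptions.

```python
import random

def residual(v):
    """Mean-square residual of a toy nonlinear system (a stand-in for the
    discretized Thomas-Fermi equations): x^2 + y^2 = 4, x*y = 1."""
    x, y = v
    return ((x * x + y * y - 4.0) ** 2 + (x * y - 1.0) ** 2) / 2

def genetic_search(f, bounds, pop=30, gens=60, seed=3):
    """Plain real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitist survivor selection."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(P, 2)
            return a if f(a) < f(b) else b
        Q = []
        for _ in range(pop):
            p1, p2 = pick(), pick()
            t = rng.random()
            child = [t * u + (1 - t) * v for u, v in zip(p1, p2)]
            Q.append([c + rng.gauss(0, 0.05) for c in child])
        P = sorted(P + Q, key=f)[:pop]     # elitism: keep the best
    return P[0]

def refine(f, v, step=0.1, shrink=0.5, tol=1e-10):
    """Coordinate pattern search, used here in place of SQP for the
    rapid local refinement stage."""
    v = v[:]
    while step > tol:
        improved = False
        for j in range(len(v)):
            for d in (+step, -step):
                w = v[:]
                w[j] += d
                if f(w) < f(v):
                    v, improved = w, True
        if not improved:
            step *= shrink
    return v

sol = refine(residual, genetic_search(residual, [(-3, 3), (-3, 3)]))
```

    The GA supplies a point near a root of the system; the local stage then drives the residual toward zero far faster than the GA alone would.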

  20. An empirical research on evaluating banks' credit assessment of corporate customers.

    PubMed

    Tsai, Sang-Bing; Li, Guodong; Wu, Chia-Huei; Zheng, Yuxiang; Wang, Jiangtao

    2016-01-01

    Under the rapid change of the global financial environment, risk control in credit granting is viewed as the foremost task of every bank. With the successive impacts of the financial crisis and the European debt crisis, even steady banking business is facing severe challenges. Banks approve credit for their customers and then make money from the interest; credit granting is not only their primary job but also their main source of income. The quality of credit granting concerns not just the recovery of creditors' rights; it also affects the successful running of banks. To enhance the reliability and usefulness of bank credit risk assessment, we first delve into the facets and indexes of bank credit risk assessment. Then, we examine the different dimensions of cause-effect relationships and correlations in the assessment process. Finally, the study focuses on how to raise the functions and benefits of bank credit risk assessment. Among the five credit risk evaluation dimensions, A ("optional capability") and D ("competitiveness") are of high relation and high prominence, obviously influencing the other items. Actively focusing on these two dimensions and improving the corresponding credit risk assessment ability will solve the foremost problems and, at the same time, other facets of the credit risk assessment problems.

  1. Stress and reliability analyses of multilayered composite cylinder under thermal and mechanical loads

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua

    The coupling resulting from the mutual influence of material thermal and mechanical parameters is examined in the thermal stress analysis of a multilayered isotropic composite cylinder subjected to sudden axisymmetric external and internal temperatures. The method of complex frequency response functions together with the Fourier transform technique is utilized. Because the coupling parameters for some composite materials, such as carbon-carbon, are very small, the effect of coupling is neglected in the orthotropic thermal stress analysis. The stress distributions in multilayered orthotropic cylinders subjected to sudden axisymmetric temperature loading combined with dynamic pressure, as well as asymmetric temperature loading, are also obtained. The method of Fourier series together with the Laplace transform is utilized in solving the heat conduction equation and in the thermal stress analysis. For brittle materials like carbon-carbon composites, the strength variability is represented by two- or three-parameter Weibull distributions, and the 'weakest link' principle is applied to the carbon-carbon composite cylinders. The complex frequency response analysis is performed on a multilayered orthotropic cylinder under asymmetrical thermal load; both deterministic and random thermal stress and reliability analyses can be based on the results of this frequency response analysis. The stress and displacement distributions and reliability of rocket motors under static or dynamic line loads are analyzed by an elasticity approach. Rocket motors are modeled as long hollow multilayered cylinders with an air core, a thick isotropic propellant inner layer and a thin orthotropic kevlar-epoxy case. The case is treated as a single orthotropic layer or as a ten-layered orthotropic structure. Five material properties and the load are treated as random variables with normal distributions when the reliability of the rocket motor is analyzed by the first-order, second-moment method (FOSM).

  2. A neural network based artificial vision system for licence plate recognition.

    PubMed

    Draghici, S

    1997-02-01

    This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable in a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine which is suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint based decomposition (CBD) training architecture. The system has shown the following average performance on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.
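
    The fully connected feedforward network with sigmoidal activations used by one OCR engine above can be sketched roughly as follows. The layer sizes and weights here are placeholders, and training by error backpropagation is omitted; only the forward pass is shown.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # one fully connected layer with sigmoidal activation
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(pixels, w_hidden, b_hidden, w_out, b_out):
    # forward pass only; error backpropagation would adjust the weights
    return layer(layer(pixels, w_hidden, b_hidden), w_out, b_out)

# toy 3-pixel input, 2 hidden units, 2 output classes (all values invented)
scores = forward([0.0, 0.5, 1.0],
                 [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1]], [0.0, 0.1],
                 [[0.5, -0.5], [-0.3, 0.8]], [0.0, 0.0])
```

    Each output score lies in (0, 1), which is what lets the system treat the outputs as per-character reliabilities.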

  3. Cognitive functioning and everyday problem solving in older adults.

    PubMed

    Burton, Catherine L; Strauss, Esther; Hultsch, David F; Hunter, Michael A

    2006-09-01

    The relationship between cognitive functioning and a performance-based measure of everyday problem-solving, the Everyday Problems Test (EPT), thought to index instrumental activities of daily living (IADL), was examined in 291 community-dwelling non-demented older adults. Performance on the EPT was found to vary according to age, cognitive status, and education. Hierarchical regression analyses revealed that, after adjusting for demographic and health variables, measures of cognitive functioning accounted for 23.6% of the variance in EPT performance. In particular, measures of global cognitive status, cognitive decline, speed of processing, executive functioning, episodic memory, and verbal ability were significant predictors of EPT performance. These findings suggest that cognitive functioning along with demographic variables are important determinants of everyday problem-solving.

  4. Optimized batteries for cars with dual electrical architecture

    NASA Astrophysics Data System (ADS)

    Douady, J. P.; Pascon, C.; Dugast, A.; Fossati, G.

    During recent years, the increase in car electrical equipment has led to many problems with traditional starter batteries (such as cranking failure due to flat batteries, battery cycling etc.). The main causes of these problems are the double function of the automotive battery (starter and service functions) and the difficulties in designing batteries well adapted to these two functions. In order to solve these problems a new concept — the dual-concept — has been developed with two separate batteries: one battery is dedicated to the starter function and the other is dedicated to the service function. Only one alternator charges the two batteries with a separation device between the two electrical circuits. The starter battery is located in the engine compartment while the service battery is located at the rear of the car. From the analysis of new requirements, battery designs have been optimized regarding the two types of functions: (i) a small battery with high specific power for the starting function; for this function a flooded battery with lead-calcium alloy grids and thin plates is proposed; (ii) for the service function, modified sealed gas-recombinant batteries with cycling and deep-discharge ability have been developed. The various advantages of the dual-concept are studied in terms of starting reliability, battery weight, and voltage supply. The operating conditions of the system and several dual electrical architectures have also been studied in the laboratory and the car. The feasibility of the concept is proved.

  5. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank model based on visual features and user clicks outperforms state-of-the-art algorithms.

  6. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
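
    The self-paced alternation described above — fix the model and select easy samples, then fix the selection and refit the model — can be sketched with a deliberately trivial "model" (a mean). The samples and age parameter lam below are invented; the real tracker uses an appearance model and a real-valued, error-tolerant self-paced function rather than this hard 0/1 selector.

```python
def self_paced_mean(samples, lam, iters=10):
    """Alternate between selecting easy samples and refitting a trivial model (the mean)."""
    model = sum(samples) / len(samples)
    weights = [1.0] * len(samples)
    for _ in range(iters):
        # step 1: fix the model, solve for binary self-paced weights (easy = loss below lam)
        weights = [1.0 if (s - model) ** 2 < lam else 0.0 for s in samples]
        if sum(weights) == 0.0:
            break
        # step 2: fix the weights, refit the model on the currently selected samples
        model = sum(w * s for w, s in zip(weights, samples)) / sum(weights)
    return model, weights

# three reliable samples and one outlier: the outlier is excluded from learning
model, weights = self_paced_mean([1.0, 1.1, 0.9, 10.0], lam=6.0)
```

    The outlier at 10.0 never enters the refit, which is the mechanism by which SPL keeps unreliable samples out of model updating.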

  7. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that lead to limitations in high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are successfully used for restoring visual images in a short time, which can be generally applied to nonblind deconvolution methods for solving the problem of the excessive processing time caused by the number of point spread functions. The optical software CODE V is applied for examining the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms the traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. Particularly, the prior information obtained by CODE V can be used for processing the real images of a single-lens camera, which provides an alternative approach to conveniently and accurately obtain point spread functions of single-lens cameras.
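
    The idea of calibrating point spread functions at several depths and selecting among them can be illustrated with a toy sketch. The depths and PSF values below are invented, and the forward blur shown is a plain 1-D convolution standing in for the camera's imaging model, not the paper's deconvolution pipeline.

```python
def nearest_psf(psf_bank, depth):
    # pick the calibrated PSF whose depth is closest to the estimated scene depth
    best_depth = min(psf_bank, key=lambda d: abs(d - depth))
    return psf_bank[best_depth]

def blur(signal, psf):
    # 1-D 'valid' convolution: the forward model that a nonblind deconvolution inverts
    n = len(psf)
    return [sum(signal[i + j] * psf[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

# hypothetical PSF bank calibrated at depths of 0.5 m, 2 m and 5 m
bank = {0.5: [0.2, 0.6, 0.2], 2.0: [0.1, 0.8, 0.1], 5.0: [0.0, 1.0, 0.0]}
psf = nearest_psf(bank, depth=1.7)
observed = blur([0.0, 0.0, 1.0, 0.0, 0.0], psf)
```

    Precomputing such a bank is what lets the method avoid re-estimating the PSF per image, which is the source of the processing-time saving the abstract mentions.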

  8. Aiding the search: Examining individual differences in multiply-constrained problem solving.

    PubMed

    Ellis, Derek M; Brewer, Gene A

    2018-07-01

    Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.

  9. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  10. Highest integration in microelectronics: Development of digital ASICs for PARS3-LR

    NASA Astrophysics Data System (ADS)

    Scholler, Peter; Vonlutz, Rainer

    Essential electronic system components of PARS3-LR place high demands on computing power, power consumption and reliability, demands met by steadily increasing integration density. These problems are solved by using integrated circuits developed with LSI LOGIC that exploit the technical and economic advantages of this leading-edge technology.

  11. Development of Critical Spatial Thinking through GIS Learning

    ERIC Educational Resources Information Center

    Kim, Minsung; Bednarz, Robert

    2013-01-01

    This study developed an interview-based critical spatial thinking oral test and used the test to investigate the effects of Geographic Information System (GIS) learning on three components of critical spatial thinking: evaluating data reliability, exercising spatial reasoning, and assessing problem-solving validity. Thirty-two students at a large…

  12. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

    Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
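
    A bare-bones object-oriented fault-tree evaluation looks like the sketch below. This naive version assumes independent basic events and re-evaluates shared subtrees on every call — reducing exactly those repeated-event calls is the point of the cited algorithm, so treat this only as the baseline it improves on. The component names and probabilities are invented.

```python
class BasicEvent:
    """Leaf node: a component with a known failure probability."""
    def __init__(self, p):
        self.p = p
    def prob(self):
        return self.p

class AndGate:
    """Output fails only if all inputs fail (independence assumed)."""
    def __init__(self, *children):
        self.children = children
    def prob(self):
        result = 1.0
        for child in self.children:
            result *= child.prob()
        return result

class OrGate:
    """Output fails if any input fails (independence assumed)."""
    def __init__(self, *children):
        self.children = children
    def prob(self):
        result = 1.0
        for child in self.children:
            result *= 1.0 - child.prob()
        return 1.0 - result

# hypothetical top event: (pump fails AND valve fails) OR controller fails
top = OrGate(AndGate(BasicEvent(0.01), BasicEvent(0.02)), BasicEvent(0.001))
```

    Because gates and events share one `prob()` interface, new gate types can be added without touching the traversal — the object-oriented property the abstract highlights.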

  13. Construct Validation of the Physics Metacognition Inventory

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Farley, John

    2013-01-01

    The 24-item Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. Items were classified into eight subcomponents subsumed under two broader components: knowledge of cognition and regulation of cognition. The students' scores on the inventory were found to be reliable and related to students'…

  14. Problem-solving skills, parent-adolescent communication, dyadic functioning, and distress among adolescents with cancer.

    PubMed

    Viola, Adrienne; Taggi-Pinto, Alison; Sahler, Olle Jane Z; Alderfer, Melissa A; Devine, Katie A

    2018-05-01

    Some adolescents with cancer report distress and unmet needs. Guided by the disability-stress-coping model, we evaluated associations among problem-solving skills, parent-adolescent cancer-related communication, parent-adolescent dyadic functioning, and distress in adolescents with cancer. Thirty-nine adolescent-parent dyads completed measures of these constructs. Adolescents were 14-20 years old on treatment or within 1 year of completing treatment. Better problem-solving skills were correlated with lower adolescent distress (r = -0.70, P < 0.001). Adolescent-reported cancer-related communication problems and dyadic functioning were not significantly related to adolescent distress (rs < 0.18). Future work should examine use of problem-solving interventions to decrease distress for adolescents with cancer. © 2018 Wiley Periodicals, Inc.

  15. Solving the Hamilton-Jacobi equation for general relativity

    NASA Astrophysics Data System (ADS)

    Parry, J.; Salopek, D. S.; Stewart, J. M.

    1994-03-01

    We demonstrate a systematic method for solving the Hamilton-Jacobi equation for general relativity with the inclusion of matter fields. The generating functional is expanded in a series of spatial gradients. Each term is manifestly invariant under reparametrizations of the spatial coordinates (``gauge invariant''). At each order we solve the Hamiltonian constraint using a conformal transformation of the three-metric as well as a line integral in superspace. This gives a recursion relation for the generating functional which then may be solved to arbitrary order simply by functionally differentiating previous orders. At fourth order in spatial gradients we demonstrate solutions for irrotational dust as well as for a scalar field. We explicitly evolve the three-metric to the same order. This method can be used to derive the Zel'dovich approximation for general relativity.
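
    The gradient expansion described above can be summarized schematically; the zeroth-order term quoted below is the standard long-wavelength form for a single scalar field with a Hubble-like function H(φ), included only as an illustration of the structure, not as a transcription of the paper's equations:

```latex
\mathcal{S} = \mathcal{S}^{(0)} + \mathcal{S}^{(2)} + \mathcal{S}^{(4)} + \cdots,
\qquad
\mathcal{S}^{(0)} = -2 \int d^3x \, \gamma^{1/2} \, H(\phi),
```

    where γ is the determinant of the three-metric and each term S^(2n), containing 2n spatial derivatives, is obtained recursively from the Hamiltonian constraint by functionally differentiating the previous orders.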

  16. Data Sufficiency Assessment and Pumping Test Design for Groundwater Prediction Using Decision Theory and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    McPhee, J.; William, Y. W.

    2005-12-01

    This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.

  17. Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory

    NASA Astrophysics Data System (ADS)

    Usselman, Austin

    We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the n + 1 and n - 1-particle states. Fock state expansion leads to an infinite set of coupled equations where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and polynomial basis order are increased. Convergence of the mass eigenvalue solutions is then obtained. Three mass ratios between the charged particle and neutral particles were studied. This includes a massive charged particle, equal masses and massive neutral particles. Relative probability between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher order particle states depended on how large the mass of the charged particle was. The higher the mass of the charged particle, the more the system depended on higher order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator. This exponential operator can then be truncated instead to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows only one particle to be created by the new operator and proved not powerful enough to match the Fock state expansion.
The second-order approximation allowed one and two particles to be created by the new operator and converged to the Fock state expansion results. This showed the LFCC method to be a reliable alternative for solving quantum field theory problems.
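
    The reduction to a generalized eigenvalue problem det(A − λB) = 0 can be shown in miniature. For realistic basis truncations one would call a library generalized eigensolver, but a 2×2 pencil can be solved by expanding the determinant into a quadratic; the matrices below are invented, not taken from the theory.

```python
import math

def gen_eig_2x2(A, B):
    """Solve det(A - lam*B) = 0 for a 2x2 matrix pencil (pure-Python sketch)."""
    # expanding the determinant gives a*lam**2 + b*lam + c = 0
    a = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    b = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
          - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d = math.sqrt(b * b - 4.0 * a * c)
    return sorted([(-b - d) / (2.0 * a), (-b + d) / (2.0 * a)])
```

    With A diagonal and B the identity the eigenvalues are just the diagonal entries, which makes a convenient sanity check.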

  18. Family problem solving interactions and 6-month symptomatic and functional outcomes in youth at ultra-high risk for psychosis and with recent onset psychotic symptoms: a longitudinal study.

    PubMed

    O'Brien, Mary P; Zinberg, Jamie L; Ho, Lorena; Rudd, Alexandra; Kopelowicz, Alex; Daley, Melita; Bearden, Carrie E; Cannon, Tyrone D

    2009-02-01

    This study prospectively examined the relationship between social problem solving behavior exhibited by youths at ultra-high risk for psychosis (UHR) and with recent onset psychotic symptoms and their parents during problem solving discussions, and youths' symptoms and social functioning six months later. Twenty-seven adolescents were administered the Structured Interview for Prodromal Syndromes and the Strauss-Carpenter Social Contact Scale at baseline and follow-up assessment. Primary caregivers participated with youth in a ten minute discussion that was videotaped, transcribed, and coded for how skillful participants were in defining problems, generating solutions, and reaching resolution, as well as how constructive and/or conflictual they were during the interaction. Controlling for social functioning at baseline, adolescents' skillful problem solving and constructive communication, and parents' constructive communication, were associated with youths' enhanced social functioning six months later. Controlling for symptom severity at baseline, we found that there was a positive association between adolescents' conflictual communications at baseline and an increase in positive symptoms six months later. Taken together, findings from this study provide support for further research into the possibility that specific family interventions, such as problem solving and communication skills training, may improve the functional prognosis of at-risk youth, especially in terms of their social functioning.

  19. Family problem solving interactions and 6-month symptomatic and functional outcomes in youth at ultra-high risk for psychosis and with recent onset psychotic symptoms: A longitudinal study

    PubMed Central

    O'Brien, Mary P.; Zinberg, Jamie L.; Ho, Lorena; Rudd, Alexandra; Kopelowicz, Alex; Daley, Melita; Bearden, Carrie E.; Cannon, Tyrone D.

    2009-01-01

    This study prospectively examined the relationship between social problem solving behavior exhibited by youths at ultra-high risk for psychosis (UHR) and with recent onset psychotic symptoms and their parents during problem solving discussions, and youths' symptoms and social functioning six months later. Twenty-seven adolescents were administered the Structured Interview for Prodromal Syndromes and the Strauss-Carpenter Social Contact Scale at baseline and follow-up assessment. Primary caregivers participated with youth in a ten minute discussion that was videotaped, transcribed, and coded for how skillful participants were in defining problems, generating solutions, and reaching resolution, as well as how constructive and/or conflictual they were during the interaction. Controlling for social functioning at baseline, adolescents' skillful problem solving and constructive communication, and parents' constructive communication, were associated with youths' enhanced social functioning six months later. Controlling for symptom severity at baseline, we found that there was a positive association between adolescents' conflictual communications at baseline and an increase in positive symptoms six months later. Taken together, findings from this study provide support for further research into the possibility that specific family interventions, such as problem solving and communication skills training, may improve the functional prognosis of at-risk youth, especially in terms of their social functioning. PMID:18996681

  20. Math and numeracy in young adults with spina bifida and hydrocephalus.

    PubMed

    Dennis, Maureen; Barnes, Marcia

    2002-01-01

    The developmental stability of poor math skill was studied in 31 young adults with spina bifida and hydrocephalus (SBH), a neurodevelopmental disorder involving malformations of the brain and spinal cord. Longitudinally, individuals with poor math problem solving as children grew into adults with poor problem solving and limited functional numeracy. As a group, young adults with SBH had poor computation accuracy, computation speed, problem solving, and functional numeracy. Computation accuracy was related to a supporting cognitive system (working memory for numbers), and functional numeracy was related to one medical history variable (number of lifetime shunt revisions). Adult functional numeracy, but not functional literacy, was predictive of higher levels of social, personal, and community independence.

  1. Exact solution of matricial Φ₂³ quantum field theory

    NASA Astrophysics Data System (ADS)

    Grosse, Harald; Sako, Akifumi; Wulkenhaar, Raimar

    2017-12-01

    We apply a recently developed method to exactly solve the Φ³ matrix model with covariance of a two-dimensional theory, also known as regularised Kontsevich model. Its correlation functions collectively describe graphs on a multi-punctured 2-sphere. We show how Ward-Takahashi identities and Schwinger-Dyson equations lead in a special large-N limit to integral equations that we solve exactly for all correlation functions. The solved model arises from noncommutative field theory in a special limit of strong deformation parameter. The limit defines ordinary 2D Schwinger functions which, however, do not satisfy reflection positivity.

  2. Specific Features in Measuring Particle Size Distributions in Highly Disperse Aerosol Systems

    NASA Astrophysics Data System (ADS)

    Zagaynov, V. A.; Vasyanovich, M. E.; Maksimenko, V. V.; Lushnikov, A. A.; Biryukov, Yu. G.; Agranovskii, I. E.

    2018-06-01

    The distribution of highly dispersed aerosols is studied. Particular attention is given to the diffusion dynamic approach, as it is the best way to determine particle size distribution. It is shown that the problem can be divided into two steps: directly measuring particle penetration through diffusion batteries and solving the inverse problem (obtaining a size distribution from the measured penetrations). No reliable way of solving the so-called inverse problem is found, but it can be done by introducing a parametrized size distribution (i.e., a gamma distribution). The integral equation is therefore reduced to a system of nonlinear equations that can be solved by elementary mathematical means. Further development of the method requires an increase in sensitivity (i.e., measuring the dimensions of molecular clusters with radioactive sources, along with the activity of diffusion battery screens).
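
    The parametrization trick above — replacing an unknown size distribution by a gamma density so the integral equation collapses to a two-parameter fit — can be sketched as follows. The penetration kernel exp(−μ/r) is a hypothetical stand-in for real diffusion-battery penetration theory, and the brute-force grid search stands in for a proper nonlinear solver.

```python
import math

def gamma_pdf(r, shape, scale):
    # gamma density used as the parametrized particle size distribution
    return r ** (shape - 1) * math.exp(-r / scale) / (math.gamma(shape) * scale ** shape)

def predicted_penetration(mu, shape, scale, rs):
    # quadrature over the parametrized distribution; kernel exp(-mu/r) is hypothetical
    dr = rs[1] - rs[0]
    return sum(gamma_pdf(r, shape, scale) * math.exp(-mu / r) * dr for r in rs)

def fit(measured, mus, rs):
    # brute-force search over (shape, scale); a real solver would use Newton iteration
    return min(
        ((sh / 2, sc / 10) for sh in range(2, 12) for sc in range(1, 20)),
        key=lambda p: sum((predicted_penetration(m, *p, rs) - y) ** 2
                          for m, y in zip(mus, measured)),
    )

# synthetic "measurements" generated from known parameters, then recovered by the fit
rs = [0.05 * i for i in range(1, 101)]
mus = [0.1, 0.2, 0.5, 1.0]
measured = [predicted_penetration(m, 2.0, 0.5, rs) for m in mus]
recovered = fit(measured, mus, rs)
```

    With noiseless synthetic data the grid search recovers the generating parameters exactly, which is the sense in which the reduced problem is solvable "by elementary mathematical means".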

  3. Reliable use of determinants to solve nonlinear structural eigenvalue problems efficiently

    NASA Technical Reports Server (NTRS)

    Williams, F. W.; Kennedy, D.

    1988-01-01

    The analytical derivation, numerical implementation, and performance of a multiple-determinant parabolic interpolation method (MDPIM) for use in solving transcendental eigenvalue (critical buckling or undamped free vibration) problems in structural mechanics are presented. The overall bounding, eigenvalue-separation, qualified parabolic interpolation, accuracy-confirmation, and convergence-recovery stages of the MDPIM are described in detail, and the numbers of iterations required to solve sample plane-frame problems using the MDPIM are compared with those for a conventional bisection method and for the Newtonian method of Simpson (1984) in extensive tables. The MDPIM is shown to use 31 percent less computation time than bisection when an accuracy of 0.0001 is required, but 62 percent less when an accuracy of 10^-8 is required; the time savings over the Newtonian method are about 10 percent.
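
    The kernel of any parabolic-interpolation eigensolver is fitting a quadratic through three determinant samples and stepping to its nearest root. The sketch below applies that idea (essentially Muller's method restricted to real arithmetic) to a scalar stand-in for a transcendental frequency determinant; it is not the MDPIM itself, which adds bounding and eigenvalue-separation safeguards.

```python
import math

def parabolic_root(f, x0, x1, x2, tol=1e-12, max_iter=50):
    # fit a parabola through (x0,f0), (x1,f1), (x2,f2); step to its nearest root
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)        # curvature of the interpolating parabola
        b = a * h2 + d2
        c = f2
        disc = math.sqrt(max(b * b - 4.0 * a * c, 0.0))
        # pick the denominator of larger magnitude for numerical stability
        den = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        step = -2.0 * c / den
        x0, x1, x2 = x1, x2, x2 + step
        if abs(step) < tol:
            return x2
    return x2

# stand-in for det K(lambda): a function with a root at sqrt(2)
root = parabolic_root(lambda lam: lam * lam - 2.0, 1.0, 1.5, 2.0)
```

    Because the parabola matches curvature as well as slope, convergence near a simple root is much faster than bisection — the source of the timing advantage reported above.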

  4. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings, a simple geometric input based on the Harris wave-drag format originally developed for panel methods, and an internal geometry package.

  5. Social problem-solving plus psychoeducation for adults with personality disorder: pragmatic randomised controlled trial.

    PubMed

    Huband, Nick; McMurran, Mary; Evans, Chris; Duggan, Conor

    2007-04-01

    Social problem-solving therapy may be relevant in the treatment of personality disorder, although assessments of its effectiveness are uncommon. To determine the effectiveness of a problem-solving intervention for adults with personality disorder in the community under conditions resembling routine clinical practice. Participants were randomly allocated to brief psychoeducation plus 16 problem-solving group sessions (n=87) or to waiting-list control (n=89). Primary outcome was comparison of scores on the Social Problem Solving Inventory and the Social Functioning Questionnaire between intervention and control arms at the conclusion of treatment, on average at 24 weeks after randomisation. In intention-to-treat analysis, those allocated to intervention showed significantly better problem-solving skills (P<0.001), higher overall social functioning (P=0.031) and lower anger expression (P=0.039) compared with controls. No significant differences were found on use of services during the intervention period. Problem-solving plus psychoeducation has potential as a preliminary intervention for adults with personality disorder.

  6. THERMO-HYDRO-MECHANICAL MODELING OF WORKING FLUID INJECTION AND THERMAL ENERGY EXTRACTION IN EGS FRACTURES AND ROCK MATRIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Podgorney; Chuan Lu; Hai Huang

    2012-01-01

    Development of enhanced geothermal systems (EGS) will require creation of a reservoir of sufficient volume to enable commercial-scale heat transfer from the reservoir rocks to the working fluid. A key assumption associated with reservoir creation/stimulation is that sufficient rock volumes can be hydraulically fractured via both tensile and shear failure, and more importantly by reactivation of naturally existing fractures (by shearing), to create the reservoir. The advancement of EGS greatly depends on our understanding of the dynamics of the intimately coupled rock-fracture-fluid-heat system and our ability to reliably predict how reservoirs behave under stimulation and production. Reliable performance predictions of EGS reservoirs require accurate and robust modeling for strongly coupled thermal-hydrological-mechanical (THM) processes. Conventionally, these types of problems have been solved using operator-splitting methods, usually by coupling a subsurface flow and heat transport simulator with a solid mechanics simulator via input files. An alternative approach is to solve the system of nonlinear partial differential equations that govern multiphase fluid flow, heat transport, and rock mechanics simultaneously, using a fully coupled, fully implicit solution procedure, in which all solution variables (pressure, enthalpy, and rock displacement fields) are solved simultaneously. This paper describes numerical simulations used to investigate the poro- and thermal-elastic effects of working fluid injection and thermal energy extraction on the properties of the fractures and rock matrix of a hypothetical EGS reservoir, using the novel simulation software FALCON (Podgorney et al., 2011), a finite element based simulator solving fully coupled multiphase fluid flow, heat transport, rock deformation, and fracturing using a global implicit approach. Investigations are also conducted on how these poro- and thermal-elastic effects are related to fracture permeability evolution.

  7. Evaluating the Use of Problem-Based Video Podcasts to Teach Mathematics in Higher Education

    ERIC Educational Resources Information Center

    Kay, Robin; Kletskin, Ilona

    2012-01-01

    Problem-based video podcasts provide short, web-based, audio-visual explanations of how to solve specific procedural problems in subject areas such as mathematics or science. A series of 59 problem-based video podcasts covering five key areas (operations with functions, solving equations, linear functions, exponential and logarithmic functions,…

  8. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises the performance of high-strength materials. A reliability method is proposed which combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulated and propagated design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, with the culture of most designers, and with the pace of semistatic structural designs.
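The central step the abstract describes, solving for a factor that satisfies a specified reliability, can be sketched with the classical first-order safety index for normally distributed strength S and load L, beta = (mu_S - mu_L) / sqrt(sigma_S^2 + sigma_L^2). The bisection search and the numbers below are an illustrative stand-in, not the paper's actual procedure or error model.

```python
import math

def safety_index(mu_s, sigma_s, mu_l, sigma_l):
    """First-order safety index for normally distributed strength S and load L."""
    return (mu_s - mu_l) / math.sqrt(sigma_s**2 + sigma_l**2)

def required_factor(beta_target, cov_s, mu_l, sigma_l):
    """Smallest strength factor f (mu_S = f*mu_L, sigma_S = cov_s*mu_S)
    achieving the target safety index, found by bisection.

    beta(f) is strictly increasing in f and approaches 1/cov_s as f grows,
    so beta_target must lie below that limit for a solution to exist.
    """
    lo, hi = 1.0, 10.0 / cov_s
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if safety_index(mid * mu_l, cov_s * mid * mu_l, mu_l, sigma_l) < beta_target:
            lo = mid
        else:
            hi = mid
    return hi

# Illustrative numbers: target beta = 3, strength COV = 10%, load N(100, 10).
f = required_factor(beta_target=3.0, cov_s=0.10, mu_l=100.0, sigma_l=10.0)
print(round(f, 4))
```

The returned factor then plays the role of the conventional safety factor in a deterministic stress analysis, as the abstract proposes.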

  9. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2017-11-05

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  10. The generalized quadratic knapsack problem. A neuronal network approach.

    PubMed

    Talaván, Pedro M; Yáñez, Javier

    2006-05-01

    The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 linearly constrained program with a quadratic objective function can be solved. This problem, denoted the generalized quadratic knapsack problem (GQKP), includes as particular cases such well-known problems as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter-setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
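For concreteness, the GQKP as defined above, optimizing a quadratic objective over 0-1 variables subject to linear constraints, can be enumerated exactly on a tiny instance. The data below are invented for illustration; this brute-force check shows the problem class that the CHN energy function targets, not the network dynamics themselves.

```python
import itertools
import numpy as np

# Toy GQKP instance (illustrative data):
# maximize 0.5 * x'Qx + c'x  subject to  Ax <= b,  x in {0,1}^3.
Q = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 1.0])
A = np.array([[3.0, 4.0, 2.0]])  # knapsack-style linear constraint
b = np.array([6.0])

best_val, best_x = -np.inf, None
for bits in itertools.product((0.0, 1.0), repeat=3):
    x = np.array(bits)
    if np.all(A @ x <= b):                    # feasibility check
        val = 0.5 * x @ Q @ x + c @ x
        if val > best_val:
            best_val, best_x = val, x

print(best_val, best_x)
```

A CHN replaces this exponential enumeration with continuous dynamics descending an energy function whose minima encode feasible 0-1 solutions.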

  11. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm for solving a convex programming problem that belongs to the class of cutting methods. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on the exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded in polyhedral sets, so the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.

  12. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Unlike most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time, and the activation function is not required to be a hard-limiting function to achieve finite convergence time. The stability, finite-time convergence property and optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winners-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. The Effect of Metacognitive Instruction on Problem Solving Skills in Iranian Students of Health Sciences

    PubMed Central

    Safari, Yahya; Meskini, Habibeh

    2016-01-01

    Background: Learning requires application of such processes as planning, supervision, monitoring and reflection that are included in the metacognition. Studies have shown that metacognition is associated with problem solving skills. The current research was conducted to investigate the impact of metacognitive instruction on students’ problem solving skills. Methods: The study sample included 40 students studying in the second semester at Kermanshah University of Medical Sciences, 2013-2014. They were selected through convenience sampling technique and were randomly assigned into two equal groups of experimental and control. For the experimental group, problem solving skills were taught through metacognitive instruction during ten two-hour sessions and for the control group, problem solving skills were taught via conventional teaching method. The instrument for data collection included problem solving inventory (Heppner, 1988), which was administered before and after instruction. The validity and reliability of the questionnaire had been previously confirmed. The collected data were analyzed by descriptive statistics, mean and standard deviation and the hypotheses were tested by t-test and ANCOVA. Results: The findings of the posttest showed that the total mean scores of problem solving skills in the experimental and control groups were 151.90 and 101.65, respectively, indicating a significant difference between them (p<0.001). This difference was also reported to be statistically significant between problem solving skills and its components, including problem solving confidence, orientation-avoidance coping style and personal control (p<0.001). No significant difference, however, was found between the students’ mean scores in terms of gender and major. 
Conclusion: Since metacognitive instruction has positive effects on students’ problem solving skills and is required to enhance academic achievement, metacognitive strategies are recommended to be taught to the students. PMID:26234970

  14. The Effect of Metacognitive Instruction on Problem Solving Skills in Iranian Students of Health Sciences.

    PubMed

    Safari, Yahya; Meskini, Habibeh

    2015-05-17

    Learning requires application of such processes as planning, supervision, monitoring and reflection that are included in the metacognition. Studies have shown that metacognition is associated with problem solving skills. The current research was conducted to investigate the impact of metacognitive instruction on students' problem solving skills. The study sample included 40 students studying in the second semester at Kermanshah University of Medical Sciences, 2013-2014. They were selected through convenience sampling technique and were randomly assigned into two equal groups of experimental and control. For the experimental group, problem solving skills were taught through metacognitive instruction during ten two-hour sessions and for the control group, problem solving skills were taught via conventional teaching method. The instrument for data collection included problem solving inventory (Heppner, 1988), which was administered before and after instruction. The validity and reliability of the questionnaire had been previously confirmed. The collected data were analyzed by descriptive statistics, mean and standard deviation and the hypotheses were tested by t-test and ANCOVA. The findings of the posttest showed that the total mean scores of problem solving skills in the experimental and control groups were 151.90 and 101.65, respectively, indicating a significant difference between them (p<0.001). This difference was also reported to be statistically significant between problem solving skills and its components, including problem solving confidence, orientation-avoidance coping style and personal control (p<0.001). No significant difference, however, was found between the students' mean scores in terms of gender and major. Since metacognitive instruction has positive effects on students' problem solving skills and is required to enhance academic achievement, metacognitive strategies are recommended to be taught to the students.

  15. Computer fluid dynamics (CFD) study of a micro annular gear pump

    NASA Astrophysics Data System (ADS)

    Stan, Liviu-Constantin; Cǎlimǎnescu, Ioan

    2016-12-01

    Micro technology makes it possible to design products simply, efficiently and sustainably and, at the same time, opens up new functionalities. The micro annular gear pump finds application in analytical instrumentation, mechanical and plant engineering, and chemical and pharmaceutical process engineering, as well as in new markets such as fuel cells, biotechnology, organic electronics and aerospace. The purpose of this paper is to investigate, using the powerful ANSYS 16 CFX module, the hydrodynamic behavior of an 8/9-teeth annular gear pump. Simulating solids moving inside fluids was very cumbersome until the advent of the ANSYS immersed solid technology. By deploying this technology for specialized topics such as the CFD analysis of micro annular gear pumps, credible and reliable results can be obtained, paving the way for more in-depth studies such as geometrical and functional optimization of existing devices. This paper is a valuable guide for professionals working in the design of micro pumps, offering them a new and powerful design tool.

  16. Are resting state spectral power measures related to executive functions in healthy young adults?

    PubMed

    Gordon, Shirley; Todder, Doron; Deutsch, Inbal; Garbi, Dror; Getter, Nir; Meiran, Nachshon

    2018-01-08

    Resting-state electroencephalogram (rsEEG) has been found to be associated with psychopathology, intelligence, problem solving and academic performance, and it is sometimes used as a supportive physiological indicator of enhancement in cognitive training interventions (e.g. neurofeedback, working memory training). In the current study, we measured rsEEG spectral power measures (relative power, between-band ratios and asymmetry) in one hundred sixty-five young adults who were also tested on a battery of executive functions (EF). We specifically focused on the upper Alpha, Theta and Beta frequency bands, given their putative role in EF. Our indices were well suited to detecting correlations, since they had decent-to-excellent internal and retest reliability and very little range restriction relative to a nationwide representative large sample. Nonetheless, Bayesian statistical inference indicated support for the null hypothesis of no monotonic correlation between EF and rsEEG spectral power measures. Therefore, we conclude that, contrary to a quite common interpretation, these rsEEG spectral power measures do not index individual differences in the measured EF abilities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Current-Voltage and Floating-Potential characteristics of cylindrical emissive probes from a full-kinetic model based on the orbital motion theory

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Sánchez-Arriaga, Gonzalo

    2018-02-01

    To model the sheath structure around an emissive probe with cylindrical geometry, the Orbital-Motion theory takes advantage of three conserved quantities (distribution function, transverse energy, and angular momentum) to transform the stationary Vlasov-Poisson system into a single integro-differential equation. For a stationary collisionless unmagnetized plasma, this equation describes self-consistently the probe characteristics. By solving such an equation numerically, parametric analyses for the current-voltage (IV) and floating-potential (FP) characteristics can be performed, which show that: (a) for strong emission, the space-charge effects increase with probe radius; (b) the probe can float at a positive potential relative to the plasma; (c) a smaller probe radius is preferred for the FP method to determine the plasma potential; (d) the work function of the emitting material and the plasma-ion properties do not influence the reliability of the floating-potential method. Analytical analysis demonstrates that the inflection point of an IV curve for non-emitting probes occurs at the plasma potential. The flat potential is not a self-consistent solution for emissive probes.

  18. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
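The nonintrusive PCE idea the abstract relies on, replacing Monte Carlo sampling with a handful of deterministic model evaluations at quadrature nodes, can be sketched in one uncertain dimension using probabilists' Gauss-Hermite quadrature. This is a minimal one-dimensional illustration, not the paper's multi-dimensional entry-trajectory surrogate.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gaussian_moments(model, n_nodes=8):
    """Mean and variance of model(X) for X ~ N(0, 1), via quadrature.

    hermegauss returns nodes/weights for the weight exp(-x**2 / 2);
    normalising the weights to sum to 1 turns them into probabilities
    for a standard normal input.
    """
    x, w = hermegauss(n_nodes)
    w = w / w.sum()
    y = model(x)
    mean = w @ y
    var = w @ (y - mean) ** 2
    return mean, var

# X**2 has mean 1 and variance 2 under N(0, 1); 8 nodes integrate
# polynomials up to degree 15 exactly, so both moments are exact here.
mean, var = gaussian_moments(lambda x: x ** 2)
print(mean, var)
```

In the RBSO cycle, such deterministic evaluations replace stochastic sampling, which is what makes the reliability assessment cheap enough to repeat inside an optimization loop.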

  19. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    NASA Astrophysics Data System (ADS)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule through transition to continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed to process arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder, with careful comparison against each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems involving two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the computed results are found to be in good agreement with related DSMC simulations and theoretical analysis, which verifies the accuracy and reliability of the present method. It is observed that the smaller the spacing between the bodies, the greater the obstruction of the cylindrical throat, the more obviously asymmetrical the flow field of each single body, and the larger the normal force coefficient. In the near-continuum transitional flow regime of near-space flight conditions, once the spacing between the bodies increases to six times the diameter of a single body, the interference effects of the multiple bodies become negligible. Computational practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.

  20. An Algebraic Approach for Solving Quadratic Inequalities

    ERIC Educational Resources Information Center

    Mahmood, Munir; Al-Mirbati, Rudaina

    2017-01-01

    In recent years most text books utilise either the sign chart or graphing functions in order to solve a quadratic inequality of the form ax² + bx + c < 0. This article demonstrates an algebraic approach to solve the above inequality. To solve a quadratic inequality in the form of ax² + bx + c < 0 or in the…
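The algebraic idea can be made concrete: for a > 0, ax² + bx + c < 0 holds exactly between the real roots, when they exist. A minimal sketch of that fact (not the article's own derivation, which is truncated above):

```python
import math

def solve_quadratic_inequality(a, b, c):
    """Open interval solving a*x**2 + b*x + c < 0 for a > 0, or None if empty."""
    if a <= 0:
        raise ValueError("this sketch assumes a > 0")
    disc = b * b - 4 * a * c
    if disc <= 0:
        return None  # the parabola never drops strictly below zero
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

print(solve_quadratic_inequality(1, -3, 2))  # x^2 - 3x + 2 < 0 holds on (1, 2)
print(solve_quadratic_inequality(1, 0, 1))   # x^2 + 1 < 0 has no solution
```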

  1. Impacts of Learning Inventive Problem-Solving Principles: Students' Transition from Systematic Searching to Heuristic Problem Solving

    ERIC Educational Resources Information Center

    Barak, Moshe

    2013-01-01

    This paper presents the outcomes of teaching an inventive problem-solving course in junior high schools in an attempt to deal with the current relative neglect of fostering students' creativity and problem-solving capabilities in traditional schooling. The method involves carrying out systematic manipulation with attributes, functions and…

  2. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
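The discrete-convolution routing the abstract describes can be shown in miniature: the outflow hydrograph is the inflow convolved with a discretized unit response. The kernel values below are illustrative stand-ins, not the Hayami kernel itself; any kernel summing to one conserves volume.

```python
import numpy as np

# Discretized unit response (illustrative; sums to 1, so volume is conserved).
kernel = np.array([0.1, 0.4, 0.3, 0.2])

# Inflow hydrograph (flow per time step), padded with zeros after the pulse.
inflow = np.array([0.0, 5.0, 20.0, 10.0, 2.0, 0.0, 0.0])

# Discrete convolution replaces the continuous convolution integral.
outflow = np.convolve(inflow, kernel)

print(outflow.sum(), inflow.sum())  # equal: routing only spreads volume in time
```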

  3. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    NASA Technical Reports Server (NTRS)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtedly involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction, and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  4. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  5. Decentralized control

    NASA Technical Reports Server (NTRS)

    Steffen, Chris

    1990-01-01

    An overview of the time-delay problem and the reliability problem which arise in trying to perform robotic construction operations at a remote space location are presented. The effects of the time-delay upon the control system design will be itemized. A high level overview of a decentralized method of control which is expected to perform better than the centralized approach in solving the time-delay problem is given. The lower level, decentralized, autonomous, Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system. One method of adding redundancy is given.

  6. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1992-01-01

    Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  7. Calculation and word problem-solving skills in primary grades - Impact of cognitive abilities and longitudinal interrelations with task-persistent behaviour.

    PubMed

    Jõgi, Anna-Liisa; Kikas, Eve

    2016-06-01

    Primary school math skills form a basis for academic success down the road. Different math skills have different antecedents, and there is reason to believe that more complex math tasks require better self-regulation. The study aimed to investigate longitudinal interrelations of calculation and problem-solving skills and task-persistent behaviour in Grade 1 and Grade 3, and the effect of non-verbal intelligence, linguistic abilities, and executive functioning on math skills and task persistence. Participants were 864 students (52.3% boys) from 33 different schools in Estonia. Students were tested twice: at the end of Grade 1 and at the end of Grade 3. Calculation and problem-solving skills and teacher-rated task-persistent behaviour were measured at both time points. Non-verbal intelligence, linguistic abilities, and executive functioning were measured in Grade 1. Cross-lagged structural equation modelling indicated that calculation skills depend on previous math skills and linguistic abilities, while problem-solving skills also require non-verbal intelligence, executive functioning, and task persistence. Task-persistent behaviour in Grade 3 was predicted by previous problem-solving skills, linguistic abilities, and executive functioning. Gender and mother's educational level were added as covariates. The findings indicate that math skills and self-regulation are strongly related in primary grades and that solving complex tasks requires executive functioning and task persistence from children. The findings support the idea that instructional practices might benefit from supporting self-regulation in order to foster domain-specific, complex skill achievement. © 2015 The British Psychological Society.

  8. Research and exploration of product innovative design for function

    NASA Astrophysics Data System (ADS)

    Wang, Donglin; Wei, Zihui; Wang, Youjiang; Tan, Runhua

    2009-07-01

    Product innovation is premised on realizing new functions, and the realization of a new function requires resolving contradictions. A new process model for product innovative design was proposed based on Axiomatic Design (AD) Theory and Functional Structure Analysis (FSA), with the Principle of Solving Contradiction embedded. In this model, AD Theory is employed to guide FSA and to determine the contradiction to be resolved for realizing the principle solution. To provide powerful tool support for innovative design at the principle-solution stage, the Principle of Solving Contradiction is embedded in the model so as to strengthen the innovation of the principle solution. As a case study, an innovative design of a button-battery separator-paper punching machine was achieved by applying the proposed model.

  9. The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.

    1989-01-01

    The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering, where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal systems that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
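The generalized symmetric-definite problem Kx = λMx is exactly what shift-invert Lanczos solvers target. A sketch using SciPy's ARPACK wrapper (a modern stand-in, not the LANZ code itself) on a toy stiffness/mass pair:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 50
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()  # stiffness-like, SPD
M = diags([4.0], [0], shape=(n, n)).tocsc()                     # mass-like, SPD

# Shift-invert Lanczos around sigma=0 pulls out the smallest eigenvalues,
# the ones of interest in vibration and buckling analysis.
vals, vecs = eigsh(K, k=4, M=M, sigma=0.0)

# Dense generalized solve as a reference for the toy problem.
dense = eigh(K.toarray(), M.toarray(), eigvals_only=True)[:4]
print(np.sort(vals))
```

Dynamic shifting, as in LANZ, amounts to moving sigma as eigenvalues converge so that each factorization is reused for the eigenvalues nearest the current shift.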

  10. A neuro approach to solve fuzzy Riccati differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Kumaresan, N.; Kamali, M. Z. M.; Ratnavelu, Kurunathan

    2015-10-01

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, a fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the fourth-order Runge-Kutta method (RK4). The solution can be achieved by solving the first-order nonlinear ordinary differential equation (ODE) that commonly underlies the Riccati differential equation. The research shows improved results relative to the RK4 method: the NN approach is promising, with the advantages of continuous estimation and improved accuracy over RK4.
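The RK4 baseline the abstract compares against can be sketched on a Riccati equation with a known closed form: y' = 1 - y², y(0) = 0 has the exact solution y = tanh(t). This illustrates only the baseline integrator, not the paper's fuzzy neural-network estimator.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta over n uniform steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Riccati equation y' = 1 - y**2 (quadratic in y) with y(0) = 0.
y1 = rk4(lambda t, y: 1 - y * y, 0.0, 0.0, 1.0, 100)
print(y1, math.tanh(1.0))
```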

  11. A neuro approach to solve fuzzy Riccati differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahrir, Mohammad Shazri, E-mail: mshazri@gmail.com; Telekom Malaysia, R&D TM Innovation Centre, LingkaranTeknokrat Timur, 63000 Cyberjaya, Selangor; Kumaresan, N., E-mail: drnk2008@gmail.com

    There are many applications of optimal control theory, especially in the area of control systems in engineering. In this paper, a fuzzy quadratic Riccati differential equation is estimated using neural networks (NN). Previous works have shown reliable results using the fourth-order Runge-Kutta method (RK4). The solution can be achieved by solving the first-order nonlinear ordinary differential equation (ODE) that commonly underlies the Riccati differential equation. The research shows improved results relative to the RK4 method: the NN approach is promising, with the advantages of continuous estimation and improved accuracy over RK4.

  12. A stellar tracking reference system

    NASA Technical Reports Server (NTRS)

    Klestadt, B.

    1971-01-01

    A stellar attitude reference system concept for satellites was studied which promises to permit continuous precision pointing of payloads with accuracies of 0.001 degree without the use of gyroscopes. It is accomplished with the use of a single, clustered star tracker assembly mounted on a non-orthogonal, two gimbal mechanism, driven so as to unwind satellite orbital and orbit precession rates. A set of eight stars was found which assures the presence of an adequate inertial reference on a continuous basis in an arbitrary orbit. Acquisition and operational considerations were investigated and inherent reference redundancy/reliability was established. Preliminary designs for the gimbal mechanism, its servo drive, and the star tracker cluster with its associated signal processing were developed for a baseline sun-synchronous, noon-midnight orbit. The functions required of the onboard computer were determined and the equations to be solved were found. In addition detailed error analyses were carried out, based on structural, thermal and other operational considerations.

  13. A general higher-order nonlocal couple stress based beam model for vibration analysis of porous nanocrystalline nanobeams

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Barati, Mohammad Reza

    2017-12-01

    This paper develops a higher-order refined beam model with a parabolic shear strain function for vibration analysis of porous nanocrystalline nanobeams based on nonlocal couple stress theory. The nanocrystalline nanobeam is composed of three phases: nano-grains, nano-voids and the interface. Nano-voids, or porosities, inside the material have a stiffness-softening effect on the nanobeam. Eringen's nonlocal elasticity theory is applied to the analysis of nanocrystalline nanobeams for the first time. Also, modified couple stress theory is employed to capture the rigid rotations of grains. The governing equations obtained from Hamilton's principle are solved with an analytical approach that satisfies various boundary conditions. The reliability of the present approach is verified by comparing the obtained results with those in the literature. Finally, the influences of the nonlocal parameter, couple stress, grain size, porosity and shear deformation on the vibration characteristics of nanocrystalline nanobeams are explored.

  14. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying links or heterogeneous links. Since the abstraction of information dissemination much resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systematic point of view, we aim to explore the optimal control policy for information dissemination given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve the optimal control signal distribution time for minimizing the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.

  15. Customer relationship management implementation in the small and medium enterprise

    NASA Astrophysics Data System (ADS)

    Nugroho, Agus; Suharmanto, Agus; Masugino

    2018-03-01

    To win in global competition and sustain their business, small and medium enterprises should implement a reliable information technology application to support their customer database, production, sales and marketing management. This paper addresses the implementation of Customer Relationship Management (CRM) in a small and medium enterprise, CV. Densuko Jaya, located in Semarang, Central Java, Republic of Indonesia, which deals with the rubber-processing supply chain. The ADDIE model was utilized in this study to set up the CRM functionality at the enterprise. The aim of the authors is to present the benefits resulting from the application of CRM technologies at the enterprise in solving its chronic issues in the fields of integrated customer database management, production management and sales automation, in order to boost its business in the near future. Training and coaching were delivered to the enterprise's staff and management to ensure that they can operate the system.

  16. Magnetohydrodynamics effect on convective boundary layer flow and heat transfer of viscoelastic micropolar fluid past a sphere

    NASA Astrophysics Data System (ADS)

    Amera Aziz, Laila; Kasim, Abdul Rahman Mohd; Zuki Salleh, Mohd; Syahidah Yusoff, Nur; Shafie, Sharidan

    2017-09-01

    The main interest of this study is to investigate the effect of MHD on the boundary layer flow and heat transfer of a viscoelastic micropolar fluid. The governing equations are transformed into dimensionless form in order to reduce their complexity. Then, the stream function is applied to the dimensionless equations to produce partial differential equations, which are solved numerically using the Keller-box method implemented in Fortran. The numerical results are compared to a published study to ensure the reliability of the present results. The effects of selected physical parameters, such as the viscoelastic parameter K, the micropolar parameter K1 and the magnetic parameter M, on the flow and heat transfer are discussed and presented in tabular and graphical form. The findings from this study will be of critical importance in medicine and in chemical and industrial processes where a magnetic field is involved.

  17. Use of fuzzy sets in modeling of GIS objects

    NASA Astrophysics Data System (ADS)

    Mironova, Yu N.

    2018-05-01

    The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; therefore, geoinformation modeling is a key link in the chain of geodata processing. When solving problems using geographic information systems, it is often necessary to represent approximate or insufficiently reliable information about map features in the GIS database. Heterogeneous data of different origin and accuracy carry some degree of uncertainty. In addition, not all information is precise: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Methods are therefore needed for working with uncertain requirements, classes and boundaries. The author proposes using fuzzy sets for spatial information. In terms of its characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained by rejecting the binary nature of membership and allowing the characteristic function to take any value in the interval [0, 1].
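    The generalization from crisp to fuzzy membership can be sketched concretely. Below, a trapezoidal membership function grades an attribute such as "well-drained soil"; the drainage index and its thresholds are hypothetical values chosen for illustration, not taken from the paper:

    ```python
    def trapezoidal(x, a, b, c, d):
        """Trapezoidal membership: 0 below a, rising to 1 on [b, c], falling to 0 at d."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)   # rising edge
        return (d - x) / (d - c)       # falling edge

    def crisp(x, lo, hi):
        """An ordinary (crisp) set is the special case whose membership is only 0 or 1."""
        return 1.0 if lo <= x <= hi else 0.0

    # Degree to which a soil with drainage index 6.5 is "well-drained"
    # (index scale and thresholds are hypothetical)
    mu = trapezoidal(6.5, a=4.0, b=6.0, c=8.0, d=10.0)
    ```

    A GIS query can then rank map features by membership degree instead of forcing a hard in/out classification at an arbitrary boundary.
    
    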

  18. Diagnosis of the Ill-condition of the RFM Based on Condition Index and Variance Decomposition Proportion (CIVDP)

    NASA Astrophysics Data System (ADS)

    Qing, Zhou; Weili, Jiao; Tengfei, Long

    2014-03-01

    The Rational Function Model (RFM) is a new generalized sensor model. It does not need the physical parameters of the sensor to achieve an accuracy comparable to that of rigorous sensor models. At present, the main method of solving for the rational polynomial coefficients (RPCs) is least squares estimation. But when the number of coefficients is large or the control points are unevenly distributed, the classical least squares method loses its effectiveness due to ill-conditioning of the design matrix. Condition Index and Variance Decomposition Proportion (CIVDP) is a reliable method for diagnosing multicollinearity in the design matrix. It can not only detect multicollinearity, but also locate the implicated parameters and the corresponding columns of the design matrix. In this paper, the CIVDP method is used to diagnose the ill-conditioning of the RFM and to find the multicollinearity in the normal matrix.
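    The condition-index/variance-decomposition diagnostic described above is commonly computed from an SVD of the column-scaled design matrix (Belsley-style); the sketch below is a generic version of that computation on a synthetic matrix, not the RFM design matrix of the paper:

    ```python
    import numpy as np

    def civdp(X):
        """Condition indices and variance-decomposition proportions of a design matrix."""
        Xs = X / np.linalg.norm(X, axis=0)            # scale columns to unit length
        _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
        cond_idx = s.max() / s                         # one condition index per component
        phi = (Vt.T ** 2) / s ** 2                     # phi[j, k]: parameter j, component k
        pi = phi / phi.sum(axis=1, keepdims=True)      # proportions sum to 1 per parameter
        return cond_idx, pi

    # Two nearly collinear columns produce one large condition index,
    # with both affected parameters loading heavily on that weak component.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    X = np.column_stack([x, x + 1e-4 * rng.normal(size=100), rng.normal(size=100)])
    eta, pi = civdp(X)
    ```

    A large condition index (Belsley's rule of thumb is above 30) combined with two or more parameters having variance proportions above 0.5 on the same component both detects the multicollinearity and locates the offending columns.
    
    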

  19. Observations of the structure and vertical transport of the polar upper ionosphere with the EISCAT VHF radar. I - Is EISCAT able to determine O(+) and H(+) polar wind characteristic? A simulation study

    NASA Technical Reports Server (NTRS)

    Blelly, Pierre-Louis; Barakat, Abdullah R.; Fontanari, Jean; Alcayde, Denis; Blanc, Michel; Wu, Jian; Lathuillere, C.

    1992-01-01

    A method presented by Wu et al. (1992) for computing the H(+) vertical velocity from the main ionospheric parameters measured by the EISCAT VHF radar is tested in a fully controlled sequence, which consists of generating an ideal ionospheric model by solving the coupled continuity and momentum equations for a two-ion plasma (O(+) and H(+)). Synthetic autocorrelation functions are generated from this model with the radar characteristics and used as actual measurements to compute the H(+) vertical velocities. Results of these simulations are shown and discussed for three cases of typical and low SNR and for low and increased mixing ratios. In most cases, general agreement is found between the computed H(+) velocities and the generic ones within the altitude range considered, i.e., 200-1000 km. The method is shown to be reliable.

  20. Designing perturbative metamaterials from discrete models.

    PubMed

    Matlack, Kathryn H; Serra-Garcia, Marc; Palermo, Antonio; Huber, Sebastian D; Daraio, Chiara

    2018-04-01

    Identifying material geometries that lead to metamaterials with desired functionalities presents a challenge for the field. Discrete, or reduced-order, models provide a concise description of complex phenomena, such as negative refraction, or topological surface states; therefore, the combination of geometric building blocks to replicate discrete models presenting the desired features represents a promising approach. However, there is no reliable way to solve such an inverse problem. Here, we introduce 'perturbative metamaterials', a class of metamaterials consisting of weakly interacting unit cells. The weak interaction allows us to associate each element of the discrete model with individual geometric features of the metamaterial, thereby enabling a systematic design process. We demonstrate our approach by designing two-dimensional elastic metamaterials that realize Veselago lenses, zero-dispersion bands and topological surface phonons. While our selected examples are within the mechanical domain, the same design principle can be applied to acoustic, thermal and photonic metamaterials composed of weakly interacting unit cells.

  1. Creativity from Constraints: What Can We Learn from Motherwell? From Modrian? From Klee?

    ERIC Educational Resources Information Center

    Stokes, Patricia D.

    2008-01-01

    This article presents a problem-solving model of variability and creativity built on the classic Reitman and Simon analyses of musical composition and architectural design. The model focuses on paired constraints: one precluding (or limiting search among) reliable, existing solutions, the other promoting (or directing search to) novel, often…

  2. MPNACK: an optical switching scheme enabling the buffer-less reliable transmission

    NASA Astrophysics Data System (ADS)

    Yu, Xiaoshan; Gu, Huaxi; Wang, Kun; Xu, Meng; Guo, Yantao

    2016-01-01

    Optical data center networks are an increasingly promising solution to the bottlenecks faced by electrical networks, such as limited transmission bandwidth, high wiring complexity and unaffordable power consumption. However, the optical circuit switching (OCS) network is not flexible enough to carry bursty traffic, while the optical packet switching (OPS) network cannot resolve packet contention efficiently. To this end, an improved switching strategy named OPS with Multi-hop Negative Acknowledgement (MPNACK) is proposed. This scheme uses a feedback mechanism, rather than a buffering structure, to handle optical packet contention. The collided packet is treated as a NACK packet and sent back to the source server. When the sender receives this NACK packet, it knows that a collision has occurred along the transmission path, and a retransmission procedure is triggered. Overall, the MPNACK scheme enables reliable transmission in a buffer-less optical network. Furthermore, with this scheme the expensive and energy-hungry elements, optical or electrical buffers, can be removed from the optical interconnects, so a more scalable and cost-efficient network can be constructed for cloud computing data centers.
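    The bounce-back-and-retransmit behaviour can be illustrated with a toy simulation. The sketch below is a drastic simplification of the scheme (a single hop, a fixed independent collision probability, zero feedback delay; all parameter values are hypothetical), intended only to show that NACK-driven retransmission achieves eventual delivery without buffers:

    ```python
    import random

    def transmit(num_packets, p_collision=0.3, seed=1):
        """Buffer-less delivery with NACK-triggered retransmission."""
        rng = random.Random(seed)
        delivered, attempts = 0, 0
        for _ in range(num_packets):
            while True:                        # sender retries until no collision occurs
                attempts += 1
                if rng.random() >= p_collision:
                    delivered += 1             # packet traversed the switch
                    break
                # collision: the packet returns to the source as a NACK; retransmit

        return delivered, attempts

    delivered, attempts = transmit(1000)
    ```

    With a collision probability of 0.3, the expected cost is about 1/0.7 ≈ 1.43 attempts per packet; the price of removing buffers is paid in retransmission bandwidth rather than in optical or electrical memory.
    
    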

  3. Supporting students' learning in the domain of computer science

    NASA Astrophysics Data System (ADS)

    Gasparinatou, Alexandra; Grigoriadou, Maria

    2011-03-01

    Previous studies have shown that students with low knowledge understand and learn better from more cohesive texts, whereas high-knowledge students have been shown to learn better from texts of lower cohesion. This study examines whether high-knowledge readers in computer science benefit from a text of low cohesion. Undergraduate students (n = 65) read one of four versions of a text concerning Local Network Topologies, orthogonally varying local and global cohesion. Participants' comprehension was examined through free-recall measure, text-based, bridging-inference, elaborative-inference, problem-solving questions and a sorting task. The results indicated that high-knowledge readers benefited from the low-cohesion text. The interaction of text cohesion and knowledge was reliable for the sorting activity, for elaborative-inference and for problem-solving questions. Although high-knowledge readers performed better in text-based and in bridging-inference questions with the low-cohesion text, the interaction of text cohesion and knowledge was not reliable. The results suggest a more complex view of when and for whom textual cohesion affects comprehension and consequently learning in computer science.

  4. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors apply a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between the compiled level of typical expert systems and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  5. Do problem-solving interventions improve psychosocial outcomes in vision impaired adults: a systematic review and meta-analysis.

    PubMed

    Holloway, Edith E; Xie, Jing; Sturrock, Bonnie A; Lamoureux, Ecosse L; Rees, Gwyneth

    2015-05-01

    To evaluate the effectiveness of problem-solving interventions on psychosocial outcomes in vision impaired adults. A systematic search of randomised controlled trials (RCTs), published between 1990 and 2013, that investigated the impact of problem-solving interventions on depressive symptoms, emotional distress, quality of life (QoL) and functioning was conducted. Two reviewers independently selected and appraised study quality. Data permitting, intervention effects were statistically pooled and meta-analyses were performed, otherwise summarised descriptively. Eleven studies (reporting on eight trials) met inclusion criteria. Pooled analysis showed problem-solving interventions improved vision-related functioning (standardised mean change [SMC]: 0.15; 95% CI: 0.04-0.27) and emotional distress (SMC: -0.36; 95% CI: -0.54 to -0.19). There was no evidence to support improvements in depressive symptoms (SMC: -0.27, 95% CI: -0.66 to 0.12) and insufficient evidence to determine the effectiveness of problem-solving interventions on QoL. The small number of well-designed studies and narrow inclusion criteria limit the conclusions drawn from this review. However, problem-solving skills may be important for nurturing daily functioning and reducing emotional distress for adults with vision impairment. Given the empirical support for the importance of effective problem-solving skills in managing chronic illness, more well-designed RCTs are needed with diverse vision impaired samples. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
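    The pooled standardized mean changes (SMCs) reported above come from inverse-variance weighting across trials. A minimal fixed-effect sketch of that pooling is shown below; the study effects and standard errors are hypothetical values for illustration, not data from the review:

    ```python
    import math

    def pool_fixed_effect(effects, ses):
        """Inverse-variance fixed-effect pooling of standardized mean changes."""
        weights = [1.0 / se ** 2 for se in ses]          # precision of each study
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
        return pooled, ci

    # Three hypothetical trials reporting SMCs for emotional distress
    smc, ci = pool_fixed_effect([-0.40, -0.25, -0.45], [0.15, 0.20, 0.25])
    ```

    When the confidence interval excludes zero, the pooled effect is interpreted as evidence of benefit; a random-effects model would additionally widen the interval to account for between-study heterogeneity.
    
    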

  6. The Information Function for the One-Parameter Logistic Model: Is it Reliability?

    ERIC Educational Resources Information Center

    Doran, Harold C.

    2005-01-01

    The information function is an important statistic in item response theory (IRT) applications. Although the information function is often described as the IRT version of reliability, it differs from the classical notion of reliability from a critical perspective: replication. This article first explores the information function for the…
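    The contrast the article draws can be made concrete. For the one-parameter logistic (Rasch) model, item information is I(theta) = P(theta)(1 - P(theta)), so precision varies across the ability scale rather than being a single reliability coefficient; the item difficulties below are hypothetical:

    ```python
    import math

    def info_1pl(theta, b):
        """Fisher information of a 1PL (Rasch) item at ability theta, difficulty b."""
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        return p * (1.0 - p)              # maximized (0.25) where theta == b

    # Test information is the sum of item informations: a function of theta,
    # not a single number like classical reliability.
    difficulties = [-1.0, 0.0, 1.0]
    def test_info(theta):
        return sum(info_1pl(theta, b) for b in difficulties)
    ```

    Because information peaks near the item difficulties and decays in the tails, two examinees measured by the same test can be measured with very different precision, which is the crux of the distinction from a single reliability coefficient.
    
    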

  7. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. The program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver based on an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations, such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
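    For reference, the CGS iteration itself (Sonneveld's conjugate gradient squared) can be written in a few lines. The sketch below omits the ILU preconditioning that SIERRA pairs it with, and the 3x3 test system is an arbitrary nonsymmetric example, not a device matrix:

    ```python
    import numpy as np

    def cgs(A, b, x0=None, tol=1e-10, maxiter=200):
        """Unpreconditioned conjugate gradient squared for nonsymmetric systems."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        r_hat = r.copy()                   # fixed shadow residual
        u = r.copy()
        p = r.copy()
        rho = r_hat @ r
        for _ in range(maxiter):
            v = A @ p
            alpha = rho / (r_hat @ v)
            q = u - alpha * v
            x += alpha * (u + q)
            r -= alpha * (A @ (u + q))
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            rho_new = r_hat @ r
            beta = rho_new / rho
            rho = rho_new
            u = r + beta * q
            p = u + beta * (q + beta * p)
        return x

    # Small nonsymmetric, diagonally dominant test system
    A = np.array([[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = cgs(A, b)
    ```

    Like BCG, CGS handles the nonsymmetric matrices that arise from the coupled Poisson/continuity discretization; the ILU preconditioner mentioned in the abstract is what tames the convergence rate on realistic ill-conditioned device matrices.
    
    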

  8. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process under the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of a mnemonic rule and are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted and practiced to enhance the performance of the urea synthesis system of the fertilizer plant.
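    The solution approach can be sketched on the smallest possible case: a single two-state (up/down) subsystem, whose Chapman-Kolmogorov equations are integrated with classical RK4 and compared to the analytic long-run availability. The failure and repair rates below are assumed values, not the plant's:

    ```python
    import numpy as np

    lam, mu = 0.02, 0.5   # hypothetical failure and repair rates (per hour)

    def dP(t, P):
        """Chapman-Kolmogorov equations for a two-state birth-death model."""
        p_up, p_down = P
        return np.array([-lam * p_up + mu * p_down,
                          lam * p_up - mu * p_down])

    def rk4(f, P0, t_end, h):
        """Classical fourth-order Runge-Kutta integration."""
        P, t = np.array(P0, dtype=float), 0.0
        while t < t_end - 1e-12:
            k1 = f(t, P)
            k2 = f(t + h / 2, P + h / 2 * k1)
            k3 = f(t + h / 2, P + h / 2 * k2)
            k4 = f(t + h, P + h * k3)
            P = P + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return P

    availability = rk4(dP, [1.0, 0.0], t_end=100.0, h=0.1)[0]
    steady_state = mu / (lam + mu)    # analytic long-run availability
    ```

    The real system in the paper simply has more states (one per combination of working/failed subsystems), but the structure of the equations and the integration scheme are the same.
    
    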

  9. Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    PubMed Central

    Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong

    2011-01-01

    Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that the reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors: scanning time interval (TI, long-term > short-term), network membership (NM, networks excluding negative correlations > networks including negative correlations) and network type (NT, binarized networks > weighted networks). The dependence was modulated by a further factor, node definition (ND) strategy. Local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the above-mentioned four factors. Simulation analysis revealed that global network metrics were extremely sensitive (to varying degrees) to noise in functional connectivity, and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics showed high resistance to noise in functional connectivity, and no NT-related differences were found in this resistance. These findings provide important implications for choosing reliable analytical schemes and network metrics of interest. PMID:21818285

  10. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people increasingly care about solving practical problems with information technologies. Along with that, a subject called Artificial Intelligence (AI) has emerged. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A convolutional neural network is an artificial neural network which combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN provides reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the back propagation (BP) mechanism and the gradient descent (GD) method, CNNs have the ability to self-study and perform in-depth learning: BP provides backward feedback for enhancing reliability, and GD drives the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, the details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
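    Gradient descent itself is independent of the convolutional architecture; the record's core mechanism can be shown on the simplest trainable model. The sketch below runs plain gradient descent on mean-squared error for a single linear layer (synthetic data, no convolutions):

    ```python
    import numpy as np

    def gradient_descent(X, y, lr=0.1, steps=500):
        """Plain gradient descent on mean-squared error for a linear model y = X @ w."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            grad = 2.0 / len(y) * X.T @ (X @ w - y)   # gradient of the MSE loss
            w -= lr * grad                             # step along the negative gradient
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    true_w = np.array([1.5, -0.5])
    y = X @ true_w
    w = gradient_descent(X, y)
    ```

    In a CNN, back propagation computes exactly this kind of gradient layer by layer via the chain rule, and the same descent update is applied to the (shared) convolution weights.
    
    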

  11. The effect of Missouri mathematics project learning model on students’ mathematical problem solving ability

    NASA Astrophysics Data System (ADS)

    Handayani, I.; Januar, R. L.; Purwanto, S. E.

    2018-01-01

    This research aims to determine the influence of the Missouri Mathematics Project learning model on the mathematical problem-solving ability of junior high school students. This is a quantitative study using a quasi-experimental design. The research population includes all grade VII students of the junior high school enrolled in the even semester of the 2016/2017 academic year. The sample comprises 76 students in experimental and control groups, selected by cluster sampling. The instrument consists of 7 essay questions whose validity, reliability, difficulty level and discriminating power have been tested. Before the t-test analysis, the data were checked for normality and homogeneity. The results show that the Missouri Mathematics Project learning model has a medium-sized effect on the mathematical problem-solving ability of junior high school students.

  12. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    DOE PAGES

    Guthrie, Michael A.

    2013-01-01

    A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
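    In the special case of a linear limit state g = capacity - demand with independent normal variables, the Hasofer-Lind reliability index has the closed form beta = (mu_C - mu_D) / sqrt(sigma_C^2 + sigma_D^2). The numbers below are hypothetical stand-ins for a modal-strain-energy capacity and a peak environmental demand, not values from the report:

    ```python
    import math

    def hasofer_lind_linear(mu_cap, sd_cap, mu_dem, sd_dem):
        """Reliability index for a linear limit state g = capacity - demand
        with independent normal variables (closed form for this special case)."""
        return (mu_cap - mu_dem) / math.sqrt(sd_cap ** 2 + sd_dem ** 2)

    beta = hasofer_lind_linear(mu_cap=10.0, sd_cap=1.0, mu_dem=6.0, sd_dem=1.5)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))   # implied failure probability Phi(-beta)
    ```

    For nonlinear limit states or non-normal variables, beta is instead found by the iterative FORM search for the most probable failure point, but the interpretation (distance from the mean to the failure surface in standard-normal units) is the same, which is why it is cheap to compare across candidate designs.
    
    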

  13. Method for solving the problem of nonlinear heating a cylindrical body with unknown initial temperature

    NASA Astrophysics Data System (ADS)

    Yaparova, N.

    2017-10-01

    We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface into the cylinder, but it is impossible to measure the temperature on the axis or the initial temperature in the entire body. This problem is associated with the challenge of temperature measurement and appears in non-destructive testing, in thermal monitoring of heat treatment and in technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both Dirichlet and Neumann boundary conditions are given, and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder, from the surface down to the axis. The regularization technique ensures the stability of the scheme and significantly simplifies the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and on the error level of the measurement results. To obtain experimental estimates of the temperature error, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.

  14. Analytical study of the liquid phase transient behavior of a high temperature heat pipe. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Roche, Gregory Lawrence

    1988-01-01

    The transient operation of the liquid phase of a high temperature heat pipe is studied. The study was conducted in support of advanced heat pipe applications that require reliable heat transport at high temperatures, over significant distances, under a broad spectrum of operating conditions. The heat pipe configuration studied consists of a sealed cylindrical enclosure containing a capillary wick structure and a sodium working fluid. The wick is an annular flow channel formed between the interior wall of the enclosure and a concentric cylindrical tube of fine-pore screen. The study approach is analytical, through the solution of the governing equations. The energy equation is solved over the pipe wall and liquid region using the finite-difference Peaceman-Rachford alternating-direction implicit numerical method. The continuity and momentum equations are solved over the liquid region by the integral method. The energy equation and the liquid dynamics equation are tightly coupled through the phase change process at the liquid-vapor interface. A kinetic theory model is used to define the phase change process in terms of the temperature jump between the liquid-vapor surface and the bulk vapor. Extensive auxiliary relations, including sodium properties as functions of temperature, are used to close the analytical system. The solution procedure is implemented in a FORTRAN algorithm with some optimization features to take advantage of the vectorization facility of the IBM System/370 Model 3090. The code was intended for coupling to a vapor phase algorithm so that the entire heat pipe problem could be solved. As a test of the code's capabilities, the vapor phase was approximated in a simple manner.

  15. Exploring how students think: a new method combining think-aloud and concept mapping protocols.

    PubMed

    Pottier, Pierre; Hardouin, Jean-Benoit; Hodges, Brian D; Pistorius, Marc-Antoine; Connault, Jérome; Durant, Cécile; Clairand, Renaud; Sebille, Véronique; Barrier, Jacques-Henri; Planchon, Bernard

    2010-09-01

    A key element of medical competence is problem solving. Previous work has shown that doctors use inductive reasoning to progress from facts to hypotheses and deductive reasoning to move from hypotheses to the gathering of confirmatory information. No individual assessment method has been designed to quantify the use of inductive and deductive procedures within clinical reasoning. The aim of this study was to explore the feasibility and reliability of a new method which allows for the rapid identification of the style (inductive or deductive) of clinical reasoning in medical students and experts. The study included four groups of four participants. These comprised groups of medical students in Years 3, 4 and 5 and a group of specialists in internal medicine, all at a medical school with a 6-year curriculum in France. Participants were asked to solve four clinical problems by thinking aloud. The thinking expressed aloud was immediately transcribed into concept maps by one or two 'writers' trained to distinguish inductive and deductive links. Reliability was assessed by estimating the inter-writer correlation. The calculated rate of inductive reasoning, the richness score and the rate of exhaustiveness of reasoning were compared according to the level of expertise of the individual and the type of clinical problem. The total number of maps drawn amounted to 32 for students in Year 4, 32 for students in Year 5, 16 for students in Year 3 and 16 for experts. A positive correlation was found between writers (R = 0.66-0.93). Richness scores and rates of exhaustiveness of reasoning did not differ according to expertise level. The rate of inductive reasoning varied as expected according to the nature of the clinical problem and was lower in experts (41% versus 67%). This new method showed good reliability and may be a promising tool for the assessment of medical problem-solving skills, giving teachers a means of diagnosing how their students think when they are confronted with clinical problems.

  16. The use of Galerkin finite-element methods to solve mass-transport equations

    USGS Publications Warehouse

    Grove, David B.

    1977-01-01

    The partial differential equation that describes the transport and reaction of chemical solutes in porous media was solved using the Galerkin finite-element technique. The finite elements were superimposed over the finite-difference cells used to solve the flow equation. Both convection and flow due to hydraulic dispersion were considered. Linear and Hermite cubic approximations (basis functions) provided satisfactory results; however, the linear functions were computationally more efficient for two-dimensional problems. Successive over-relaxation (SOR) and iteration techniques using Chebyshev polynomials were used to solve the sparse matrices generated using the linear and Hermite cubic functions, respectively. Comparisons of the finite-element methods to finite-difference methods, and to analytical results, indicated that a high degree of accuracy may be obtained using the method outlined. The technique was applied to a field problem involving an aquifer contaminated with chloride, tritium, and strontium-90. (Woodard-USGS)
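    The Galerkin linear-element discretization can be shown on a stripped-down one-dimensional analogue of the transport equation. The sketch below assembles linear elements for steady advection-dispersion without reaction, -D u'' + v u' = 0 on [0, 1] with u(0) = 0, u(1) = 1, and checks the nodal solution against the analytic one; the coefficients are arbitrary illustrative values:

    ```python
    import numpy as np

    def galerkin_adv_disp(D=0.05, v=1.0, n=100):
        """Galerkin linear elements for -D u'' + v u' = 0, u(0)=0, u(1)=1.
        For linear elements this yields the tridiagonal stencil
        (-D/h - v/2) u_{i-1} + (2D/h) u_i + (-D/h + v/2) u_{i+1} = 0."""
        h = 1.0 / n
        lower, diag, upper = -D / h - v / 2, 2 * D / h, -D / h + v / 2
        A = np.zeros((n - 1, n - 1))
        rhs = np.zeros(n - 1)
        for i in range(n - 1):
            A[i, i] = diag
            if i > 0:
                A[i, i - 1] = lower
            if i < n - 2:
                A[i, i + 1] = upper
        rhs[-1] = -upper * 1.0          # Dirichlet value u(1)=1 moved to the RHS
        u = np.zeros(n + 1)
        u[-1] = 1.0
        u[1:-1] = np.linalg.solve(A, rhs)
        return u

    u = galerkin_adv_disp()
    x = np.linspace(0.0, 1.0, 101)
    exact = (np.exp(20.0 * x) - 1.0) / (np.exp(20.0) - 1.0)   # Peclet v/D = 20
    ```

    The dense solve stands in for the SOR/Chebyshev iterations of the paper, which exploit the sparsity that a realistic two-dimensional mesh produces; the stencil itself is what the Galerkin weighting of linear basis functions generates.
    
    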

  17. Problem-solving style and adaptation in breast cancer survivors: a prospective analysis.

    PubMed

    Heppner, P Paul; Armer, Jane M; Mallinckrodt, Brent

    2009-06-01

    Emotional care of the breast cancer patient is not well understood; this lack of understanding results in a high cost to both the patient and the health care system. This study examined the role of problem-solving style as a predictor of emotional distress, adjustment to breast cancer, and physical function immediately post-surgery and 12 months later. The sample consisted of 121 women diagnosed with breast cancer and undergoing surgery as a primary treatment. The survivors completed a measure of problem-solving style and three outcome measures immediately post-surgery and again 1 year later, with a 95.6% retention rate. Multiple hierarchical regressions revealed that, after controlling for patient demographics and stage of cancer, problem-solving style (particularly personal control) was associated with emotional distress, adjustment to chronic illness, and physical function immediately following surgical intervention. In addition, a more positive problem-solving style was associated with less emotional distress, but not with better adaptation to chronic illness or physical functioning 12 months later; personal control was again the best single predictor of emotional distress, adding 10% to the variance explained for this outcome. Post-surgery assessment may help identify those in need of problem-solving training to improve these outcomes at 1 year. Future studies need to determine the impact of interventions tailored to levels of problem-solving style in cancer survivors over time. Understanding the role of problem-solving style in breast cancer survivors deserves attention, as it is associated with emotional distress both immediately and one year after medical intervention. Problem-solving style should be evaluated early, and interventions established for those most at risk of emotional distress.

  18. Pre-University Tuition in Science and Technology Can Influence Executive Functions

    ERIC Educational Resources Information Center

    Méndez, Marta; Arias, Natalia; Menéndez, José R.; Villar, José R.; Neira, Ángel; Romano, Pedro V.; Núñez, José Carlos; Arias, Jorge L.

    2014-01-01

    Introduction: Scientific and technological areas include tuition based on highly visuo-spatial specialization and problem solving. Spatial skills and problem solving are embedded in a curriculum that promotes understanding of Science and technical subjects. These abilities are related to the development of executive functions (EFs). We aim to…

  19. Toward reliable characterization of functional homogeneity in the human brain: Preprocessing, scan duration, imaging resolution and computational space

    PubMed Central

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.

    2013-01-01

While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures have been proposed, among which regional homogeneity (ReHo) is the most widely used measure to characterize the functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in neuroimaging studies of brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated the TRT reliability of ReHo, investigated the various factors influencing its reliability, and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo, while additional removal of the global brain signal reduces its reliability; 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability; 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo; 4) a scan duration of 5 min can achieve reliable estimates of ReHo; and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both three dimensions (volume) and two dimensions (surface). PMID:23085497

  20. Impaired memory for material related to a problem solved prior to encoding: suppression at learning or interference at recall?

    PubMed

    Kowalczyk, Marek

    2017-07-01

    Earlier research by the author revealed that material encoded incidentally in a speeded affective classification task and related to the demands of a divergent problem tends to be recalled worse in participants who solved the problem prior to encoding than in participants in the control, no-problem condition. The aim of the present experiment was to replicate this effect with a new, size-comparison orienting task, and to test for possible mechanisms of impaired recall. Participants either solved a problem before the orienting task or not, and classified each item in this task either once or three times. There was a reliable effect of impaired recall of problem-related items in the repetition condition, but not in the no-repetition condition. Solving the problem did not influence repetition priming for these items. These results support an account that attributes the impaired recall to inhibitory processes at learning and speak against a proactive interference explanation. However, they can be also accommodated by an account that refers to inefficient context cues and competitor interference at retrieval.

  1. Broad-band Lg Attenuation Tomography in Eastern Eurasia and the Resolution, Uncertainty and Data Prediction

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Xu, X.

    2017-12-01

The broad-band Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q values are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q values are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel time residuals). We thus rigorously analyze the statistics of these errors collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz and reject about 15% of the data as outliers at each frequency band. The resultant variance of Δ/Q decreases with frequency as 1/f^2. The 1/Q tomography using screened data is then a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and whose model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but their path coverage is rather sparse owing to very strict recording geometry. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of TS to make those data free of site effects as well. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed by spread functions (derived from the resolution matrix) and the covariance matrix.
The reliably retrieved Q models correlate well with the distinct tectonic blocks featured by the most recent major deformations and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.

  2. Temporal reliability and lateralization of the resting-state language network.

    PubMed

    Zhu, Linlin; Fan, Yang; Zou, Qihong; Wang, Jue; Gao, Jia-Hong; Niu, Zhendong

    2014-01-01

The neural processing loop of language is complex but highly associated with Broca's and Wernicke's areas. The left dominance of these two areas was the earliest observation of brain asymmetry. It has been demonstrated that the language network and its functional asymmetry during resting state are reproducible across institutions. However, little is known about the temporal reliability of the resting-state language network and its functional asymmetry. In this study, we established a seed-based resting-state functional connectivity analysis of the language network with seed regions located at Broca's and Wernicke's areas, and investigated the temporal reliability of the language network and its functional asymmetry. The language network was found to be temporally reliable in both the short and long term. With respect to functional asymmetry, Broca's area was found to be left lateralized, while Wernicke's area was mainly right lateralized. Functional asymmetry of these two areas showed high short- and long-term reliability as well. In addition, the impact of global signal regression (GSR) on the reliability of the resting-state language network was investigated, and our results demonstrated that GSR had a negligible effect on the temporal reliability of the resting-state language network. Our study provides a methodological basis for future cross-cultural and clinical research on the resting-state language network and suggests prioritizing seed-based functional connectivity for its high reliability.

  3. Temporal Reliability and Lateralization of the Resting-State Language Network

    PubMed Central

    Zou, Qihong; Wang, Jue; Gao, Jia-Hong; Niu, Zhendong

    2014-01-01

The neural processing loop of language is complex but highly associated with Broca's and Wernicke's areas. The left dominance of these two areas was the earliest observation of brain asymmetry. It has been demonstrated that the language network and its functional asymmetry during resting state are reproducible across institutions. However, little is known about the temporal reliability of the resting-state language network and its functional asymmetry. In this study, we established a seed-based resting-state functional connectivity analysis of the language network with seed regions located at Broca's and Wernicke's areas, and investigated the temporal reliability of the language network and its functional asymmetry. The language network was found to be temporally reliable in both the short and long term. With respect to functional asymmetry, Broca's area was found to be left lateralized, while Wernicke's area was mainly right lateralized. Functional asymmetry of these two areas showed high short- and long-term reliability as well. In addition, the impact of global signal regression (GSR) on the reliability of the resting-state language network was investigated, and our results demonstrated that GSR had a negligible effect on the temporal reliability of the resting-state language network. Our study provides a methodological basis for future cross-cultural and clinical research on the resting-state language network and suggests prioritizing seed-based functional connectivity for its high reliability. PMID:24475058

  4. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. To date, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such situations rarely occur. Therefore, in the present paper, a new algorithm is introduced to construct the membership and non-membership functions of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of fuzzy reliability are constructed for a series system and a parallel system. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
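To make the interval flavor of such fuzzy-reliability constructions concrete, here is a hypothetical, much-simplified sketch (not the paper's nonlinear-programming algorithm): components carry triangular fuzzy reliabilities, and at each alpha level the series-system reliability interval is the product of the component alpha-cut intervals. All names and numbers are illustrative.

```python
# Toy alpha-cut sketch of fuzzy series-system reliability.
# A triangular fuzzy number is (a, b, c): support [a, c], mode b.

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def series_interval(components, alpha):
    """Series-system reliability interval: product of component intervals."""
    lo, hi = 1.0, 1.0
    for tri in components:
        l, u = alpha_cut(tri, alpha)
        lo *= l
        hi *= u
    return (lo, hi)

# Two components with made-up triangular fuzzy reliabilities.
comps = [(0.90, 0.95, 0.99), (0.85, 0.90, 0.96)]
print(series_interval(comps, 1.0))  # degenerate interval at the modes
print(series_interval(comps, 0.0))  # widest interval, at the supports
```

Sweeping alpha from 0 to 1 traces out the membership function of the system reliability; the paper's contribution is doing this rigorously when components follow different types of intuitionistic fuzzy numbers.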

  5. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    PubMed

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7.
Our study also showed that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT∕CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
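The Map/Reduce decomposition this record describes can be sketched in a few lines: each Map task backprojects a subset of projections into a partial volume, and the Reduce step sums the partial volumes voxel-wise. This toy illustration (a 1D "volume", invented names, no actual filtering or FDK geometry) only demonstrates the aggregation pattern, not the paper's reconstruction code.

```python
from functools import reduce

def map_backproject(projection_subset, volume_size=4):
    """Stand-in for a Map task: backproject one subset into a partial volume."""
    partial = [0.0] * volume_size
    for p in projection_subset:
        for i in range(volume_size):
            partial[i] += p / volume_size  # smear each projection value
    return partial

def reduce_volumes(vol_a, vol_b):
    """Reduce step: aggregate two partial backprojections voxel-wise."""
    return [a + b for a, b in zip(vol_a, vol_b)]

# Split projections across "nodes", map each subset, then reduce.
projections = [1.0, 2.0, 3.0, 4.0]
subsets = [projections[:2], projections[2:]]
partials = [map_backproject(s) for s in subsets]
volume = reduce(reduce_volumes, partials)
print(volume)  # -> [2.5, 2.5, 2.5, 2.5]
```

Because addition is associative and commutative, the Reduce step gives the same volume regardless of how projections are partitioned across nodes, which is what makes the parallelization (and the fault tolerance of re-executed tasks) transparent.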

  6. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^−7.
Our study also showed that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842

  7. Design optimization under uncertainty and speed variability for a piezoelectric energy harvester powering a tire pressure monitoring sensor

    NASA Astrophysics Data System (ADS)

    Toghi Eshghi, Amin; Lee, Soobum; Kazem Sadoughi, Mohammad; Hu, Chao; Kim, Young-Cheol; Seo, Jong-Ho

    2017-10-01

Energy harvesting (EH) technologies to power small-sized electronic devices are attracting great attention. Wasted energy in a vehicle's rotating tire has great potential to enable self-powered tire pressure monitoring sensors (TPMS). Piezoelectric energy harvesters can be used to collect vibrational energy and power such systems. Because of the harsh acceleration in a rotating tire, a design tradeoff needs to be studied to prolong the harvester's fatigue life while ensuring sufficient power generation. However, designs produced by traditional deterministic design optimization (DDO) do not perform reliably because they neglect various uncertainty factors (e.g., manufacturing tolerances, material properties, and loading conditions). In this study, we address a new EH design formulation that considers uncertainty in car speed, dimensional tolerances, and material properties, and solve this design problem using reliability-based design optimization (RBDO). The RBDO problem is formulated to maximize compactness and minimize weight of a TPMS harvester while satisfying power and durability requirements. A transient analysis has been conducted to measure the time-varying response of the EH, such as power generation, dynamic strain, and stress. A conservative design formulation is proposed to account for the expected power over varied speeds and the stress at higher speeds. Compared with DDO, the RBDO results show that the reliability of the EH is increased significantly by sacrificing some of the objective function value. Finally, experimental tests have been conducted to demonstrate the merits of the RBDO design over DDO.

  8. Working with low back pain: problem-solving orientation and function.

    PubMed

    Shaw, W S; Feuerstein, M; Haufler, A J; Berkowitz, S M; Lopez, M S

    2001-08-01

    A number of ergonomic, workplace and individual psychosocial factors and health behaviors have been associated with the onset, exacerbation and/or maintenance of low back pain (LBP). The functional impact of these factors may be influenced by how a worker approaches problems in general. The present study was conducted to determine whether problem-solving orientation was associated with physical and mental health outcomes in fully employed workers (soldiers) reporting a history of LBP in the past year. The sample consisted of 475 soldiers (446 male, 29 female; mean age 24.5 years) who worked in jobs identified as high risk for LBP-related disability and reported LBP symptoms in the past 12 months. The Social Problem-Solving Inventory and the Standard Form-12 (SF-12) were completed by all subjects. Hierarchical multiple regression analyses were used to predict the SF-12 physical health summary scale from interactions of LBP symptoms with each of five problem-solving subscales. Low scores on positive problem-solving orientation (F(1,457)=4.49), and high scores on impulsivity/carelessness (F(1,457)=9.11) were associated with a steeper gradient in functional loss related to LBP. Among those with a longer history of low-grade LBP, an avoidant approach to problem-solving was also associated with a steeper gradient of functional loss (three-way interaction; F(1,458)=4.58). These results suggest that the prolonged impact of LBP on daily function may be reduced by assisting affected workers to conceptualize LBP as a problem that can be overcome and using strategies that promote taking an active role in reducing risks for LBP. Secondary prevention efforts may be improved by addressing these factors.

  9. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

The statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
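A minimal sketch of the idea in this record, under assumptions of my own (the record gives no parameters): take a normal lifetime distribution truncated at t = 0, and read off reliability as the conditional survival probability. The mean and standard deviation below are illustrative only.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_normal_reliability(t, mu, sigma):
    """R(t) = P(T > t | T > 0) for T ~ Normal(mu, sigma) truncated at 0."""
    if t <= 0.0:
        return 1.0
    # Renormalize the survival function by the mass above the truncation point.
    return (1.0 - phi((t - mu) / sigma)) / (1.0 - phi((0.0 - mu) / sigma))

# Hypothetical equipment: mean life 100 h, sigma 30 h.
print(truncated_normal_reliability(t=50.0, mu=100.0, sigma=30.0))
```

The truncation matters when sigma is large relative to mu: an untruncated normal would assign probability to negative lifetimes, while the truncated model keeps R(0) = 1 exactly, giving the age-dependent behavior the abstract refers to.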

  10. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    NASA Astrophysics Data System (ADS)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain number of experimental data and expert judgments, we divide reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  11. Influences on the Test-Retest Reliability of Functional Connectivity MRI and its Relationship with Behavioral Utility.

    PubMed

    Noble, Stephanie; Spann, Marisa N; Tokoglu, Fuyuze; Shen, Xilin; Constable, R Todd; Scheinost, Dustin

    2017-11-01

    Best practices are currently being developed for the acquisition and processing of resting-state magnetic resonance imaging data used to estimate brain functional organization-or "functional connectivity." Standards have been proposed based on test-retest reliability, but open questions remain. These include how amount of data per subject influences whole-brain reliability, the influence of increasing runs versus sessions, the spatial distribution of reliability, the reliability of multivariate methods, and, crucially, how reliability maps onto prediction of behavior. We collected a dataset of 12 extensively sampled individuals (144 min data each across 2 identically configured scanners) to assess test-retest reliability of whole-brain connectivity within the generalizability theory framework. We used Human Connectome Project data to replicate these analyses and relate reliability to behavioral prediction. Overall, the historical 5-min scan produced poor reliability averaged across connections. Increasing the number of sessions was more beneficial than increasing runs. Reliability was lowest for subcortical connections and highest for within-network cortical connections. Multivariate reliability was greater than univariate. Finally, reliability could not be used to improve prediction; these findings are among the first to underscore this distinction for functional connectivity. A comprehensive understanding of test-retest reliability, including its limitations, supports the development of best practices in the field. © The Author 2017. Published by Oxford University Press.

  12. Measuring Problem Solving Skills in "Portal 2"

    ERIC Educational Resources Information Center

    Shute, Valerie J.; Wang, Lubin

    2013-01-01

    This paper examines possible improvement to problem solving skills as a function of playing the video game "Portal 2." Stealth assessment is used in the game to evaluate students' problem solving abilities--specifically basic and flexible rule application. The stealth assessment measures will be validated against commonly accepted…

  13. Estimation of Reliability Coefficients Using the Test Information Function and Its Modifications.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1994-01-01

    The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)

  14. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934

  15. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.
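The parallel-serial-parallel composition described in the two records above can be sketched with classical reliability algebra. This toy illustration assumes exponential node lifetimes parameterized by MTTF and invented counts and values throughout; the papers' game-theoretic CTMC derivation of MTTF goes well beyond this.

```python
import math

def node_reliability(t, mttf):
    """R(t) = exp(-t / MTTF) for an exponential lifetime."""
    return math.exp(-t / mttf)

def series(rs):
    """Series composition: every element must survive."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """Parallel (redundant) composition: at least one element survives."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

t = 100.0                                              # mission time (h)
cluster = parallel([node_reliability(t, 500.0)] * 4)   # 4 nodes per cluster
route = series([cluster] * 3)                          # 3 clusters per route
wsn = parallel([route] * 2)                            # 2 redundant routes
print(cluster, route, wsn)
```

Note the structure mirrors the abstract: redundancy within a cluster and across routes raises reliability (parallel), while chaining clusters along a route lowers it (series).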

  16. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

A problem appears in a company that produces refined sugar: the production floor has not reached the required level of critical machine availability because the machines often suffer damage (breakdowns). This results in sudden losses of production time and production opportunities. The problem can be addressed by the Reliability Engineering method, in which a statistical approach to historical damage data is used to identify the pattern of the failure distribution. The method can provide values for the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. The distribution test on the time-between-failures (MTTF) data gives a lognormal distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component, while the test on the mean-time-to-repair (MTTR) data gives an exponential distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component. On a replacement schedule of every 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960, while on a replacement schedule of every 1944 hours, the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
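The two quantities this record reports can be computed directly once the distribution is fitted: Weibull reliability at the replacement interval, and steady-state availability from MTTF and MTTR. The shape/scale/MTTF/MTTR values below are hypothetical, not the paper's fitted parameters.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta) for a Weibull lifetime (shape beta, scale eta)."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# Illustrative numbers only: reliability at a 720 h replacement interval,
# and availability for a component that repairs quickly relative to MTTF.
r = weibull_reliability(t=720.0, beta=1.5, eta=600.0)
a = availability(mttf=650.0, mttr=2.6)
print(r, a)
```

This also shows why the reported availabilities (~0.99) can coexist with low reliabilities (~0.25–0.41): availability only depends on the ratio of repair time to uptime, while reliability asks for survival over the whole interval without any failure.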

  17. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
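For readers unfamiliar with the fault-tree notation this record mentions, here is a minimal static sketch of how AND/OR gates combine component failure probabilities (assuming independence). It illustrates only the notation; HARP's actual contribution is converting such trees, with sequence-dependency gates, into a Markov chain and solving it.

```python
def gate_and(ps):
    """AND gate: the gate fails only if all inputs fail."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(ps):
    """OR gate: the gate fails if any input fails (independence assumed)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical system: fails if (A AND B) fail, or spare C fails.
p_system = gate_or([gate_and([0.01, 0.02]), 0.005])
print(p_system)
```

A static evaluation like this cannot capture sequence dependence (e.g., "B only matters if A failed first"), which is exactly why HARP reduces such trees to a stochastic process instead.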

  18. Cognitive functioning and employment among people with schizophrenia in vocational rehabilitation.

    PubMed

    Lexén, Annika; Hofgren, Caisa; Stenmark, Richard; Bejerholm, Ulrika

    2016-06-16

Employment is central to recovery in schizophrenia, but little attention has been paid to its relationship with cognitive functioning. This cross-sectional study adds to the knowledge base of relationships between cognitive functioning and gaining competitive employment, work hours per week, and monthly income among people with schizophrenia in vocational rehabilitation. It also examines which area of cognitive function may be decisive for gaining employment. Thirty-nine vocational rehabilitation participants were administered a cognitive battery based on the MATRICS Consensus Cognitive Battery. Socio-demographic, clinical, and vocational data were gathered and analyzed with nonparametric statistics. Individuals with competitive employment differed from those without competitive employment in attention and psychomotor speed, delayed verbal recall, immediate visual recall, and planning, reasoning, and problem-solving. Higher scores in immediate and delayed verbal recall and in planning, reasoning, and problem-solving correlated with more work hours per week and higher income. Immediate visual recall was related to higher income. Higher scores in planning, reasoning, and problem-solving were an indicator of competitive employment (OR = 1.48). The higher-order cognitive functions of planning, reasoning, and problem-solving may have a central role in gaining employment. The findings should be considered when compensating for or improving cognitive functions in vocational rehabilitation participants.

  19. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting point and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
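    The idea of substituting a cheap piecewise-linear surrogate for an expensive recourse function can be sketched as a control-variate estimator. The functions f and g below are invented stand-ins, not the dissertation's actual models or its importance-sampling scheme.

```python
import math
import random

random.seed(0)

def f(x):   # stand-in for a computationally expensive recourse function
    return abs(x - 0.3) + 0.1 * math.sin(25 * x)

def g(x):   # cheap piecewise-linear approximation of f
    return abs(x - 0.3)

# Control-variate estimator of E[f(U)], U ~ Uniform(0, 1):
# E[f] = E[f - g] + E[g]; the first term needs few expensive evaluations
# because f - g has small variance, the second is cheap to sample heavily.
xs = [random.random() for _ in range(200)]          # few expensive evaluations
diff = sum(f(x) - g(x) for x in xs) / len(xs)
cheap = [random.random() for _ in range(100_000)]   # many cheap evaluations
eg = sum(g(x) for x in cheap) / len(cheap)
estimate = diff + eg                                 # E[f] is about 0.29 here
```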

  20. Functional fixedness in a technologically sparse culture.

    PubMed

    German, Tim P; Barrett, H Clark

    2005-01-01

    Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object's typical function has been primed. Subjects become "fixed" on the design function of the object, and problem solving suffers relative to control conditions in which the object's function is not demonstrated. In the current study, such functional fixedness was demonstrated in a sample of adolescents (mean age of 16 years) among the Shuar of Ecuadorian Amazonia, whose technologically sparse culture provides limited access to large numbers of artifacts with highly specialized functions. This result suggests that design function may universally be the core property of artifact concepts in human semantic memory.

  1. Unconditional security from noisy quantum storage

    NASA Astrophysics Data System (ADS)

    Wehner, Stephanie

    2010-03-01

    We consider the implementation of two-party cryptographic primitives based on the sole physical assumption that no large-scale reliable quantum storage is available to the cheating party. An important example of such a task is secure identification. Here, Alice wants to identify herself to Bob (possibly an ATM) without revealing her password. More generally, Alice and Bob wish to solve problems where Alice holds an input x (e.g. her password), and Bob holds an input y (e.g. the password an honest Alice should possess), and they want to obtain the value of some function f(x,y) (e.g. the equality function). Security means that the legitimate users should not learn anything beyond this specification. That is, Alice should not learn anything about y and Bob should not learn anything about x, other than what they may be able to infer from the value of f(x,y). We show that any such problem can be solved securely in the noisy-storage model by constructing protocols for bit commitment and oblivious transfer, where we prove security against the most general attack. Our protocols can be implemented with present-day hardware used for quantum key distribution. In particular, no quantum storage is required for the honest parties. Our work raises a large number of immediate theoretical as well as experimental questions related to many aspects of quantum information science, such as understanding the information-carrying properties of quantum channels and memories, randomness extraction, min-entropy sampling, and constructing small handheld devices suitable for the task of secure identification. The full version is available at arXiv:0906.1030 (theoretical) and arXiv:0911.2302 (practically oriented).

  2. Pre-Service Teacher Scientific Behavior: Comparative Study of Paired Science Project Assignments

    ERIC Educational Resources Information Center

    Bulunuz, Mizrap; Tapan Broutin, Menekse Seden; Bulunuz, Nermin

    2016-01-01

    Problem Statement: University students usually lack the skills to rigorously define a multi-dimensional real-life problem and its limitations in an explicit, clear and testable way, which prevents them from forming a reliable method, obtaining relevant results and making balanced judgments to solve a problem. Purpose of the Study: The study…

  3. Validation of a Performance Assessment Instrument in Problem-Based Learning Tutorials Using Two Cohorts of Medical Students

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2016-01-01

    Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined reliability and validity for an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism.…

  4. Can People Recollect Well and Change Their Source Memory Bias of "Aha!" Experiences?

    ERIC Educational Resources Information Center

    Du, Xiumin; Zhang, Ke; Wang, Jiali; Luo, Junlong; Luo, Jing

    2017-01-01

    Although many scientific discoveries were frequently reported as kinds of insightful breakthrough that suddenly illuminated in one's mind, we can never exactly know whether these afterward reports were reliable or not. In this study, subjects were asked to solve a list of Remote Associate Test problems and got both subsets of the insightfully and…

  5. Complex Langevin method: When can it be trusted?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aarts, Gert; Seiler, Erhard; Stamatescu, Ion-Olimpiu

    2010-03-01

    We analyze to what extent the complex Langevin method, which is in principle capable of solving the so-called sign problems, can be considered as reliable. We give a formal derivation of the correctness and then point out various mathematical loopholes. The detailed study of some simple examples leads to practical suggestions about the application of the method.

  6. An Examination of English Speaking Tests and Research on English Speaking Ability.

    ERIC Educational Resources Information Center

    Nakamura, Yuji

    This paper examines both overseas and domestic tests of English speaking ability from the viewpoint of the crucial testing elements such as definition of speaking ability, validity, reliability, and practicality. The paper points out problems to be solved and proposes suggestions for constructing an oral proficiency test in order to determine the…

  7. Development and Validation of an Instrument for Assessing Attitudes of High School Students about Recycling

    ERIC Educational Resources Information Center

    Ugulu, Ilker

    2015-01-01

    Recycling and its applications are growing significantly due to the great potential for solving a range of environmental problems in society. Nevertheless, there are currently very few instruments that can provide valid and reliable data on students' attitudes toward recycling. In this regard, this article focuses on the development and validation…

  8. ExM:System Support for Extreme-Scale, Many-Task Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S

    The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem-solving methods and application classes. In this document, we report on the combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at the University of Chicago, over the first 8 months (through April 30, 2011).

  9. The Evolving Contribution of Mass Spectrometry to Integrative Structural Biology

    NASA Astrophysics Data System (ADS)

    Faini, Marco; Stengel, Florian; Aebersold, Ruedi

    2016-06-01

    Protein complexes are key catalysts and regulators for the majority of cellular processes. Unveiling their assembly and structure is essential to understanding their function and mechanism of action. Although conventional structural techniques such as X-ray crystallography and NMR have solved the structure of important protein complexes, they cannot consistently deal with dynamic and heterogeneous assemblies, limiting their applications to small scale experiments. A novel methodological paradigm, integrative structural biology, aims at overcoming such limitations by combining complementary data sources into a comprehensive structural model. Recent applications have shown that a range of mass spectrometry (MS) techniques are able to generate interaction and spatial restraints (cross-linking MS) information on native complexes or to study the stoichiometry and connectivity of entire assemblies (native MS) rapidly, reliably, and from small amounts of substrate. Although these techniques by themselves do not solve structures, they do provide invaluable structural information and are thus ideally suited to contribute to integrative modeling efforts. The group of Brian Chait has made seminal contributions in the use of mass spectrometric techniques to study protein complexes. In this perspective, we honor the contributions of the Chait group and discuss concepts and milestones of integrative structural biology. We also review recent examples of integration of structural MS techniques with an emphasis on cross-linking MS. We then speculate on future MS applications that would unravel the dynamic nature of protein complexes upon diverse cellular states.

  10. Reliability of a functional test battery evaluating functionality, proprioception, and strength in recreational athletes with functional ankle instability.

    PubMed

    Sekir, U; Yildiz, Y; Hazneci, B; Ors, F; Saka, T; Aydin, T

    2008-12-01

    In contrast to the single evaluation methods used in the past, the combination of multiple tests allows one to obtain a global assessment of the ankle joint. The aim of this study was to determine the reliability of the different tests in a functional test battery. Twenty-four male recreational athletes with unilateral functional ankle instability (FAI) were recruited for this study. One component of the test battery included five different functional ability tests: a single-limb hopping course, single-legged and triple-legged hops for distance, and six-meter and cross six-meter hops for time. The ankle joint position sense and one-leg standing tests were used to evaluate proprioception and sensorimotor control. The isokinetic strengths of the ankle invertor and evertor muscles were evaluated at a velocity of 120 degrees/s. The reliability of the test battery was assessed by calculating the intraclass correlation coefficient (ICC). Each subject was tested twice, with an interval of 3-5 days between the test sessions. The ICCs for ankle functional and proprioceptive ability showed high reliability (ICCs ranging from 0.94 to 0.98). Additionally, isokinetic ankle joint inversion and eversion strength measurements showed good to high reliability (ICCs between 0.82 and 0.98). The functional test battery investigated in this study proved to be a reliable tool for the assessment of athletes with functional ankle instability. Therefore, clinicians may obtain reliable information from the functional test battery during the assessment of ankle joint performance in patients with functional ankle instability.
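    A minimal sketch of an ICC computation for such two-session test-retest designs. This is ICC(3,1) from a two-way decomposition; the abstract does not state which ICC form was used, and the scores below are invented.

```python
def icc_3_1(sessions):
    # sessions: list of k score lists (e.g. test, retest), one score per subject
    k = len(sessions)
    n = len(sessions[0])
    subj_means = [sum(s[i] for s in sessions) / k for i in range(n)]
    sess_means = [sum(s) / n for s in sessions]
    grand = sum(subj_means) / n
    # between-subjects mean square
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    # residual (subject x session interaction) mean square
    sse = sum((sessions[j][i] - subj_means[i] - sess_means[j] + grand) ** 2
              for j in range(k) for i in range(n))
    mse = sse / ((n - 1) * (k - 1))
    return (msb - mse) / (msb + (k - 1) * mse)

test_scores   = [10.2, 12.5, 9.8, 14.1, 11.0, 13.3]   # hypothetical session 1
retest_scores = [10.4, 12.1, 9.9, 14.4, 10.8, 13.6]   # hypothetical session 2
icc = icc_3_1([test_scores, retest_scores])
```

    Small within-subject changes relative to the between-subject spread give an ICC near 1, matching the "high reliability" range reported above.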

  11. Toward reliable characterization of functional homogeneity in the human brain: preprocessing, scan duration, imaging resolution and computational space.

    PubMed

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F Xavier; Milham, Michael P

    2013-01-15

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures have been proposed, among which regional homogeneity (ReHo) is the most widely used measure of the functional homogeneity of resting-state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test-retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo's TRT reliability and investigated the various factors influencing it. We found that: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo, while additional removal of the global brain signal reduces its reliability; 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability; 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo; 4) a scan duration of 5 min can achieve reliable estimates of ReHo; and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings, and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both three dimensions (volume) and two dimensions (surface).
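    ReHo quantifies how concordantly a voxel's time series ranks with those of its neighbors, using Kendall's coefficient of concordance (W). A minimal sketch, with plain lists standing in for voxel time series and no tie correction:

```python
def kendalls_w(series):
    # series: list of m time series (voxel plus neighbors), each of length n.
    # Assumes no tied values within a series (no tie correction applied).
    m, n = len(series), len(series[0])
    ranks = []
    for ts in series:
        order = sorted(range(n), key=lambda t: ts[t])
        r = [0] * n
        for rank, t in enumerate(order, start=1):
            r[t] = rank
        ranks.append(r)
    # W compares the spread of rank sums to its maximum possible value
    rank_sums = [sum(r[t] for r in ranks) for t in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

w_identical  = kendalls_w([[1, 2, 3, 4]] * 3)       # perfectly concordant -> 1
w_discordant = kendalls_w([[1, 2, 3, 4],
                           [4, 3, 2, 1]])           # perfectly opposed -> 0
```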

  12. Families Affected by Huntington's Disease Report Difficulties in Communication, Emotional Involvement, and Problem Solving.

    PubMed

    Jona, Celine M H; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C; Andrews, Sophie C

    2017-01-01

    Family functioning in Huntington's disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. The aim of the current study was to assess family functioning in HD families. We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD.

  13. Families Affected by Huntington’s Disease Report Difficulties in Communication, Emotional Involvement, and Problem Solving

    PubMed Central

    Jona, Celine M.H.; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C.; Andrews, Sophie C.

    2017-01-01

    Background: Family functioning in Huntington’s disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. Objective: The aim of the current study was to assess family functioning in HD families. Methods: We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. Results: The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. Conclusion: These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD. PMID:28968240

  14. New convergence results for the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Bonettini, S.; Prato, M.

    2015-09-01

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem had been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
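    The basic SGP iteration, x_{k+1} = P(x_k - alpha_k D_k grad f(x_k)), can be sketched on a toy convex quadratic over the nonnegative orthant. The fixed step size and the heuristic diagonal scaling below are simplifications for illustration, not the paper's step-length or scaling rules.

```python
import numpy as np

# Minimize f(x) = 0.5 x'Ax - b'x subject to x >= 0 (projection is a clip).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.array([1.0, 1.0])
alpha = 0.2                             # fixed step (the paper uses line search)
for _ in range(200):
    d = np.clip(x, 1e-10, 1e10)         # diagonal scaling, bounded away from 0/inf
    x = np.maximum(0.0, x - alpha * d * grad(x))   # scaled step + projection

# Unconstrained minimizer A^{-1} b = [0.2, 0.4] is feasible, so the
# constrained solution coincides with it.
```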

  15. Development of assessment instruments to measure critical thinking skills

    NASA Astrophysics Data System (ADS)

    Sumarni, W.; Supardi, K. I.; Widiarti, N.

    2018-04-01

    Assessment instruments commonly used in schools have generally not been oriented toward critical thinking skills. The purpose of this research is to develop assessment instruments to measure critical thinking skills and to test their validity, reliability, and practicality. This type of research is Research and Development. The preliminary step comprises two stages: a field study and a literature study. The development step comprises 1) instrument construction, 2) expert validation, 3) a limited-scale try-out, and 4) a narrow-scale try-out. The developed assessment instruments are essay analysis and problem-solving items. The instruments were declared valid, reliable, and practical.

  16. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
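    Of the four performance measures, mean ideal distance (MID) is the simplest to state: the average Euclidean distance from each point on the obtained Pareto front to the ideal objective vector, with smaller values indicating a front closer to the ideal. A minimal sketch (the front points are hypothetical):

```python
import math

def mean_ideal_distance(front, ideal):
    # front: list of objective vectors on the obtained Pareto front
    # ideal: the ideal objective vector (best value in each objective)
    return sum(math.dist(p, ideal) for p in front) / len(front)

mid = mean_ideal_distance([(3.0, 4.0), (0.0, 0.0)], (0.0, 0.0))  # -> 2.5
```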

  17. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures, stimulated by increasing size, complexity, and cost, should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Cumulative and propagated design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application reduces to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor in place of the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
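    The classical safety-index calculation underlying first-order reliability can be sketched for normally distributed strength R and stress S: beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), with reliability Phi(beta). The numbers below are hypothetical, and the paper's uncertainty-error terms are omitted.

```python
import math

def safety_index(mu_r, sig_r, mu_s, sig_s):
    # First-order safety index for independent normal strength R and stress S
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def reliability(beta):
    # Standard normal CDF Phi(beta), via the error function
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

beta = safety_index(mu_r=60.0, sig_r=5.0, mu_s=40.0, sig_s=5.0)
# beta = 20 / sqrt(50), about 2.83, i.e. reliability near 0.9977
```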

  18. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurate estimation of the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  19. Wind Turbine Drivetrain Reliability Collaborative Workshop: A Recap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Jonathan; Sheng, Shuangwen; Cotrell, Jason

    The Wind Turbine Drivetrain Reliability Collaborative Workshop was convened by the National Renewable Energy Laboratory (NREL), Argonne National Laboratory, and the U.S. Department of Energy to explore the state of the art in wind turbine drivetrain mechanical system reliability, as well as research and development (R&D) challenges that, if solved, could have significant benefits. The workshop was held at the Research Support Facility on NREL's main campus in Golden, Colorado, from February 16-17, 2016. More than 120 attendees participated from industry, academia, and government. Plenary presentations covered wind turbine drivetrain design, testing, and analysis; tribology -- the science and engineering of interacting surfaces in relative motion -- and failure modes; and condition monitoring and data analytics. In addition to the presentations, workshops were held in each of these areas to discuss R&D challenges. This report serves as a summary of the presentations, workshops, and conclusions on R&D challenges in wind turbine drivetrain reliability.

  20. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment

    PubMed Central

    Seo, Aria; Kim, Yeichang

    2017-01-01

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users’ situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709

  1. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.

    PubMed

    Seo, Aria; Jeong, Junho; Kim, Yeichang

    2017-08-13

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users' situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS.

  2. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach.

    PubMed

    Wissmann, F; Reginatto, M; Möller, T

    2010-09-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases of solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
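    The functional form described (exponential in vertical cut-off rigidity, linear in barometric altitude and solar activity) can be sketched as below. The coefficients, reference altitude, and units are hypothetical placeholders, not the paper's fitted values.

```python
import math

def dose_rate(h_km, rc_gv, s, a=(2.0, 0.5, -0.3), k=0.12):
    # Illustrative model: linear in altitude h (km) and solar-activity phase s,
    # exponential decay in vertical cut-off rigidity Rc (GV).
    # a and k are made-up expansion coefficients, not fitted values.
    a0, a1, a2 = a
    return (a0 + a1 * (h_km - 10.0) + a2 * s) * math.exp(-k * rc_gv)
```

    A Bayesian fit would place priors on the expansion coefficients and use Monte Carlo sampling to obtain their posterior distributions, giving uncertainties on the evaluated dose rate.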

  3. Solving bezel reliability and CRT obsolescence

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard J.; Bowen, Arlen R.; Knowles, Terry

    2003-09-01

    Scientific Research Corporation designed a Smart Multi-Function Color Display with Positive Pilot Feedback under the funding of a U.S. Navy Small Business Innovative Research program. The Smart Multi-Function Color Display can replace the obsolete monochrome cathode ray tube display currently on the T-45C aircraft built by Boeing. The design utilizes a flat-panel color Active Matrix Liquid Crystal Display (AMLCD) and TexZec's patented Touch Thru Metal bezel technology, providing both visual and biomechanical feedback to the pilot in a form, fit, and function replacement for the current T-45C display. Use of an existing color AMLCD requires the least adaptation to fill the requirements of this application, thereby minimizing the risk associated with developing a new display technology and maximizing the investment in improved user interface technology. The improved user interface uses TexZec's Touch Thru Metal technology to eliminate all of the moving parts that traditionally have limited mean time between failures. The touch detection circuit consists of commercial off-the-shelf components, creating touch detection circuitry that is simple and durable. This technology provides robust switch activation and a high level of environmental immunity, both mechanical and electrical. Replacement of all the T-45C multi-function displays with this design will improve the mean time between failures and drastically reduce display life-cycle costs. The design methodology described in this paper can be adapted to any new or replacement display.

  4. Human reliability assessment: tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas G.; Overlin, Trudy K.

    1997-01-01

    This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process for identifying, describing, quantifying, and interpreting the state of human performance, and for developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards to which the Justice System is bound, human errors that might appear trivial in other venues can make the difference between a successful and an unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and the adjudication of criminal proceedings. This paper concludes that, because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that HRA technology, developed and enhanced in other applications, can be transferred to the Justice System at minimal cost and with significant payoff.

  5. Modern problems concerned with ensuring safe operation of heat-generating and mechanical equipment in extending its lifetime

    NASA Astrophysics Data System (ADS)

    Rezinskikh, V. F.; Grin', E. A.

    2013-01-01

    The problem concerned with safe and reliable operation of ageing heat-generating and mechanical equipment of thermal power stations is discussed. It is pointed out that the set of relevant regulatory documents serves as the basis for establishing an efficient equipment diagnostic system. In this connection, updating the existing regulatory documents with imparting the required status to them is one of top-priority tasks. Carrying out goal-oriented scientific research works is a necessary condition for solving this problem as well as other questions considered in the paper that are important for ensuring reliable performance of equipment operating for a long period of time. In recent years, the amount of such works has dropped dramatically, although the need for them is steadily growing. Unbiased assessment of the technical state of equipment that has been in operation for a long period of time is an important aspect in solving the problem of ensuring reliable and safe operation of thermal power stations. Here, along with the quality of diagnostic activities, monitoring of technical state performed on the basis of an analysis of statistical field data and results of operational checks plays an important role. The need to concentrate efforts taken in the mentioned problem areas is pointed out, and it is indicated that successful implementation of the outlined measures requires proper organization and efficient operation of a system for managing safety in the electric power industry.

  6. The relationship between family functioning and the crime types in incarcerated children.

    PubMed

    Teker, Kamil; Topçu, Seda; Başkan, Sevgi; Orhon, Filiz Ş; Ulukol, Betül

    2017-06-01

    We investigated the relationship between family functioning and crime types in incarcerated children. One hundred eighty-two incarcerated children aged 13-18 years who were confined in child-youth prisons and child correctional facilities were enrolled into this descriptive study. Participants completed demographic questions and the McMaster Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) in face-to-face interviews. The crime types were theft, assault (bodily injury), robbery, sexual assault, drug trafficking, and murder. When socio-demographic characteristics were compared using the FAD, growing up in a nuclear family was associated with significantly better scores on the problem-solving and communication subscales, and children whose parents owned their own house had significantly better problem-solving scores. When we compared the crime types using the problem-solving, communication, and general functioning subscales of the FAD, we found significantly lower scores in the assault (bodily injury) group than in the theft, sexual assault, and murder groups, and in the drug trafficking group than in the murder group. We also found lower scores in the drug trafficking group than in the theft group on the problem-solving and general functioning subscales, and lower scores in the bodily injury assault group than in the robbery and theft groups, and in the drug trafficking group than in the theft group, on the problem-solving subscale. The communication and problem-solving subscales of the FAD are the first to show impairment in incarcerated children. These subscales were associated with unplanned and less serious crimes, which we interpret as a cry for help from the children.

  7. Age differences in everyday problem-solving effectiveness: older adults select more effective strategies for interpersonal problems.

    PubMed

    Blanchard-Fields, Fredda; Mienaltowski, Andrew; Seay, Renee Baldi

    2007-01-01

    Using the Everyday Problem Solving Inventory of Cornelius and Caspi, we examined differences in problem-solving strategy endorsement and effectiveness in two domains of everyday functioning (instrumental or interpersonal, and a mixture of the two domains) and for four strategies (avoidance-denial, passive dependence, planful problem solving, and cognitive analysis). Consistent with past research, our research showed that older adults were more problem focused than young adults in their approach to solving instrumental problems, whereas older adults selected more avoidant-denial strategies than young adults when solving interpersonal problems. Overall, older adults were also more effective than young adults when solving everyday problems, in particular for interpersonal problems.

  8. Test-retest and interrater reliability of the functional lower extremity evaluation.

    PubMed

    Haitz, Karyn; Shultz, Rebecca; Hodgins, Melissa; Matheson, Gordon O

    2014-12-01

    Repeated-measures clinical measurement reliability study. To establish the reliability and face validity of the Functional Lower Extremity Evaluation (FLEE). The FLEE is a 45-minute battery of 8 standardized functional performance tests that measures 3 components of lower extremity function: control, power, and endurance. The reliability and normative values for the FLEE in healthy athletes are unknown. A face validity survey for the FLEE was sent to sports medicine personnel to evaluate the level of importance and frequency of clinical usage of each test included in the FLEE. The FLEE was then administered and rated for 40 uninjured athletes. To assess test-retest reliability, each athlete was tested twice, 1 week apart, by the same rater. To assess interrater reliability, 3 raters scored each athlete during 1 of the testing sessions. Intraclass correlation coefficients were used to assess the test-retest and interrater reliability of each of the FLEE tests. In the face validity survey, the FLEE tests were rated as highly important by 58% to 71% of respondents but frequently used by only 26% to 45% of respondents. Interrater reliability intraclass correlation coefficients ranged from 0.83 to 1.00, and test-retest reliability ranged from 0.71 to 0.95. The FLEE tests are considered clinically important for assessing lower extremity function by sports medicine personnel but are underused. The FLEE also is a reliable assessment tool. Future studies are required to determine if use of the FLEE to make return-to-play decisions may reduce reinjury rates.
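    Intraclass correlation coefficients of the kind reported above can be computed from a subjects-by-raters score matrix. A minimal one-way random-effects ICC(1,1) sketch — the study may well have used a different ICC form (e.g., a two-way model):

    ```python
    import numpy as np

    def icc_oneway(scores):
        """One-way random-effects ICC(1,1) for an (n subjects x k raters)
        score matrix: (MSB - MSW) / (MSB + (k - 1) * MSW).
        Illustrative only; other ICC variants are common in reliability work."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        subj_means = scores.mean(axis=1)
        msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)              # between-subjects
        msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subjects
        return (msb - msw) / (msb + (k - 1) * msw)
    ```

    For example, `icc_oneway([[1, 2], [3, 4], [5, 6]])` returns about 0.88, reflecting large between-subject differences relative to rater disagreement.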

  9. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
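    A worked instance of the kind of decomposition such methods target, checked with exact rational arithmetic. This uses the familiar cover-up/coefficient-matching route, not necessarily the specific division-and-substitution scheme of the note:

    ```python
    from fractions import Fraction as F

    # Decompose x / ((x**2 + 1)(x - 1)) = A/(x - 1) + (B*x + C)/(x**2 + 1).
    # Matching coefficients in x = A(x**2 + 1) + (B*x + C)(x - 1) gives:
    #   x**2: A + B = 0;   x**1: C - B = 1;   x**0: A - C = 0
    A = F(1, 2)   # cover-up: substitute x = 1 into x / (x**2 + 1)
    B = -A        # from A + B = 0
    C = A         # from A - C = 0

    # Sanity check at an arbitrary rational point x = 3:
    x = F(3)
    lhs = x / ((x**2 + 1) * (x - 1))
    rhs = A / (x - 1) + (B * x + C) / (x**2 + 1)
    ```

    Using `fractions.Fraction` keeps every step exact, so the check `lhs == rhs` is not subject to floating-point error.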

  10. Parental Guidance and Children's Executive Function: Working Memory and Planning as Moderators during Joint Problem-Solving

    ERIC Educational Resources Information Center

    Eason, Sarah H.; Ramani, Geetha B.

    2017-01-01

    Cognitive aspects of children's executive function (EF) were examined as moderators of the effectiveness of parental guidance on children's learning. Thirty-two 5-year-old children and their parents were observed during joint problem-solving. Forms of guidance geared towards cognitive assistance were coded as directive or elaborative, and…

  11. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    PubMed

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.
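    Functional-connectivity estimates of the kind whose test-retest reliability is studied here are typically Pearson correlations between regional time series. A minimal sketch with synthetic data standing in for ROI-averaged BOLD signals:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # toy data: 4 regions x 200 time points (stand-ins for ROI-averaged signals)
    ts = rng.standard_normal((4, 200))

    # functional-connectivity matrix: pairwise Pearson correlations between regions
    fc = np.corrcoef(ts)
    ```

    Preprocessing choices such as gray-matter masking or global-signal regression act on `ts` before this step, which is why they change the stability of the resulting `fc` entries across sessions.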

  12. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    PubMed Central

    Varikuti, Deepthi P.; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T.; Eickhoff, Simon B.

    2016-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that grey matter masking improved the reliability of connectivity estimates, whereas de-noising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources. PMID:27550015

  13. Neural pathway in the right hemisphere underlies verbal insight problem solving.

    PubMed

    Zhao, Q; Zhou, Z; Xu, H; Fan, W; Han, L

    2014-01-03

    Verbal insight problem solving means to break mental sets, to select the novel semantic information and to form novel, task-related associations. Although previous studies have identified the brain regions associated with these key processes, the interaction among these regions during insight is still unclear. In the present study, we explored the functional connectivity between the key regions during solving Chinese 'chengyu' riddles by using event-related functional magnetic resonance imaging. Results showed that both insight and noninsight solutions activated the bilateral inferior frontal gyri, middle temporal gyri and hippocampi, and these regions constituted a frontal to temporal to hippocampal neural pathway. Compared with noninsight solution, insight solution had a stronger functional connectivity between the inferior frontal gyrus and middle temporal gyrus in the right hemisphere. Our study reveals the neural pathway of information processing during verbal insight problem solving, and supports the right-hemisphere advantage theory of insight. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. The Communication Function Classification System: cultural adaptation, validity, and reliability of the Farsi version for patients with cerebral palsy.

    PubMed

    Soleymani, Zahra; Joveini, Ghodsiye; Baghestani, Ahmad Reza

    2015-03-01

    This study developed a Farsi-language Communication Function Classification System and then tested its reliability and validity. The Communication Function Classification System is designed to classify the communication functions of individuals with cerebral palsy. Until now, there has been no instrument for assessment of this communication function in Iran. The English Communication Function Classification System was translated into Farsi and cross-culturally modified by a panel of experts. Professionals and parents then assessed the content validity of the modified version. A back-translation of the Farsi version was confirmed by the developer of the English Communication Function Classification System. Face validity was assessed by therapists and parents of 10 patients. The Farsi Communication Function Classification System was administered to 152 individuals with cerebral palsy (age, 2 to 18 years; median age, 10 years; mean age, 9.9 years; standard deviation, 4.3 years). Inter-rater reliability was analyzed between parents, occupational therapists, and speech and language pathologists. The test-retest reliability was assessed for 75 patients with a 14-day interval between tests. The inter-rater reliability of the Communication Function Classification System was 0.81 between speech and language pathologists and occupational therapists, 0.74 between parents and occupational therapists, and 0.88 between parents and speech and language pathologists. The test-retest reliability was 0.96 for occupational therapists, 0.98 for speech and language pathologists, and 0.94 for parents. The findings suggest that the Farsi version of the Communication Function Classification System is a reliable and valid measure that can be used in clinical settings to assess communication function in patients with cerebral palsy. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. The Effects of Labels on Learning Subgoals for Solving Problems.

    ERIC Educational Resources Information Center

    Catrambone, Richard

    This study, involving 65 undergraduates at the Georgia Institute of Technology (Atlanta), explores a scheme for representing problem-solving knowledge and predicting transfer as a function of problem-solving subgoals acquired from examples. A subgoal is an unknown entity (numerical or conceptual) that needs to be found in order to achieve a higher…

  16. Backtrack Programming: A Computer-Based Approach to Group Problem Solving.

    ERIC Educational Resources Information Center

    Scott, Michael D.; Bodaken, Edward M.

    Backtrack problem-solving appears to be a viable alternative to current problem-solving methodologies. It appears to have considerable heuristic potential as a conceptual and operational framework for small group communication research, as well as functional utility for the student group in the small group class or the management team in the…

  17. Spontaneous gestures influence strategy choices in problem solving.

    PubMed

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  18. Test-retest reliability of functional connectivity networks during naturalistic fMRI paradigms.

    PubMed

    Wang, Jiahui; Ren, Yudan; Hu, Xintao; Nguyen, Vinh Thai; Guo, Lei; Han, Junwei; Guo, Christine Cong

    2017-04-01

    Functional connectivity analysis has become a powerful tool for probing human brain function and its breakdown in neuropsychiatric disorders. So far, most studies have adopted the resting-state paradigm to examine functional connectivity networks in the brain, thanks to the low demand and high tolerance that are essential for clinical studies. However, the test-retest reliability of resting-state connectivity measures is moderate, potentially due to their low behavioral constraint. On the other hand, naturalistic neuroimaging paradigms, an emerging approach for cognitive neuroscience with high ecological validity, could potentially improve the reliability of functional connectivity measures. To test this hypothesis, we characterized the test-retest reliability of functional connectivity measures during a natural viewing condition and benchmarked it against resting-state connectivity measures acquired within the same functional magnetic resonance imaging (fMRI) session. We found that the reliability of connectivity and graph-theoretical measures of brain networks is significantly improved during natural viewing conditions over resting-state conditions, with an average increase of almost 50% across various connectivity measures. Not only do sensory networks for audio-visual processing become more reliable; higher-order brain networks, such as the default mode and attention networks, also appear to show higher reliability during natural viewing. Our results support the use of natural viewing paradigms in estimating functional connectivity of brain networks and have important implications for the clinical application of fMRI. Hum Brain Mapp 38:2226-2241, 2017. © 2017 Wiley Periodicals, Inc.

  19. 75 FR 50853 - Special Conditions: Cirrus Design Corporation Model SF50 Airplane; Function and Reliability Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... airplanes. 1. Function and Reliability Testing. Flight tests: In place of 14 CFR 21.35(b)(2), the following...; Function and Reliability Testing AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special... to CAR part 3 became effective January 15, 1951, and deleted the service test requirements in Section...

  20. Bayesian Estimation of Reliability Burr Type XII Under Al-Bayyatis’ Suggest Loss Function with Numerical Solution

    NASA Astrophysics Data System (ADS)

    Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.

    2018-05-01

    This paper considers the Burr type XII distribution. The maximum likelihood and Bayes methods of estimation are used for estimating the unknown scale parameter (α). Al-Bayyati's loss function and a suggested loss function are used to find the reliability with the least loss, and the reliability function is expanded in terms of a set of power functions. Matlab (ver. 9) is used for the computations, and some examples are given.
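    For the Burr type XII distribution with shape parameters c and α, the reliability function is R(t) = (1 + t^c)^(−α), and with c known the maximum likelihood estimate of α has a closed form. A sketch of these two pieces (the paper's specific loss functions are not reproduced here):

    ```python
    import math

    def burr12_reliability(t, c, alpha):
        """Burr type XII reliability: R(t) = (1 + t**c) ** (-alpha)."""
        return (1.0 + t ** c) ** (-alpha)

    def mle_alpha(data, c):
        """Closed-form MLE of alpha when the shape parameter c is known:
        alpha_hat = n / sum(log(1 + t_i**c))."""
        return len(data) / sum(math.log(1.0 + t ** c) for t in data)
    ```

    The closed form follows from setting the derivative of the log-likelihood, n/α − Σ log(1 + t_i^c), to zero.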

  1. Assessing ethical problem solving by reasoning rather than decision making.

    PubMed

    Tsai, Tsuen-Chiuan; Harasym, Peter H; Coderre, Sylvain; McLaughlin, Kevin; Donnon, Tyrone

    2009-12-01

    The assessment of ethical problem solving in medicine has been controversial and challenging. The purposes of this study were: (i) to create a new instrument to measure doctors' decisions on and reasoning approach towards resolving ethical problems; (ii) to evaluate the scores generated by the new instrument for their reliability and validity, and (iii) to compare doctors' ethical reasoning abilities between countries and among medical students, residents and experts. This study used 15 clinical vignettes and the think-aloud method to identify the processes and components involved in ethical problem solving. Subjects included volunteer ethics experts, postgraduate Year 2 residents and pre-clerkship medical students. The interview data were coded using the instruments of the decision score and Ethical Reasoning Inventory (ERI). The ERI assessed the quality of ethical reasoning for a particular case (Part I) and for an individual globally across all the vignettes (Part II). There were 17 Canadian and 32 Taiwanese subjects. Based on the Canadian standard, the decision scores between Taiwanese and Canadian subjects differed significantly, but made no discrimination among the three levels of expertise. Scores on the ERI Parts I and II, which reflect doctors' reasoning quality, differed between countries and among different levels of expertise in Taiwan, providing evidence of construct validity. In addition, experts had a greater organised knowledge structure and considered more relevant variables in the process of arriving at ethical decisions than did residents or students. The reliability of ERI scores was 0.70-0.99 on Part I and 0.75-0.80 on Part II. Expertise in solving ethical problems could not be differentiated by the decisions made, but could be differentiated according to the reasoning used to make those decisions. The difference between Taiwanese and Canadian experts suggests that cultural considerations come into play in the decisions that are made in the course of providing humane care to patients.

  2. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    NASA Astrophysics Data System (ADS)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.

  3. Developing Achievement Test: A Research for Assessment of 5th Grade Biology Subject

    ERIC Educational Resources Information Center

    Sener, Nilay; Tas, Erol

    2017-01-01

    The purpose of this study is to prepare a multiple-choice achievement test with high reliability and validity for the "Let's Solve the Puzzle of Our Body" unit. For this purpose, a multiple choice achievement test consisting of 46 items was applied to 178 fifth grade students in total. As a result of the test and material analysis…

  4. Technical Training Requirements of Middle Management in the Greek Textile and Clothing Industries.

    ERIC Educational Resources Information Center

    Fotinopoulou, K.; Manolopoulos, N.

    A case study of 16 companies in the Greek textile and clothing industry elicited the training needs of the industry's middle managers. The study concentrated on large and medium-sized work units, using a lengthy questionnaire. The study found that middle managers increasingly need to solve problems and ensure the reliability of new equipment and…

  5. A Multi-Peer Assessment Platform for Programming Language Learning: Considering Group Non-Consensus and Personal Radicalness

    ERIC Educational Resources Information Center

    Wang, Yanqing; Liang, Yaowen; Liu, Luning; Liu, Ying

    2016-01-01

    Multi-peer assessment has often been used by teachers to reduce personal bias and make the assessment more reliable. This study reviews the design and development of multi-peer assessment systems that detect and solve two common issues in such systems: non-consensus among group members and personal radicalness in some assessments. A multi-peer…

  6. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
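    A minimal alternating failure/repair Monte Carlo of the kind GRASP generalizes. Exponential failure and repair laws are an assumption for this sketch; GRASP accepts the probability laws of the individual components, whatever they are:

    ```python
    import random

    def simulate_availability(mtbf, mttr, horizon, seed=0):
        """Single-component availability estimated by simulating alternating
        up intervals (exponential time-to-failure, mean `mtbf`) and down
        intervals (exponential repair, mean `mttr`) over `horizon` time units."""
        rng = random.Random(seed)
        t = up_time = 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)        # time until next failure
            up_time += min(up, horizon - t)
            t += up
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)        # repair duration
        return up_time / horizon

    # long-run availability approaches MTBF / (MTBF + MTTR) = 100/110
    a = simulate_availability(mtbf=100.0, mttr=10.0, horizon=1_000_000.0)
    ```

    Swapping in other failure or repair distributions, or other definitions of system failure, is what moves the problem beyond analytic solutions and into simulation.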

  7. Functional Techniques for Data Analysis

    NASA Technical Reports Server (NTRS)

    Tomlinson, John R.

    1997-01-01

    This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals which yield new methods for solving Prony's problem.
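    For reference, the classic Prony procedure — a linear-prediction solve for the exponents, then a Vandermonde solve for the amplitudes — can be sketched as below. This is the textbook variant, not the generalized functional-based method the dissertation develops:

    ```python
    import numpy as np

    def prony(x, p):
        """Fit x[n] = sum_k c[k] * z[k]**n with p terms (classic Prony).
        Step 1: least-squares linear prediction for the characteristic
        polynomial; step 2: its roots give the z[k]; step 3: a Vandermonde
        least-squares solve recovers the amplitudes c[k]."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        # x[n] + a[0]*x[n-1] + ... + a[p-1]*x[n-p] = 0 for n = p..N-1
        A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
        a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
        z = np.roots(np.concatenate(([1.0], a)))
        V = np.vander(z, N, increasing=True).T   # (N x p): row n is [z_k**n]
        c, *_ = np.linalg.lstsq(V, x, rcond=None)
        return z, c

    # recover a two-term exponential sum from 20 samples
    n = np.arange(20)
    x = 2 * 0.5**n + 3 * 0.8**n
    z, c = prony(x, 2)
    order = np.argsort(z.real)
    ```

    On noiseless data this recovers the bases 0.5 and 0.8 and the amplitudes 2 and 3 to machine precision; the generalized methods replace the sampling functionals implicit in step 1.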

  8. Comparison of penalty functions on a penalty approach to mixed-integer optimization

    NASA Astrophysics Data System (ADS)

    Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2016-06-01

    In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem obtained by adding a particular penalty term to the objective function. A penalty function based on the `erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions when compared with other penalty functions available in the literature.
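    One plausible shape for a smooth erf-based integrality penalty is sketched below: zero at integer points and positive in between. Both the exact functional form and the sharpness parameter `tau` are illustrative assumptions here; the paper's actual definition may differ:

    ```python
    import math

    def erf_int_penalty(y, tau=5.0):
        """Hypothetical smooth integrality penalty: vanishes when y is an
        integer, approaches 1 between integers. `tau` controls sharpness.
        Illustrative assumption, not the paper's exact term."""
        return abs(math.erf(tau * math.sin(math.pi * y)))

    def penalized_objective(f, x, int_idx, mu, tau=5.0):
        """Continuous reformulation: f(x) plus mu times the summed
        integrality violations of the components listed in int_idx."""
        return f(x) + mu * sum(erf_int_penalty(x[i], tau) for i in int_idx)
    ```

    A solver such as the firefly algorithm can then minimize `penalized_objective` over the continuous box, with `mu` driven upward until the integer components settle onto the lattice.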

  9. Reliability of a Test Battery Designed for Quickly and Safely Assessing Diverse Indices of Neuromuscular Function

    NASA Technical Reports Server (NTRS)

    Spiering, Barry A.; Lee, Stuart M. C.; Mulavara, Ajitkumar P.; Bentley, Jason, R.; Buxton, Roxanne E.; Lawrence, Emily L.; Sinka, Joseph; Guilliams, Mark E.; Ploutz-Snyder, Lori L.; Bloomberg, Jacob J.

    2010-01-01

    Spaceflight affects nearly every physiological system, and spaceflight-induced alterations in physiological function translate to decrements in functional performance. Purpose: to develop a test battery for quickly and safely assessing diverse indices of neuromuscular performance. I. Quickly: the battery of tests can be completed in approximately 30-40 min. II. Safely: (a) no eccentric muscle actions or impact forces; (b) the tests present little challenge to postural stability. III. Diverse indices: (a) strength: excellent reliability (ICC = 0.99); (b) central activation: very good reliability (ICC = 0.87); (c) power: excellent reliability (ICC = 0.99); (d) endurance: total work has excellent reliability (ICC = 0.99); (e) force steadiness: poor reliability (ICC = 0.20-0.60).

  10. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended-range airliner, which is made of composite and metallic materials. The design is formulated for an accepted level of risk or reliability, so the design variables, weight, and constraints become functions of reliability. Uncertainties in the load, strength, and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as the normal, Weibull, or Gumbel distributions. The objective function and each constraint, or failure mode, became derived functions of the risk level. Solving the problem produced the optimum design, with weight, variables, and constraints expressed as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples; under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent and decreased when reliability was compromised. A design can be selected depending on the level of risk acceptable in a given situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  11. A functional neuroimaging study of the clinical reasoning of medical students.

    PubMed

    Chang, Hyung-Joo; Kang, June; Ham, Byung-Joo; Lee, Young-Mee

    2016-12-01

    As clinical reasoning is a fundamental competence of physicians for good clinical practices, medical academics have endeavored to teach reasoning skills to undergraduate students. However, our current understanding of student-level clinical reasoning is limited, mainly because of the lack of evaluation tools for this internal cognitive process. This functional magnetic resonance imaging (fMRI) study aimed to examine the clinical reasoning processes of medical students in response to problem-solving questions. We recruited 24 2nd-year medical students who had completed their preclinical curriculum. They answered 40 clinical vignette-based multiple-choice questions during fMRI scanning. We compared the imaging data for 20 problem-solving questions (reasoning task) and 20 recall questions (recall task). Compared to the recall task, the reasoning task resulted in significantly greater activation in nine brain regions, including the dorsolateral prefrontal cortex and inferior parietal cortex, which are known to be associated with executive function and deductive reasoning. During the recall task, significant activation was observed in the brain regions that are related to memory and emotions, including the amygdala and ventromedial prefrontal cortex. Our results support that medical students mainly solve clinical questions with deductive reasoning involving prior knowledge structures and executive functions. The problem-solving questions induced the students to utilize higher cognitive functions compared with the recall questions. Interestingly, the results suggested that the students experienced some emotional distress while they were solving the recall questions. In addition, these results suggest that fMRI is a promising research tool for investigating students' cognitive processes.

  12. Use of a structured functional evaluation process for independent medical evaluations of claimants presenting with disabling mental illness: rationale and design for a multi-center reliability study.

    PubMed

    Bachmann, Monica; de Boer, Wout; Schandelmaier, Stefan; Leibold, Andrea; Marelli, Renato; Jeger, Joerg; Hoffmann-Richter, Ulrike; Mager, Ralph; Schaad, Heinz; Zumbrunn, Thomas; Vogel, Nicole; Bänziger, Oskar; Busse, Jason W; Fischer, Katrin; Kunz, Regina

    2016-07-29

Work capacity evaluations by independent medical experts are widely used to inform insurers whether injured or ill workers are capable of engaging in competitive employment. In many countries, evaluation processes lack a clearly structured approach, standardized instruments, and an explicit focus on claimants' functional abilities. Evaluation of subjective complaints, such as mental illness, presents additional challenges in the determination of work capacity. We have therefore developed a process for the functional evaluation of claimants with mental disorders which complements the usual psychiatric evaluation. Here we report the design of a study to measure the reliability of our approach in determining work capacity among patients with mental illness applying for disability benefits. We will conduct a multi-center reliability study, in which 20 psychiatrists trained in our functional evaluation process will assess 30 claimants presenting with mental illness for eligibility to receive disability benefits [Reliability of Functional Evaluation in Psychiatry, RELY-study]. The functional evaluation process entails a five-step structured interview and a reporting instrument (Instrument of Functional Assessment in Psychiatry [IFAP]) to document the severity of work-related functional limitations. We will videotape all evaluations, which will be viewed by three psychiatrists who will independently rate claimants' functional limitations. Our primary outcome measure is the evaluation of claimants' work capacity as a percentage (0 to 100 %), and our secondary outcomes are the 12 mental functions and 13 functional capacities assessed by the IFAP instrument. Inter-rater reliability of the four psychiatric experts will be explored using multilevel models to estimate the intraclass correlation coefficient (ICC). Additional analyses include subgroups according to mental disorder, the typicality of claimants, and claimant-perceived fairness of the assessment process.
We hypothesize that a structured functional approach will show moderate reliability (ICC ≥ 0.6) in psychiatric evaluation of work capacity. Enrollment of actual claimants with mental disorders referred for evaluation by disability/accident insurers will increase the external validity of our findings. If moderate levels of reliability are found, we will proceed with a randomized trial comparing the reliability of the structured functional approach with evaluation-as-usual.
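
    The ICC the study plans to estimate can be computed, for a single-rater two-way random-effects design, from a targets-by-raters ratings matrix. A minimal sketch (the Shrout and Fleiss ICC(2,1) formula; toy ratings, not study data; the study itself will use multilevel models):

    ```python
    def icc_2_1(ratings):
        """Two-way random-effects, single-rater ICC(2,1)."""
        n = len(ratings)          # targets (claimants)
        k = len(ratings[0])       # raters
        grand = sum(sum(row) for row in ratings) / (n * k)
        row_means = [sum(row) / k for row in ratings]
        col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)
        ss_tot = sum((ratings[i][j] - grand) ** 2
                     for i in range(n) for j in range(k))
        ss_err = ss_tot - ss_rows - ss_cols
        msr = ss_rows / (n - 1)                 # between-targets mean square
        msc = ss_cols / (k - 1)                 # between-raters mean square
        mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    icc_perfect = icc_2_1([[1, 1], [2, 2], [3, 3]])   # raters agree exactly
    icc_offset = icc_2_1([[1, 2], [2, 3], [3, 4]])    # constant rater bias
    ```

    A constant rater bias lowers ICC(2,1) below one even though the raters rank claimants identically, which is why this form (rather than a consistency ICC) suits absolute work-capacity percentages.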

  13. Reliability and validity of the Korean version of the Short Musculoskeletal Function Assessment questionnaire for patients with musculoskeletal disorder.

    PubMed

    Jung, Kyoung-Sim; Jung, Jin-Hwa; In, Tae-Sung; Cho, Hwi-Young

    2016-09-01

    [Purpose] The purpose of this study was to establish the reliability and validity of the Short Musculoskeletal Function Assessment questionnaire, which was translated into Korean, for patients with musculoskeletal disorder. [Subjects and Methods] Fifty-five subjects (26 males and 29 females) with musculoskeletal diseases participated in the study. The Short Musculoskeletal Function Assessment questionnaire focuses on a limited range of physical functions and includes a dysfunction index and a bother index. Reliability was determined using the intraclass correlation coefficient, and validity was examined by correlating short musculoskeletal function assessment scores with the 36-item Short-Form Health Survey (SF-36) score. [Results] The reliability was 0.97 for the dysfunction index and 0.94 for the bother index. Validity was established by comparison with Korean version of the SF-36. [Conclusion] This study demonstrated that the Korean version of the Short Musculoskeletal Function Assessment questionnaire is a reliable and valid instrument for the assessment of musculoskeletal disorders.

  14. Dynamic Response of Functionally Graded Carbon Nanotube Reinforced Sandwich Plate

    NASA Astrophysics Data System (ADS)

    Mehar, Kulmani; Panda, Subrata Kumar

    2018-03-01

In this article, the dynamic response of the carbon nanotube-reinforced functionally graded sandwich composite plate has been studied numerically with the help of the finite element method. The face sheets of the sandwich composite plate are made of carbon nanotube-reinforced composite with two different grading patterns, whereas the core phase is taken as an isotropic material. The final properties of the structure are calculated using the rule of mixtures. The geometrical model of the sandwich plate is developed and discretized suitably with the help of an available shell element in the ANSYS library. Subsequently, the corresponding numerical dynamic responses are computed via the batch input technique (ANSYS parametric design language code), using Newmark's integration scheme. The stability of the sandwich structural numerical model is established through a proper convergence study. Further, the reliability of the sandwich model is checked by a comparison between the present results and those available in the references. Finally, some numerical problems have been solved to examine the effect of different design constraints (carbon nanotube distribution pattern, core-to-face thickness ratio, volume fraction of the nanotubes, length-to-thickness ratio, aspect ratio, and edge constraints) on the time responses of the sandwich plate.
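
    Newmark's integration scheme, which the abstract invokes for the time responses, can be illustrated on a single degree of freedom (the plate model is of course a large multi-DOF system; the average-acceleration parameters and test values below are illustrative):

    ```python
    import math

    def newmark_sdof(m, c, k, u0, v0, dt, n_steps, force=lambda t: 0.0,
                     beta=0.25, gamma=0.5):
        """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = F(t)."""
        a0, a1 = 1.0 / (beta * dt * dt), gamma / (beta * dt)
        a2, a3 = 1.0 / (beta * dt), 1.0 / (2.0 * beta) - 1.0
        a4, a5 = gamma / beta - 1.0, dt / 2.0 * (gamma / beta - 2.0)
        u, v = u0, v0
        a = (force(0.0) - c * v - k * u) / m      # consistent initial acceleration
        keff = k + a0 * m + a1 * c                # effective stiffness
        for i in range(1, n_steps + 1):
            rhs = (force(i * dt)
                   + m * (a0 * u + a2 * v + a3 * a)
                   + c * (a1 * u + a4 * v + a5 * a))
            u_new = rhs / keff
            a_new = a0 * (u_new - u) - a2 * v - a3 * a
            v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            u, a = u_new, a_new
        return u, v

    # Undamped free vibration: exact solution u(t) = cos(t) for m = k = 1
    dt = 2.0 * math.pi / 1000.0
    u_end, _ = newmark_sdof(1.0, 0.0, 1.0, 1.0, 0.0, dt, 1000)
    ```

    With beta = 1/4 and gamma = 1/2 the scheme is unconditionally stable and introduces no numerical damping, so after one full period the displacement returns essentially to its initial value.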

  15. Language Ability and Verbal and Nonverbal Executive Functioning in Deaf Students Communicating in Spoken English

    ERIC Educational Resources Information Center

    Remine, Maria D.; Care, Esther; Brown, P. Margaret

    2008-01-01

    The internal use of language during problem solving is considered to play a key role in executive functioning. This role provides a means for self-reflection and self-questioning during the formation of rules and plans and a capacity to control and monitor behavior during problem-solving activity. Given that increasingly sophisticated language is…

  16. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery, with applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of the L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively reweighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed-form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically, and that any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances low rank matrix recovery compared with state-of-the-art convex algorithms.
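
    The weighted singular value thresholding subproblem that IRNN solves at each iteration has the closed form below. This is only the inner step with a hand-picked scalar weight; in IRNN proper the weights come from the gradient of the chosen nonconvex surrogate at the current singular values and are recomputed every iteration.

    ```python
    import numpy as np

    def weighted_svt(Y, w):
        """Shrink each singular value of Y by its weight (scalar or per-value array)."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - w, 0.0)       # closed-form proximal step
        return U @ np.diag(s_shrunk) @ Vt

    # Rank-1 signal plus a small perturbation; thresholding removes the latter
    Y = (np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])
         + 0.01 * np.outer([0.0, 1.0, 0.0], [1.0, 1.0, 1.0]))
    X = weighted_svt(Y, 0.1)
    rank_after = int(np.linalg.matrix_rank(X, tol=1e-8))
    ```

    Because the shrinkage is applied only to the singular values, the step costs one SVD, which is what makes the reweighted iteration tractable.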

  17. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. The computational cost of solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the fine-scale variations of heterogeneous media, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system and markedly reduce the computational time.
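
    To make the baseline concrete, here is the conventional fine-mesh discretization the multiscale method is designed to avoid: a second-order finite-difference solve of the 1-D Helmholtz equation u'' + k²u = 0 with Dirichlet data chosen so the exact solution is sin(kx). This is not the generalized multiscale method itself, just a minimal illustration (homogeneous medium, hypothetical wavenumber and grid) of the fine discretization whose system size the paper reduces.

    ```python
    import numpy as np

    k = 5.0
    n = 201                       # grid points including boundaries
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    m = n - 2                     # interior unknowns
    A = np.zeros((m, m))
    for i in range(m):
        A[i, i] = -2.0 / h**2 + k**2          # (u'' + k^2 u) stencil, diagonal
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < m - 1:
            A[i, i + 1] = 1.0 / h**2
    b = np.zeros(m)
    b[0] -= 0.0 / h**2            # left boundary u(0) = 0
    b[-1] -= np.sin(k) / h**2     # right boundary u(1) = sin(k)
    u = np.linalg.solve(A, b)
    err = float(np.max(np.abs(u - np.sin(k * x[1:-1]))))
    ```

    Even in this homogeneous 1-D setting the mesh must resolve the wavelength; with heterogeneous coefficients in 2-D or 3-D the fine system grows rapidly, which motivates assembling a much smaller coarse system from multiscale basis functions instead.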

  18. Development and validation of the hypoglycaemia problem-solving scale for people with diabetes mellitus

    PubMed Central

    Juang, Jyuhn-Huarng; Lin, Chia-Hung

    2016-01-01

    Objective To develop and psychometrically test a new instrument, the hypoglycaemia problem-solving scale (HPSS), which was designed to measure how well people with diabetes mellitus manage their hypoglycaemia-related problems. Methods A cross-sectional survey design approach was used to validate the performance assessment instrument. Patients who had a diagnosis of type 1 or type 2 diabetes mellitus for at least 1 year, who were being treated with insulin and who had experienced at least one hypoglycaemic episode within the previous 6 months were eligible for inclusion in the study. Results A total of 313 patients were included in the study. The initial draft of the HPSS included 28 items. After exploratory factor analysis, the 24-item HPSS consisted of seven factors: problem-solving perception, detection control, identifying problem attributes, setting problem-solving goals, seeking preventive strategies, evaluating strategies, and immediate management. The Cronbach’s α for the total HPSS was 0.83. Conclusions The HPSS was verified as being valid and reliable. Future studies should further test and improve the instrument to increase its effectiveness in helping people with diabetes manage their hypoglycaemia-related problems. PMID:27059292

  19. Development and validation of the hypoglycaemia problem-solving scale for people with diabetes mellitus.

    PubMed

    Wu, Fei-Ling; Juang, Jyuhn-Huarng; Lin, Chia-Hung

    2016-06-01

To develop and psychometrically test a new instrument, the hypoglycaemia problem-solving scale (HPSS), which was designed to measure how well people with diabetes mellitus manage their hypoglycaemia-related problems. A cross-sectional survey design approach was used to validate the performance assessment instrument. Patients who had a diagnosis of type 1 or type 2 diabetes mellitus for at least 1 year, who were being treated with insulin and who had experienced at least one hypoglycaemic episode within the previous 6 months were eligible for inclusion in the study. A total of 313 patients were included in the study. The initial draft of the HPSS included 28 items. After exploratory factor analysis, the 24-item HPSS consisted of seven factors: problem-solving perception, detection control, identifying problem attributes, setting problem-solving goals, seeking preventive strategies, evaluating strategies, and immediate management. The Cronbach's α for the total HPSS was 0.83. The HPSS was verified as being valid and reliable. Future studies should further test and improve the instrument to increase its effectiveness in helping people with diabetes manage their hypoglycaemia-related problems.
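
    The Cronbach's α reported for the HPSS is computed from item and total-score variances. A minimal sketch of the statistic (toy response data, not the study's 313-patient sample):

    ```python
    def variance(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def cronbach_alpha(responses):
        """responses: rows = subjects, columns = questionnaire items."""
        k = len(responses[0])
        items = [[row[j] for row in responses] for j in range(k)]
        totals = [sum(row) for row in responses]
        return (k / (k - 1)) * (1 - sum(variance(it) for it in items)
                                / variance(totals))

    # Two perfectly consistent items give alpha = 1
    alpha_perfect = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
    # Less consistent items give a lower alpha
    alpha_mixed = cronbach_alpha([[1, 2], [2, 1], [3, 3]])
    ```

    The reported 0.83 for the 24-item total falls in the range conventionally read as good internal consistency.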

  20. Pre-Proposal Assessment of Reliability for Spacecraft Docking with Limited Information

    NASA Technical Reports Server (NTRS)

    Brall, Aron

    2013-01-01

This paper addresses the problem of estimating the reliability of a critical system function, as well as its impact on the system reliability, when limited information is available. The approach addresses the basic function reliability, and then the impact of multiple attempts to accomplish the function. The dependence of subsequent attempts on prior failure to accomplish the function is also addressed. The autonomous docking of two spacecraft was the specific example that generated the inquiry, and the resultant impact on total reliability generated substantial interest in presenting the results, due to the relative insensitivity of overall performance to basic function reliability and moderate degradation given sufficient attempts to accomplish the required goal. The application of the methodology allows proper emphasis on the characteristics that can be estimated with some knowledge, and insulates the integrity of the design from those characteristics that cannot be properly estimated with any rational value of uncertainty. The nature of NASA's missions contains a great deal of uncertainty due to the pursuit of new science or operations. This approach can be applied to any function where multiple attempts at success, with or without degradation, are allowed.
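
    The multiple-attempt effect described above can be sketched in a few lines. The degradation model (a constant multiplicative factor applied after each failed attempt) and the numbers are illustrative, not the paper's actual docking parameters:

    ```python
    def mission_success(r0, attempts, degrade=1.0):
        """Probability that at least one of `attempts` tries succeeds.
        Each failed try multiplies the per-attempt reliability by `degrade`."""
        p_all_fail = 1.0
        r = r0
        for _ in range(attempts):
            p_all_fail *= (1.0 - r)
            r *= degrade      # next attempt is conditioned on the prior failure
        return 1.0 - p_all_fail

    # With several attempts allowed, overall success is fairly insensitive
    # to the basic per-attempt reliability
    p_low = mission_success(0.6, attempts=5, degrade=0.9)
    p_high = mission_success(0.9, attempts=5, degrade=0.9)
    ```

    Even with a modest per-attempt reliability of 0.6 and 10-percent degradation per failure, five attempts push overall success above 95 percent, which is the insensitivity the paper highlights.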

  1. Probabilistic Design of a Wind Tunnel Model to Match the Response of a Full-Scale Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Stroud, W. Jefferson; Krishnamurthy, T.; Spain, Charles V.; Naser, Ahmad S.

    2005-01-01

An approach is presented for carrying out the reliability-based design of a plate-like wing that is part of a wind tunnel model. The goal is to design the wind tunnel model to match the stiffness characteristics of the wing box of a flight vehicle while satisfying strength-based risk/reliability requirements that prevent damage to the wind tunnel model and fixtures. The flight vehicle is a modified F/A-18 aircraft. The design problem is solved using reliability-based optimization techniques. The objective function to be minimized is the difference between the displacements of the wind tunnel model and the corresponding displacements of the flight vehicle. The design variables control the thickness distribution of the wind tunnel model. Displacements of the wind tunnel model change with the thickness distribution, while displacements of the flight vehicle are a set of fixed data. The only constraint imposed is that the probability of failure is less than a specified value. Failure is assumed to occur if the stress caused by aerodynamic pressure loading is greater than the specified strength allowable. Two uncertain quantities are considered: the allowable stress and the thickness distribution of the wind tunnel model. Reliability is calculated using Monte Carlo simulation with response surfaces that provide approximate values of stresses. The response surface equations are, in turn, computed from finite element analyses of the wind tunnel model at specified design points. Because the response surface approximations were fit over a small region centered about the current design, the response surfaces were refit periodically as the design variables changed. Coarse-grained parallelism was used to simultaneously perform multiple finite element analyses. 
Studies carried out in this paper demonstrate that this scheme of using moving response surfaces and coarse-grained computational parallelism reduces the execution time of the Monte Carlo simulation enough to make the design problem tractable. The results of the reliability-based designs performed in this paper show that large decreases in the probability of stress-based failure can be realized with only small sacrifices in the ability of the wind tunnel model to represent the displacements of the full-scale vehicle.
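
    The response-surface Monte Carlo idea above can be sketched as follows. The "finite element analysis" is a hypothetical closed form used only to generate sample points, and the thickness and allowable-stress distributions are invented; the sketch shows only the pattern of fitting a local surface and sampling it cheaply.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_stress(t):
        # Stand-in for a finite element analysis of the wind tunnel model
        # (hypothetical closed form, for illustration only)
        return 100.0 / t

    # Fit a quadratic response surface through a few "FE" runs near the design t0 = 2.0
    t_samples = np.array([1.8, 1.9, 2.0, 2.1, 2.2])
    coeffs = np.polyfit(t_samples, expensive_stress(t_samples), 2)
    surface = np.poly1d(coeffs)

    # Monte Carlo over the uncertain thickness and allowable stress,
    # evaluating only the cheap surface rather than the FE model
    n = 100_000
    thickness = rng.normal(2.0, 0.05, n)
    allowable = rng.normal(80.0, 5.0, n)
    p_fail = float(np.mean(surface(thickness) > allowable))
    ```

    Because the surface is only trustworthy near the design point where it was fit, the paper refits it periodically as the optimizer moves, which is the "moving response surface" scheme.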

  2. Performance in Mathematical Problem Solving as a Function of Comprehension and Arithmetic Skills

    ERIC Educational Resources Information Center

    Voyer, Dominic

    2011-01-01

    Many factors influence a student's performance in word (or textbook) problem solving in class. Among them is the comprehension process the pupils construct during their attempt to solve the problem. The comprehension process may include some less formal representations, based on pupils' real-world knowledge, which support the construction of a…

  3. The Effects of Schema-Based Instruction on the Mathematical Problem Solving of Students with Emotional and Behavioral Disorders

    ERIC Educational Resources Information Center

    Peltier, Corey; Vannest, Kimberly J.

    2018-01-01

    The current study examines the effects of schema instruction on the problem-solving performance of four second-grade students with emotional and behavioral disorders. The existence of a functional relationship between the schema instruction intervention and problem-solving accuracy in mathematics is examined through a single case experiment using…

  4. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    NASA Astrophysics Data System (ADS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-06-01

    The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime of aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment's reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  5. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-06-08

    The forecasting of reliability and life of aeronautical hardware requires recognition of many and various destructive processes that deteriorate the health/maintenance status thereof. The aging of technical components of aircraft as an armament system proves of outstanding significance to reliability and safety of the whole system. The aging process is usually induced by many and various factors, just to mention mechanical, biological, climatic, or chemical ones. The aging is an irreversible process and considerably affects (i.e. reduces) reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict reliability and lifetime ofmore » aeronautical hardware. An increment in values of diagnostic parameters is introduced to formulate then, using the characteristic function and after some rearrangements, the partial differential equation. An analytical dependence for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment’s reliability and lifetime. The in-service collected or the life tests delivered data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.« less

  6. The design of optimal electric power demand management contracts

    NASA Astrophysics Data System (ADS)

    Fahrioglu, Murat

    1999-11-01

Our society derives a quantifiable benefit from electric power. In particular, forced outages or blackouts have enormous consequences on society, one of which is loss of economic surplus. Electric utilities try to provide a reliable supply of electric power to their customers. Maximum customer benefit derives from minimum cost and sufficient supply availability. Customers willing to share in "availability risk" can derive further benefit by participating in controlled outage programs. Specifically, whenever utilities foresee dangerous loading patterns, there is a need for a rapid reduction in demand, either system-wide or at specific locations. The utility needs to get relief in order to solve its problems quickly and efficiently. This relief can come from customers who agree to curtail their loads upon request in exchange for an incentive fee. This thesis shows how utilities can get efficient load relief while maximizing their economic benefit. This work also shows how estimated customer cost functions can be calibrated, using existing utility data, to help in designing efficient demand management contracts. In order to design such contracts, optimal mechanism design is adopted from "Game Theory" and applied to the interaction between a utility and its customers. The idea behind mechanism design is to design an incentive structure that encourages customers to sign up for the right contract and reveal their true value of power. If a utility has demand management contracts with customers at critical locations, most operational problems can be solved efficiently. This thesis illustrates how locational attributes of customers incorporated into demand management contract design can have a significant impact in solving system problems. Such demand management contracts can also be used by an Independent System Operator (ISO). During times of congestion a loss of economic surplus occurs. 
When the market is too slow or cannot help relieve congestion, demand management can help solve the problem. Another tool the ISO requires for security purposes is reserves. Even though demand management contracts may not be a good substitute for spinning reserves, they are adequate to augment or replace supplemental and backup reserves.
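
    The screening logic behind such contracts, offering a menu so that each customer type self-selects and reveals its true cost, can be shown in a toy two-type example. All numbers (types, probabilities, the utility's value of relief, the grids) are hypothetical, and the brute-force search stands in for the thesis's analytic mechanism design:

    ```python
    from itertools import product

    # Two customer types with private curtailment cost theta (per MW);
    # the utility values load relief at v per MW
    theta = {"low": 1.0, "high": 2.0}
    prob = {"low": 0.5, "high": 0.5}
    v = 4.0

    x_grid = [0.0, 1.0, 2.0, 3.0]             # curtailment levels (MW)
    t_grid = [0.5 * i for i in range(25)]     # incentive payments, 0..12

    def menu_profit(xL, tL, xH, tH):
        """Expected utility profit if the menu is incentive compatible (IC)
        and individually rational (IR) for both types; None otherwise."""
        for typ, (x_own, t_own), (x_oth, t_oth) in (
                ("low", (xL, tL), (xH, tH)), ("high", (xH, tH), (xL, tL))):
            u_own = t_own - theta[typ] * x_own
            u_oth = t_oth - theta[typ] * x_oth
            if u_own < -1e-9 or u_own < u_oth - 1e-9:   # IR or IC violated
                return None
        return prob["low"] * (v * xL - tL) + prob["high"] * (v * xH - tH)

    best_profit = max(
        p for xL, tL, xH, tH in product(x_grid, t_grid, x_grid, t_grid)
        if (p := menu_profit(xL, tL, xH, tH)) is not None)
    ```

    The IC constraints force the utility to leave an information rent to the low-cost type; the search returns the profit-maximizing menu among those that customers would truthfully self-select into.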

  7. Development of C60-based labeling reagents for the determination of low-molecular-weight compounds by matrix assisted laser desorption ionization mass (I): Determination of amino acids in microliter biofluids.

    PubMed

    Wu, Pin; Xiao, Hua-Ming; Ding, Jun; Deng, Qian-Yun; Zheng, Fang; Feng, Yu-Qi

    2017-04-01

Quantification of low molecular weight compounds (<800 Da) using matrix assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI MS) is challenging due to the matrix signal interference in the low m/z region and the poor reproducibility of MS responses. In this study, a C60 labeling-MALDI MS strategy was proposed for the fast, sensitive and reliable determination of amino acids (AAs) in biofluids. An N-hydroxysuccinimide functionalized C60 was synthesized as the labeling reagent and added as an 880 Da tag to AAs; a carboxylic acid containing C60 was employed as the internal standard to normalize MS variations. This solved the inherent problems of MALDI MS for small molecule analysis. The entire analytical procedure, which consisted of simple protein precipitation and 10 min of derivatization in a microwave prior to the MALDI MS analysis, could be accomplished within 20 min with high throughput and great sample matrix tolerance. AA quantification showed good linearity from 0.7 to 70.0 μM with correlation coefficients (R) larger than 0.9954. The limits of detection were 70.0-300.0 fmol. Good reproducibility and reliability of the method were demonstrated by intra-day and inter-day precision with relative standard deviations less than 13.8%, and the recovery in biofluid ranged from 80.4% to 106.8%. This approach could be used with 1 μL of urine, serum, plasma, whole blood, or cerebrospinal fluid. Most importantly, the C60 labeling strategy is a universal approach for MALDI MS analysis of various LMW compounds because functionalized C60 is now available on demand.
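
    The internal-standard calibration underlying the reported linearity works by regressing the analyte-to-internal-standard signal ratio against concentration, then inverting the fit for unknowns. A minimal sketch with synthetic, perfectly linear data (the real calibration uses measured MS signals across 0.7 to 70.0 μM):

    ```python
    # Synthetic calibration data: ratio = analyte signal / C60 internal standard signal
    concs = [0.7, 7.0, 14.0, 35.0, 70.0]      # micromolar standards
    ratios = [0.05, 0.50, 1.00, 2.50, 5.00]   # illustrative signal ratios

    n = len(concs)
    mean_x, mean_y = sum(concs) / n, sum(ratios) / n
    sxx = sum((x - mean_x) ** 2 for x in concs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, ratios))
    slope = sxy / sxx                          # least-squares slope
    intercept = mean_y - slope * mean_x
    r2 = sxy ** 2 / (sxx * sum((y - mean_y) ** 2 for y in ratios))

    def quantify(ratio):
        """Back-calculate the concentration of an unknown from its signal ratio."""
        return (ratio - intercept) / slope
    ```

    Normalizing to the co-spotted internal standard is what cancels the shot-to-shot MALDI response variation that otherwise ruins small-molecule quantification.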

  8. Control of Ga-oxide interlayer growth and Ga diffusion in SiO2/GaN stacks for high-quality GaN-based metal-oxide-semiconductor devices with improved gate dielectric reliability

    NASA Astrophysics Data System (ADS)

    Yamada, Takahiro; Watanabe, Kenta; Nozaki, Mikito; Yamada, Hisashi; Takahashi, Tokio; Shimizu, Mitsuaki; Yoshigoe, Akitaka; Hosoi, Takuji; Shimura, Takayoshi; Watanabe, Heiji

    2018-01-01

A simple and feasible method for fabricating high-quality and highly reliable GaN-based metal-oxide-semiconductor (MOS) devices was developed. The direct chemical vapor deposition of SiO2 films on GaN substrates forming Ga-oxide interlayers was carried out to fabricate SiO2/GaOx/GaN stacked structures. Although well-behaved hysteresis-free GaN-MOS capacitors with extremely low interface state densities below 10^10 cm^-2 eV^-1 were obtained by postdeposition annealing, Ga diffusion into the overlying SiO2 layers severely degraded the dielectric breakdown characteristics. However, this problem was solved by rapid thermal processing, leading to the superior performance of the GaN-MOS devices in terms of interface quality, insulating property, and gate dielectric reliability.

  9. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.
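
    The fault-tree-to-Markov-chain reduction that HARP performs can be illustrated in miniature: a duplex system (two identical units, failure rate lam each, no repair) becomes a three-state chain whose forward equations are integrated for the system reliability. The rates and the simple Euler integrator are illustrative, not HARP's internal solver:

    ```python
    import math

    # States: 0 = both units up, 1 = one up, 2 = system failed; R(t) = P0 + P1
    lam, t_end, dt = 0.5, 1.0, 1e-4
    p = [1.0, 0.0, 0.0]
    for _ in range(int(round(t_end / dt))):
        dp0 = -2.0 * lam * p[0]               # either unit fails
        dp1 = 2.0 * lam * p[0] - lam * p[1]   # inflow from state 0, outflow to failure
        dp2 = lam * p[1]
        p = [p[0] + dt * dp0, p[1] + dt * dp1, p[2] + dt * dp2]
    r_numeric = p[0] + p[1]

    # Closed-form check for this chain: R(t) = 2*exp(-lam*t) - exp(-2*lam*t)
    r_exact = 2.0 * math.exp(-lam * t_end) - math.exp(-2.0 * lam * t_end)
    ```

    Sequence-dependent gates enlarge the state space, but the pattern is the same: the chain encodes which failure orders matter, and solving its equations yields the system reliability.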

  10. The Intelligent Control System and Experiments for an Unmanned Wave Glider.

    PubMed

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

Designing the control system of an Unmanned Wave Glider (UWG) is challenging because the vehicle is weakly maneuverable and subject to large time lags and large disturbances, which makes it difficult to establish an accurate mathematical model. Meanwhile, to complete marine environment monitoring autonomously on long time scales and large spatial scales, a UWG demands high intelligence and reliability. This paper focuses on the "Ocean Rambler" UWG. First, the intelligent control system architecture is designed based on the cerebrum basic function combination zone theory and a hierarchic control method. The hardware and software design of the embedded motion control system is mainly discussed, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following of the UWG caused by marine environment disturbances, especially currents. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of the weakly maneuverable carrier under large disturbance. Finally, simulation experiments were carried out, and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation experiments and sea trial results prove that the proposed intelligent control system, guidance law, and controller have favorable control performance, and the feasibility and reliability of the designed intelligent control system of the UWG are verified.
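
    The basic building blocks named above, LOS guidance and an S-surface control law, can be sketched as follows. The S-surface form (a sigmoid of the weighted error and error rate, bounded in (-1, 1)) is the standard one from the literature; the gains and waypoint are hypothetical, and this is not the paper's improved controller or its adaptive PID law.

    ```python
    import math

    def s_surface(e, de, k1=3.0, k2=3.0):
        """S-surface control law: smooth, bounded output in (-1, 1) driven by
        the error e and its rate de. Gains k1, k2 are illustrative."""
        return 2.0 / (1.0 + math.exp(-k1 * e - k2 * de)) - 1.0

    def los_heading(x, y, wx, wy):
        """Line-of-sight desired heading from position (x, y) toward waypoint (wx, wy)."""
        return math.atan2(wy - y, wx - x)

    # Heading error to the next waypoint drives the bounded rudder command
    psi = 0.0                                    # current heading (rad)
    err = los_heading(0.0, 0.0, 10.0, 10.0) - psi
    rudder = s_surface(err, 0.0)
    ```

    The sigmoid saturates gracefully for large errors, which suits a weakly maneuverable vehicle: commands stay bounded while small errors still receive near-proportional correction.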

  11. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks

    PubMed Central

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-01-01

Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that measures the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). The proposed method is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
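
    The core idea, replacing sample entropy's hard 0/1 match with a full-range similarity, can be sketched by swapping the Heaviside threshold for a sigmoid. This is an illustrative reimplementation, not the authors' FMSE code: the sigmoid similarity, its softness parameter s, and the defaults are assumptions standing in for whatever similarity function the paper specifies.

    ```python
    import math, random

    def soft_sample_entropy(x, m=2, r=0.2, s=0.04):
        """Sample-entropy-like statistic using a sigmoid similarity (values in
        (0, 1)) instead of the usual hard threshold at tolerance r; s controls
        how soft the transition is (illustrative defaults)."""
        def sim(d):
            return 1.0 / (1.0 + math.exp((d - r) / s))
        def weight(mm):
            # Sum of soft similarities over all template pairs of length mm
            total = 0.0
            for i in range(len(x) - mm):
                for j in range(i + 1, len(x) - mm):
                    d = max(abs(x[i + q] - x[j + q]) for q in range(mm))  # Chebyshev
                    total += sim(d)
            return total
        return -math.log(weight(m + 1) / weight(m))

    ent_const = soft_sample_entropy([1.0] * 30)     # regular series: low entropy
    random.seed(0)
    ent_noisy = soft_sample_entropy([random.random() for _ in range(30)])
    ```

    Because the similarity varies smoothly with distance, a point drifting across the tolerance r changes the entropy gradually rather than in a jump, which is exactly the instability of hard-threshold sample entropy on short series that FMSE targets.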

  12. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks.

    PubMed

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-04-06

Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that measures the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). The proposed method is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.

  13. The Intelligent Control System and Experiments for an Unmanned Wave Glider

    PubMed Central

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

    Designing the control system of an Unmanned Wave Glider (UWG) is challenging because the vehicle is weakly maneuverable, has large time lags, and is subject to large disturbances, which makes it difficult to establish an accurate mathematical model. Meanwhile, to complete marine environment monitoring autonomously over long time scales and large spatial scales, a UWG places high demands on intelligence and reliability. This paper focuses on the “Ocean Rambler” UWG. First, the intelligent control system architecture is designed based on the cerebrum basic function combination zone theory and a hierarchic control method. The hardware and software design of the embedded motion control system is mainly discussed, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following of the UWG caused by marine environment disturbances, especially currents. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of a weakly maneuverable carrier under large disturbances. Finally, simulation experiments were carried out, and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation experiments and sea trial results prove that the proposed intelligent control system, guidance law, and controller have favorable control performance, and the feasibility and reliability of the designed intelligent control system of the UWG are verified. PMID:28005956
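
    The combination of LOS guidance with PID compensation for a constant disturbance can be sketched as follows, under loudly simplifying assumptions: a first-order yaw model, made-up gains, and a constant disturbance standing in for current. None of this is the “Ocean Rambler” implementation.

```python
import math

def los_heading(x, y, wx, wy):
    """Line-of-sight guidance: desired heading points at the next waypoint."""
    return math.atan2(wy - y, wx - x)

class PIDHeading:
    """PID on the wrapped heading error; the integral term removes the
    steady-state error that a constant disturbance (e.g. current) causes."""
    def __init__(self, kp=1.2, ki=0.3, kd=0.5, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, desired, actual):
        # Wrap the error into [-pi, pi] so the controller turns the short way.
        err = math.atan2(math.sin(desired - actual), math.cos(desired - actual))
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(steps=600):
    """Track a 45-degree course under a constant yaw disturbance with a
    first-order yaw-rate model (illustrative, not a UWG model)."""
    pid = PIDHeading()
    desired = math.radians(45.0)
    heading, rate, disturbance = 0.0, 0.0, -0.2
    for _ in range(steps):
        u = pid.step(desired, heading)
        rate += (u + disturbance - rate) * pid.dt  # first-order lag to u + d
        heading += rate * pid.dt
    return heading
```

    With the integral gain set to zero the simulated heading settles off-course by roughly disturbance/kp, which is the steady-state error an integral (or self-adapting) term is meant to compensate.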

  14. Stress and Intimate Partner Aggression

    PubMed Central

    Eckhardt, Christopher I.; Parrott, Dominic J.

    2016-01-01

    Evidence suggests that stressed couples also tend to be aggressive couples. Chronic external stresses interact with individuals’ dispositional and regulatory deficiencies, resulting in a spillover of these stresses into the relationship. High individual stress in combination with problematic interaction styles and problem-solving abilities increases the likelihood of IPA. We applied the I3 Model to better organize the instigating, impelling, and inhibiting factors and processes that moderate the stress-IPA association. Evidence suggests that certain forms of stress, such as IPA victimization, reliably instigate IPA perpetration, with weak inhibitory processes and impaired problem solving moderating the stress-IPA association. More research is needed that specifies the ‘perfect storm’ of factors that increase our understanding of how, and for whom, stress increases IPA risk. PMID:28497106

  15. H∞ robust fault-tolerant controller design for an autonomous underwater vehicle's navigation control system

    NASA Astrophysics Data System (ADS)

    Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian

    2010-03-01

    In order to improve the security and reliability of autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem, which permitted the use of linear matrix inequalities (LMI) to solve for the system's H∞ controller. Different actuator failures were also expressed mathematically, allowing the H∞ robust controller to accommodate these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.

  16. Stress and Intimate Partner Aggression.

    PubMed

    Eckhardt, Christopher I; Parrott, Dominic J

    2017-02-01

    Evidence suggests that stressed couples also tend to be aggressive couples. Chronic external stresses interact with individuals' dispositional and regulatory deficiencies, resulting in a spillover of these stresses into the relationship. High individual stress in combination with problematic interaction styles and problem-solving abilities increases the likelihood of IPA. We applied the I3 Model to better organize the instigating, impelling, and inhibiting factors and processes that moderate the stress-IPA association. Evidence suggests that certain forms of stress, such as IPA victimization, reliably instigate IPA perpetration, with weak inhibitory processes and impaired problem solving moderating the stress-IPA association. More research is needed that specifies the 'perfect storm' of factors that increase our understanding of how, and for whom, stress increases IPA risk.

  17. Laser Processed Condensing Heat Exchanger Technology Development

    NASA Technical Reports Server (NTRS)

    Hansen, Scott; Wright, Sarah; Wallace, Sarah; Hamilton, Tanner; Dennis, Alexander; Zuhlke, Craig; Roth, Nick; Sanders, John

    2017-01-01

    The reliance on non-permanent coatings in Condensing Heat Exchanger (CHX) designs is a significant technical issue to be solved before long-duration spaceflight can occur. Therefore, high-reliability CHXs have been identified by the Evolvable Mars Campaign (EMC) as critical technologies needed to move beyond low Earth orbit. The Laser Processed Condensing Heat Exchanger project aims to solve these problems through the use of femtosecond laser processed surfaces, which have unique wetting properties and potentially exhibit anti-microbial growth properties. These surfaces were investigated to identify whether they would be suitable candidates for a replacement CHX surface. The areas researched in this project include microbial growth testing, siloxane flow testing in which laser processed surfaces were exposed to siloxanes in an air stream, and manufacturability.

  18. Impact of image quality on reliability of the measurements of left ventricular systolic function and global longitudinal strain in 2D echocardiography

    PubMed Central

    Nagata, Yasufumi; Kado, Yuichiro; Onoue, Takeshi; Otani, Kyoko; Nakazono, Akemi; Otsuji, Yutaka; Takeuchi, Masaaki

    2018-01-01

    Background: Left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS) play important roles in diagnosis and management of cardiac diseases. However, the issue of the accuracy and reliability of LVEF and GLS remains to be solved. Image quality is one of the most important factors affecting measurement variability. The aim of this study was to investigate whether improved image quality could reduce observer variability. Methods: Two sets of three apical images were acquired using relatively old- and new-generation ultrasound imaging systems (Vivid 7 and Vivid E95) in 308 subjects. Image quality was assessed by endocardial border delineation index (EBDI) using a 3-point scoring system. Three observers measured the LVEF and GLS, and these values and inter-observer variability were investigated. Results: Image quality was significantly better with Vivid E95 (EBDI: 26.8 ± 5.9) than with Vivid 7 (22.8 ± 6.3, P < 0.0001). Regarding the inter-observer variability of LVEF, the r-value, bias, 95% limits of agreement and intra-class correlation coefficient for Vivid 7 were comparable to those for Vivid E95. The % variabilities were significantly lower for Vivid E95 (5.3–6.5%) than for Vivid 7 (6.5–7.5%). Regarding GLS, all observer variability parameters were better for Vivid E95 than for Vivid 7. Improvements in image quality yielded benefits to both LVEF and GLS measurement reliability. Multivariate analysis showed that image quality was indeed an important factor in observer variability in the measurement of LVEF and GLS. Conclusions: The new-generation ultrasound imaging system offers improved image quality and reduces inter-observer variability in the measurement of LVEF and GLS. PMID:29432198

  19. Impact of image quality on reliability of the measurements of left ventricular systolic function and global longitudinal strain in 2D echocardiography.

    PubMed

    Nagata, Yasufumi; Kado, Yuichiro; Onoue, Takeshi; Otani, Kyoko; Nakazono, Akemi; Otsuji, Yutaka; Takeuchi, Masaaki

    2018-03-01

    Left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS) play important roles in diagnosis and management of cardiac diseases. However, the issue of the accuracy and reliability of LVEF and GLS remains to be solved. Image quality is one of the most important factors affecting measurement variability. The aim of this study was to investigate whether improved image quality could reduce observer variability. Two sets of three apical images were acquired using relatively old- and new-generation ultrasound imaging systems (Vivid 7 and Vivid E95) in 308 subjects. Image quality was assessed by endocardial border delineation index (EBDI) using a 3-point scoring system. Three observers measured the LVEF and GLS, and these values and inter-observer variability were investigated. Image quality was significantly better with Vivid E95 (EBDI: 26.8 ± 5.9) than with Vivid 7 (22.8 ± 6.3, P < 0.0001). Regarding the inter-observer variability of LVEF, the r-value, bias, 95% limits of agreement and intra-class correlation coefficient for Vivid 7 were comparable to those for Vivid E95. The % variabilities were significantly lower for Vivid E95 (5.3-6.5%) than for Vivid 7 (6.5-7.5%). Regarding GLS, all observer variability parameters were better for Vivid E95 than for Vivid 7. Improvements in image quality yielded benefits to both LVEF and GLS measurement reliability. Multivariate analysis showed that image quality was indeed an important factor in observer variability in the measurement of LVEF and GLS. The new-generation ultrasound imaging system offers improved image quality and reduces inter-observer variability in the measurement of LVEF and GLS. © 2018 The authors.
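
    The observer-agreement statistics reported here (bias and 95% limits of agreement between two readers) can be computed with a minimal Bland-Altman sketch; the sample measurements in the usage example are invented for illustration and the function name is hypothetical.

```python
import math

def bland_altman(obs1, obs2):
    """Bland-Altman agreement between two observers: mean difference (bias)
    and 95% limits of agreement, bias +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(obs1, obs2)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the paired differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Usage would be, for example, two observers' LVEF readings on the same studies: `bland_altman([60, 55, 62], [58, 54, 63])` returns the bias and the interval within which about 95% of inter-observer differences are expected to fall.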

  20. Design and Research of the Sewage Treatment Control System

    NASA Astrophysics Data System (ADS)

    Chu, J.; Hu, W. W.

    Due to the rapid development of China's economy, water pollution has become a problem that must be faced; in particular, how to deal with industrial wastewater has become a top priority. In wastewater treatment, PLC-based control systems have met the design requirements for real-time performance, reliability, precision, and so on. The integration of sequence control and process control in a PLC gives it high reliability, a simple network, and convenient, flexible use, making the PLC a powerful tool for small and medium-sized industrial automation. Therefore, a sewage treatment control system that takes a PLC as its core can, to a certain extent, solve the problem of industrial wastewater.

  1. Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubauer, J.; Lundstrom, B.; Simpson, M.

    2014-06-01

    The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.

  2. Intelligent pump test system based on virtual instrument

    NASA Astrophysics Data System (ADS)

    Ma, Jungong; Wang, Shifu; Wang, Zhanlin

    2003-09-01

    The intelligent pump system is a key component of the aircraft hydraulic system and can solve problems such as sharply increasing temperature. Because the performance of the intelligent pump directly determines that of the aircraft hydraulic system and seriously affects flight security and reliability, it is important to test all performance parameters of the intelligent pump during design and development, and advanced, reliable, and complete test equipment is necessary for achieving this goal. In this paper, the application of virtual instrument and computer network technology to aircraft intelligent pump testing is presented. The composition of the hardware, software, and hydraulic circuit in this system is designed and implemented.

  3. [Development of a proverb test for assessment of concrete thinking problems in schizophrenic patients].

    PubMed

    Barth, A; Küfferle, B

    2001-11-01

    Concretism is considered an important aspect of schizophrenic thought disorder. Traditionally it is measured using the method of proverb interpretation, in which metaphoric proverbs are presented and the subject is asked to state their meaning; interpretations are recorded and scored for concretistic tendencies. However, this method has two problems: its reliability is doubtful, and it is rather complicated to perform. In this paper, a new version of a multiple-choice proverb test is presented which can solve these problems in a reliable and economic manner. Using the new test, it has been shown that schizophrenic patients have greater deficits in proverb interpretation than depressive patients.

  4. Changes in Problem-Solving Capacity and Association With Spontaneous Brain Activity After a Single Electroconvulsive Treatment in Major Depressive Disorder.

    PubMed

    Du, Lian; Qiu, Haitang; Liu, Haixia; Zhao, Wenjing; Tang, Yong; Fu, Yixiao; Li, Xirong; Qiu, Tian; Hu, Hua; Meng, Huaqing; Luo, Qinghua

    2016-03-01

    Modified electroconvulsive therapy (MECT) has been regarded as the most effective antidepressant therapy, despite its cognitive side effects. However, how MECT influences problem-solving capacity in major depressive disorder (MDD), as well as its underlying neurobiological mechanisms, remains unclear. The present study aimed to assess alterations in problem-solving capacity after MECT and to explore spontaneous brain activity using amplitudes of low-frequency fluctuations (ALFF)/fractional ALFF. Thirteen first-episode, treatment-naive MDD patients treated with MECT were recruited. We collected resting-state functional magnetic resonance imaging data and evaluated their Modified Card Sorting Test performance before and after a single session of MECT. Another 11 MDD patients without MECT were also recruited and interviewed with the Modified Card Sorting Test twice as a control group. After a single MECT session, MDD patients showed significantly decreased ALFF in the right cerebellar posterior lobe. Compared with the control group, perseverative errors significantly decreased after MECT, controlling for practice effects. Some cognitive functional changes correlated significantly with changes in ALFF in several brain regions, including Brodmann areas BA9, BA19, BA21, and BA48, the right thalamus, left cerebellum, and right postcentral gyrus. MECT could improve problem-solving capacity, even after controlling for practice effects, and could induce changes in spontaneous brain activity. These changes in cognitive functioning might result from changes in the cerebral functions of some regions, including the frontal cortex, a key region for problem-solving capacity.

  5. The reliability and validity of a sexual functioning questionnaire.

    PubMed

    Corty, E W; Althof, S E; Kurit, D M

    1996-01-01

    The present study assessed the reliability and validity of a measure of sexual functioning, the CMSH-SFQ, for male patients and their partners. The CMSH-SFQ measures erectile and orgasmic functioning, sexual drive, frequency of sexual behavior, and sexual satisfaction. Test-retest reliability was assessed with 19 males and 19 females for the baseline CMSH-SFQ. Criterion validity was measured by comparing the answers of 25 male patients to those of their partners at baseline and follow-up. The majority of items had acceptable levels of reliability and validity. The CMSH-SFQ provides a reliable and valid device that can be used to measure global sexual functioning in men and their partners and may be used to evaluate the efficacy of treatments for sexual dysfunctions. Limitations and suggestions for use of the CMSH-SFQ are addressed.

  6. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    AFRL-OSR-VA-TR-2015-0202. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making Under Complex and Dynamic Conditions. Grant number FA9550-12-1... The report models team functioning as teams solve complex problems and proposes means to improve the performance of teams under changing or adversarial conditions.

  7. Technology Use for Diabetes Problem Solving in Adolescents with Type 1 Diabetes: Relationship to Glycemic Control

    PubMed Central

    Kumah-Crystal, Yaa A.; Hood, Korey K.; Ho, Yu-Xian; Lybarger, Cindy K.; O'Connor, Brendan H.; Rothman, Russell L.

    2015-01-01

    Background: This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). Subjects and Methods: A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. Results: Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, variables significantly associated with A1C were the socioeconomic status (β=−0.26, P<0.01), Diabetes Adolescent Problem Solving Questionnaire (β=−0.26, P=0.01), and TUPS (β=0.26, P=0.01). Aside from the Diabetes Self-Care Inventory—Revised, each block added significantly to the model R2. The final model R2 was 0.22 for modeling A1C (P<0.001). Conclusions: Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem solving tools for youth with type 1 diabetes. PMID:25826706

  8. Technology Use for Diabetes Problem Solving in Adolescents with Type 1 Diabetes: Relationship to Glycemic Control.

    PubMed

    Kumah-Crystal, Yaa A; Hood, Korey K; Ho, Yu-Xian; Lybarger, Cindy K; O'Connor, Brendan H; Rothman, Russell L; Mulvaney, Shelagh A

    2015-07-01

    This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, variables significantly associated with A1C were the socioeconomic status (β = -0.26, P < 0.01), Diabetes Adolescent Problem Solving Questionnaire (β = -0.26, P = 0.01), and TUPS (β = 0.26, P = 0.01). Aside from the Diabetes Self-Care Inventory-Revised, each block added significantly to the model R2. The final model R2 was 0.22 for modeling A1C (P < 0.001). Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem solving tools for youth with type 1 diabetes.

  9. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  10. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transform the matrix equation into a system of linear algebraic equations, which is then solved by Gaussian elimination. The accuracy and validity of this method are discussed by solving two numerical examples and comparing the results with wavelet and other methods.
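
    The pipeline described here (discretize via a quadrature rule, obtain a system of linear algebraic equations, solve it by Gaussian elimination) can be illustrated on a simpler relative, a Fredholm integral equation of the second kind, using a Nyström discretization with trapezoidal weights in place of the authors' Legendre basis and Gauss-Legendre quadrature; this is a sketch of the general pattern, not the paper's method, and all names are illustrative.

```python
def solve_fredholm2(kernel, f, lam=1.0, a=0.0, b=1.0, n=101):
    """Nystrom discretization of u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt:
    quadrature turns the integral equation into the linear algebraic system
    (I - lam*K*W) u = f, solved here by Gaussian elimination."""
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    w = [h] * n                    # trapezoidal weights (for simplicity)
    w[0] = w[-1] = h / 2.0
    A = [[(1.0 if i == j else 0.0) - lam * w[j] * kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    rhs = [f(x) for x in xs]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
            rhs[r] -= factor * rhs[col]
    # Back substitution.
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = (rhs[r] - s) / A[r][r]
    return xs, u
```

    For K(x,t) = x*t and f(x) = (2/3)x the exact solution is u(x) = x, which the discrete solution reproduces to quadrature accuracy.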

  11. Numerical methods for the inverse problem of density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Daniel S.; Wasserman, Adam

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  12. Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (4).

    PubMed

    Murase, Kenya

    2016-01-01

    Partial differential equations are often used in the field of medical physics. In this (final) issue, methods for solving partial differential equations are introduced, including separation of variables, integral transforms (Fourier and Fourier-sine transforms), Green's functions, and series expansion methods. Some examples are also given, in which the integral transform and Green's function methods are applied to solving Pennes' bioheat transfer equation, and the Fourier series expansion method is applied to the Navier-Stokes equation for analyzing the wall shear stress in blood vessels. Finally, the author hopes that this series will be helpful for people who engage in medical physics.
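
    As a minimal illustration of the series expansion method, the sketch below solves the 1D heat equation u_t = u_xx on [0, 1] with zero boundary values by a Fourier-sine expansion. It is a toy stand-in for the bioheat and Navier-Stokes applications discussed in the series, with illustrative names throughout.

```python
import math

def sine_coefficients(u0, n_terms, n_quad=400):
    """b_n = 2 * int_0^1 u0(x) sin(n*pi*x) dx, via the trapezoidal rule."""
    h = 1.0 / n_quad
    coeffs = []
    for n in range(1, n_terms + 1):
        s = 0.0
        for i in range(n_quad + 1):
            x = i * h
            weight = 0.5 if i in (0, n_quad) else 1.0
            s += weight * u0(x) * math.sin(n * math.pi * x)
        coeffs.append(2.0 * h * s)
    return coeffs

def heat_solution(coeffs, x, t):
    """u(x,t) = sum_n b_n exp(-(n*pi)^2 t) sin(n*pi*x) solves u_t = u_xx
    with u(0,t) = u(1,t) = 0: each sine mode simply decays exponentially."""
    return sum(b * math.exp(-(n * math.pi) ** 2 * t) * math.sin(n * math.pi * x)
               for n, b in enumerate(coeffs, start=1))
```

    For the initial condition u0(x) = sin(pi*x), only the first coefficient is nonzero, so the solution reduces to exp(-pi^2 t) sin(pi*x), which is a convenient check.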

  13. The Role of Executive Function in Arithmetic Problem-Solving Processes: A Study of Third Graders

    ERIC Educational Resources Information Center

    Viterbori, Paola; Traverso, Laura; Usai, M. Carmen

    2017-01-01

    This study investigated the roles of different executive function (EF) components (inhibition, shifting, and working memory) in 2-step arithmetic word problem solving. A sample of 139 children aged 8 years old and regularly attending the 3rd grade of primary school were tested on 6 EF tasks measuring different EF components, a reading task and a…

  14. Bio-Inspired Human-Level Machine Learning

    DTIC Science & Technology

    2015-10-25

    Extensions to high-level cognitive functions such as the anagram-solving problem. We expect that the bio-inspired human-level machine learning combined with... numbers of 10^11 neurons and 10^14 synaptic connections in the human brain. In previous work, we experimentally demonstrated the feasibility of cognitive

  15. The Use of Museum Based Science Centres to Expose Primary School Students in Developing Countries to Abstract and Complex Concepts of Nanoscience and Nanotechnology

    ERIC Educational Resources Information Center

    Saidi, Trust; Sigauke, Esther

    2017-01-01

    Nanotechnology is an emerging technology, and it is regarded as the basis for the next industrial revolution. In developing countries, nanotechnology promises to solve everyday challenges, such as the provision of potable water, reliable energy sources and effective medication. However, there are several challenges in the exploitation of…

  16. Influence of Planning Time and First-Move Strategy on Tower of Hanoi Problem-Solving Performance of Mentally Retarded Young Adults and Nonretarded Children.

    ERIC Educational Resources Information Center

    Spitz, Herman H.; And Others

    1985-01-01

    In two experiments using a computer-interfaced problem, planning time of 50 retarded young adults was as long as or longer than that of higher performing nonretarded children. In neither group was there a reliable correlation between planning time and performance. There were group differences in preferred strategies, possibly associated with…

  17. "Fly-by-Wireless" : A Revolution in Aerospace Architectures for Instrumentation and Control

    NASA Technical Reports Server (NTRS)

    Studor, George F.

    2007-01-01

    The conference presentation provides background information on Fly-by-Wireless technologies as well as reasons for implementation, CANEUS project goals, cost of change for instrumentation, reliability, focus areas, conceptual Hybrid SHMS architecture for future space habitats, real world problems that the technology can solve, evolution of Micro-WIS systems, and a WLEIDS system overview and end-to-end system design.

  18. CAUSE Resiliency (West Coast) Experiment Final Report

    DTIC Science & Technology

    2012-10-01

    implemented in BCeMap and can therefore consume alerting messages direct from MASAS. This would solve the issue with the update frequency and speed of the...in production for use by the Provincial Emergency Operations Centres and brings together multiple static layers together with several dynamic data...executive order established the requirement for an “effective, reliable, integrated, flexible, and comprehensive system to alert and warn the

  19. Developing and validating the Communication Function Classification System for individuals with cerebral palsy

    PubMed Central

    HIDECKER, MARY JO COOLEY; PANETH, NIGEL; ROSENBAUM, PETER L; KENT, RAYMOND D; LILLIE, JANET; EULENBERG, JOHN B; CHESTER, KEN; JOHNSON, BRENDA; MICHALSEN, LAUREN; EVATT, MORGAN; TAYLOR, KARA

    2011-01-01

    Aim: The purpose of this study was to create and validate a Communication Function Classification System (CFCS) for children with cerebral palsy (CP) that can be used by a wide variety of individuals who are interested in CP. This paper reports the content validity, interrater reliability, and test–retest reliability of the CFCS for children with CP. Method: An 11-member development team created comprehensive descriptions of the CFCS levels, and four nominal groups comprising 27 participants critiqued these levels. Within a Delphi survey, 112 participants commented on the clarity and usefulness of the CFCS. Interrater reliability was completed by 61 professionals and 68 parents/relatives who classified 69 children with CP aged 2 to 18 years. Test–retest reliability was completed by 48 professionals who allowed at least 2 weeks between classifications. The participants who assessed the CFCS were all relevant stakeholders: adults with CP, parents of children with CP, educators, occupational therapists, physical therapists, physicians, and speech–language pathologists. Results: The interrater reliability of the CFCS was 0.66 between two professionals and 0.49 between a parent and a professional. Professional interrater reliability improved to 0.77 for classification of children older than 4 years. The test–retest reliability was 0.82. Interpretation: The CFCS demonstrates content validity and shows very good test–retest reliability, good professional interrater reliability, and moderate parent–professional interrater reliability. Combining the CFCS with the Gross Motor Function Classification System and the Manual Ability Classification System contributes to a functional performance view of daily life for individuals with CP, in accordance with the World Health Organization’s International Classification of Functioning, Disability and Health. PMID:21707596

  20. Modulation stability and dispersive optical soliton solutions of higher order nonlinear Schrödinger equation and its applications in mono-mode optical fibers

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad; Seadawy, Aly R.; Lu, Dianchen

    2018-01-01

    In mono-mode optical fibers, the higher-order nonlinear Schrödinger equation (NLSE) describes the propagation of extremely short light pulses. We constructed optical solitons and solitary wave solutions of the higher-order NLSE for mono-mode optical fibers by employing the modified extended mapping method, which has important applications in mathematics and physics. Furthermore, the conditions on the parameters under which optical bright and dark solitons can form in this medium are also given. The movement of the obtained solutions is also shown graphically, which helps in understanding the physical phenomena of this model. Modulation instability analysis is utilized to discuss the stability of the model, verifying that all obtained solutions are exact and stable. Many other such models arising in the applied sciences can also be solved by this reliable, powerful and effective method, and the method can also be applied to other sorts of higher-order nonlinear problems in contemporary areas of research.

  1. Decentralized Fuzzy MPC on Spatial Power Control of a Large PHWR

    NASA Astrophysics Data System (ADS)

    Liu, Xiangjie; Jiang, Di; Lee, Kwang Y.

    2016-08-01

    Reliable power control for stabilizing spatial oscillations is important for ensuring the safe operation of a modern pressurized heavy water reactor (PHWR), since these spatial oscillations can cause “flux tilting” in the reactor core. In this paper, a decentralized fuzzy model predictive control (DFMPC) is proposed for spatial control of a PHWR. Due to the load-dependent dynamics of the nuclear power plant, fuzzy modeling is used to approximate the nonlinear process. A fuzzy Lyapunov function and a “quasi-min-max” strategy are utilized in designing the DFMPC to reduce conservatism. Plant-wide stability is achieved by the asymptotically positive realness constraint (APRC) for this decentralized MPC. The optimization problem is solved by a receding horizon scheme involving the linear matrix inequality (LMI) technique. Through dynamic simulations, it is demonstrated that the designed DFMPC can effectively suppress spatial oscillations developed in the PHWR and shows advantages over the typical parallel distributed compensation (PDC) control scheme.

  2. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    The dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
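
The inversion loop described above can be sketched in a few dozen lines. The artificial bee colony below minimizes the least-squares misfit between "measured" and modeled optical depths; the extinction kernel is a made-up stand-in for the Mie efficiency (the paper uses full Mie theory), and the grids, bounds and colony sizes are illustrative assumptions:

```python
import math, random

random.seed(0)

# Hypothetical toy extinction kernel standing in for the Mie efficiency
# (the paper uses full Mie theory; this is only an illustrative stand-in).
def kernel(r, lam):
    return math.pi * r * r * (r * r / (r * r + lam * lam))

RADII = [0.1 * i for i in range(1, 51)]      # integration grid (um)
LAMBDAS = [0.4, 0.5, 0.6, 0.8, 1.0, 1.2]     # wavelengths (um)

def lognormal(r, mu, sigma):
    return (1.0 / (r * sigma * math.sqrt(2 * math.pi))
            * math.exp(-(math.log(r) - mu) ** 2 / (2 * sigma ** 2)))

def extinction(mu, sigma):
    # Lambert-Beer: optical depth as a quadrature over the size distribution
    return [sum(kernel(r, lam) * lognormal(r, mu, sigma) * 0.1 for r in RADII)
            for lam in LAMBDAS]

TRUE = (math.log(0.8), 0.5)                  # "unknown" L-N parameters
MEASURED = extinction(*TRUE)                 # synthetic measurement

def cost(p):
    mu, sigma = p
    if not (0.05 < sigma < 2.0):
        return 1e9
    return sum((m - d) ** 2 for m, d in zip(extinction(mu, sigma), MEASURED))

# --- minimal artificial bee colony -------------------------------------
LOW, HIGH = [-2.0, 0.05], [2.0, 2.0]
SN, LIMIT, CYCLES = 15, 20, 150

def rand_source():
    return [random.uniform(l, h) for l, h in zip(LOW, HIGH)]

sources = [rand_source() for _ in range(SN)]
trials = [0] * SN

def neighbour(i):
    k = random.choice([j for j in range(SN) if j != i])
    d = random.randrange(2)
    v = sources[i][:]
    v[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
    v[d] = min(max(v[d], LOW[d]), HIGH[d])
    return v

def try_improve(i):
    v = neighbour(i)
    if cost(v) < cost(sources[i]):
        sources[i], trials[i] = v, 0
    else:
        trials[i] += 1

for _ in range(CYCLES):
    for i in range(SN):                      # employed bees
        try_improve(i)
    fits = [1.0 / (1.0 + cost(s)) for s in sources]
    total = sum(fits)
    for _ in range(SN):                      # onlooker bees (roulette wheel)
        r, acc = random.uniform(0, total), 0.0
        for i, f in enumerate(fits):
            acc += f
            if acc >= r:
                try_improve(i)
                break
    for i in range(SN):                      # scout bees replace stale sources
        if trials[i] > LIMIT:
            sources[i], trials[i] = rand_source(), 0

best = min(sources, key=cost)
print(best, cost(best))
```

The scout phase is what gives ABC its resilience to local minima: a food source that has failed to improve LIMIT times is abandoned and re-seeded at random.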

  3. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    PubMed

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and emergency logistics problems are accordingly receiving more and more attention. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. The original model is then simplified under the scenario in which the vehicle only follows the optimal path from the emergency logistics center to the affected point, and is solved using the Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.
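
The grey-theory road-evaluation step can be illustrated with grey relational analysis (GRA). The path attributes, values and weights below are hypothetical; only the GRA mechanics (normalization, grey relational coefficients, weighted grade) follow the standard recipe:

```python
# Grey relational analysis (GRA) sketch for ranking candidate paths.
# The attribute names, values, and weights below are hypothetical.
RHO = 0.5  # distinguishing coefficient, conventionally 0.5

# rows: candidate paths; columns: [travel time (min), damage risk, info gap]
# all three attributes are cost-type: smaller is better
paths = {
    "A": [42.0, 0.30, 0.20],
    "B": [55.0, 0.10, 0.05],
    "C": [48.0, 0.25, 0.40],
}
weights = [0.5, 0.3, 0.2]

# 1. normalise cost-type attributes to [0, 1] (larger = better)
cols = list(zip(*paths.values()))
norm = {}
for name, row in paths.items():
    norm[name] = [(max(c) - v) / (max(c) - min(c)) for v, c in zip(row, cols)]

# 2. deviations from the ideal series (all ones)
deltas = {n: [abs(1.0 - v) for v in row] for n, row in norm.items()}
dmin = min(min(d) for d in deltas.values())
dmax = max(max(d) for d in deltas.values())

# 3. grey relational coefficients, then the weighted relational grade
def grade(name):
    coeffs = [(dmin + RHO * dmax) / (d + RHO * dmax) for d in deltas[name]]
    return sum(w * c for w, c in zip(weights, coeffs))

best = max(paths, key=grade)
print({n: round(grade(n), 3) for n in paths}, "->", best)
```

The path with the highest grey relational grade is taken as the reliable and optimal route; the model is then solved along that single path.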

  4. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty

    PubMed Central

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and emergency logistics problems are accordingly receiving more and more attention. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. The original model is then simplified under the scenario in which the vehicle only follows the optimal path from the emergency logistics center to the affected point, and is solved using the Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method. PMID:26417946

  5. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth in users, services, educational content and resources, E-Learning systems face challenges in optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing, the BlueSky cloud framework, is presented. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework incorporates traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems as a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.

  6. Crystal structure of a Schistosoma mansoni septin reveals the phenomenon of strand slippage in septins dependent on the nature of the bound nucleotide.

    PubMed

    Zeraik, Ana E; Pereira, Humberto M; Santos, Yuri V; Brandão-Neto, José; Spoerner, Michael; Santos, Maiara S; Colnago, Luiz A; Garratt, Richard C; Araújo, Ana P U; DeMarco, Ricardo

    2014-03-14

    Septins are filament-forming GTP-binding proteins involved in important cellular events, such as cytokinesis, barrier formation, and membrane remodeling. Here, we present two crystal structures of the GTPase domain of a Schistosoma mansoni septin (SmSEPT10), one bound to GDP and the other to GTP. The structures have been solved at an unprecedented resolution for septins (1.93 and 2.1 Å, respectively), which has allowed for unambiguous structural assignment of regions previously poorly defined. Consequently, we provide a reliable model for functional interpretation and a solid foundation for future structural studies. Upon comparing the two complexes, we observe for the first time the phenomenon of a strand slippage in septins. Such slippage generates a front-back communication mechanism between the G and NC interfaces. These data provide a novel mechanistic framework for the influence of nucleotide binding to the GTPase domain, opening new possibilities for the study of the dynamics of septin filaments.

  7. Fluidic Vectoring of a Planar Incompressible Jet Flow

    NASA Astrophysics Data System (ADS)

    Mendez, Miguel Alfonso; Scelzo, Maria Teresa; Enache, Adriana; Buchlin, Jean-Marie

    2018-06-01

    This paper presents an experimental, numerical and theoretical analysis of the performance of a fluidic vectoring device for controlling the direction of a turbulent, two-dimensional, low-Mach-number (incompressible) jet flow. The investigated design is co-flow secondary injection with a Coanda surface, which allows vectoring angles up to 25° with no moving mechanical parts. A simple empirical model of the vectoring process is presented and validated against experimental and numerical data. The experiments consist of flow visualization and image processing for the automatic detection of the jet centerline; the numerical simulations are carried out by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations closed with the k-ω SST turbulence model, using the pisoFoam solver from OpenFOAM. The experimental validation on three different geometrical configurations has shown that the model is capable of providing a fast and reliable evaluation of the device's performance as a function of the operating conditions.

  8. A GRID OF THREE-DIMENSIONAL STELLAR ATMOSPHERE MODELS OF SOLAR METALLICITY. I. GENERAL PROPERTIES, GRANULATION, AND ATMOSPHERIC EXPANSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trampedach, Regner; Asplund, Martin; Collet, Remo

    2013-05-20

    Present grids of stellar atmosphere models are the workhorses for interpreting stellar observations and determining their fundamental parameters. These models rely on greatly simplified models of convection, however, lending less predictive power to such models of late-type stars. We present a grid of improved and more reliable stellar atmosphere models of late-type stars, based on deep, three-dimensional (3D), convective, stellar atmosphere simulations. This grid is intended for general use in interpreting observations and improving stellar and asteroseismic modeling. We solve the Navier-Stokes equations in 3D, concurrently with the radiative transfer equation, for a range of atmospheric parameters covering most of stellar evolution with convection at the surface. We emphasize the use of the best available atomic physics for quantitative predictions and comparisons with observations. We present granulation size, convective expansion of the acoustic cavity, and the asymptotic adiabat as functions of the atmospheric parameters.

  9. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto-optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
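
The notion of Pareto optimality underlying the algorithm is easy to state in code. A minimal sketch (minimization convention; the point set is illustrative):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))   # (3, 4) and (5, 5) are dominated
```

A multi-objective GA maintains and refines exactly such a non-dominated set; selection schemes like the binning algorithm mentioned above decide which front members seed the next generation.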

  10. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, some important problems remain to be solved, one of which is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, but it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model and a magnetic sensor array, we propose a nonlinear optimization algorithm using a random complex method, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional orientation parameters. The stability and noise immunity of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, with respect to the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoising" capacity, with a larger admissible range of initial guess values.

  11. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
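
The flavor of such energy-minimizing networks can be conveyed with a 1-D toy version of the surface reconstruction problem: gradient descent on a quadratic "membrane" energy that balances fidelity to sparse data against smoothness. The grid size, data values and weights are illustrative, and the quadratic energy omits the discontinuity (nonconvex) terms discussed in the paper:

```python
# 1-D sketch of reconstructing a smooth profile from sparse samples by
# gradient descent on E(u) = sum_i (u_i - d_i)^2 + LAM * sum (u_{i+1} - u_i)^2,
# in the spirit of the analog networks above (all values illustrative).
N, LAM, STEP, ITERS = 20, 2.0, 0.05, 5000
data = {0: 0.0, 9: 1.0, 19: 0.0}          # sparse observations d_i

u = [0.0] * N
for _ in range(ITERS):
    g = [0.0] * N
    for i, d in data.items():             # gradient of the data-fidelity term
        g[i] += 2.0 * (u[i] - d)
    for i in range(N - 1):                # gradient of the smoothness term
        diff = u[i + 1] - u[i]
        g[i] -= 2.0 * LAM * diff
        g[i + 1] += 2.0 * LAM * diff
    u = [ui - STEP * gi for ui, gi in zip(u, g)]

print([round(v, 2) for v in u])
```

The analog network computes the same fixed point in continuous time, with node voltages playing the role of u and resistive couplings implementing the smoothness term; handling discontinuities requires the nonconvex extension the authors describe.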

  12. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    PubMed

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, the computational burden increases sharply as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recurrent neural network recently proposed by Zhang et al. [called the Zhang neural network (ZNN)] converges to the solution ideally. Advancements in complex-valued neural networks shed light on extending the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
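
A scalar sketch of the ZNN design with a sign-bi-power activation, for the toy time-varying equation a(t)x = b(t) (the paper treats the matrix-valued complex Sylvester equation; coefficients, gain and step size here are illustrative):

```python
import math

def sbp(e, r=0.5):
    """Sign-bi-power activation, the nonlinearity behind finite-time ZNN
    convergence: 0.5 * (|e|^r + |e|^(1/r)) * sign(e)."""
    return 0.5 * (abs(e) ** r + abs(e) ** (1.0 / r)) * math.copysign(1.0, e)

# time-varying scalar equation a(t) x = b(t)  (illustrative coefficients)
a = lambda t: 2.0 + math.sin(t)
b = lambda t: math.cos(t)
da = lambda t: math.cos(t)       # derivative of a
db = lambda t: -math.sin(t)      # derivative of b

GAMMA, DT, T_END = 10.0, 1e-3, 5.0
x, t = 5.0, 0.0                  # deliberately poor initial state
while t < T_END:
    e = a(t) * x - b(t)
    # ZNN design formula d/dt e = -GAMMA * phi(e), solved for x_dot
    xdot = (-GAMMA * sbp(e) - da(t) * x + db(t)) / a(t)
    x += DT * xdot               # forward-Euler discretisation
    t += DT

err = abs(a(T_END) * x - b(T_END))
print(x, err)
```

The ZNN trick is to impose exponential (or, with sign-bi-power, finite-time) decay on the residual e(t) itself, so the state tracks the moving solution instead of chasing a frozen snapshot of it.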

  13. Determining Functional Reliability of Pyrotechnic Mechanical Devices

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Multhaup, Herbert A.

    1997-01-01

    This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive, successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing functional margin are analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over those of go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
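
A simplified normal-theory sketch of the functional-margin idea (not the authors' actual small-sample procedure): estimate the delivered and required energies from a small sample, express the margin in combined standard deviations, and convert it to a reliability estimate. All numbers are made up:

```python
import math

# Hypothetical energy measurements (illustrative values, in joules) from a
# small sample of tests, in the spirit of the margin approach described above.
required = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.0, 2.0]   # energy needed
delivered = [3.1, 2.9, 3.0, 3.2, 3.0, 2.8, 3.1, 3.0, 2.9, 3.0]  # energy supplied

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):                       # sample standard deviation (n - 1)
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Margin in units of the combined standard deviation; reliability from the
# normal approximation of P(delivered > required).
margin = mean(delivered) - mean(required)
sigma = (sd(delivered) ** 2 + sd(required) ** 2) ** 0.5
z = margin / sigma
reliability = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
print(round(z, 2), reliability)
```

A rigorous small-sample treatment would replace the normal quantile with a tolerance-interval factor that widens as the sample shrinks; the point of the sketch is only that margin, not consecutive successes, carries the reliability information.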

  14. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    NASA Astrophysics Data System (ADS)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is the best way to convey information from source to destination with flexibility and mobility, and Bluetooth is a wireless technology suited to short distances. A wireless sensor network (WSN), on the other hand, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet wireless technique in sensor nodes limits network depth and placement. The introduction of the scatternet removes these network restrictions, but at the cost of reliability in data transmission. As the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  15. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    NASA Astrophysics Data System (ADS)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high-spatial-resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and thus difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty of the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images using testing points with different accuracies and reliabilities, derived from high-accuracy reference data and field measurements. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for supplying reliable orthophoto images to users.

  16. The role of test-retest reliability in measuring individual and group differences in executive functioning.

    PubMed

    Paap, Kenneth R; Sawi, Oliver

    2016-12-01

    Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual-differences-in-RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual-differences-in-RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
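
Test-retest reliability of a difference score is just the correlation of that score across the two sessions. A minimal sketch with made-up flanker interference scores (RT_incongruent minus RT_congruent, in ms):

```python
# Test-retest reliability as the Pearson correlation between a measure taken
# on two days.  The flanker interference scores below are made-up values.
day1 = [35, 48, 22, 61, 40, 29, 55, 44, 33, 50]
day2 = [30, 52, 25, 58, 35, 33, 49, 47, 28, 46]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(round(pearson(day1, day2), 3))
```

Difference scores tend to show lower correlations than the single mean RTs they are built from because subtracting two noisy measures removes shared true-score variance while the two error terms add.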

  17. Tinnitus functional index: validation of the German version for Switzerland.

    PubMed

    Peter, Nicole; Kleinjung, Tobias; Jeker, Raphael; Meyer, Martin; Klaghofer, Richard; Weidt, Steffi

    2017-05-05

    Different standardized questionnaires are used to assess tinnitus severity, making comparisons across studies difficult. These questionnaires are also used to measure treatment-related changes in tinnitus, although they were not designed for this purpose. To solve these problems, a new questionnaire, the Tinnitus Functional Index (TFI), has been established. The TFI is highly responsive to treatment-related change and promises to become the new gold standard in tinnitus evaluation. The aim of the current study was to validate a German version of the TFI for the German-speaking population of Switzerland. At the ENT department of the University Hospital Zurich, 264 subjects completed an online survey including the German version for Switzerland of the TFI, the Tinnitus Handicap Inventory (THI), the Beck Depression Inventory (BDI), the Beck Anxiety Inventory (BAI) and sociodemographic variables. Internal consistency of the TFI was calculated with Cronbach's alpha coefficient. Pearson correlation coefficients were used for the test-retest reliability of the TFI, and to investigate convergent validity with the THI and discriminant validity with the BDI and BAI, respectively. Factor structure was assessed using a principal component analysis with oblique rotation, and the extracted factors were compared with the original questionnaire. The German version of the TFI for Switzerland showed excellent internal consistency (Cronbach's alpha of 0.97) and excellent test-retest reliability of 0.91. Convergent validity with the THI was high (r = 0.86), while discriminant validity with the BAI and BDI was moderate (BAI: r = 0.60; BDI: r = 0.65). In the factor analysis, only five factors with one main factor could be extracted, instead of the eight factors described in the original version; nevertheless, relations to the original eight subscales could be demonstrated. The German version of the TFI for Switzerland is a suitable instrument for measuring the impact of tinnitus. The reliability and validity of this version are comparable with those of the original version of the TFI. Although this study found only five factors in the factor analysis, relations to the original eight subscales were identified. Therefore, the German version of the TFI for Switzerland can deliver relevant information regarding the different tinnitus domains. Clinical trial registration number on ClinicalTrials.gov: NCT01837368.
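
Cronbach's alpha, used above for internal consistency, is straightforward to compute: one minus the ratio of summed item variances to total-score variance, rescaled by the number of items. The item responses below are hypothetical:

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    def var(xs):                  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# hypothetical responses: 4 items, 6 respondents, 0-10 severity ratings
items = [
    [8, 2, 6, 4, 9, 3],
    [7, 3, 6, 5, 8, 2],
    [9, 1, 5, 4, 9, 4],
    [8, 2, 7, 3, 8, 3],
]
print(round(cronbach_alpha(items), 3))
```

When items move together, as here, total-score variance dwarfs the summed item variances and alpha approaches 1; values like the 0.97 reported above indicate a highly homogeneous scale.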

  18. 76 FR 23801 - North American Electric Reliability Corporation; Order Approving Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... primary control functionality. Nonetheless, until data from drills, exercises and tests can support a... control center becomes inoperable and to conduct reviews and tests, at least annually, to ensure viability... Preparedness and Operations (EOP) Reliability Standard EOP- 008-1 (Loss of Control Center Functionality). The...

  19. Ego-resiliency reloaded: a three-component model of general resiliency.

    PubMed

    Farkas, Dávid; Orosz, Gábor

    2015-01-01

    Ego-resiliency (ER) is a capacity that enables individuals to adapt to constantly changing environmental demands. The goal of our research was to identify components of Ego-resiliency, and to test the reliability and the structural and convergent validity of the refined version of the ER11 Ego-resiliency scale. In Study 1 we used a factor analytical approach to assess structural validity and to identify factors of Ego-resiliency. Comparing alternative factor-structures, a hierarchical model was chosen including three factors: Active Engagement with the World (AEW), Repertoire of Problem Solving Strategies (RPSS), and Integrated Performance under Stress (IPS). In Study 2, the convergent and divergent validity of the ER11 scale and its factors and their relationship with resilience were tested. The results suggested that resiliency is a double-faced construct, with one function to keep the personality system stable and intact, and the other function to adjust the personality system in an adaptive way to the dynamically changing environment. The stability function is represented by the RPSS and IPS components of ER. Their relationship pattern is similar to other constructs of resilience, e.g. the Revised Connor-Davidson Resilience Scale (R-CD-RISC). The flexibility function is represented by the unit of RPSS and AEW components. In Study 3 we tested ER11 on a Hungarian online representative sample and integrated the results in a model of general resiliency. This framework allows us to grasp both the stability-focused and the plasticity-focused nature of resiliency.

  20. Ego-Resiliency Reloaded: A Three-Component Model of General Resiliency

    PubMed Central

    Farkas, Dávid; Orosz, Gábor

    2015-01-01

    Ego-resiliency (ER) is a capacity that enables individuals to adapt to constantly changing environmental demands. The goal of our research was to identify components of Ego-resiliency, and to test the reliability and the structural and convergent validity of the refined version of the ER11 Ego-resiliency scale. In Study 1 we used a factor analytical approach to assess structural validity and to identify factors of Ego-resiliency. Comparing alternative factor-structures, a hierarchical model was chosen including three factors: Active Engagement with the World (AEW), Repertoire of Problem Solving Strategies (RPSS), and Integrated Performance under Stress (IPS). In Study 2, the convergent and divergent validity of the ER11 scale and its factors and their relationship with resilience were tested. The results suggested that resiliency is a double-faced construct, with one function to keep the personality system stable and intact, and the other function to adjust the personality system in an adaptive way to the dynamically changing environment. The stability function is represented by the RPSS and IPS components of ER. Their relationship pattern is similar to other constructs of resilience, e.g. the Revised Connor-Davidson Resilience Scale (R-CD-RISC). The flexibility function is represented by the unit of RPSS and AEW components. In Study 3 we tested ER11 on a Hungarian online representative sample and integrated the results in a model of general resiliency. This framework allows us to grasp both the stability-focused and the plasticity-focused nature of resiliency. PMID:25815881

  1. 25 CFR 12.61 - Can I be paid for information that helps solve a crime?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Can I be paid for information that helps solve a crime? 12.61 Section 12.61 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Support Functions § 12.61 Can I be paid for information that helps solve a crime...

  2. 25 CFR 12.61 - Can I be paid for information that helps solve a crime?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Can I be paid for information that helps solve a crime? 12.61 Section 12.61 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Support Functions § 12.61 Can I be paid for information that helps solve a crime...

  3. Can deficits in social problem-solving in people with personality disorder be reversed?

    PubMed

    Crawford, M J

    2007-04-01

    Research evidence is beginning to emerge that social problem-solving can improve the social functioning of people with personality disorder. This approach is particularly important because it may be relatively easy to train healthcare workers to deliver this intervention. However, the costs and cost-effectiveness of social problem-solving need to be established if it is to be made more widely available.

  4. A Python Program for Solving Schrödinger's Equation in Undergraduate Physical Chemistry

    ERIC Educational Resources Information Center

    Srnec, Matthew N.; Upadhyay, Shiv; Madura, Jeffry D.

    2017-01-01

    In undergraduate physical chemistry, Schrödinger's equation is solved for a variety of cases. In doing so, the energies and wave functions of the system can be interpreted to provide connections with the physical system being studied. Solving this equation by hand for a one-dimensional system is a manageable task, but it becomes time-consuming…
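
One standard classroom approach of the kind the article alludes to is a numerical shooting method: integrate the time-independent Schrödinger equation across the domain and bisect on the energy until the boundary condition is met. A sketch for a particle in a unit box (units with ħ = m = 1, so the analytic levels are n²π²/2):

```python
import math

def psi_end(E, n_steps=2000):
    """Integrate psi'' = -2 E psi across a unit box (hbar = m = 1, V = 0)
    with psi(0) = 0, psi'(0) = 1; return psi at x = 1 (classical RK4)."""
    h = 1.0 / n_steps
    y, v = 0.0, 1.0                       # psi and psi'
    f = lambda y_: -2.0 * E * y_
    for _ in range(n_steps):
        k1y, k1v = v, f(y)
        k2y, k2v = v + 0.5 * h * k1v, f(y + 0.5 * h * k1y)
        k3y, k3v = v + 0.5 * h * k2v, f(y + 0.5 * h * k2y)
        k4y, k4v = v + h * k3v, f(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def eigen_energy(lo, hi, tol=1e-10):
    """Bisect on E until psi(1) = 0, i.e. the boundary condition is met."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi_end(lo) * psi_end(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E1 = eigen_energy(3.0, 7.0)               # bracket containing the ground state
print(E1, math.pi ** 2 / 2)               # analytic level: n^2 pi^2 / 2, n = 1
```

For a nonzero potential, only f changes (to -2(E - V(x))·psi), which is what makes the approach attractive for the harmonic oscillator and other textbook cases.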

  5. 25 CFR 12.61 - Can I be paid for information that helps solve a crime?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false Can I be paid for information that helps solve a crime? 12.61 Section 12.61 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Support Functions § 12.61 Can I be paid for information that helps solve a crime...

  6. 25 CFR 12.61 - Can I be paid for information that helps solve a crime?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true Can I be paid for information that helps solve a crime? 12.61 Section 12.61 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Support Functions § 12.61 Can I be paid for information that helps solve a crime...

  7. 25 CFR 12.61 - Can I be paid for information that helps solve a crime?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Can I be paid for information that helps solve a crime? 12.61 Section 12.61 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER INDIAN COUNTRY LAW ENFORCEMENT Support Functions § 12.61 Can I be paid for information that helps solve a crime...

  8. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. 
These basis functions are called complement functions because they are the elements of the complete functions for the system under consideration. We extended this idea to solve the relativistic DE and applied it to the hydrogen and helium atoms, without observing any problems such as variational collapse. Thereafter, we obtained very accurate solutions of the SE for the ground and excited states of the Born-Oppenheimer (BO) and non-BO states of very small systems like He, H2+, H2, and their analogues. For larger systems, however, the overlap and Hamiltonian integrals over the complement functions are not always known mathematically (integration difficulty); therefore we formulated the local SE (LSE) method as an integral-free method. Without any integration, the LSE method gave fairly accurate energies and wave functions for small atoms and molecules. We also calculated continuous potential curves of the ground and excited states of small diatomic molecules by introducing the transferable local sampling method. Although the FC-LSE method is simple, the achievement of chemical accuracy in the absolute energy of larger systems remains time-consuming. The development of more efficient methods for the calculations of ordinary molecules would allow researchers to make these calculations more easily.

  9. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    PubMed

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
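
    The core of the scheme — turning the unknown ansatz parameters into a global error minimization solved by a GA — can be sketched as follows. This is a generic illustration, not the authors' implementation: the two-parameter exponential ansatz, the bounds, and the GA settings (elitism, uniform crossover, uniform mutation plus a small local nudge) are all illustrative assumptions, and `XS`/`TARGET`/`ga_minimize` are hypothetical names.

```python
import math
import random

# Illustrative target data: samples of 2*exp(0.5*x), standing in for the
# exact travelling-wave values that the fitness function would compare against.
XS = [0.0, 0.5, 1.0, 1.5, 2.0]
TARGET = [2.0 * math.exp(0.5 * x) for x in XS]

def fitness(params):
    # Global squared error of the trial ansatz a*exp(b*x) over all sample points.
    a, b = params
    return sum((a * math.exp(b * x) - t) ** 2 for x, t in zip(XS, TARGET))

def ga_minimize(obj, bounds, pop_size=40, generations=150, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=obj)[: pop_size // 4]   # elitism: keep best quarter
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            # uniform crossover: each gene taken from either parent
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:              # uniform mutation (exploration)
                    child[i] = rng.uniform(lo, hi)
                else:                                    # small local refinement step
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.02 * (hi - lo))))
            children.append(child)
        pop = children
    return min(pop, key=obj)

best = ga_minimize(fitness, bounds=[(0.0, 5.0), (0.0, 1.0)])
```

    With the seed fixed the run is deterministic; the paper minimizes the same kind of scalarized global error, but for the Burgers'-Fisher travelling-wave ansatz rather than this plain exponential.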

  10. Numerical Solution to Generalized Burgers'-Fisher Equation Using Exp-Function Method Hybridized with Heuristic Computation

    PubMed Central

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and with the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems. PMID:25811858

  11. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    NASA Astrophysics Data System (ADS)

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrödinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object-oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.

  12. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver function and surface wave is an effective way to diminish the influences of the strong tradeoff among parameters and the different sensitivity to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in the model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observation, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one such competent genetic algorithm, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver function.
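
    In a two-objective inversion like this, Pareto-optimal preference reduces to keeping the non-dominated models. A minimal sketch of that selection step, with made-up misfit pairs rather than actual inversion output:

```python
def pareto_front(candidates):
    """Return the non-dominated (surface-wave misfit, receiver-function misfit) pairs.

    A candidate is dropped if some other candidate is at least as good in
    both objectives and different (i.e. strictly better in at least one);
    smaller misfit is better.
    """
    front = []
    for c in candidates:
        dominated = any(
            all(o <= m for o, m in zip(other, c)) and other != c
            for other in candidates
        )
        if not dominated:
            front.append(c)
    return front

# (3,3) and (4,4) are dominated by (2,2); the other three trade off the two misfits.
models = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
```

    Statistics would then be drawn over the many acceptable models on (or near) this front rather than over a single "best" solution.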

  13. Developing of discrimination experiment to find most adequate model of plant’s multi-nutrient functional response

    NASA Astrophysics Data System (ADS)

    Saltykov, M. Yu; Bartsev, S. I.

    2017-02-01

    To create reliable Closed Ecological Life Support Systems (CELSS) it is necessary to have models which can predict CELSS dynamics with good accuracy. However, it was shown that conventional ecological models cannot describe a CELSS correctly if it is closed by more than one element. This problem can be solved by means of more complex models than conventional ones - so-called flexible metabolism models. However, it is possible that a CELSS can also be described correctly in a “semi-conventional” framework - when only one trophic level is described by a flexible metabolism model. Another problem in CELSS modeling is the existence of different and incompatible hypotheses about relationships between plants' growth rate and amounts of nutrients (functional responses). The difficulty of testing these hypotheses is associated with the multi-nutrient dependency of growth rate, and comprehensive experimental studies are expensive and time-consuming. This work is devoted to testing the hypothesis that the “semi-conventional” approach is enough to describe a CELSS, and to planning the discrimination experiment on selecting the correct type of the plant’s functional response. To do that, three different models of plants (one flexible and two conventional) were investigated both in the scope of a CELSS model and in a chemostat model. Numerical simulations show that each of the models has typical patterns which can be determined in experiment with real plants.

  14. Improving insight and non-insight problem solving with brief interventions.

    PubMed

    Wen, Ming-Ching; Butler, Laurie T; Koutstaal, Wilma

    2013-02-01

    Developing brief training interventions that benefit different forms of problem solving is challenging. In earlier research, Chrysikou (2006) showed that engaging in a task requiring generation of alternative uses of common objects improved subsequent insight problem solving. These benefits were attributed to a form of implicit transfer of processing involving enhanced construction of impromptu, on-the-spot or 'ad hoc' goal-directed categorizations of the problem elements. Following this, it is predicted that the alternative uses exercise should benefit abilities that govern goal-directed behaviour, such as fluid intelligence and executive functions. Similarly, an indirect intervention - self-affirmation (SA) - that has been shown to enhance cognitive and executive performance after self-regulation challenge and when under stereotype threat, may also increase adaptive goal-directed thinking and likewise should bolster problem-solving performance. In Experiment 1, brief single-session interventions, involving either alternative uses generation or SA, significantly enhanced both subsequent insight and visual-spatial fluid reasoning problem solving. In Experiment 2, we replicated the finding of benefits of both alternative uses generation and SA on subsequent insight problem-solving performance, and demonstrated that the underlying mechanism likely involves improved executive functioning. Even brief cognitive- and social-psychological interventions may substantially bolster different types of problem solving and may exert largely similar facilitatory effects on goal-directed behaviours. © 2012 The British Psychological Society.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components - a rapidly varying amplitude equation and a slowly varying shape equation - and each is solved separately on different time scales. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory’s SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so the solution gives an accurate time-dependent solution without having to repeatedly perform the expensive shape solve. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
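
    The two-time-scale split of the Improved Quasi-Static method can be illustrated with a toy one-delayed-group point-kinetics model: the amplitude is advanced with many small deterministic steps, while a placeholder for the expensive Monte Carlo shape solve runs only at coarse intervals. All numbers here (β, Λ, λ, ρ, step sizes) are generic textbook-style values, not TREAT data, and the function names are invented for the sketch.

```python
# Delayed-neutron fraction, neutron generation time (s), precursor decay const (1/s).
BETA, LAMBDA_GEN, LAM_DECAY = 0.0065, 1e-4, 0.08

def step_amplitude(n, c, rho, dt=1e-4):
    # One explicit-Euler step of the point-kinetics "amplitude" equations.
    dn = ((rho - BETA) / LAMBDA_GEN) * n + LAM_DECAY * c
    dc = (BETA / LAMBDA_GEN) * n - LAM_DECAY * c
    return n + dt * dn, c + dt * dc

def run(rho=0.001, fine_steps=2000, coarse_every=500):
    n = 1.0
    c = BETA / (LAMBDA_GEN * LAM_DECAY) * n    # equilibrium precursor level
    shape_solves = 0
    for k in range(fine_steps):
        if k % coarse_every == 0:
            shape_solves += 1                  # stand-in for the rare, expensive shape solve
        n, c = step_amplitude(n, c, rho)
    return n, shape_solves

amplitude, solves = run()
# 2000 cheap amplitude steps, but only 4 coarse "shape" updates
```

    The point of the factorization is visible in the counters: the fine loop runs thousands of times while the costly shape update runs only a handful of times.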

  16. Age-Related Differences in Test-Retest Reliability in Resting-State Brain Functional Connectivity

    PubMed Central

    Song, Jie; Desphande, Alok S.; Meier, Timothy B.; Tudorascu, Dana L.; Vergun, Svyatoslav; Nair, Veena A.; Biswal, Bharat B.; Meyerand, Mary E.; Birn, Rasmus M.; Bellec, Pierre; Prabhakaran, Vivek

    2012-01-01

    Resting-state functional MRI (rs-fMRI) has emerged as a powerful tool for investigating brain functional connectivity (FC). Research in recent years has focused on assessing the reliability of FC across younger subjects within and between scan-sessions. Test-retest reliability in resting-state functional connectivity (RSFC) has not yet been examined in older adults. In this study, we investigated age-related differences in reliability and stability of RSFC across scans. In addition, we examined how global signal regression (GSR) affects RSFC reliability and stability. Three separate resting-state scans from 29 younger adults (18–35 yrs) and 26 older adults (55–85 yrs) were obtained from the International Consortium for Brain Mapping (ICBM) dataset made publicly available as part of the 1000 Functional Connectomes project www.nitrc.org/projects/fcon_1000. 92 regions of interest (ROIs) with 5 cubic mm radius, derived from the default, cingulo-opercular, fronto-parietal and sensorimotor networks, were previously defined based on a recent study. Mean time series were extracted from each of the 92 ROIs from each scan and three matrices of z-transformed correlation coefficients were created for each subject, which were then used for evaluation of multi-scan reliability and stability. The young group showed higher reliability of RSFC than the old group with GSR (p-value = 0.028) and without GSR (p-value <0.001). Both groups showed a high degree of multi-scan stability of RSFC and no significant differences were found between groups. By comparing the test-retest reliability of RSFC with and without GSR across scans, we found a significantly higher proportion of reliable connections in both groups without GSR, but decreased stability. 
Our results suggest that aging is associated with reduced reliability of RSFC which itself is highly stable within-subject across scans for both groups, and that GSR reduces the overall reliability but increases the stability in both age groups and could potentially alter group differences of RSFC. PMID:23227153
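
    A simplified stand-in for the scan-to-scan comparison: Fisher z-transform the connectivity values from two sessions and correlate them. The study itself uses ICC-style reliability over three scans; this sketch, with made-up connectivity values and invented function names, only illustrates the z-transform-then-compare idea.

```python
import math

def fisher_z(r):
    # Fisher r-to-z transform, applied to correlations before comparing them.
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

def scan_rescan_reliability(conn_scan1, conn_scan2):
    # Correlate the z-transformed connectivity values of two scan sessions:
    # a higher correlation means more reliable RSFC for this subject.
    z1 = [fisher_z(r) for r in conn_scan1]
    z2 = [fisher_z(r) for r in conn_scan2]
    return pearson(z1, z2)
```

    Identical connectivity profiles give a reliability of 1.0, and small scan-to-scan noise lowers it accordingly.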

  17. Atypical Learning in Autism Spectrum Disorders: A Functional Magnetic Resonance Imaging Study of Transitive Inference

    PubMed Central

    Solomon, Marjorie; Ragland, J. Daniel; Niendam, Tara A.; Lesh, Tyler A.; Beck, Jonathan S.; Matter, John C.; Frank, Michael J.; Carter, Cameron S.

    2015-01-01

    Objective To investigate the neural mechanisms underlying impairments in generalizing learning shown by adolescents with autism spectrum disorder (ASD). Method Twenty-one high-functioning individuals with ASD aged 12–18 years, and 23 gender, IQ, and age-matched adolescents with typical development (TYP) completed a transitive inference (TI) task implemented using rapid event-related functional magnetic resonance imaging (fMRI). They were trained on overlapping pairs in a stimulus hierarchy of colored ovals where A>B>C>D>E>F and then tested on generalizing this training to new stimulus pairings (AF, BD, BE) in a “Big Game.” Whole-brain univariate, region of interest, and functional connectivity analyses were used. Results During training, TYP exhibited increased recruitment of the prefrontal cortex (PFC), while the group with ASD showed greater functional connectivity between the PFC and the anterior cingulate cortex (ACC). Both groups recruited the hippocampus and caudate comparably; however, functional connectivity between these regions was positively associated with TI performance for only the group with ASD. During the Big Game, TYP showed greater recruitment of the PFC, parietal cortex, and the ACC. Recruitment of these regions increased with age in the group with ASD. Conclusion During TI, TYP recruited cognitive control-related brain regions implicated in mature problem solving/reasoning including the PFC, parietal cortex, and ACC, while the group with ASD showed functional connectivity of the hippocampus and the caudate that was associated with task performance. Failure to reliably engage cognitive control-related brain regions may produce less integrated flexible learning in those with ASD unless they are provided with task support that in essence provides them with cognitive control, but this pattern may normalize with age. PMID:26506585

  18. Building New Bridges between In Vitro and In Vivo in Early Drug Discovery: Where Molecular Modeling Meets Systems Biology.

    PubMed

    Pearlstein, Robert A; McKay, Daniel J J; Hornak, Viktor; Dickson, Callum; Golosov, Andrei; Harrison, Tyler; Velez-Vega, Camilo; Duca, José

    2017-01-01

    Cellular drug targets exist within networked function-generating systems whose constituent molecular species undergo dynamic interdependent non-equilibrium state transitions in response to specific perturbations (i.e., inputs). Cellular phenotypic behaviors are manifested through the integrated behaviors of such networks. However, in vitro data are frequently measured and/or interpreted with empirical equilibrium or steady state models (e.g. Hill, Michaelis-Menten, Briggs-Haldane) relevant to isolated target populations. We propose that cells act as analog computers, "solving" sets of coupled "molecular differential equations" (i.e. represented by populations of interacting species) via "integration" of the dynamic state probability distributions among those populations. Disconnects between biochemical and functional/phenotypic assays (cellular/in vivo) may arise with target-containing systems that operate far from equilibrium, and/or when coupled contributions (including target-cognate partner binding and drug pharmacokinetics) are neglected in the analysis of biochemical results. The transformation of drug discovery from a trial-and-error endeavor to one based on reliable design criteria depends on improved understanding of the dynamic mechanisms powering cellular function/dysfunction at the systems level. Here, we address the general mechanisms of molecular and cellular function and pharmacological modulation thereof. We outline a first principles theory on the mechanisms by which free energy is stored and transduced into biological function, and by which biological function is modulated by drug-target binding. We propose that cellular function depends on dynamic counter-balanced molecular systems necessitated by the exponential behavior of molecular state transitions under non-equilibrium conditions, including positive versus negative mass action kinetics and solute-induced perturbations to the hydrogen bonds of solvating water versus kT. 
Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Mathematical learning disabilities and attention deficit and/or hyperactivity disorder: A study of the cognitive processes involved in arithmetic problem solving.

    PubMed

    Iglesias-Sarmiento, Valentín; Deaño, Manuel; Alfonso, Sonia; Conde, Ángeles

    2017-02-01

    The purpose of this study was to examine the contribution of cognitive functioning to arithmetic problem solving and to explore the cognitive profiles of children with attention deficit and/or hyperactivity disorder (ADHD) and with mathematical learning disabilities (MLD). The sample was made up of a total of 90 students of 4th, 5th, and 6th grade organized in three groups: ADHD (n=30), MLD (n=30), and a typically achieving control group (TA; n=30). Assessment was conducted in two sessions in which the PASS processes and arithmetic problem solving were evaluated. The ADHD group's performance in planning and attention was worse than that of the control group. Children with MLD obtained poorer results than the control group in planning and simultaneous and successive processing. Executive processes predicted arithmetic problem solving in the ADHD group whereas simultaneous processing was the unique predictor in the MLD sample. Children with ADHD and with MLD showed characteristic cognitive profiles. Groups' problem-solving performance can be predicted from their cognitive functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Analysis of electric power industry restructuring

    NASA Astrophysics Data System (ADS)

    Al-Agtash, Salem Yahya

    1998-10-01

    This thesis evaluates alternative structures of the electric power industry in a competitive environment. One structure is based on the principle of creating a mandatory power pool to foster competition and manage system economics. The structure is PoolCo (pool coordination). A second structure is based on the principle of allowing independent multilateral trading and decentralized market coordination. The structure is DecCo (decentralized coordination). The criteria I use to evaluate these two structures are: economic efficiency, system reliability and freedom of choice. Economic efficiency evaluation considers strategic behavior of individual generators as well as behavioral variations of different classes of consumers. A supply-function equilibria model is characterized for deriving bidding strategies of competing generators under PoolCo. It is shown that asymmetric equilibria can exist within the capacities of generators. An augmented Lagrangian approach is introduced to solve iteratively for global optimal operations schedules. Under DecCo, the process involves solving iteratively for system operations schedules. The schedules reflect generators' strategic behavior and brokers' interactions for arranging profitable trades, allocating losses and managing network congestion. In the determination of PoolCo and DecCo operations schedules, overall costs of power generation (start-up and shut-down costs and availability of hydro electric power) as well as losses and costs of transmission network are considered. For system reliability evaluation, I examine the effect of PoolCo and DecCo operating conditions on the system security. Random component failure perturbations are generated to simulate the actual system behavior. This is done using Monte Carlo simulation. Freedom of choice evaluation accounts for schemes' beneficial opportunities and capabilities to respond to consumers' expressed preferences. 
An IEEE 24-bus test system is used to illustrate the concepts developed for economic efficiency evaluation. The system was tested over a two-year time period. The results indicate 2.6684 and 2.7269 percent efficiency loss on average for PoolCo and DecCo, respectively. These values, however, do not represent forecasts of efficiency losses of PoolCo- and DecCo-based competitive industries. Rather, they are illustrations of the efficiency losses for the given IEEE test system, based on the modeling assumptions underlying the framework development.

  1. Methods for Calculating Frequency of Maintenance of Complex Information Security System Based on Dynamics of Its Reliability

    NASA Astrophysics Data System (ADS)

    Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.

    2017-11-01

    This article describes a process of calculating the reliability of a certain complex information security system (CISS), using the example of the technospheric security management model, as well as the ability to determine the frequency of its maintenance from the system reliability parameter, which allows one to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system showing the possibility of its failure (and, as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, there is a certain acceptable level of IS risk assigned by experts for a particular information system; when reliability is a risk-forming factor, an acceptable risk level should be maintained by routine analysis of the condition of the CISS and its elements and their timely service. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such a system's reliability is written. The chart of CISS reliability change is an S-shaped curve which can be divided into 3 periods: an almost invariable high level of reliability, uniform reliability reduction, and an almost invariable low level of reliability. Setting the minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system would meet requirements. Ideally, this period should not be longer than the first period of the graph. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve a voluminous and critical task of information asset risk management.
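
    The use of the S-shaped reliability curve can be sketched numerically: assume a logistic decay for R(t) and bisect for the time at which it first reaches the minimum acceptable level, which bounds the maintenance interval. The curve parameters below are invented for illustration, not taken from the CISS model in the article.

```python
import math

def reliability(t, r_high=0.99, r_low=0.40, k=0.8, t0=12.0):
    # Assumed S-shaped (logistic) decay: a high plateau, a uniform decline
    # around t0, and a low plateau - matching the three periods described.
    return r_low + (r_high - r_low) / (1.0 + math.exp(k * (t - t0)))

def maintenance_deadline(r_min, t_max=100.0, tol=1e-6):
    # Bisection for the time at which reliability first drops to r_min;
    # maintenance should be scheduled no later than this.
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reliability(mid) > r_min:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Ideally the deadline found this way falls inside the first (high-plateau) period of the curve, so the system is serviced before the uniform-decline phase begins.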

  2. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    PubMed

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  3. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We have a theorem that can be used to show that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients. We solve the problem by a variance approach, which transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  4. Automated Design of a High-Velocity Channel

    DTIC Science & Technology

    2006-05-01

    using Newton’s method. 2.2.2 Groundwater Applications Optimization methods are also very useful for solving groundwater problems. Townley et al. [Townley 85] apply present computational algorithms to steady and transient models for groundwater flow. The aquifer storage coefficients, transmissivities... Reliability Analysis", Water Resources Research, Vol. 28, No. 12, December 1992, pp. 3269-3280. [Townley 85] Townley, L. R. and Wilson, J. L

  5. Comparing Models of Helper Behavior to Actual Practice in Telephone Crisis Intervention: A Silent Monitoring Study of Calls to the U.S. 1-800-SUICIDE Network

    ERIC Educational Resources Information Center

    Mishara, Brian L.; Chagnon, Francois; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cecile; Campbell, Julie K.; Berman, Alan

    2007-01-01

    Models of telephone crisis intervention in suicide prevention and best practices were developed from a literature review and surveys of crisis centers. We monitored 2,611 calls to 14 centers using reliable behavioral ratings to compare actual interventions with the models. Active listening and collaborative problem-solving models describe help…

  6. Reliability of Next Generation Power Electronics Packaging Under Concurrent Vibration, Thermal and High Power Loads

    DTIC Science & Technology

    2008-02-01

    combined thermal g effect and initial current field. The model is implemented using an Abaqus user element subroutine and verified against the experimental...Finite Element Formulation The proposed model is implemented with the ABAQUS general purpose finite element program using the thermal-displacement analysis...option. ABAQUS and other commercially available finite element codes do not have the capability to solve the general electromigration problem directly. Thermal

  7. On the reliable and flexible solution of practical subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
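
    The column-pivoting idea — at each step select the regressor whose residual, after orthogonalizing against the already-selected columns, is largest — can be sketched with a greedy Gram-Schmidt pass. This is a generic pivoted-QR-style analogue written for illustration, not Verhaegen's algorithm, and `select_subset` is a hypothetical name.

```python
import math

def select_subset(columns, k):
    # Greedy column-pivoted Gram-Schmidt: repeatedly pick the column with the
    # largest residual norm, then project it out of the remaining columns.
    # Returns the indices of the k selected regressors, in selection order.
    cols = [list(c) for c in columns]
    selected = []
    for _ in range(k):
        norms = [sum(x * x for x in c) for c in cols]
        pivot = max((i for i in range(len(cols)) if i not in selected),
                    key=lambda i: norms[i])
        selected.append(pivot)
        qn = math.sqrt(norms[pivot])
        q = [x / qn for x in cols[pivot]]          # normalized pivot column
        for i in range(len(cols)):
            if i in selected:
                continue
            proj = sum(a * b for a, b in zip(q, cols[i]))
            cols[i] = [a - proj * b for a, b in zip(cols[i], q)]
    return selected
```

    Because selection happens on the original (physical) regression parameters, the chosen indices retain their physical meaning, which is the practical appeal of subset selection via column pivoting.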

  8. Modelling the aggregation process of cellular slime mold by the chemical attraction.

    PubMed

    Atangana, Abdon; Vermeulen, P D

    2014-01-01

    We put into exercise a comparatively innovative analytical modus operandi, the homotopy decomposition method (HDM), for solving a system of nonlinear partial differential equations arising in an attractor one-dimensional Keller-Segel dynamics system. Numerical solutions are given and some properties show evidence of biologically practical reliance on the parameter values. The reliability of HDM and the reduction in computations give HDM a wider applicability.

  9. Use of remote sensing for land use policy formulation

    NASA Technical Reports Server (NTRS)

    Boylan, M.; Vlasin, R. D.

    1976-01-01

    Uses of remote sensing imagery were investigated based on exploring and evaluating the capability and reliability of all kinds of imagery for improving decision making on issues of land use at all scales of governmental administration. Emphasis was placed on applications to solving immediate problems confronting public agencies and private organizations. Resulting applications of remote sensing use by public agencies, public organizations, and related private corporations are described.

  10. Finite-element simulation of ceramic drying processes

    NASA Astrophysics Data System (ADS)

    Keum, Y. T.; Jeong, J. H.; Auh, K. H.

    2000-07-01

    A finite-element simulation for the drying process of ceramics is performed. The heat and moisture movements in green ceramics caused by the temperature gradient, moisture gradient, conduction, convection and evaporation are considered. The finite-element formulation for solving the temperature and moisture distributions, which not only change the volume but also induce the hygro-thermal stress, is carried out. Employing the internally discontinuous interface elements, the numerical divergence problem arising from sudden changes in heat capacity in the phase zone is solved. In order to verify the reliability of the formulation, the drying process of a coal and the wetting process of a graphite epoxy are simulated and the results are compared with the analytical solution and another investigator's result. Finally, the drying process of a ceramic electric insulator is simulated.

  11. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This research paper presents a numerical approximation to the non-linear three-dimensional reaction diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction diffusion equation.
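    The 3-D Douglas ADI scheme itself is involved, but the error-norm comparison reported above can be sketched on a 1-D linear diffusion problem with a known exact solution (the toy problem, grid, and step sizes below are illustrative assumptions, not the paper's setup):

    ```python
    import math

    # Toy problem: u_t = u_xx on (0, 1), u(0, t) = u(1, t) = 0,
    # u(x, 0) = sin(pi x); exact solution u = exp(-pi^2 t) sin(pi x).
    nx = 51
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx            # within the explicit stability limit dt <= dx^2 / 2
    steps = 250
    x = [i * dx for i in range(nx)]
    u = [math.sin(math.pi * xi) for xi in x]

    for _ in range(steps):
        un = u[:]
        for i in range(1, nx - 1):
            u[i] = un[i] + dt / dx**2 * (un[i - 1] - 2.0 * un[i] + un[i + 1])

    t = steps * dt
    exact = [math.exp(-math.pi**2 * t) * math.sin(math.pi * xi) for xi in x]
    errs = [abs(a - b) for a, b in zip(u, exact)]
    l_inf = max(errs)                                  # L-infinity norm
    l2 = math.sqrt(dx * sum(e * e for e in errs))      # discrete L2 norm
    print(f"L2 = {l2:.2e}, Linf = {l_inf:.2e}")
    ```

    An implicit scheme such as Douglas ADI removes the dt <= dx^2/2 restriction that this explicit sketch must respect, which is the main motivation for it in three dimensions.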

  12. Meta-Reasoning: Monitoring and Control of Thinking and Reasoning.

    PubMed

    Ackerman, Rakefet; Thompson, Valerie A

    2017-08-01

    Meta-Reasoning refers to the processes that monitor the progress of our reasoning and problem-solving activities and regulate the time and effort devoted to them. Monitoring processes are usually experienced as feelings of certainty or uncertainty about how well a process has, or will, unfold. These feelings are based on heuristic cues, which are not necessarily reliable. Nevertheless, we rely on these feelings of (un)certainty to regulate our mental effort. Most metacognitive research has focused on memorization and knowledge retrieval, with little attention paid to more complex processes, such as reasoning and problem solving. In that context, we recently developed a Meta-Reasoning framework, used here to review existing findings, consider their consequences, and frame questions for future research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. On the comparison of perturbation-iteration algorithm and residual power series method to solve fractional Zakharov-Kuznetsov equation

    NASA Astrophysics Data System (ADS)

    Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei

    2018-06-01

    In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation with success. Convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use and ready to be applied to a wide range of fractional partial differential equations.
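    For reference, the Caputo fractional derivative of order α used by both methods is defined as

    ```latex
    {}^{C}D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau,
    \qquad n-1 < \alpha < n,\; n \in \mathbb{N}.
    ```

    For α = n the definition reduces to the ordinary n-th derivative, which is why Caputo derivatives are convenient for initial value problems with classical initial data.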

  14. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that iteratively generates the right mixture of polynomials. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order method (RK4), achieved by solving the 1st-order non-linear ordinary differential equation (ODE) commonly found in Riccati differential equations. This research shows improved results relative to the RK4 method. The Polynomial Mixture Method (PMM) shows promising results, with the advantage of continuous estimation and improved accuracy over Mabood et al, RK-4, Multi-Agent NN and the Neuro Method (NM).
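    The RK4 baseline mentioned above can be sketched on a quadratic Riccati equation with a known closed-form solution (the crisp test equation y' = 1 - y², with exact solution y = tanh t, is an illustrative choice, not necessarily the instance studied in the paper):

    ```python
    import math

    def rk4(f, y0, t0, t1, n):
        """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
        h = (t1 - t0) / n
        t, y = t0, y0
        for _ in range(n):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return y

    # Quadratic Riccati equation y' = 1 - y^2, y(0) = 0; exact solution y = tanh(t).
    y_num = rk4(lambda t, y: 1 - y * y, 0.0, 0.0, 1.0, 100)
    print(y_num, math.tanh(1.0))
    ```

    With 100 steps the RK4 global error is of order h⁴ ≈ 1e-8; a continuous-estimation method such as PMM returns an expression valid at every t rather than values on a fixed grid.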

  15. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology in the form of environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, a framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and in the form of methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.

  16. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
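    The turn on/off inequalities referred to above are, in their standard deterministic form (with x_t the binary on/off status, u_t the startup indicator, L the minimum-up time and ℓ the minimum-down time; the paper verifies their stochastic analogue):

    ```latex
    \sum_{s=t-L+1}^{t} u_s \;\le\; x_t ,
    \qquad
    \sum_{s=t-\ell+1}^{t} u_s \;\le\; 1 - x_{t-\ell} .
    ```

    The first family forbids a unit that started within the last L periods from being off; the second forbids a restart within ℓ periods of a shutdown.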

  17. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies on the JSSP concentrate mainly on improving heuristics for its optimization. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process into local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with the constraint-satisfying model; (2) satisfaction of the constraints by means of consistency technology and the constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
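    The basic ACO loop can be sketched on a toy single-machine sequencing problem (a deliberately simplified stand-in for the JSSP; the instance data, parameters, and pheromone scheme are illustrative assumptions, not the paper's constraint-handling variant):

    ```python
    import itertools
    import random

    random.seed(7)

    # Toy instance: minimize total weighted completion time on one machine.
    p = [3, 1, 4, 2, 5]   # processing times
    w = [2, 4, 1, 3, 2]   # job weights
    n = len(p)

    def cost(seq):
        t = total = 0
        for j in seq:
            t += p[j]
            total += w[j] * t
        return total

    # Exact optimum by brute force (the instance is tiny).
    opt = min(cost(s) for s in itertools.permutations(range(n)))

    # ACO: pheromone tau on (previous job -> next job) moves, heuristic
    # desirability eta = w/p (the WSPT dispatching rule as local guidance).
    alpha, beta, rho, Q = 1.0, 2.0, 0.1, 100.0
    tau = [[1.0] * n for _ in range(n + 1)]   # row n is the "start" state
    eta = [w[j] / p[j] for j in range(n)]
    best_seq, best_cost = None, float("inf")

    for _ in range(100):                      # iterations
        for _ant in range(20):                # ants per iteration
            prev, seq, unvisited = n, [], set(range(n))
            while unvisited:
                cand = list(unvisited)
                probs = [tau[prev][j] ** alpha * eta[j] ** beta for j in cand]
                j = random.choices(cand, weights=probs)[0]
                seq.append(j)
                unvisited.remove(j)
                prev = j
            c = cost(seq)
            if c < best_cost:
                best_seq, best_cost = seq, c
        # Evaporate, then reinforce the best-so-far sequence.
        for r in range(n + 1):
            for j in range(n):
                tau[r][j] *= (1 - rho)
        prev = n
        for j in best_seq:
            tau[prev][j] += Q / best_cost
            prev = j

    print(best_cost, opt)
    ```

    A real JSSP adds machine-routing and precedence constraints to the construction step, which is where the paper's constraint-satisfaction and constraint-spreading machinery comes in.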

  19. Bee Inspired Novel Optimization Algorithm and Mathematical Model for Effective and Efficient Route Planning in Railway System

    PubMed Central

    Leong, Kah Huo; Abdul-Rahman, Hamzah; Wang, Chen; Onn, Chiu Chuen

    2016-01-01

    Railway and metro transport systems (RS) are becoming one of the popular choices of transportation, especially among people who live in cities. Urbanization and growing populations driven by rapid economic development are creating greater demand for urban rail transit. Although route planning in RS is a popular variant of the Traveling Salesman Problem (TSP), a universal formula or technique for solving it has yet to be found. This paper aims to develop an optimization algorithm for optimum route selection to multiple destinations in RS before returning to the starting point. Bee foraging behaviour is examined to generate a reliable algorithm for the railway TSP. The algorithm is verified by comparing its results with the exact solutions in 10 test cases, and a numerical case study demonstrates its application on a large sample. It proves efficient and effective in railway route planning, as the tour can be completed within a certain period of time using minimal resources. The findings further support the reliability of the algorithm and its capability to solve problems of differing complexity. This algorithm can be used as a method to assist business practitioners in making better route-planning decisions. PMID:27930659

  1. Translation, Validation, and Reliability of the Dutch Late-Life Function and Disability Instrument Computer Adaptive Test.

    PubMed

    Arensman, Remco M; Pisters, Martijn F; de Man-van Ginkel, Janneke M; Schuurmans, Marieke J; Jette, Alan M; de Bie, Rob A

    2016-09-01

    Adequate and user-friendly instruments for assessing physical function and disability in older adults are vital for estimating and predicting health care needs in clinical practice. The Late-Life Function and Disability Instrument Computer Adaptive Test (LLFDI-CAT) is a promising instrument for assessing physical function and disability in gerontology research and clinical practice. The aims of this study were: (1) to translate the LLFDI-CAT to the Dutch language and (2) to investigate its validity and reliability in a sample of older adults who spoke Dutch and dwelled in the community. For the assessment of validity of the LLFDI-CAT, a cross-sectional design was used. To assess reliability, measurement of the LLFDI-CAT was repeated in the same sample. The item bank of the LLFDI-CAT was translated with a forward-backward procedure. A sample of 54 older adults completed the LLFDI-CAT, World Health Organization Disability Assessment Schedule 2.0, RAND 36-Item Short-Form Health Survey physical functioning scale (10 items), and 10-Meter Walk Test. The LLFDI-CAT was repeated after 2 to 8 days (mean=4.5 days). Pearson's r and the intraclass correlation coefficient (ICC) (2,1) were calculated to assess validity, group-level reliability, and participant-level reliability. A correlation of .74 was found between the LLFDI-CAT function scale and the RAND 36-Item Short-Form Health Survey physical functioning scale (10 items). The correlations of the LLFDI-CAT disability scale with the World Health Organization Disability Assessment Schedule 2.0 and the 10-Meter Walk Test were -.57 and -.53, respectively. The ICC (2,1) of the LLFDI-CAT function scale was .84, with a group-level reliability score of .85. The ICC (2,1) of the LLFDI-CAT disability scale was .76, with a group-level reliability score of .81. The high percentage of women in the study and the exclusion of older adults with recent joint replacement or hospitalization limit the generalizability of the results. The Dutch LLFDI-CAT showed strong validity and high reliability when used to assess physical function and disability in older adults dwelling in the community. © 2016 American Physical Therapy Association.
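    The ICC(2,1) used here is the two-way random-effects, absolute-agreement, single-measure coefficient (Shrout-Fleiss convention); its computation can be sketched as follows (toy data, not the study's):

    ```python
    # ICC(2,1): two-way random-effects, absolute-agreement, single measure,
    # as used for test-retest reliability.
    def icc_2_1(ratings):
        """ratings: list of subjects, each a list of k scores (one per session)."""
        n = len(ratings)
        k = len(ratings[0])
        grand = sum(sum(row) for row in ratings) / (n * k)
        row_means = [sum(row) / k for row in ratings]
        col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
        ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # sessions
        ss_err = ss_total - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Perfect test-retest agreement gives ICC = 1.
    perfect = [[5, 5], [3, 3], [8, 8], [6, 6]]
    print(icc_2_1(perfect))

    # Noisy retest scores give ICC < 1.
    noisy = [[5, 6], [3, 2], [8, 7], [6, 6]]
    print(icc_2_1(noisy))
    ```

    Unlike ICC(3,1), this form penalizes systematic session effects (the MSC term), which is what one wants for test-retest designs such as this one.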

  2. A Novel Analog Reasoning Paradigm: New Insights in Intellectually Disabled Patients.

    PubMed

    Curie, Aurore; Brun, Amandine; Cheylus, Anne; Reboul, Anne; Nazir, Tatjana; Bussy, Gérald; Delange, Karine; Paulignan, Yves; Mercier, Sandra; David, Albert; Marignier, Stéphanie; Merle, Lydie; de Fréminville, Bénédicte; Prieur, Fabienne; Till, Michel; Mortemousque, Isabelle; Toutain, Annick; Bieth, Eric; Touraine, Renaud; Sanlaville, Damien; Chelly, Jamel; Kong, Jian; Ott, Daniel; Kassai, Behrouz; Hadjikhani, Nouchine; Gollub, Randy L; des Portes, Vincent

    2016-01-01

    Intellectual Disability (ID) is characterized by deficits in intellectual functions such as reasoning, problem-solving, planning, abstract thinking, judgment, and learning. As new avenues are emerging for treatment of genetically determined ID (such as Down's syndrome or Fragile X syndrome), it is necessary to identify objective, reliable, and sensitive outcome measures for use in clinical trials. We developed a novel visual analogical reasoning paradigm, inspired by Raven's Progressive Matrices but appropriate for intellectually disabled patients. This new paradigm assesses reasoning and inhibition abilities in ID patients. We performed behavioural analyses for this task (with a reaction time and error rate analysis, Study 1) in 96 healthy controls (adults and typically developed children older than 4) and 41 genetically determined ID patients (Fragile X syndrome, Down syndrome and ARX-mutated patients). In order to establish and quantify the cognitive strategies used to solve the task, we also performed an eye-tracking analysis (Study 2). Down syndrome, ARX and Fragile X patients were significantly slower and made significantly more errors than chronological age-matched healthy controls. The effect of inhibition on error rate was greater than the matrix complexity effect in ID patients, opposite to findings in adult healthy controls. Interestingly, ID patients were more impaired by inhibition than mental age-matched healthy controls, but not by matrix complexity. Eye-tracking analysis made it possible to identify the strategy used by the participants to solve the task: adult healthy controls used a matrix-based strategy, whereas ID patients used a response-based strategy. Furthermore, etiology-specific reasoning differences were evidenced between ID patient groups. We suggest that this paradigm, appropriate for ID patients and developmental populations as well as adult healthy controls, provides an objective and quantitative assessment of visual analogical reasoning and cognitive inhibition, enabling testing for the effect of pharmacological or behavioural intervention in these specific populations.

  3. Effects of resting state condition on reliability, trait specificity, and network connectivity of brain function measured with arterial spin labeled perfusion MRI.

    PubMed

    Li, Zhengjun; Vidorreta, Marta; Katchmar, Natalie; Alsop, David C; Wolf, Daniel H; Detre, John A

    2018-06-01

    Resting state fMRI (rs-fMRI) provides imaging biomarkers of task-independent brain function that can be associated with clinical variables or modulated by interventions such as behavioral training or pharmacological manipulations. These biomarkers include time-averaged regional brain function as manifested by regional cerebral blood flow (CBF) measured using arterial spin labeled (ASL) perfusion MRI and correlated temporal fluctuations of function across brain networks with either ASL or blood oxygenation level dependent (BOLD) fMRI. Resting-state studies are typically carried out using just one of several prescribed state conditions such as eyes closed (EC), eyes open (EO), or visual fixation on a cross-hair (FIX), which may affect the reliability and specificity of rs-fMRI. In this study, we collected test-retest ASL MRI data during 4 resting-state task conditions: EC, EO, FIX and PVT (low-frequency psychomotor vigilance task), and examined the effects of these task conditions on reliability and reproducibility as well as trait specificity of regional brain function. We also acquired resting-state BOLD fMRI under FIX and compared the network connectivity reliabilities between the four ASL conditions and the BOLD FIX condition. For resting-state ASL data, EC provided the highest CBF reliability, reproducibility, trait specificity, and network connectivity reliability, followed by EO, while FIX was lowest on all of these measures. PVT demonstrated lower CBF reliability, reproducibility and trait specificity than EO and EC. Overall network connectivity reliability was comparable between ASL and BOLD. Our findings confirm ASL CBF as a reliable, stable, and consistent measure of resting-state regional brain function and support the use of EC or EO over FIX and PVT as the resting-state condition. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Ill-defined problem solving in amnestic mild cognitive impairment: linking episodic memory to effective solution generation.

    PubMed

    Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M

    2015-02-01

    It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems which are those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age and education matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002) that scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Development of confidence limits by pivotal functions for estimating software reliability

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
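    The pivotal-function idea can be sketched for the simplest case of exponentially distributed interfailure times: the pivot W = 2·Σxᵢ/θ has a distribution free of the unknown mean θ, so its quantiles (simulated here by Monte Carlo) invert directly into confidence limits. This is a generic sketch under an assumed i.i.d. exponential model, not the paper's Moranda-model derivation:

    ```python
    import random

    random.seed(42)

    n = 10          # sample size (number of interfailure times)
    theta = 3.0     # true exponential mean, used only to check coverage

    # The pivot W = 2 * sum(X) / theta is chi-square with 2n degrees of
    # freedom, independent of theta, so its quantiles can be simulated once.
    pivots = sorted(2 * sum(random.expovariate(1.0) for _ in range(n))
                    for _ in range(20000))
    w_lo = pivots[int(0.025 * len(pivots))]
    w_hi = pivots[int(0.975 * len(pivots))]

    def ci(sample):
        """Invert the pivot: w_lo <= 2*S/theta <= w_hi with prob. 0.95."""
        s = sum(sample)
        return 2 * s / w_hi, 2 * s / w_lo

    # Empirical coverage check over repeated samples.
    trials = 500
    hits = 0
    for _ in range(trials):
        sample = [random.expovariate(1.0 / theta) for _ in range(n)]
        lo, hi = ci(sample)
        hits += lo <= theta <= hi
    coverage = hits / trials
    print(coverage)
    ```

    The point of the pivotal approach, as in the paper, is that these limits are exact for small n, precisely the regime where the asymptotic approximations are shown to be inadequate.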

  6. Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization

    DTIC Science & Technology

    2017-06-01

    solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end

  7. Self Esteem, Information Search and Problem Solving Efficiency.

    DTIC Science & Technology

    1979-05-01

    Weiss (1977, 1978) has shown that low self esteem workers are more likely to model the role behaviors and work values of superiors than are high self ...task where search is functional. Results showed that, as expected, low self esteem subjects searched for more information, search was functional and low ...situation. He has also argued that high self esteem individuals search for less information on problem solving tasks and are therefore less likely to

  8. Exact solution of the hidden Markov processes.

    PubMed

    Saakian, David B

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M-1.
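    As a numerical counterpart to the exact solution described above, the entropy rate of an HMP is commonly estimated as -(1/N)·log P(y₁…y_N) with the likelihood computed by the forward algorithm. The sketch below uses deliberately uninformative emissions so that the true answer, log 2, is known exactly; it is a standard baseline, not the paper's functional-equation method:

    ```python
    import math
    import random

    random.seed(1)

    # Binary hidden Markov process: transition matrix A for the hidden chain,
    # emission matrix B (rows: hidden state, columns: observed symbol).
    A = [[0.9, 0.1], [0.1, 0.9]]
    B = [[0.5, 0.5], [0.5, 0.5]]   # uninformative emissions -> i.i.d. uniform output

    def entropy_rate_estimate(A, B, steps):
        """Estimate the HMP entropy rate as -(1/N) log P(y_1..y_N),
        with the likelihood computed by the scaled forward algorithm."""
        # Simulate one long observation sequence.
        s = 0
        ys = []
        for _ in range(steps):
            s = 0 if random.random() < A[s][0] else 1
            ys.append(0 if random.random() < B[s][0] else 1)
        # Forward recursion with per-step normalization; the log-likelihood
        # is the sum of the logs of the normalizers.
        alpha = [0.5, 0.5]   # stationary distribution of the symmetric chain
        loglik = 0.0
        for y in ys:
            pred = [alpha[0] * A[0][j] + alpha[1] * A[1][j] for j in (0, 1)]
            un = [pred[j] * B[j][y] for j in (0, 1)]
            norm = un[0] + un[1]
            loglik += math.log(norm)
            alpha = [u / norm for u in un]
        return -loglik / steps

    h = entropy_rate_estimate(A, B, 5000)
    print(h, math.log(2))
    ```

    With informative emissions the same estimator converges only as N grows, which is why closed-form results like the paper's are valuable.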

  10. Stability-Aware Geographic Routing in Energy Harvesting Wireless Sensor Networks

    PubMed Central

    Hieu, Tran Dinh; Dung, Le The; Kim, Byung-Seo

    2016-01-01

    A new generation of wireless sensor networks that harvest energy from environmental sources such as solar, vibration, and thermoelectric to power sensor nodes is emerging to solve the problem of energy limitation. Based on the photo-voltaic model, this research proposes a stability-aware geographic routing scheme for reliable data transmissions in energy-harvesting wireless sensor networks (EH-WSNs), to provide a reliable route-selection method and potentially achieve an unlimited network lifetime. Specifically, the influence of link quality, represented by the estimated packet reception rate, on network performance is investigated. Simulation results show that the proposed method outperforms an energy-harvesting-aware method in terms of energy consumption, the average number of hops, and the packet delivery ratio. PMID:27187414
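    One standard way to turn per-link packet reception rates (PRRs) into reliable route selection, sketched here as a generic illustration rather than the paper's specific metric, is to maximize the end-to-end delivery probability: since that probability is the product of link PRRs, a shortest-path search on -log(PRR) edge weights finds the most reliable route.

    ```python
    import heapq
    import math

    # Hypothetical topology: estimated PRR per link.
    links = {
        ("A", "B"): 0.9, ("B", "C"): 0.9,   # two good hops: 0.81 end-to-end
        ("A", "C"): 0.5,                     # one poor direct link
        ("B", "D"): 0.8, ("C", "D"): 0.95,
    }

    graph = {}
    for (u, v), prr in links.items():
        graph.setdefault(u, []).append((v, -math.log(prr)))
        graph.setdefault(v, []).append((u, -math.log(prr)))

    def most_reliable_path(src, dst):
        """Dijkstra on -log(PRR) weights; returns (path, delivery probability)."""
        dist = {src: 0.0}
        prev = {}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, math.inf):
                continue
            for v, w in graph[u]:
                if d + w < dist.get(v, math.inf):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(pq, (d + w, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1], math.exp(-dist[dst])

    path, prob = most_reliable_path("A", "C")
    print(path, round(prob, 3))
    ```

    Here the two 0.9 links beat the direct 0.5 link (0.81 vs 0.5 delivery probability); an EH-WSN scheme such as the paper's would additionally weigh residual and harvested energy into the link cost.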

  11. Designing for Reliability and Robustness in International Space Station Exercise Countermeasures Systems

    NASA Technical Reports Server (NTRS)

    Moore, Cherice; Svetlik, Randall; Williams, Antony

    2017-01-01

    As spaceflight durations have increased over the last four decades, the effects of microgravity on the human body have become far better understood, as have the exercise countermeasures. Through use of a combination of aerobic and resistive exercise devices, today's astronauts and cosmonauts are able to partially counter the losses in muscle strength, aerobic fitness, and bone strength that otherwise might occur during their missions on the International Space Station (ISS). Since 2000, the ISS has employed a variety of exercise equipment used as countermeasures to these risks. Providing reliable and available exercise systems has presented significant challenges due to the unique environment. In solving these, lessons have been learned that can inform development of future systems.

  12. LIMEPY: Lowered Isothermal Model Explorer in PYthon

    NASA Astrophysics Data System (ADS)

    Gieles, Mark; Zocchi, Alice

    2017-10-01

    LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation for the given input parameters and offers fast solutions for isotropic/anisotropic, single/multi-mass models, normalized DF values, density and velocity moments, and projected properties, and generates discrete samples.

  13. 75 FR 29962 - Special Conditions: Cirrus Design Corporation Model SF50 Airplane; Function and Reliability Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... SF50 airplanes. 1. Function and Reliability Testing Flight tests: In place of 14 CFR part 21.35(b)(2... Reliability Testing AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed special..., 1951, and deleted the service test requirements in Section 3.19 for airplanes of 6,000 pounds maximum...

  14. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    PubMed Central

    Li, Mengmeng; Feng, Qiang; Yang, Dezhen

    2018-01-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe with mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material to obtain the limit strength at each time, performed degradation experiments on the copper bending pipe to obtain the thickness at each time, and then calculated the response of maximum stress by simulation. Further, with the help of a Monte Carlo method that we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can predict the replacement cycle of copper bending pipe under seawater-active corrosion more conveniently and accurately. PMID:29584695
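    The stress-strength interference computation can be sketched with Monte Carlo sampling; the distributions and degradation parameters below are invented for illustration and are not the paper's experimentally fitted values:

    ```python
    import random

    random.seed(0)

    # Stress-strength interference: the pipe survives at time t while the
    # (degrading) limit strength exceeds the operating stress.
    def reliability(t, n=20000):
        mean_strength = 600.0 - 10.0 * t   # hypothetical linear degradation (MPa)
        survived = 0
        for _ in range(n):
            strength = random.gauss(mean_strength, 20.0)
            stress = random.gauss(300.0, 20.0)
            survived += strength > stress
        return survived / n

    for t in (0, 10, 20, 30, 35):
        print(t, reliability(t))
    ```

    Reading off the time at which R(t) drops below a target (say 0.9) gives the kind of replacement-cycle prediction described in the abstract; the paper's stochastic degradation process replaces the simple linear mean-drift assumed here.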

  15. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system is analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions; the decision-maker therefore cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently to improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique in which fuzzy set theory quantifies uncertainties, a fault tree models the system, the lambda-tau method formulates mathematical expressions for the failure/repair rates of the system, and a genetic algorithm solves the resulting nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow the exponential distribution, i.e., have constant failure rates. Sensitivity analysis is also performed, and the impact on system mean time between failures (MTBF) of varying the other reliability parameters is addressed. Based on the analysis, some influential suggestions are given to improve the system performance.
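    The data-fuzzification step can be sketched with α-cut interval arithmetic on a triangular fuzzy failure rate (the numbers are illustrative; the full FLT/hybrid pipeline additionally composes such intervals through the fault tree):

    ```python
    # Alpha-cut interval of a triangular fuzzy number (l, m, u): at membership
    # level a, the interval shrinks linearly from [l, u] (a=0) to [m, m] (a=1).
    def alpha_cut(tri, a):
        l, m, u = tri
        return (l + a * (m - l), u - a * (u - m))

    lam = (0.002, 0.004, 0.006)   # fuzzy failure rate per hour (known spreads)

    for a in (0.0, 0.5, 1.0):
        lo, hi = alpha_cut(lam, a)
        # For a constant failure rate, MTBF = 1/lambda; interval endpoints swap.
        mtbf = (1.0 / hi, 1.0 / lo)
        print(a, (lo, hi), mtbf)
    ```

    The wide MTBF interval at α = 0 is exactly the "wide range of predictions" problem the abstract describes; the paper's GA step then searches within such intervals for tighter, decision-useful estimates.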

  16. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999), is useful for obtaining approximate solutions to a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases this method has been found to yield slowly convergent series. To overcome this shortcoming, we present a new, reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) for solving a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of the method, including thermal explosion, oxygen diffusion in a spherical cell, and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical and numerical techniques. The numerical results reveal that only two or three iterations lead to a highly accurate solution, and that this newly improved technique constitutes a powerful tool for solving nonlinear singular boundary value problems (SBVPs).

  17. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ: the first depends on the rate of convergence of the nonlinear solver, and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
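    The continuation strategy can be illustrated on a 1D model problem: solve -ɛu'' + u + u³ = 1 with homogeneous Dirichlet conditions by finite differences and Newton's method, warm-starting each solve from the solution at the previous, larger ɛ. This is a deliberately simplified stand-in (1D, fixed uniform grid, plain Newton instead of NITSOL, no adaptive ɛ selection or backtracking) for the paper's 3D adaptive algorithm; the model equation and all tolerances are assumptions of the sketch.

```python
def solve_perturbed(eps_list, n=200, tol=1e-10):
    """Continuation in eps for -eps*u'' + u + u**3 = 1 on (0,1), u(0)=u(1)=0.

    eps_list must run from the largest eps down to the smallest; each Newton
    solve is warm-started from the converged solution at the previous eps.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * n                       # initial guess for the largest eps
    for eps in eps_list:
        for _ in range(50):             # Newton iterations
            # Residual F(u) and tridiagonal Jacobian of the FD system.
            a, b, c, r = [0.0]*n, [0.0]*n, [0.0]*n, [0.0]*n
            for i in range(n):
                ul = u[i-1] if i > 0 else 0.0
                ur = u[i+1] if i < n-1 else 0.0
                r[i] = -(eps/h**2)*(ul - 2*u[i] + ur) + u[i] + u[i]**3 - 1.0
                b[i] = 2*eps/h**2 + 1.0 + 3*u[i]**2
                a[i] = c[i] = -eps/h**2
            # Thomas algorithm: solve J*du = F for the Newton step.
            for i in range(1, n):
                w = a[i] / b[i-1]
                b[i] -= w * c[i-1]
                r[i] -= w * r[i-1]
            du = [0.0]*n
            du[-1] = r[-1] / b[-1]
            for i in range(n-2, -1, -1):
                du[i] = (r[i] - c[i]*du[i+1]) / b[i]
            u = [ui - d for ui, d in zip(u, du)]
            if max(abs(d) for d in du) < tol:
                break
    return u
```

    For small ɛ the interior value approaches the root of u + u³ = 1 (about 0.6823), with boundary layers of width O(√ɛ) at both ends; starting Newton directly at the smallest ɛ from a zero guess is exactly the situation the warm start avoids.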

  18. Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) Collocation Method for Solving Linear and Nonlinear Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.

    2018-05-01

    In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem reduces to a system of algebraic equations, which greatly simplifies it. The proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the computational difficulty is overcome, and the generalized Lagrange basis functions and matrices need not be computed explicitly, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective, and reliable, and does not require any restrictive assumptions for the nonlinear terms.
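    The time-stepping half of the scheme can be sketched independently of the collocation details. Below, Crank-Nicolson is applied to the 1D Ornstein-Uhlenbeck Fokker-Planck equation p_t = (x p)_x + D p_xx, with second-order central differences standing in for the GLJGL space discretization; the grid, the zero-boundary handling, and the parameters are all simplifying assumptions of this sketch, not the paper's method.

```python
def fokker_planck_cn(p0, x, dt, steps, D=1.0):
    """Crank-Nicolson stepping for p_t = (x*p)_x + D*p_xx on a uniform grid.

    Space: central differences (a stand-in for GLJGL collocation).
    Boundaries: p pinned to zero, so the domain must be wide enough that
    the density is negligible at both ends.
    """
    n = len(x)
    h = x[1] - x[0]
    p = list(p0)
    for _ in range(steps):
        # Explicit half step: rhs = (I + dt/2 * L) p, with L p = (x p)' + D p''.
        rhs = [0.0] * n
        for i in range(1, n - 1):
            Lp = ((x[i+1]*p[i+1] - x[i-1]*p[i-1]) / (2*h)
                  + D * (p[i+1] - 2*p[i] + p[i-1]) / h**2)
            rhs[i] = p[i] + 0.5 * dt * Lp
        # Implicit half step: solve (I - dt/2 * L) p_new = rhs (tridiagonal).
        a = [0.0]*n; b = [1.0]*n; c = [0.0]*n
        for i in range(1, n - 1):
            b[i] = 1.0 + dt * D / h**2
            a[i] = -0.5*dt*(-x[i-1]/(2*h) + D/h**2)
            c[i] = -0.5*dt*( x[i+1]/(2*h) + D/h**2)
        for i in range(1, n):                     # Thomas forward sweep
            w = a[i] / b[i-1]
            b[i] -= w * c[i-1]
            rhs[i] -= w * rhs[i-1]
        p[n-1] = rhs[n-1] / b[n-1]                # back substitution
        for i in range(n-2, -1, -1):
            p[i] = (rhs[i] - c[i]*p[i+1]) / b[i]
    return p
```

    A quick sanity check of the implicit-explicit split: an initial Gaussian should relax toward the stationary density exp(-x²/2D)/√(2πD), i.e. its variance should approach D, while total probability stays near one.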

  19. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be covered during the term, the design gives students a better opportunity to familiarize themselves with the underlying theory. Maple is also available to students during examinations, and the use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching students its proper use. However, the advantages for the teaching of quantum mechanics appear to outweigh the disadvantages.

  20. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space, and an appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms allow the reliability functions to be found for the alternating, Poisson, and Furry-Yule failure rate processes.
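    For the alternating failure rate process, the conditional reliability function that the paper obtains analytically via Laplace transforms can be checked by direct simulation of R(t) = E[exp(-H(t))], where H(t) is the integral of λ(s) over [0, t] and λ(s) jumps between two levels after exponentially distributed sojourns. The rates and mean sojourn times below are illustrative assumptions, not values from the paper.

```python
import math
import random

def reliability_alternating(t, lam=(0.02, 0.2), sojourn=(5.0, 2.0),
                            n=50_000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral of lambda(s), 0..t)]
    for a failure rate alternating between lam[0] and lam[1], with
    exponential sojourn times of mean sojourn[i] in state i, starting in
    state 0 (so this is a conditional reliability function)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s, state, H = 0.0, 0, 0.0          # time, current state, hazard integral
        while s < t:
            stay = rng.expovariate(1.0 / sojourn[state])
            H += lam[state] * min(stay, t - s)  # clip the last sojourn at t
            s += stay
            state ^= 1                      # alternate between the two states
        total += math.exp(-H)
    return total / n
```

    The estimate is bracketed pathwise by exp(-max(lam)*t) and exp(-min(lam)*t), a useful sanity check against the transform-based solution.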
