Assessing the Quality of Problems in Problem-Based Learning
ERIC Educational Resources Information Center
Sockalingam, Nachamma; Rotgans, Jerome; Schmidt, Henk
2012-01-01
This study evaluated the construct validity and reliability of a newly devised 32-item problem quality rating scale intended to measure the quality of problems in problem-based learning. The rating scale measured the following five characteristics of problems: the extent to which the problem (1) leads to learning objectives, (2) is familiar, (3)…
An unbalanced spectra classification method based on entropy
NASA Astrophysics Data System (ADS)
Liu, Zhong-bao; Zhao, Wen-juan
2017-05-01
How to distinguish the minority spectra from the majority of the spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra because it takes the data distribution into account during classification. However, its time complexity is exponential in the training set size, so it can only handle small- and medium-scale classification problems. Extending USCM to large-scale classification is therefore important. It can be shown by straightforward mathematical derivation that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to handle the large-scale classification problem. Comparative experiments on 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in mining rare spectra, on the small- and medium-scale datasets and on the large-scale datasets respectively.
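As a hedged illustration of the general idea of taking the class distribution into account (not the USCM or CVM algorithms from this abstract, which are not reproduced here), the following Python sketch trains a class-weighted SVM on a synthetic, heavily imbalanced dataset; the dataset, class ratio and parameters are invented for demonstration.

```python
# Illustrative baseline only: a class-weighted SVM for imbalanced classification.
# This is NOT the USCM algorithm from the abstract; it merely shows the common
# practice of weighting the minority class so the classifier does not ignore
# rare spectra. Data here are synthetic stand-ins for spectral features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Synthetic "spectra": 95% majority class, 5% minority class.
X, y = make_classification(n_samples=2000, n_features=50, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" re-weights errors inversely to class frequency.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```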
When Should Zero Be Included on a Scale Showing Magnitude?
ERIC Educational Resources Information Center
Kozak, Marcin
2011-01-01
This article addresses an important problem of graphing quantitative data: should one include zero on the scale showing magnitude? Based on a real time series example, the problem is discussed and some recommendations are proposed.
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme in which the outputs from multiple patch scales are optimally combined into an ensemble. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexity when solved with existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
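The following minimal sketch shows a single singular-value thresholding (SVT) step, the proximal operator of the nuclear norm that conventional NNROP solvers apply repeatedly to the full-size matrix; it is not the paper's active-subspace algorithm, but it illustrates why large-scale NNROPs are dominated by the cost of full SVDs. Matrix sizes and the threshold value are arbitrary.

```python
# Minimal sketch (not the paper's active-subspace algorithm): one singular-value
# thresholding (SVT) step, the proximal operator of the nuclear norm. Standard
# NNROP solvers apply this repeatedly to the full-size matrix, and the SVD here
# is what makes large-scale problems expensive.
import numpy as np

def svt(M, tau):
    """Return argmin_X 0.5*||X - M||_F^2 + tau*||X||_* by soft-thresholding singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 100))
X = svt(M, tau=5.0)
print("rank before:", np.linalg.matrix_rank(M), "rank after:", np.linalg.matrix_rank(X))
```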
Penders, Bart; Vos, Rein; Horstman, Klasien
2009-11-01
Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.
Mental health problems in Kosovar adolescents: results from a national mental health survey.
Shahini, Mimoza; Rescorla, Leslie; Wancata, Johannes; Ahmeti, Adelina
2015-01-01
Our purpose was to determine the effects of gender and age on Kosovar YSR scores and the prevalence of self-reported behavioral/emotional problems in Kosovar adolescents based on scores above a cutpoint. Participants were 1351 adolescents recruited from secondary schools in seven regions of Kosova who completed the Youth Self-Report. The oldest adolescents had the highest scores on many YSR scales. Although Kosova's mean problem scores were not elevated relative to international norms, the percentage of adolescents scoring in the deviant range (borderline + clinical) was much higher than expected for almost all YSR problem scales, including Total Problems (31.2%), Internalizing (40.8%), and Externalizing (23.4%). The 23% prevalence of elevated scores on Stress Problems was triple the expected 7% prevalence based on a 93rd percentile cutpoint. Results revealed a much higher prevalence of psychopathology than would be expected based on international norms, with 25-40% of Kosovar adolescents scoring in the deviant range on YSR scales. Thus, our research indicates a need for expanding psychiatric services to meet the pressing mental health needs of Kosovar adolescents, as well as the importance of considering mental health problems in their social context.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
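A hedged, toy version of a fixed-cost, grid-based location ILP is sketched below using the PuLP package (assumed available); the grid, demand points and costs are invented, and the formulation is a generic facility-location model rather than the paper's exact GBLP or its decomposition heuristic.

```python
# Toy fixed-cost location ILP on a small grid (illustration only, not the paper's model).
import math
import pulp

coords = [(i, j) for i in range(4) for j in range(4)]           # candidate grid sites
demands = {0: (0, 3), 1: (2, 1), 2: (3, 3)}                     # demand points (made up)
sites = range(len(coords))
fixed_cost = 10.0
dist = {(s, d): math.dist(coords[s], p) for s in sites for d, p in demands.items()}

prob = pulp.LpProblem("grid_location", pulp.LpMinimize)
open_ = {s: pulp.LpVariable(f"open_{s}", cat="Binary") for s in sites}
assign = {k: pulp.LpVariable(f"assign_{k[0]}_{k[1]}", cat="Binary") for k in dist}

# Objective: fixed opening costs plus distance-based assignment costs.
prob += pulp.lpSum(fixed_cost * open_[s] for s in sites) + \
        pulp.lpSum(dist[k] * assign[k] for k in dist)
for d in demands:                                               # each demand served exactly once
    prob += pulp.lpSum(assign[s, d] for s in sites) == 1
for s in sites:
    for d in demands:                                           # only assign to opened sites
        prob += assign[s, d] <= open_[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened sites:", [coords[s] for s in sites if open_[s].value() > 0.5])
```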
Comparing Students' Attitudes in Problem-Based and Conventional Curricula.
ERIC Educational Resources Information Center
Kaufman, David M.; Mann, Karen V.
1996-01-01
A survey of 2 medical school classes at Dalhousie University (Canada) compared student attitudes toward the conventional (n=57 students) and problem-based (n=73) curricula. Students in the problem-based group had more positive attitudes toward the learning environment and curriculum, but were less positive on a student-interaction scale. No…
The Study of Adopting Problem Based Learning in Normal Scale Class Course Design
ERIC Educational Resources Information Center
Hsu, Chia-ling
2014-01-01
This study adopts Problem Based Learning (PBL) for pre-service teachers in a teacher education program. The reasons for adopting PBL were that the class was not small, the course content was too extensive to cover, and the technologies were ready to be used in the classroom. This study used a movie as an intermediary scenario for students to define the…
Shin, Min-Sup; Jeon, Hyejin; Kim, Miyoung; Hwang, Taeho; Oh, Seo Jin; Hwangbo, Minsu; Kim, Ki Joong
2016-05-01
We sought to determine whether smart-tablet-based neurofeedback could improve executive function (including attention, working memory, and self-regulation) in children with attention problems. Forty children (10-12 years old) with attention problems, as determined by ratings on the Conners Parent Rating Scale, were assigned to either a neurofeedback group that received 16 sessions or a control group. A comprehensive test battery that assessed general intelligence, visual and auditory attention, attentional shifting, response inhibition, and behavior rating scales were administered to both groups before neurofeedback training. Several neuropsychological tests were conducted at post-training and follow-up assessments. Scores on several neuropsychological tests and parent behavior rating scales showed significant improvement in the training group but not in the controls. The improvements remained through the follow-up assessment. This study suggests that the smart-tablet-based neurofeedback training program might improve cognitive function in children with attention problems.
Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk
2014-01-01
In this paper, we analyze a real-world open vehicle routing problem (OVRP) for a production company. Considering real-world constraints, we classify our problem as a multi-capacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproduct (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions for small problems (10 customers), the problem becomes harder to solve as the number of customers increases, and MIP could not find optimal solutions for problems with more than 10 customers. Moreover, MIP fails to find any feasible solution for large-scale problems (50–90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA-based approach reaches good solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s, for problems with 10–50 customers. For large-scale problems (50–90 customers), GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, GA is preferable to MIP for reaching feasible solutions in short time periods. PMID:25045735
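A bare-bones genetic algorithm for a single open route (no return to the depot) is sketched below; it only illustrates the GA machinery (permutation encoding, order crossover, swap mutation) and omits the fleets, capacities and split deliveries of the MCHF/OVRP/SDMP model. All coordinates and GA parameters are invented.

```python
# Minimal permutation GA for one open route; not the paper's MCHF/OVRP/SDMP algorithm.
import math
import random

random.seed(1)
depot = (0.5, 0.5)
customers = [(random.random(), random.random()) for _ in range(20)]

def route_length(perm):
    pts = [depot] + [customers[i] for i in perm]        # open route: no return to depot
    return sum(math.dist(pts[k], pts[k + 1]) for k in range(len(pts) - 1))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                                # keep a slice from parent 1
    fill = [g for g in p2 if g not in child]            # fill the rest in parent-2 order
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

pop = [random.sample(range(len(customers)), len(customers)) for _ in range(60)]
for _ in range(300):                                    # generations
    pop.sort(key=route_length)
    elite = pop[:20]
    children = []
    while len(children) < 40:
        child = order_crossover(*random.sample(elite, 2))
        if random.random() < 0.2:                       # swap mutation
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        children.append(child)
    pop = elite + children

best = min(pop, key=route_length)
print("best open-route length:", round(route_length(best), 3))
```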
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
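The sketch below shows a multi-scale wavelet decomposition of a one-dimensional "deformation" profile using PyWavelets (assumed available) with a generic biorthogonal wavelet; it is not the Cai-Wang B-spline wavelet used in the paper, but it illustrates the coarse/fine separation the abstract refers to.

```python
# Illustration only: coarse/fine separation of a 1-D signal with PyWavelets,
# using a generic biorthogonal wavelet rather than the Cai-Wang B-spline wavelet.
import numpy as np
import pywt

x = np.linspace(0, 1, 256)
deformation = 0.5 * np.sin(2 * np.pi * x) + 0.05 * np.sin(2 * np.pi * 20 * x)

coeffs = pywt.wavedec(deformation, wavelet="bior3.3", level=3)
approx, details = coeffs[0], coeffs[1:]
print("coarse approximation length:", len(approx))
print("detail lengths (coarse -> fine):", [len(d) for d in details])

# Reconstructing from the approximation alone keeps only the coarse-scale deformation.
coarse_only = pywt.waverec([approx] + [np.zeros_like(d) for d in details], wavelet="bior3.3")
print("max removed fine-scale amplitude:",
      float(np.max(np.abs(deformation - coarse_only[:256]))))
```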
Nishio, Midori; Ono, Mitsu
2015-01-01
The number of male caregivers has increased, but male caregivers face several problems that reduce their quality of life and psychological condition. This study focused on the coping problems of men who care for people with dementia at home. It aimed to develop a coping scale for male caregivers so that they can continue caring for people with dementia at home and improve their own quality of life. The study also aimed to verify the reliability and validity of the scale. The subjects were 759 men who care for people with dementia at home. The Care Problems Coping Scale consists of 21 questions based on elements of questions extracted from a pilot study. Additionally, subjects completed three self-administered questionnaires: the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and the Self-esteem Emotional Scale, and Rosenberg Self-Esteem Scale. There were 274 valid responses (36.1% response rate). Regarding the answer distribution, each average value of the 21 items ranged from 1.56 to 2.68. The median answer distribution of the 21 items was 39 (SD = 6.6). Five items had a ceiling effect, and two items had a floor effect. The scale stability was about 50%, and Cronbach's α was 0.49. There were significant correlations between the Care Problems Coping Scale and total scores of the Japanese version of the Zarit Caregiver Burden Scale, the Depressive Symptoms and Self-esteem Emotional Scale, and the Rosenberg Self-Esteem Scale. The answers provided on the Care Problems Coping Scale questionnaire indicated that male caregivers experience care problems. In terms of validity, there were significant correlations between the external questionnaires and 19 of the 21 items in this scale. This scale can therefore be used to measure problems with coping for male caregivers who care for people with dementia at home.
Davies, Jim; Michaelian, Kourken
2016-08-01
This article argues for a task-based approach to identifying and individuating cognitive systems. The agent-based extended cognition approach faces a problem of cognitive bloat and has difficulty accommodating both sub-individual cognitive systems ("scaling down") and some supra-individual cognitive systems ("scaling up"). The standard distributed cognition approach can accommodate a wider variety of supra-individual systems but likewise has difficulties with sub-individual systems and faces the problem of cognitive bloat. We develop a task-based variant of distributed cognition designed to scale up and down smoothly while providing a principled means of avoiding cognitive bloat. The advantages of the task-based approach are illustrated by means of two parallel case studies: re-representation in the human visual system and in a biomedical engineering laboratory.
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to locate the target object in each frame of a continuous video. In recent years, methods based on the kernelized correlation filter (KCF) have become a research hotspot. However, such algorithms still have problems, such as fast jitter of the video capture equipment and changes in target scale. In order to improve scale adaptation and feature description, this paper presents an improved algorithm based on multi-feature fusion and multi-scale transformation. The experimental results show that our method solves the problem of updating the target model when the target is occluded or its scale changes. The one-pass evaluation (OPE) accuracy is 77.0% and 75.4%, and the success rate is 69.7% and 66.4%, on the VOT and OTB datasets respectively. Compared with the best of the existing tracking algorithms, the accuracy of the algorithm is improved by 6.7% and 6.3% respectively. The success rates are improved by 13.7% and 14.2% respectively.
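For orientation, the following sketch implements the basic single-channel correlation-filter mechanism (MOSSE/DCF-style ridge regression in the Fourier domain) that KCF-type trackers build on; it omits the kernel trick and the paper's multi-feature fusion and multi-scale search, and the template is synthetic.

```python
# Simplified single-channel correlation-filter core; not the paper's tracker.
import numpy as np

def gaussian_target(h, w, sigma=2.0):
    # Desired response: a Gaussian peak, circularly shifted so the peak sits at (0, 0).
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(patch, lam=1e-2):
    # Ridge regression in the Fourier domain: H* = (G . conj(F)) / (F . conj(F) + lam)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_target(*patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    # Correlation response; its peak encodes the cyclic shift of the target.
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))
    return np.unravel_index(np.argmax(response), response.shape)

rng = np.random.default_rng(0)
template = rng.standard_normal((64, 64))
H_conj = train_filter(template)
shifted = np.roll(template, (5, -3), axis=(0, 1))       # simulate target motion

dy, dx = (int(v) for v in detect(H_conj, shifted))
dy = dy - 64 if dy > 32 else dy                         # unwrap the cyclic shift
dx = dx - 64 if dx > 32 else dx
print("recovered shift (dy, dx):", (dy, dx))            # expected: (5, -3)
```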
Fidelity of Problem Solving in Everyday Practice: Typical Training May Miss the Mark
ERIC Educational Resources Information Center
Ruby, Susan F.; Crosby-Cooper, Tricia; Vanderwood, Michael L.
2011-01-01
With national attention on scaling up the implementation of Response to Intervention, problem solving teams remain one of the central components for development, implementation, and monitoring of school-based interventions. Studies have shown that problem solving teams evidence a sound theoretical base and demonstrated efficacy; however, limited…
Achenbach, Thomas M; Ivanova, Masha Y; Rescorla, Leslie A
2017-11-01
Originating in the 1960s, the Achenbach System of Empirically Based Assessment (ASEBA) comprises a family of instruments for assessing problems and strengths for ages 1½-90+ years. This article provides an overview of the ASEBA, related research, and future directions for empirically based assessment and taxonomy. Standardized, multi-informant ratings of transdiagnostic dimensions of behavioral, emotional, social, and thought problems are hierarchically scored on narrow-spectrum syndrome scales, broad-spectrum internalizing and externalizing scales, and a total problems (general psychopathology) scale. DSM-oriented and strengths scales are also scored. The instruments and scales have been iteratively developed from assessments of clinical and population samples of hundreds of thousands of individuals. Items, instruments, scales, and norms are tailored to different kinds of informants for ages 1½-5, 6-18, 18-59, and 60-90+ years. To take account of differences between informants' ratings, parallel instruments are completed by parents, teachers, youths, adult probands, and adult collaterals. Syndromes and Internalizing/Externalizing scales derived from factor analyses of each instrument capture variations in patterns of problems that reflect different informants' perspectives. Confirmatory factor analyses have supported the syndrome structures in dozens of societies. Software displays scale scores in relation to user-selected multicultural norms for the age and gender of the person being assessed, according to ratings by each type of informant. Multicultural norms are derived from population samples in 57 societies on every inhabited continent. Ongoing and future research includes multicultural assessment of elders; advancing transdiagnostic progress and outcomes assessment; and testing higher order structures of psychopathology.
Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.
Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre
2006-01-01
Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.
Problem-Based Teaching and Learning in Technology Education.
ERIC Educational Resources Information Center
Putnam, A. R.
Research on how the brain works has resulted in wider-scale adoption of the principles of problem-based learning (PBL) in many areas of education, including technology education. The PBL approach is attractive to curriculum developers because it is based on interdisciplinary learning, results in multiple outcomes, is integrated and…
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters
Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun
2017-01-01
Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475
Solving the flatness problem with an anisotropic instanton in Hořava-Lifshitz gravity
NASA Astrophysics Data System (ADS)
Bramberger, Sebastian F.; Coates, Andrew; Magueijo, João; Mukohyama, Shinji; Namba, Ryo; Watanabe, Yota
2018-02-01
In Hořava-Lifshitz gravity a scaling isotropic in space but anisotropic in spacetime, often called "anisotropic scaling," with the dynamical critical exponent z = 3, lies at the base of its renormalizability. This scaling also leads to a novel mechanism of generating scale-invariant cosmological perturbations, solving the horizon problem without inflation. In this paper we propose a possible solution to the flatness problem, in which we assume that the initial condition of the Universe is set by a small instanton respecting the same scaling. We argue that the mechanism may be more general than the concrete model presented here. We rely simply on the deformed dispersion relations of the theory, and on equipartition of the various forms of energy at the starting point.
Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method
Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter
2017-01-01
An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technologies of the system are the main focus. Plane visualization of a large-scale network system is realized based on a divide-and-conquer strategy. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale network topologies are combined into an overall topology by a force-based automatic distribution algorithm. Because the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
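The divide-and-conquer layout idea can be illustrated with NetworkX (assumed available): partition the graph, lay out each part independently, then place the parts on a coarse grid. The community detection and spring layout used here are stand-ins for the paper's MLkP/CR partitioning and force-based distribution algorithm.

```python
# Hedged sketch of divide-and-conquer layout; not the paper's MLkP/CR algorithm.
import networkx as nx
from networkx.algorithms import community

G = nx.random_geometric_graph(300, 0.1, seed=42)        # stand-in for a measured topology
parts = community.greedy_modularity_communities(G)      # stand-in for MLkP/CR partitioning

layout = {}
for idx, nodes in enumerate(parts):
    sub = G.subgraph(nodes)
    pos = nx.spring_layout(sub, seed=idx)               # local small-scale force layout
    row, col = divmod(idx, 4)                           # coarse grid placement of the parts
    for n, (x, y) in pos.items():
        layout[n] = (x + 3 * col, y + 3 * row)

print(f"{len(parts)} sub-networks laid out, {len(layout)} nodes positioned")
```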
[Modeling continuous scaling of NDVI based on fractal theory].
Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng
2013-07-01
Scale effect is one of the most important scientific problems in remote sensing. The scale effect of quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters vary between sensors (geometric correction, spectral correction, etc.). Using a single-sensor image, a fractal methodology was employed to address these problems. Taking NDVI (computed from land-surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. These results show that fractal theory is an effective methodology for studying the scaling of quantitative remote sensing.
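A toy illustration of scale aggregation for NDVI is given below with synthetic bands (not ETM+ data); it only shows the definition NDVI = (NIR - Red)/(NIR + Red) and block-averaging to coarser pixels, not the fractal model proposed in the paper.

```python
# Toy scale-aggregation example with synthetic bands; the fractal model is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
nir = rng.uniform(0.3, 0.6, (256, 256))    # synthetic near-infrared band
red = rng.uniform(0.05, 0.2, (256, 256))   # synthetic red band

def block_mean(a, f):
    # Aggregate a 2-D array by averaging f x f blocks (f must divide the array size).
    h, w = a.shape
    return a.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

for f in (1, 2, 4, 8, 16, 32):
    nir_c, red_c = block_mean(nir, f), block_mean(red, f)
    ndvi_c = (nir_c - red_c) / (nir_c + red_c)           # NDVI of the aggregated bands
    print(f"scale factor {f:2d}: mean NDVI = {ndvi_c.mean():.4f}")
```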
Andersen, Randi Dovland; Jylli, Leena; Ambuel, Bruce
2014-06-01
There is little empirical evidence regarding the translation and cultural adaptation of self-report and observational outcome measures. Studies that evaluate and further develop existing practices are needed. This study explores the use of cognitive interviews in the translation and cultural adaptation of observational measures, using the COMFORT behavioral scale as an example, and demonstrates a structured approach to the analysis of data from cognitive interviews. The COMFORT behavioral scale is developed for assessment of distress and pain in a pediatric intensive care setting. Qualitative, descriptive methodological study. One general public hospital trust in southern Norway. N=12. Eight nurses, three physicians and one nurse assistant, from different wards and with experience caring for children. We translated the COMFORT behavior scale into Norwegian before conducting individual cognitive interviews. Participants first read and then used the translated version of the COMFORT behavioral scale to assess pain based on a 3-min film vignette depicting an infant in pain/distress. Two cognitive interview techniques were applied: Thinking Aloud (TA) during the assessment and Verbal Probing (VP) afterwards. In TA the participant verbalized his/her thought process while completing the COMFORT behavioral scale. During VP the participant responded to specific questions related to understanding of the measure, information recall and the decision process. We audio recorded, transcribed and analyzed interviews using a structured qualitative method (cross-case analysis based on predefined categories and development of a results matrix). Our analysis revealed two categories of problems: (1) Scale problems, warranting a change in the wording of the scale, including (a) translation errors, (b) content not understood as intended, and (c) differences between the original COMFORT scale and the revised COMFORT behavioral scale; and (2) Rater-context problems caused by (a) unfamiliarity with the scale, (b) lack of knowledge and experience, and (c) assessments based on a film vignette. Cognitive interviews revealed problems with both the translated and the original versions of the scale and suggested solutions that enhanced the validity of both versions. Cognitive interviews might be seen as a complement to current published best practices for translation and cultural adaptation. Copyright © 2013 Elsevier Ltd. All rights reserved.
SUSY’s Ladder: Reframing sequestering at Large Volume
Reece, Matthew; Xue, Wei
2016-04-07
Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
A modified priority list-based MILP method for solving large-scale unit commitment problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ke, Xinda; Lu, Ning; Wu, Di
This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. For the generators whose on/off status is predetermined, in the second step, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
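The first step (predetermining on/off status from a priority order and the demand curve) can be sketched in plain Python as follows; the generator data, reserve margin and fixing rules are invented for illustration and are not the paper's exact method.

```python
# Illustrative priority-list pre-commitment step; numbers and rules are made up.
generators = [                         # (name, capacity in MW, marginal cost in $/MWh)
    ("coal1", 400, 20), ("coal2", 350, 22), ("ccgt1", 300, 35),
    ("ccgt2", 250, 38), ("gt1", 100, 60), ("gt2", 100, 65),
]
hourly_demand = [900, 850, 820, 900, 1050, 1200, 1300, 1250]   # look-ahead window (MW)
reserve_margin = 0.10

valley = min(hourly_demand)
peak = max(hourly_demand) * (1 + reserve_margin)

priority = sorted(generators, key=lambda g: g[2])      # cheapest marginal cost first
fixed_on, fixed_off, undecided = [], [], []
capacity_before = 0.0
for name, cap, cost in priority:
    if capacity_before + cap <= valley:
        fixed_on.append(name)          # needed even at minimum demand: fix online
    elif capacity_before >= peak:
        fixed_off.append(name)         # not needed even at peak plus reserve: fix offline
    else:
        undecided.append(name)         # keep its binary variable in the MILP
    capacity_before += cap

print("fixed on :", fixed_on)
print("fixed off:", fixed_off)
print("undecided:", undecided)
```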
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and they also show that the proposed GA-based heuristics are efficient for the integrated problem.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. Using asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., CO2 mass transfer across a falling ethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, Henry's constant and gas diffusivity, are calibrated to produce posterior distributions, which will be used as the input for the second unit problem addressing the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
Predictive and Incremental Validity of Global and Domain-Based Adolescent Life Satisfaction Reports
ERIC Educational Resources Information Center
Haranin, Emily C.; Huebner, E. Scott; Suldo, Shannon M.
2007-01-01
Concurrent, predictive, and incremental validity of global and domain-based adolescent life satisfaction reports are examined with respect to internalizing and externalizing behavior problems. The Students' Life Satisfaction Scale (SLSS), Multidimensional Students' Life Satisfaction Scale (MSLSS), and measures of internalizing and externalizing…
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted from the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservation properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of the coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analysis, along with exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.
Solutions to pervasive environmental problems often are not amenable to a straightforward application of science-based actions. These problems encompass large-scale environmental policy questions where environmental concerns, economic constraints, and societal values conflict ca...
Evaluation of Complex Human Performance: The Promise of Computer-Based Simulation
ERIC Educational Resources Information Center
Newsom, Robert S.; And Others
1978-01-01
For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…
DOT National Transportation Integrated Search
2008-12-01
PROBLEM: The full-scale accelerated pavement testing (APT) provides a unique tool for pavement : engineers to directly collect pavement performance and failure data under heavy : wheel loading. However, running a full-scale APT experiment is very exp...
Discriminant validity of the Wender Utah rating scale in Iranian adults.
Farokhzadi, Farideh; Mohammadi, Mohammad Reza; Salmanian, Maryam
2014-01-01
The aim of this study is the normalization of the Wender Utah rating scale, which is used to detect adults with Attention-Deficit and Hyperactivity Disorder (ADHD). An available (convenience) sampling method was used to select 400 parents of children (200 parents of children with ADHD compared to 200 parents of normal children). The Wender Utah rating scale, which is designed to diagnose ADHD in adults, was filled out by each of the parents so that ADHD could be diagnosed in the parents as accurately as possible. The Wender Utah rating scale was divided into 6 subscales, consisting of dysthymia, oppositional defiant disorder, school work problems, conduct disorder, anxiety, and ADHD, which were analyzed with the exploratory factor analysis method. The Kaiser-Meyer-Olkin (KMO) value was 86.5% for dysthymia, 86.9% for oppositional defiant disorder, 77.5% for school-related problems, 90.9% for conduct disorder, 79.6% for anxiety, and 93.5% for attention-deficit/hyperactivity disorder; the chi-square value based on Bartlett's test was 2242.947 for dysthymia, 2239.112 for oppositional defiant disorder, 1221.917 for school work problems, 5031.511 for conduct disorder, 1421.1 for anxiety, and 7644.122 for ADHD. Since these values were larger than the chi-square critical values (P<0.05), the factor correlation matrix was found to be appropriate for factor analysis. Based on the findings, we can conclude that the Wender Utah rating scale can be appropriately used for predicting dysthymia, oppositional defiant disorder, school work problems, conduct disorder, and anxiety in adults with ADHD.
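For readers unfamiliar with these statistics, the sketch below shows how the KMO measure and Bartlett's test of sphericity are typically computed before an exploratory factor analysis, using the factor_analyzer package (assumed installed) on synthetic Likert-style data rather than the study's Wender Utah responses.

```python
# KMO and Bartlett's test before exploratory factor analysis, on synthetic data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(0)
latent = rng.normal(size=(400, 2))                      # two underlying factors
loadings = rng.uniform(0.4, 0.9, size=(2, 10))
items = latent @ loadings + rng.normal(scale=0.7, size=(400, 10))
df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(10)])

chi2, p = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)
print(f"Bartlett chi-square = {chi2:.1f}, p = {p:.3g}; overall KMO = {kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(df)
print("proportion of variance explained by the 2 factors:",
      np.round(fa.get_factor_variance()[1], 3))
```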
ERIC Educational Resources Information Center
Gresham, Frank M.; Elliott, Stephen N.; Kettler, Ryan J.
2010-01-01
Base rate information is important in clinical assessment because one cannot know how unusual or typical a phenomenon is without first knowing its base rate in the population. This study empirically determined the base rates of social skills acquisition and performance deficits, social skills strengths, and problem behaviors using a nationally…
Behavioural problems and autism in children with hydrocephalus : a population-based study.
Lindquist, Barbro; Carlsson, Göran; Persson, Eva-Karin; Uvebrant, Paul
2006-06-01
To investigate the prevalence of behavioural problems and autism in a population-based group of children with hydrocephalus and to see whether learning disabilities, cerebral palsy (CP), epilepsy, myelomeningocele (MMC) or preterm birth increase the risk of these problems. In the 107 children with hydrocephalus born in western Sweden in 1989-1993, behaviour was assessed using the Conners' parent rating scales in 66 and the teacher's rating scales in 57. Autism was investigated using the Childhood Autism Rating Scale. Parents rated 67% of the children and teachers 39% of the children as having behavioural problems (>1.5 SD, or T score >65). Learning disabilities increased the risk significantly and almost all the children with CP and/or epilepsy had behavioural problems. Autism was present in nine children (13%), in 20% of those without MMC and in one of 26 with MMC. Autism was significantly more frequent in children with learning disabilities (27% vs. 7%) and in children with CP and/or epilepsy (33% vs. 6%). The majority of children with hydrocephalus have behavioural problems and many have autism. It is therefore important to assess and understand all the aspects of cognition and behaviour in these children in order to minimise disability and enhance participation for the child.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
2016-07-26
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
Robust penalty method for structural synthesis
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1983-01-01
The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
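A minimal SUMT-style quadratic exterior penalty loop is sketched below (using SciPy's BFGS minimizer); it is not the report's singular-perturbation formulation, but it shows the sequence of unconstrained subproblems whose conditioning worsens as the penalty weight grows. The objective and constraint are invented.

```python
# SUMT-style exterior penalty loop; the growing penalty weight r is what makes
# the unconstrained subproblems progressively ill-conditioned.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # objective (made up)
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g(x):                      # equality constraint g(x) = 0 (made up)
    return x[0] + x[1] - 1.0

x = np.zeros(2)
for r in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    penalized = lambda x, r=r: f(x) + r * g(x) ** 2     # quadratic exterior penalty
    res = minimize(penalized, x, method="BFGS")         # warm-start from previous solution
    x = res.x
    print(f"r={r:>7.0f}  x={np.round(x, 4)}  constraint violation={g(x):+.2e}")
```

Warm-starting each subproblem from the previous solution, as done here, is the usual way SUMT keeps the ill-conditioned later stages tractable.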
A Study on Teaching Gases to Prospective Primary Science Teachers Through Problem-Based Learning
NASA Astrophysics Data System (ADS)
Senocak, Erdal; Taskesenligil, Yavuz; Sozbilir, Mustafa
2007-07-01
The aim of this study was to compare the achievement of prospective primary science teachers in a problem-based curriculum with those in a conventional primary science teacher preparation program with regard to success in learning about gases and developing positive attitudes towards chemistry. The subjects of the study were 101 first-year undergraduate students, who were in two different classes and who were taught by the same lecturer. One of the classes was randomly selected as the intervention group in which problem-based learning (PBL) was used, and the other as the control in which conventional teaching methods were used. The data were obtained through use of the gases diagnostic test (GDT), the chemistry attitude scale (CAS), and scales specific to students’ evaluation of PBL such as the peer evaluation scale (PES), self evaluation scale (SES), tutor’s performance evaluation scale (TPES) and students’ evaluation of PBL scale (SEPBLS). Data were analysed using SPSS 10.0 (Statistical Package for Social Sciences). In order to find out the effect of the intervention (PBL) on students’ learning of gases, independent sample t-tests and ANCOVA (analysis of co-variance) were used. The results obtained from the study showed that there was a statistically significant difference between the experimental and control groups in terms of students’ GDT total mean scores and their attitudes towards chemistry; in addition, PBL had a significant effect on the development of students’ skills such as self-directed learning, cooperative learning and critical thinking.
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
Steigen, Anne Mari; Bergh, Daniel
2018-02-05
This article analyses the psychometric properties of the 10-item version of the Social Provisions Scale. The Social Provisions Scale was analysed by means of the polytomous Rasch model, applied to data on 93 young adults (16-30 years) out of school or work, participating in different nature-based services due to mental or drug-related problems. The psychometric analysis concludes that the original scale has difficulties related to targeting and construct validity. In order to improve the psychometric properties, the scale was modified to include eight items measuring functional support. The modification was based on theoretical and statistical considerations. After modification the scale not only showed satisfactory psychometric properties, but it also clarified uncertainties regarding the construct validity of the measure. However, further analyses on larger samples are required. Implications for Rehabilitation Social support is important for a variety of rehabilitation outcomes and for different patient groups in the rehabilitation context, including people with mental health or drug-related problems. The Social Provisions Scale may be used as a screening tool to assess the social support of participants in rehabilitation, and the scale may also be an important instrument in rehabilitation research. There might be issues measuring structural support using the 10-item version of the Social Provisions Scale, but it seemed to work well as an 8-item scale measuring functional support.
A Case Study in an Integrated Development and Problem Solving Environment
ERIC Educational Resources Information Center
Deek, Fadi P.; McHugh, James A.
2003-01-01
This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…
Kreuzthaler, Markus; Miñarro-Giménez, Jose Antonio; Schulz, Stefan
2016-01-01
Big data resources are difficult to process without a scaled hardware environment that is specifically adapted to the problem. The emergence of flexible cloud-based virtualization techniques promises solutions to this problem. This paper demonstrates how a billion lines can be processed in a reasonable amount of time in a cloud-based environment. Our use case addresses the accumulation of concept co-occurrence data in MEDLINE annotations as a series of MapReduce jobs, which can be scaled and executed in the cloud. Besides showing an efficient way of solving this problem, we generated an additional resource for the scientific community to be used for advanced text mining approaches.
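A minimal sketch of the co-occurrence-counting idea described above, written as plain map and reduce functions. The toy input and function names are assumptions for illustration; the paper's actual pipeline runs as Hadoop-style MapReduce jobs in the cloud over MEDLINE annotations.

```python
from collections import defaultdict
from itertools import combinations

def map_line(annotated_line):
    """Map step: one line lists the concept IDs annotated in a citation;
    emit ((a, b), 1) for every unordered concept pair."""
    concepts = sorted(set(annotated_line.split()))
    for pair in combinations(concepts, 2):
        yield pair, 1

def reduce_pairs(mapped):
    """Reduce step: sum the counts for each concept pair."""
    counts = defaultdict(int)
    for pair, n in mapped:
        counts[pair] += n
    return counts

# Toy input standing in for billions of annotated MEDLINE lines.
lines = ["C01 C02 C03", "C02 C03", "C01 C03"]
mapped = (kv for line in lines for kv in map_line(line))
print(dict(reduce_pairs(mapped)))
# {('C01', 'C02'): 1, ('C01', 'C03'): 2, ('C02', 'C03'): 2}
```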
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search in contrast to vehicle-based decompositions.
NASA Astrophysics Data System (ADS)
Patel, Ravi A.; Perko, Janez; Jacques, Diederik
2017-04-01
Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the relevant spatial and temporal scales on which we need to provide information are larger than the scale where the processes actually occur. Usual techniques used to deal with these problems assume the existence of a REV. However, in order to understand the behavior on larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to the limitations of resources (time, memory) the downscaling can only be made down to a certain lower scale. At this lower scale several scales may still co-exist - the scale which can be explicitly described and a scale which needs to be conceptualized by effective properties. Hence, models which are supposed to provide effective properties on relevant scales should therefore be flexible enough to represent complex pore structure by explicit geometry on one side, and differently defined processes (e.g. described by effective properties) which emerge on a lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which makes it possible to account for a wide range of geochemical reaction networks through thermodynamic databases. Work on the applicability to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability and effective reaction rates on the continuum scale based on the pore-scale geometry.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm can achieve a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy rate of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian Pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of the visual gray-scale image, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
Empathy, Self-Reflection, and Curriculum Choice
ERIC Educational Resources Information Center
Grosseman, Suely; Hojat, Mohammadreza; Duke, Pamela M.; Mennin, Stewart; Rosenzweig, Steven; Novack, Dennis
2014-01-01
We administered the Jefferson Scale of Empathy and the Groningen Reflection Ability Scale to 61 of 64 entering medical students who self-selected a problem-based learning curricular track and to 163 of 198 who self-selected a lecture-based track (response rates of 95.3% and 82.3%, respectively, with no statistically significant differences in mean…
Mineral scale management. Part II, Fundamental chemistry
Alan W. Rudie; Peter W. Hart
2006-01-01
The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although the solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...
Fundamental chemistry of precipitation and mineral scale formation
Alan W. Rudie; Peter W. Hart
2005-01-01
The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately described or modeled using the solubility product equilibrium constant. Although solubility product identifies the primary conditions that need to be met for a scale problem to exist, the acid base equilibria of the scaling anions...
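As a concrete illustration of the chemistry these two abstracts summarize, the relations below use calcium carbonate as an assumed example scale (the papers treat scaling minerals in digesters and bleach plants generally): precipitation requires the ion activity product to exceed the solubility product, and the availability of the scaling anion is set by acid-base equilibrium.

```latex
% Precipitation occurs when the ion product exceeds K_sp:
[\mathrm{Ca^{2+}}]\,[\mathrm{CO_3^{2-}}] > K_{sp}(\mathrm{CaCO_3})

% The scaling anion is governed by acid-base equilibrium, so pH controls
% where the K_sp condition can be met:
K_{a2} = \frac{[\mathrm{H^+}]\,[\mathrm{CO_3^{2-}}]}{[\mathrm{HCO_3^-}]}
\quad\Longrightarrow\quad
[\mathrm{CO_3^{2-}}] = K_{a2}\,\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H^+}]}
```

Lowering the pH shifts the anion equilibrium away from the fully deprotonated form, which is why the abstracts stress that the acid-base equilibria of the scaling anions control where the solubility product condition is exceeded.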
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and in turn solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which in contrast to the standard projection technique converges in principle to the exact eigenvalues. This approach is not just applicable to eigenvalue problems encountered in many-body systems but also in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
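A minimal numerical illustration of the projection idea described above. This is a generic Rayleigh-Ritz-style sketch on assumed data, not the authors' iterative scheme: keep the basis states with the lowest diagonal entries of a diagonally dominant matrix, solve the small projected eigenproblem, and compare against the exact lowest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 30                       # full problem size, retained low-energy states

# Assumed test matrix: pronounced dominant diagonal plus weak coupling.
diag = np.sort(rng.uniform(0.0, 10.0, n))
H = np.diag(diag) + 0.05 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)                  # symmetrize

# Project onto the k basis states with the lowest diagonal (low-energy) entries.
idx = np.argsort(np.diag(H))[:k]
H_eff = H[np.ix_(idx, idx)]          # effective approximate model of smaller complexity

approx = np.linalg.eigvalsh(H_eff)[0]
exact = np.linalg.eigvalsh(H)[0]
print(f"projected lowest eigenvalue: {approx:.4f}, exact: {exact:.4f}")
```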
Tay, Stacie; Alcock, Kat; Scior, Katrina
2018-03-24
To assess the prevalence of personal experiences of mental health problems among clinical psychologists, external, perceived, and self-stigma among them, and stigma-related concerns relating to disclosure and help-seeking. Responses were collected from 678 UK-based clinical psychologists through an anonymous web survey consisting of the Social Distance Scale, Stig-9, Military Stigma Scale, Secrecy Scale, Attitudes towards Seeking Professional Psychological Help Scale-Short Form, alongside personal experience and socio-demographic questions. Two-thirds of participants had experienced mental health problems themselves. Perceived mental health stigma was higher than external and self-stigma. Participants were more likely to have disclosed in their social than work circles. Concerns about negative consequences for self and career, and shame prevented some from disclosing and help-seeking. Personal experiences of mental health problems among clinical psychologists may be fairly common. Stigma, concerns about negative consequences of disclosure and shame as barriers to disclosure and help-seeking merit further consideration. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Konno, Yohko; Suzuki, Keiji
This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems using “Local Clustering Organization (LCO)” as a new solution method for the job-shop scheduling problem (JSP). Building on earlier LCO studies of computationally effective large-scale scheduling, we examine how LCO can solve the JSP while maintaining the stability that leads to better solutions. To improve solution performance for the JSP, the optimization process carried out by LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently by clustering a parallel search that extends over multiple machines. Results obtained by applying the extended LCO to problems of various scales show that it minimizes the makespan and improves the stability of performance.
Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling
2016-07-05
developed by NASA Glenn Research Center based on Aboudi’s micromechanics theories [5] that provides a wide range of capabilities for modeling ... Moreover, the analyses will not only give a general ... interface of heterogeneous materials but also help engineers to use appropriate models for related problems based on the capability of the corresponding approaches.
McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron
2011-03-01
Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
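The "changing scale" result described above can be illustrated numerically. The paper itself works with process algebra; the stochastic simulation and mean-field ODE below are a generic SIS-type example under assumed parameters, not the authors' model. Individual-level stochastic transmission events, averaged over many runs, approach the population-level mean equation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta, gamma = 200, 0.3, 0.1      # assumed population size and rates
T, dt = 60.0, 0.05
steps = int(T / dt)

def individual_model():
    """Individual-based stochastic SIS dynamics: each susceptible may be infected
    with probability beta*I/N*dt, each infected recovers with probability gamma*dt."""
    infected = np.zeros(N, dtype=bool)
    infected[:5] = True
    trace = []
    for _ in range(steps):
        I = infected.sum()
        new_inf = (~infected) & (rng.random(N) < beta * I / N * dt)
        new_rec = infected & (rng.random(N) < gamma * dt)
        infected = (infected | new_inf) & ~new_rec
        trace.append(infected.sum())
    return np.array(trace)

def population_model():
    """Mean-field equation for the same process: dI/dt = beta*S*I/N - gamma*I."""
    I, trace = 5.0, []
    for _ in range(steps):
        I += dt * (beta * (N - I) * I / N - gamma * I)
        trace.append(I)
    return np.array(trace)

runs = np.mean([individual_model() for _ in range(50)], axis=0)
print("stochastic mean at t=60:", runs[-1], " mean-field ODE:", population_model()[-1])
```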
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
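A sketch of the generic Bregman-plus-forward-backward structure that BOS-type algorithms follow, on a 1D blocky toy model. The forward operator, parameters, and the use of scikit-image's TV denoiser as a stand-in for the TV proximal operator are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # stand-in for prox_TV

rng = np.random.default_rng(0)
n = 100
m_true = np.zeros(n); m_true[30:60] = 1.0; m_true[60:80] = -0.5       # blocky model
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)                                      # smoothing forward operator
d = A @ m_true + 0.01 * rng.standard_normal(n)                          # noisy data

mu, tau, tv_weight = 1.0, 1.0 / np.linalg.norm(A, 2) ** 2, 0.05
m, b = np.zeros(n), d.copy()
for k in range(20):                       # outer Bregman iterations
    for _ in range(30):                   # inner proximal forward-backward steps
        grad = mu * A.T @ (A @ m - b)     # gradient of the quadratic data term
        m = denoise_tv_chambolle(m - tau * grad, weight=tau * tv_weight)
    b += d - A @ m                        # Bregman update: add the residual back
    if np.linalg.norm(A @ m - d) <= 0.01 * np.sqrt(n):   # discrepancy stopping rule
        break
print("relative model misfit:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```

Note that no matrix inversion appears anywhere: only applications of A and its transpose plus a denoising-type proximal step, which is the property the abstract highlights for large-scale problems.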
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Vibration-based structural health monitoring of the aircraft large component
NASA Astrophysics Data System (ADS)
Pavelko, V.; Kuznetsov, S.; Nevsky, A.; Marinbah, M.
2017-10-01
The presented paper investigates the basic problems of a local structural health monitoring (SHM) system for a large-scale aircraft component. Vibration-based damage detection is accepted as a basic condition, and main attention is focused on a low-cost solution that would be attractive for practice. The conditions for small damage detection in a full-scale structural component at low-frequency excitation were defined in an analytical study and modal FEA. In the experimental study, a dynamic test of the helicopter Mi-8 tail beam was performed at harmonic excitation with a frequency close to the first natural frequency of the beam. The index of correlation coefficient deviation (CCD) was used for extraction of the features due to embedded pseudo-damage. It is shown that the problem of vibration-based detection of small damage in a large-scale structure at low-frequency excitation can be solved successfully.
Social problem-solving in Chinese baccalaureate nursing students.
Fang, Jinbo; Luo, Ying; Li, Yanhua; Huang, Wenxia
2016-11-01
To describe social problem solving in Chinese baccalaureate nursing students. A descriptive cross-sectional study was conducted with a cluster sample of 681 Chinese baccalaureate nursing students. The Chinese version of the Social Problem-Solving scale was used. Descriptive analyses, independent t-tests and one-way analysis of variance were applied to analyze the data. The final year nursing students presented the highest scores of positive social problem-solving skills. Students with experience of self-directed and problem-based learning presented significantly higher scores on the Positive Problem Orientation subscale. The group with critical thinking training experience, however, displayed higher negative problem-solving scores compared with the group without such experience. Social problem-solving abilities varied based upon teaching-learning strategies. Self-directed and problem-based learning may be recommended as effective ways to improve social problem-solving ability. © 2016 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles
NASA Astrophysics Data System (ADS)
He, Yu; Lin, Wen-bin
2017-04-01
The cosmological model of cold dark matter (CDM) with the dark energy and a scale-invariant adiabatic primordial power spectrum has been considered as the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) become a prominent candidate for the CDM. Many models extended from the standard model can provide the WIMPs naturally. The standard calculations of relic abundance of dark matter show that the WIMPs are well in agreement with the astronomical observation of Ω_DM h² ≈ 0.11. The WIMPs have a relatively large mass, and a relatively slow velocity, so they are easy to aggregate into clusters, and the results of numerical simulations based on the WIMPs agree well with the observational results of cosmic large-scale structures. In the aspect of experiments, the present accelerator or non-accelerator direct/indirect detections are mostly designed for the WIMPs. Thus, a wide attention has been paid to the CDM model based on the WIMPs. However, the ΛCDM model has a serious problem for explaining the small-scale structures under one Mpc. Different dark matter models have been proposed to alleviate the small-scale problem. However, so far there is no strong evidence enough to exclude the CDM model. We plan to introduce the research progress of the dark matter model based on the WIMPs, such as the WIMPs miracle, numerical simulation, small-scale problem, and the direct/indirect detection, to analyze the criterion for discriminating the "cold", "hot", and "warm" dark matter, and present the future prospects for the study in this field.
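For reference, the "WIMPs miracle" mentioned above rests on the standard freeze-out estimate of the relic abundance, quoted here as a textbook approximation for illustration rather than as a result of the paper:

```latex
\Omega_\chi h^2 \;\approx\; \frac{3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle \sigma_A v\rangle}
\qquad\Longrightarrow\qquad
\langle \sigma_A v\rangle \sim 3\times 10^{-26}\,\mathrm{cm^3\,s^{-1}}
\;\;\text{gives}\;\; \Omega_\chi h^2 \approx 0.1
```

An annihilation cross section of roughly weak-interaction strength thus lands naturally on the observed abundance of about 0.11, which is why weak-scale particles are such prominent CDM candidates.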
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation for a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
The Development of a Sport-Based Life Skills Scale for Youth to Young Adults, 11-23 Years of Age
ERIC Educational Resources Information Center
Cauthen, Hillary Ayn
2013-01-01
The purpose of this study was to develop a sport-based life skills scale that assesses 20 life skills: goal setting, time management, communication, coping, problem solving, leadership, critical thinking, teamwork, self-discipline, decision making, planning, organizing, resiliency, motivation, emotional control, patience, assertiveness, empathy,…
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
Challet-Bouju, Gaëlle; Perrot, Bastien; Romo, Lucia; Valleur, Marc; Magalon, David; Fatséas, Mélina; Chéreau-Boudet, Isabelle; Luquiens, Amandine; Grall-Bronnec, Marie; Hardouin, Jean-Benoit
2016-01-01
Background and aims The aim of this study was to test the screening properties of several combinations of items from gambling scales, in order to harmonize screening of gambling problems in epidemiological surveys. The objective was to propose two brief screening tools (three items or less) for a use in interviews and self-administered questionnaires. Methods We tested the screening properties of combinations of items from several gambling scales, in a sample of 425 gamblers (301 non-problem gamblers and 124 disordered gamblers). Items tested included interview-based items (Pathological Gambling section of the DSM-IV, lifetime history of problem gambling, monthly expenses in gambling, and abstinence of 1 month or more) and self-report items (South Oaks Gambling Screen, Gambling Attitudes, and Beliefs Survey). The gold standard used was the diagnosis of a gambling disorder according to the DSM-5. Results Two versions of the Rapid Screener for Problem Gambling (RSPG) were developed: the RSPG-Interview (RSPG-I), being composed of two interview items (increasing bets and loss of control), and the RSPG-Self-Assessment (RSPG-SA), being composed of three self-report items (chasing, guiltiness, and perceived inability to stop). Discussion and conclusions We recommend using the RSPG-SA/I for screening problem gambling in epidemiological surveys, with the version adapted for each purpose (RSPG-I for interview-based surveys and RSPG-SA for self-administered surveys). This first triage of potential problem gamblers must be supplemented by further assessment, as it may overestimate the proportion of problem gamblers. However, a first triage has the great advantage of saving time and energy in large-scale screening for problem gambling. PMID:27348558
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser scanning technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the Just-Noticeable-Difference perception metric to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: primarily, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; secondly, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method
NASA Astrophysics Data System (ADS)
Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru
2015-05-01
Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). Behind these investigations, we have encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this subtle problem by the Sakurai-Sugiura (SS) method, which has been recently proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.
Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2013-01-01
Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of large-scale MIBLP into single-level programming. However, the validity of the transformation was not proved. The solution of MIBLP depends on the structure of the inner problem. If the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) method can be used to reformulate the MIBLP as a single-level one. We adopt the KKT technique in our algorithm ReacKnock to attack the intractable problem of the solution of MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol and threonine. Compared to the previous methods, our algorithm is fast, stable and reliable in finding the optimal solutions for all the chemical products tested, and is able to provide all the alternative deletion strategies which lead to the same industrial objective. PMID:24348984
An investigation of messy genetic algorithms
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley
1990-01-01
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results for a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.
Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio
2012-02-16
The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update consume too much memory and too many computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how much data they prefer to store based on correlation scales, since the data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to represent the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. This new scheme is tested with 1D examples first. The estimated results and uncertainty are compared with those of the traditional full storage method. In the end, a large-scale numerical model is utilized to validate this new scheme.
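A minimal sketch of the storage-and-update idea summarized above. The 1D grid, the exponential model parameters, the three-correlation-scale cutoff, and the mock diagonal update are assumptions for illustration (the 0.95 shrink factor comes from the abstract); this is not the authors' code.

```python
import numpy as np
from scipy.sparse import csc_matrix

def exp_covariance_csc(x, variance, corr_scale, cutoff_scales=3.0):
    """Exponential covariance stored in CSC format, keeping only entries
    within `cutoff_scales` correlation scales of the diagonal."""
    dist = np.abs(x[:, None] - x[None, :])
    cov = variance * np.exp(-dist / corr_scale)
    cov[dist > cutoff_scales * corr_scale] = 0.0      # drop uninformative far-field entries
    return csc_matrix(cov)

def update(cov, new_diag, corr_scale, shrink=0.95):
    """One iteration of the proposed update: the diagonal comes from the inverse
    algorithm, off-diagonals are rebuilt from the exponential model with the
    correlation scale shortened by the shrink coefficient."""
    corr_scale *= shrink
    x = np.arange(cov.shape[0], dtype=float)
    dist = np.abs(x[:, None] - x[None, :])
    corr = np.exp(-dist / corr_scale)
    sd = np.sqrt(new_diag)
    cov_new = corr * np.outer(sd, sd)
    cov_new[dist > 3.0 * corr_scale] = 0.0
    return csc_matrix(cov_new), corr_scale

x = np.arange(100, dtype=float)
cov, scale = exp_covariance_csc(x, variance=1.0, corr_scale=10.0), 10.0
for it in range(5):                                    # mock iterations
    diag = cov.diagonal() * 0.8                        # stand-in for the real variance update
    cov, scale = update(cov, diag, scale)
print("stored nonzeros:", cov.nnz, " current correlation scale:", round(scale, 2))
```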
Conners' Teacher Rating Scale for Preschool Children: A Revised, Brief, Age-Specific Measure
ERIC Educational Resources Information Center
Purpura, David J.; Lonigan, Christopher J.
2009-01-01
The Conners' Teacher Rating Scale-Revised (CTRS-R) is one of the most commonly used measures of child behavior problems. However, the scale length and the appropriateness of some of the items on the scale may reduce the usefulness of the CTRS-R for use with preschoolers. In this study, a Graded Response Model analysis based on Item Response Theory…
Learning Analysis of K-12 Students' Online Problem Solving: A Three-Stage Assessment Approach
ERIC Educational Resources Information Center
Hu, Yiling; Wu, Bian; Gu, Xiaoqing
2017-01-01
Problem solving is considered a fundamental human skill. However, large-scale assessment of problem solving in K-12 education remains a challenging task. Researchers have argued for the development of an enhanced assessment approach through joint effort from multiple disciplines. In this study, a three-stage approach based on an evidence-centered…
Iterative repair for scheduling and rescheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Deale, Michael
1991-01-01
An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
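A compact sketch of the iterative-repair-by-annealing idea. The "scheduling problem" below is a toy constraint-violation count with assumed moves and parameters, not the Space Shuttle ground processing model or the constraint framework itself.

```python
import math
import random

random.seed(0)
n_tasks, horizon = 20, 50
schedule = [random.randrange(horizon) for _ in range(n_tasks)]   # initial (broken) schedule

def violations(s):
    """Toy constraint cost: count pairs of tasks scheduled too close together."""
    return sum(1 for i in range(n_tasks) for j in range(i + 1, n_tasks)
               if abs(s[i] - s[j]) < 2)

def repair_move(s):
    """Repair step: re-time one task."""
    s = list(s)
    s[random.randrange(n_tasks)] = random.randrange(horizon)
    return s

temp = 5.0
cost = violations(schedule)
while temp > 0.01 and cost > 0:
    cand = repair_move(schedule)
    c = violations(cand)
    # Accept improving moves always, worsening moves with Boltzmann probability,
    # which is what lets the search escape local minima.
    if c <= cost or random.random() < math.exp((cost - c) / temp):
        schedule, cost = cand, c
    temp *= 0.999
print("remaining violations:", cost)
```

The anytime flavor mentioned in the abstract shows up naturally here: the current schedule is always a usable (if imperfect) answer, and it only improves as the repair loop is allowed to run longer.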
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, as it provides effective guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
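The last two ingredients mentioned above, sparse storage of the high-order system and a conjugate-gradient solve, can be illustrated with a small sketch. The matrix here is a random sparse symmetric positive-definite stand-in, not the ZY-3 block-adjustment normal equations.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 2000                                   # stand-in for a large parameter vector
B = sparse_random(n, n, density=0.002, random_state=rng, format="csr")
N = B @ B.T + 10.0 * identity(n)           # sparse, symmetric positive definite system
rhs = rng.standard_normal(n)

x, info = cg(N, rhs)                       # conjugate gradient: only sparse mat-vec products
print("converged:", info == 0, " residual norm:", np.linalg.norm(N @ x - rhs))
```

The point of the design choice is that CG never forms or factors the dense system; it only needs matrix-vector products with the sparse structure, which is what makes the high-order equations tractable at large scale.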
Therapists' causal attributions of clients' problems and selection of intervention strategies.
Royce, W S; Muehlke, C V
1991-04-01
Therapists' choices of intervention strategies are influenced by many factors, including judgments about the bases of clients' problems. To assess the relationships between such causal attributions and the selection of intervention strategies, 196 counselors, psychologists, and social workers responded to the written transcript of a client's interview by answering two questionnaires, a 1982 scale (Causal Dimension Scale by Russell) which measured causal attribution of the client's problem, and another which measured preference for emotional, rational, and active intervention strategies in dealing with the client, based on the 1979 E-R-A taxonomy of Frey and Raming. A significant relationship was found between the two sets of variables, with internal attributions linked to rational intervention strategies and stable attributions linked to active strategies. The results support Halleck's 1978 hypothesis that theories of psychotherapy tie interventions to etiological considerations.
A multiple scales approach to sound generation by vibrating bodies
NASA Technical Reports Server (NTRS)
Geer, James F.; Pope, Dennis S.
1992-01-01
The problem of determining the acoustic field in an inviscid, isentropic fluid generated by a solid body whose surface executes prescribed vibrations is formulated and solved as a multiple scales perturbation problem, using the Mach number M based on the maximum surface velocity as the perturbation parameter. Following the idea of multiple scales, new 'slow' spatial scales are introduced, which are defined as the usual physical spatial scale multiplied by powers of M. The governing nonlinear differential equations lead to a sequence of linear problems for the perturbation coefficient functions. However, it is shown that the higher order perturbation functions obtained in this manner will dominate the lower order solutions unless their dependence on the slow spatial scales is chosen in a certain manner. In particular, it is shown that the perturbation functions must satisfy an equation similar to Burgers' equation, with a slow spatial scale playing the role of the time-like variable. The method is illustrated by a simple one-dimensional example, as well as by three different cases of a vibrating sphere. The results are compared with solutions obtained by purely numerical methods and some insights provided by the perturbation approach are discussed.
GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL
Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...
Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal
1991-11-01
[Extraction residue: only the table of contents and the lists of figures and tables survived for this entry. Recoverable headings include "Time-Integrating Correlator", "Representations of the DNA Bases", and "DNA Analysis Strategy"; the figure and table captions indicate that each DNA base is represented by a 7-bit-long pseudorandom sequence and that a data-flow diagram of the DNA analysis system is included.]
A novel heuristic algorithm for capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre
2017-09-01
The vehicle routing problem with capacity constraints was considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity of large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS) with several specifically designed operators and features to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm provides better performance on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current situation.
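A minimal sketch of the large-neighborhood destroy/repair loop that ALNS-style heuristics build on. The toy instance, the single destroy and repair operators, and the improving-only acceptance rule are generic illustrations, not the authors' specifically designed operators or tabu mechanism.

```python
import random
random.seed(1)

# Assumed toy CVRP instance: customer demands, coordinates, vehicle capacity Q.
demand = {i: random.randint(1, 9) for i in range(1, 16)}
coords = {i: (random.random(), random.random()) for i in range(16)}   # 0 = depot
Q = 20

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def cost(routes):
    return sum(dist(r[i], r[i + 1]) for r in routes for i in range(len(r) - 1))

def destroy(routes, k=4):
    """Destroy operator: remove k random customers from the solution."""
    removed = random.sample(list(demand), k)
    return [[c for c in r if c not in removed] for r in routes], removed

def repair(routes, removed):
    """Repair operator: cheapest feasible insertion respecting capacity."""
    for c in removed:
        best = None
        for r in routes:
            if sum(demand[v] for v in r if v != 0) + demand[c] > Q:
                continue
            for pos in range(1, len(r)):
                delta = dist(r[pos - 1], c) + dist(c, r[pos]) - dist(r[pos - 1], r[pos])
                if best is None or delta < best[0]:
                    best = (delta, r, pos)
        _, r, pos = best
        r.insert(pos, c)
    return routes

# Initial solution: one customer per vehicle (feasible but expensive).
routes = [[0, c, 0] for c in demand]
best = [list(r) for r in routes]
for _ in range(2000):
    cand, removed = destroy([list(r) for r in routes])
    cand = repair(cand, removed)
    if cost(cand) < cost(routes):          # simple improving-only acceptance
        routes = cand
        if cost(routes) < cost(best):
            best = [list(r) for r in routes]
print("initial cost vs improved cost:",
      round(cost([[0, c, 0] for c in demand]), 2), round(cost(best), 2))
```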
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods, such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging problems that require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer to source models inverted from observation in terms of complexity. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time efficiently on large-scale machines. In this study, the influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
ERIC Educational Resources Information Center
Hoie, B.; Sommerfelt, K.; Waaler, P. E.; Alsaker, F. D.; Skeidsvoll, H.; Mykletun, A.
2008-01-01
The combined burden of psychosocial (Achenbach scales), cognitive (Raven matrices), and executive function (EF) problems was studied in a population-based sample of 6- to 12-year-old children with epilepsy (n = 162; 99 males, 63 females) and in an age- and sex-matched control group (n = 107; 62 males, 45 females). Approximately 35% of the children…
Development of weighting value for ecodrainage implementation assessment criteria
NASA Astrophysics Data System (ADS)
Andajani, S.; Hidayat, D. P. A.; Yuwono, B. E.
2018-01-01
This research aims to generate a weighting value for each factor and to find out the most influential factors for identifying the implementation of the ecodrain concept, using loading factors and Cronbach's alpha. Drainage problems, especially in urban areas, are getting more complex and need to be handled as soon as possible. Flood and drought problems cannot be solved by the conventional drainage paradigm (to drain runoff flow as fast as possible to the nearest drainage area). The new drainage paradigm based on an environmental approach, called "ecodrain", can solve both flood and drought problems. For optimal results, ecodrain should be applied from the smallest scale (domestic scale) up to the largest scale (city areas). It is necessary to identify drainage conditions based on an environmental approach. This research implements the ecodrain concept through guidelines that consist of parameters and assessment criteria. Two variables, 7 indicators and 63 key factors were generated from previous research and related regulations. The conclusion of the research is that the most influential indicator for the technical management variable is the storage system, while for the non-technical management variable it is the government role.
Hogue, Aaron; Dauber, Sarah; Henderson, Craig E
2014-01-01
This study introduces a therapist-report measure of evidence-based practices for adolescent conduct and substance use problems. The Inventory of Therapy Techniques-Adolescent Behavior Problems (ITT-ABP) is a post-session measure of 27 techniques representing four approaches: cognitive-behavioral therapy (CBT), family therapy (FT), motivational interviewing (MI), and drug counseling (DC). A total of 822 protocols were collected from 32 therapists treating 71 adolescents in six usual care sites. Factor analyses identified three clinically coherent scales with strong internal consistency across the full sample: FT (8 items; α = .79), MI/CBT (8 items; α = .87), and DC (9 items, α = .90). The scales discriminated between therapists working in a family-oriented site versus other sites and showed moderate convergent validity with therapist reports of allegiance and skill in each approach. The ITT-ABP holds promise as a cost-efficient quality assurance tool for supporting high-fidelity delivery of evidence-based practices in usual care.
Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.
Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong
2017-08-09
For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping is shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make a direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable group method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function which usually consists of a finite number of operations (i.e., four arithmetic operations "+", "-", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them respectively. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.
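A small sketch of the formula-based grouping idea, using SymPy to stand in for the formula parsing. The example objective and the grouping rule (variables that co-occur in the same non-additively-separable term are linked) are illustrative assumptions, not the exact FBG classification rules.

```python
import sympy as sp

x = sp.symbols("x0:8")
# Assumed white-box objective: additively separable blocks of interacting variables.
f = (x[0] * x[1] + sp.sin(x[1] + x[2])) + (x[3] - x[4]) ** 2 + x[5] ** 2 + sp.cos(x[6] * x[7])

def formula_based_groups(expr, variables):
    """Group variables that co-occur in the same additive term (non-separable),
    merging groups that share variables (a simple union of linked sets)."""
    groups = [{v} for v in variables]
    for term in sp.Add.make_args(sp.expand(expr)):
        linked = term.free_symbols & set(variables)
        if not linked:
            continue
        merged = set().union(*[g for g in groups if g & linked]) | linked
        groups = [g for g in groups if not (g & linked)] + [merged]
    return groups

for g in formula_based_groups(f, x):
    print(sorted(g, key=str))
# -> [x0, x1, x2], [x3, x4], [x5], [x6, x7]: one subcomponent per interacting block
```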
[Relationship Between Child Behavior and Emotional Problems and School Based Effort Avoidance].
Weber, Hanna Maria; Büttner, Peter; Rücker, Stefan; Petermann, Franz
2015-01-01
The present study examined the relationship between school-based effort avoidance tendencies and problem behavior in children aged 9 to 16 years. Effort avoidance tendencies were assessed in 367 children with and without child care. Teachers and social workers rated children on behavioral and emotional problems with the Strengths and Difficulties Questionnaire (SDQ). Results confirmed significant but low correlations between teacher ratings of behavioral and emotional problems in children and selected subscales of self-reported effort avoidance in school, especially for children in child care institutions. For them, "conduct problems" were significantly correlated with three of the four subscales and the total sum score of effort avoidance, whereas "hyperactivity" was the only scale which was significantly associated with the fourth subscale. In the school sample, only "hyperactivity" and "peer problems" were significantly correlated with one subscale of school-based effort avoidance. The findings suggest that more problem behavior is related to more school-based effort avoidance tendencies.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two bioinformatics benchmark applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
Engineering large-scale agent-based systems with consensus
NASA Technical Reports Server (NTRS)
Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.
1994-01-01
The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system, including a systematic analysis of user requirements as well as a structured approach to generating a system design that exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Yardimci, Figen; Bektaş, Murat; Özkütük, Nilay; Muslu, Gonca Karayağız; Gerçeker, Gülçin Özalp; Başbakkal, Zümrüt
2017-01-01
The study process is related to students' learning approaches and styles. Motivation resources and problems determine students' internal, external, and negative motivation. Analyzing the study process and motivation of students yields important indications about the nature of educational systems in higher education. This study aims to analyze the relationship between the study process, and motivation resources and problems, with regard to nursing students in different educational systems in Turkey, and to reveal their effects according to a set of variables. This is a descriptive, cross-sectional and correlational study. The settings were traditional, integrated and problem-based learning (PBL) educational programs for nurses involving students from three nursing schools in Turkey, and the participants were nursing students (n=330). The data were collected using the Study Process Questionnaire (R-SPQ-2F) and the Motivation Resources and Problems (MRP) Scale. A statistically significant difference was found between the scores on the study process scale, and the motivation resources and problems scale, among the educational systems. This study determined that the mean scores of students in the PBL system on learning approaches, intrinsic motivation and negative motivation were higher. A positive significant correlation was found between the scales. The study process, and motivation resources and problems, were found to be affected by the educational system. This study determined that the PBL educational system more effectively increases students' intrinsic motivation and helps them to acquire learning skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
Valuation of Child Behavioral Problems from the Perspective of US Adults
Craig, Benjamin M.; Brown, Derek S.; Reeve, Bryce B.
2015-01-01
OBJECTIVE To assess preferences between child behavioral problems and estimate their value on a quality-adjusted life year (QALY) scale. METHODS Respondents, age 18 or older, drawn from a nationally representative panel between August 2012 and February 2013 completed a series of paired comparisons, each involving a choice between 2 different behavioral problems described using the Behavioral Problems Index (BPI), a 28-item instrument with 6 domains (Anxious/Depressed, Headstrong, Hyperactive, Immature Dependency, Anti-social, and Peer Conflict/Social Withdrawal). Each behavioral problem lasted 1 or 2 years for an unnamed child, age 7 or 10 years, with no suggested relationship to the respondent. Generalized linear model analyses estimated the value of each problem on a QALY scale, considering its duration and the child’s age. RESULTS Among 5207 eligible respondents, 4155 (80%) completed all questions. Across the 6 domains, problems relating to antisocial behavior were the least preferred, particularly the items related to cheating, lying, bullying, and cruelty to others. CONCLUSIONS The findings are the first to produce a preference-based summary measure of child behavioral problems on a QALY scale. The results may inform both clinical practice and resource allocation decisions by enhancing our understanding of difficult tradeoffs in how adults view child behavioral problems. Understanding US values also promotes national health surveillance by complementing conventional measures of surveillance, survival, and diagnoses. PMID:26209476
Valuation of Child Behavioral Problems from the Perspective of US Adults.
Craig, Benjamin M; Brown, Derek S; Reeve, Bryce B
2016-02-01
To assess preferences between child behavioral problems and estimate their value on a quality-adjusted life year (QALY) scale. Respondents, age 18 or older, drawn from a nationally representative panel between August 2012 and February 2013 completed a series of paired comparisons, each involving a choice between 2 different behavioral problems described using the Behavioral Problems Index (BPI), a 28-item instrument with 6 domains (Anxious/Depressed, Headstrong, Hyperactive, Immature Dependency, Anti-social, and Peer Conflict/Social Withdrawal). Each behavioral problem lasted 1 or 2 years for an unnamed child, age 7 or 10 years, with no suggested relationship to the respondent. Generalized linear model analyses estimated the value of each problem on a QALY scale, considering its duration and the child's age. Among 5207 eligible respondents, 4155 (80%) completed all questions. Across the 6 domains, problems relating to antisocial behavior were the least preferred, particularly the items related to cheating, lying, bullying, and cruelty to others. The findings are the first to produce a preference-based summary measure of child behavioral problems on a QALY scale. The results may inform both clinical practice and resource allocation decisions by enhancing our understanding of difficult tradeoffs in how adults view child behavioral problems. Understanding US values also promotes national health surveillance by complementing conventional measures of surveillance, survival, and diagnoses. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui
2016-07-01
In the initial alignment process of a strapdown inertial navigation system (SINS), large misalignment angles always introduce a nonlinear problem, which can usually be handled using the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and the strong tracking scaled unscented Kalman filter (STSUKF) is proposed with fixed parameters to improve convergence speed; however, these parameters are artificially constructed and uncertain in real applications. To further improve the alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). As a result, an initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with those based on SUKF and STSUKF.
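As an illustration of the filtering machinery referred to above, here is a minimal sketch of the scaled unscented transform that underlies the SUKF, using the usual (alpha, beta, kappa) parameterization. It is not the STSUKF or the fuzzy-adaptive variant proposed in the paper; the example state and covariance are hypothetical.

# Minimal sketch of the scaled unscented transform used by a SUKF.
import numpy as np

def scaled_sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root of (n+lam)*P
    sigma = np.empty((2 * n + 1, n))
    sigma[0] = mean
    for i in range(n):
        sigma[1 + i] = mean + S[:, i]
        sigma[1 + n + i] = mean - S[:, i]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return sigma, wm, wc

# Hypothetical 3-state misalignment estimate propagated through the transform.
mean = np.zeros(3)
cov = np.diag([0.1, 0.1, 0.5])
pts, wm, wc = scaled_sigma_points(mean, cov)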
Low frequency full waveform seismic inversion within a tree based Bayesian framework
NASA Astrophysics Data System (ADS)
Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe
2018-01-01
Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree based Bayesian inversion scheme which attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples are initially started from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree based approach together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.
Using the DPSIR Framework to Develop a Conceptual Model: Technical Support Document
Modern problems (e.g., pollution, urban sprawl, environmental equity) are complex and often transcend spatial and temporal scales. Systems thinking is an approach to problem solving that is based on the belief that the component parts of a system are best understood in the contex...
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate the stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
Non-linear analytic and coanalytic problems (L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
NASA Astrophysics Data System (ADS)
Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia
2016-04-01
In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it to a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.
Hassanzadeh, Akbar; Heidari, Zahra; Hassanzadeh Keshteli, Ammar; Afshar, Hamid
2017-01-01
Objective The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by the Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results The results showed that the personal stressors domain has a significant positive association with psychological distress (β = 0.19), anxiety (β = 0.25), depression (β = 0.15), and their collective profile score (β = 0.20), with greater associations in females (β = 0.28) than in males (β = 0.13) (all P < 0.001). In addition, in the adjusted models, the regression coefficients for the association of the social stressors domain and the psychological problems profile score were 0.37, 0.35, and 0.46 in the total sample, males, and females, respectively (P < 0.001). Conclusion Results of our study indicated that different stressors, particularly those related to socioeconomic factors, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems. PMID:29312459
Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
1992-08-01
A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.
Distributed intelligent urban environment monitoring system
NASA Astrophysics Data System (ADS)
Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang
2018-02-01
Current environmental pollution and destruction have developed into a major worldwide social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance, but overall, current environmental monitoring systems face a series of problems. Based on electrochemical sensors, this paper designs a small, low-cost, easy-to-deploy urban environmental quality monitoring terminal, with multiple terminals constituting a distributed network. The system has been used in small-scale demonstration applications, which have confirmed that it is suitable for large-scale deployment.
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform.
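A minimal sketch of the Hough-transform step this algorithm builds on is given below: 2-D range-scan points are mapped into a (theta, rho) accumulator so that dominant straight-line features appear as peaks. The rotation, scaling and translation estimation layered on top of this in the paper is not reproduced, and the accumulator sizes and example points are illustrative assumptions.

# Minimal sketch of a (theta, rho) Hough accumulator for 2-D scan points.
import numpy as np

def hough_accumulator(points, n_theta=180, n_rho=200):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.linalg.norm(points, axis=1).max()
    rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t            # rho for every theta at once
        idx = np.digitize(rho, rho_edges) - 1
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc, thetas, rho_edges

# Hypothetical scan points; peaks in acc give wall-like line parameters.
scan = np.array([[1.0, 0.2], [1.0, 0.4], [1.0, 0.6], [0.5, 1.0]])
acc, thetas, rho_edges = hough_accumulator(scan)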
Effective field theory analysis on μ problem in low-scale gauge mediation
NASA Astrophysics Data System (ADS)
Zheng, Sibo
2012-02-01
Supersymmetric models based on the scenario of gauge mediation often suffer from the well-known μ problem. In this paper, we reconsider this problem in low-scale gauge mediation in terms of an effective field theory analysis. In this paradigm, all high-energy input soft masses can be expressed via loop expansions. If the corrections coming from messenger thresholds are small, as we assume in this paper, then all RG evolutions can be taken as linear approximations for low-scale supersymmetry breaking. Due to these observations, the parameter space can be systematically classified and studied after constraints coming from electroweak symmetry breaking are imposed. We find that some old proposals in the literature are reproduced, and two new classes are uncovered. We refer to a microscopic model in which the specific relations among coefficients in one of the new classes are well motivated. We also discuss some primary phenomenology.
Primi, Ricardo
2014-09-01
Ability testing has been criticized because understanding of the construct being assessed is incomplete and because the testing has not yet been satisfactorily improved in accordance with new knowledge from cognitive psychology. This article contributes to the solution of this problem through the application of item response theory and Susan Embretson's cognitive design system for test development in the development of a fluid intelligence scale. This study is based on findings from cognitive psychology; instead of focusing on the development of a test, it focuses on the definition of a variable for the creation of a criterion-referenced measure for fluid intelligence. A geometric matrix item bank with 26 items was analyzed with data from 2,797 undergraduate students. The main result was a criterion-referenced scale that was based on information from item features that were linked to cognitive components, such as storage capacity, goal management, and abstraction; this information was used to create the descriptions of selected levels of a fluid intelligence scale. The scale proposed that the levels of fluid intelligence range from the ability to solve problems containing a limited number of bits of information with obvious relationships through the ability to solve problems that involve abstract relationships under conditions that are confounded with an information overload and distraction by mixed noise. This scale can be employed in future research to provide interpretations for the measurements of the cognitive processes mastered and the types of difficulty experienced by examinees. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analyzing the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631
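For readers unfamiliar with the optimizer mentioned above, here is a minimal particle swarm optimization sketch for a generic continuous objective. It is not the improved bubble-sort-based PSO used in the paper to tune the CNN parameters; the bounds, coefficients and toy sphere objective are assumptions for illustration.

# Minimal particle swarm optimization sketch for a continuous objective.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_f = pso(lambda z: np.sum(z**2), dim=4)   # toy sphere objective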
Can microbes economically remove sulfur?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, J.L.
Researchers have reported that refiners who now rely on costly physico-chemical procedures to desulfurize petroleum will soon have an alternative microbial-enzyme-based approach to this process. This new approach is still under development, and a considerable number of chemical engineering problems need to be solved before the process is ready for large-scale use. This paper reviews the several research projects dedicated to solving the problems that keep a biotechnology-based alternative from competing with chemical desulfurization.
Multiscale functions, scale dynamics, and applications to partial differential equations
NASA Astrophysics Data System (ADS)
Cresson, Jacky; Pierret, Frédéric
2016-05-01
Modeling phenomena from experimental data always begins with a choice of hypotheses on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)], which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto functions. Scale equations are analysed using scale regimes and the notion of an asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation, which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.
Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra
NASA Astrophysics Data System (ADS)
Wang, Hefeng; Fan, Heng; Li, Fuli
2014-01-01
When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.
Networked high-speed auroral observations combined with radar measurements for multi-scale insights
NASA Astrophysics Data System (ADS)
Hirsch, M.; Semeter, J. L.
2015-12-01
Networks of ground-based instruments to study the terrestrial aurora, for the purpose of analyzing the particle precipitation characteristics driving the aurora, have been established. Additional funding is pouring into future ground-based auroral observation networks consisting of combinations of tossable, portable, and fixed-installation ground-based legacy equipment. Our approach to this problem using the High Speed Tomography (HiST) system combines tightly synchronized filtered auroral optical observations, capturing temporal features of order 10 ms, with supporting measurements from incoherent scatter radar (ISR). ISR provides a broader spatial context, up to order 100 km laterally, on one-minute time scales, while our camera field of view (FOV) is chosen to be of order 10 km at auroral altitudes in order to capture 100 m scale lateral auroral features. The dual-scale observations of ISR and the fine-scale optical observations of HiST may be coupled through a physical model using linear basis functions to estimate important ionospheric quantities such as the electron number density in 3-D (time, perpendicular and parallel to the geomagnetic field). Field measurements and analysis using HiST and PFISR are presented from experiments conducted at the Poker Flat Research Range in central Alaska. Other multiscale configuration candidates include supplementing networks of all-sky cameras such as THEMIS with co-locations of HiST-like instruments to fuse wide-FOV measurements with the fine-scale HiST precipitation characteristic estimates. Candidate models for this coupling include GLOW and TRANSCAR. Future extensions of this work may include incorporating line-of-sight total electron content estimates from ground-based networks of GPS receivers in a sensor fusion problem.
AMD in the Iberian Pyrite Belt is a problem of global scale. Successful implementation of passive treatment systems could remediate at least part of this problem at reasonable costs. However, initial trials with ALD and RAPS based on gravel size limestone failed due to rapid loss...
ERIC Educational Resources Information Center
Tatner, Mary; Tierney, Anne
2016-01-01
The development and evaluation of a two-week laboratory class, based on the diagnosis of human infectious diseases, is described. It can be easily scaled up or down, to suit class sizes from 50 to 600 and completed in a shorter time scale, and to different audiences as desired. Students employ a range of techniques to solve a real-life and…
Patrick, Christopher J.; Kramer, Mark D.; Krueger, Robert F.; Markon, Kristian E.
2014-01-01
The Externalizing Spectrum Inventory (ESI; Krueger, Markon, Patrick, Benning, & Kramer, 2007) provides for integrated, hierarchical assessment of a broad range of problem behaviors and traits in the domain of deficient impulse control. The ESI assesses traits and problems in this domain through 23 lower-order facet scales organized around three higher-order dimensions, reflecting general disinhibition, callous-aggression, and substance abuse. The full-form ESI contains 415 items, and a shorter form would be useful for questionnaire screening studies or multi-domain research protocols. The current work employed item response theory and structural modeling methods to create a 160-item brief form (ESI-bf) that provides for efficient measurement of the ESI’s lower-order facets and quantification of its higher-order dimensions either as scale-based factors or as item-based composites. The ESI-bf is recommended for use in research on psychological or neurobiological correlates of problems such as risk-taking, delinquency, aggression, and substance abuse, and studies of general and specific mechanisms that give rise to problems of these kinds. PMID:24320765
The impacts of problem gambling on concerned significant others accessing web-based counselling.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
2014-08-01
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers. Copyright © 2014 Elsevier Ltd. All rights reserved.
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and model experiments, large-scale tests, and real tests of engineering systems.
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, but the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive additionally an alternative quartic form which allows us to apply the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
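To make the design criterion concrete, the following sketch computes the integrated sidelobe level (ISL) of a unimodular code from its aperiodic autocorrelation. The MM-based minimization algorithms of the paper are not reproduced; this only shows how a candidate waveform would be scored, with a random unit-modulus code as a stand-in.

# Minimal sketch: ISL of a unimodular code from its aperiodic autocorrelation.
import numpy as np

def isl(code):
    """Sum of |r_k|^2 over all nonzero lags of the aperiodic autocorrelation."""
    n = len(code)
    # numpy conjugates the second argument, so this is the true autocorrelation
    r = np.correlate(code, code, mode="full")     # lags -(n-1) .. (n-1)
    return float(np.sum(np.abs(r) ** 2) - np.abs(r[n - 1]) ** 2)

# Random unimodular (unit-modulus) code of length 64 as a stand-in waveform.
rng = np.random.default_rng(1)
x = np.exp(1j * 2 * np.pi * rng.random(64))
print(isl(x))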
The Validation of the Active Learning in Health Professions Scale
ERIC Educational Resources Information Center
Kammer, Rebecca; Schreiner, Laurie; Kim, Young K.; Denial, Aurora
2015-01-01
There is a need for an assessment tool for evaluating the effectiveness of active learning strategies such as problem-based learning in promoting deep learning and clinical reasoning skills within the dual environments of didactic and clinical settings in health professions education. The Active Learning in Health Professions Scale (ALPHS)…
Recursive renormalization group theory based subgrid modeling
NASA Technical Reports Server (NTRS)
Zhou, YE
1991-01-01
Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion, which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulations.
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
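A minimal sketch of the distance-majorization idea is given below for projecting a point onto the intersection of two convex sets (a box and a half-space): each iteration minimizes the original objective plus a penalty that is majorized by squared distances to the individual projections, with the penalty parameter increased gradually. The quasi-Newton acceleration discussed in the abstract is omitted, and the sets and data are illustrative assumptions.

# Minimal sketch of distance majorization: project y onto a box intersected
# with a half-space via penalized majorization-minimization.
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    """Project onto {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def project_onto_intersection(y, a, b, rho=1.0, iters=200, rho_growth=1.05):
    x = y.copy()
    for _ in range(iters):
        p1, p2 = proj_box(x), proj_halfspace(x, a, b)
        # The surrogate 0.5||x-y||^2 + (rho/2)(||x-p1||^2 + ||x-p2||^2)
        # majorizes the penalized objective and is minimized in closed form:
        x = (y + rho * (p1 + p2)) / (1.0 + 2.0 * rho)
        rho *= rho_growth                      # classical penalty continuation
    return x

y = np.array([1.5, -0.3, 0.8])
a, b = np.array([1.0, 1.0, 1.0]), 1.0          # half-space: sum(x) <= 1
print(project_onto_intersection(y, a, b))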
Solving large scale traveling salesman problems by chaotic neurodynamics.
Hasegawa, Mikio; Ikeguch, Tohru; Aihara, Kazuyuki
2002-03-01
We propose a novel approach for solving large-scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n^2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method for our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than conventional stochastic searches and tabu searches.
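As a baseline for comparison with the chaotic search described above, here is a minimal conventional tabu search over 2-opt moves for the TSP. The chaotic neural network dynamics proposed in the paper are not reproduced; the tenure, candidate sampling and instance size are illustrative assumptions.

# Minimal conventional tabu search over 2-opt moves for a small TSP instance.
import numpy as np

def tour_length(tour, d):
    return d[tour, np.roll(tour, -1)].sum()

def tabu_2opt(d, iters=500, tenure=20, seed=0):
    rng = np.random.default_rng(seed)
    n = d.shape[0]
    tour = rng.permutation(n)
    best, best_len = tour.copy(), tour_length(tour, d)
    tabu = {}                                    # move -> iteration until which it is tabu
    for it in range(iters):
        cands = []
        for _ in range(200):                     # sample moves instead of scanning all pairs
            i, j = sorted(rng.choice(n, 2, replace=False))
            if j - i < 2:
                continue
            new = np.concatenate([tour[:i], tour[i:j][::-1], tour[j:]])
            cands.append(((i, j), new, tour_length(new, d)))
        if not cands:
            continue
        cands.sort(key=lambda c: c[2])
        for move, new, length in cands:
            if tabu.get(move, -1) < it or length < best_len:   # aspiration criterion
                tour = new
                tabu[move] = it + tenure
                break
        cur_len = tour_length(tour, d)
        if cur_len < best_len:
            best, best_len = tour.copy(), cur_len
    return best, best_len

pts = np.random.default_rng(1).random((30, 2))   # hypothetical 30-city instance
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(tabu_2opt(d)[1])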
NASA Astrophysics Data System (ADS)
Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie
2018-06-01
Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension to elasto-viscoplastic polycrystals, efficient and robust Fourier solvers and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.
NASA Astrophysics Data System (ADS)
Tamura, Tetsuro; Kawaguchi, Masaharu; Kawai, Hidenori; Tao, Tao
2017-11-01
The connection between a meso-scale model and a micro-scale large eddy simulation (LES) is essential for simulating micro-scale meteorological problems, such as strong convective events due to typhoons or tornadoes, using LES. In these problems, the mean velocity profiles and the mean wind directions change with time according to the movement of the typhoon or tornado. However, a fine-grid micro-scale LES cannot be connected directly to a coarse-grid meso-scale WRF. In LES, when the grid is suddenly refined at an interface of nested grids that is normal to the mean advection, the resolved shear stresses decrease due to interpolation errors and the delayed generation of the smaller-scale turbulence that can be resolved on the finer mesh. For the estimation of wind gust disasters, the peak wind acting on buildings and structures has to be predicted correctly. In the case of a meteorological model, the velocity fluctuations tend to vary diffusively, lacking the high-frequency components, owing to numerical filtering effects. In order to predict the peak value of the wind velocity with good accuracy, this paper proposes an LES-based method for generating the higher-frequency components of the velocity and temperature fields obtained by the meteorological model.
Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming
2013-07-18
Retinal identification based on the vasculature of the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retinal identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, some shortcomings, such as the difficulty of feature extraction and mismatching, exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.
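A minimal sketch of the SIFT extraction and ratio-test matching stage is shown below, assuming OpenCV's SIFT implementation (available as cv2.SIFT_create in OpenCV 4.4 and later). The ICGF preprocessing and anisotropic smoothing proposed in the paper are not shown, and the image file names are hypothetical.

# Minimal sketch of SIFT keypoint extraction and ratio-test matching.
import cv2

# Hypothetical file names for an enrolled and a probe retina image.
img1 = cv2.imread("retina_enrolled.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("retina_probe.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe ratio test to discard ambiguous matches.
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(f"{len(good)} putative matches")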
NASA Astrophysics Data System (ADS)
Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 or 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
NASA Astrophysics Data System (ADS)
Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas
2017-04-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 or 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
On the Role of Surface Friction in Tropical Intraseasonal Oscillation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode
1999-01-01
The Madden-Julian oscillation (MJO), or the tropical intraseasonal oscillation, has attracted much attention ever since its discovery in the early seventies, for reasons of both scientific understanding and practical forecasting. Among the theoretical interpretations of the MJO, the wave-CISK (conditional instability of the second kind) mechanism is the most popular. The basic idea of the wave-CISK interpretation is that the cooperation between the low-level convergence associated with the eastward-moving Kelvin wave and the cumulus convection generates an eastward-moving Kelvin-wave-like mode. Later it was recognized that the MJO has an important Rossby-wave-like component. However, linear analyses and numerical simulations based on them (even when conditional heating is used) have revealed two problems with the wave-CISK interpretation: excessive speed, and a most preferred scale of zero, or grid scale. Chao (1995) presented a discussion of these problems and attributed them to the particular type of expression for the cumulus heating used in the linear analyses and numerical studies (i.e., the convective heating is proportional to the low-level convergence with a fixed vertical heating profile). It should be pointed out that in the relatively successful simulations of the MJO with general circulation models, the problem of the grid scale being the most preferred scale does not appear, and the problem of excessive speed is not as severe as in the linear analysis.
Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources
Leeson, Mark S.
2014-01-01
The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but, as population-based algorithms, they often face serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about the network topology. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas
2016-05-01
The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.
Srilatha, Adepu; Doshi, Dolar; Reddy, Madupu Padma; Kulkarni, Suhas; Reddy, Bandari Srikanth
2016-01-01
Oral health has strong biological, psychological, and social projections, which influence the quality of life. Thus, developing a common vision and a comprehensive approach to address children's social, emotional, and behavioral health needs is an integral part of the child and adolescent's overall health. Aim: To assess and compare the behavior and emotional difficulties among 15-year-olds and to correlate them with dentition status based on gender. Study Settings and Design: A cross-sectional questionnaire study among 15-year-old schoolgoing children in six private schools in Dilsukhnagar, Hyderabad, India. The behavior and emotional difficulties were assessed using the self-reported Strengths and Difficulties Questionnaire (SDQ). The dentition status was recorded using the criteria given by the World Health Organization (WHO) in the Basic Oral Health Survey Assessment Form (1997). The independent Student's t-test was used for comparison among the variables. Correlation between the SDQ scales and dentition status was assessed using Karl Pearson's correlation coefficient. Girls reported more emotional problems and better prosocial behavior, while boys had more conduct problems, hyperactivity, peer problems, and total difficulty problems. Total decayed-missing-filled teeth (DMFT) and the decayed component were significantly and positively correlated with the total difficulty, emotional symptom, and conduct problems scales, while the missing component was correlated with the hyperactivity scale and the filled component with prosocial behavior. DMFT and its components showed an association with all scales of the SDQ except for the peer problem scale. Thus, the oral health of children was significantly influenced by behavioral and emotional difficulties; changes in mental health status will therefore affect the oral health of children.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
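To illustrate the l1 reformulation on which the sparse deconvolution model rests, the following minimal sketch solves a synthetic impact-force deconvolution with ISTA (iterative shrinkage-thresholding), a simpler solver than the paper's PDIPM; the transfer matrix, force, and all parameters are invented for the example.

```python
# Sparse (l1) deconvolution sketch for impact-force reconstruction,
# solved with ISTA rather than the primal-dual interior point method.
# The transfer matrix H and data y below are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic convolution (transfer) matrix: decaying impulse response.
h = np.exp(-0.05 * np.arange(n)) * np.sin(0.3 * np.arange(n))
H = np.array([np.roll(h, k) * (np.arange(n) >= k) for k in range(n)]).T

# Sparse "impact force": two isolated spikes.
f_true = np.zeros(n)
f_true[[40, 120]] = [5.0, 3.0]
y = H @ f_true + 0.01 * rng.standard_normal(n)

# ISTA: minimize 0.5*||H f - y||^2 + lam*||f||_1
lam = 0.05
L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(500):
    grad = H.T @ (H @ f - y)
    z = f - grad / L
    f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("recovered spike locations:", np.nonzero(np.abs(f) > 0.5)[0])
```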
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound on the number of iterations in terms of the problem dimension n and a threshold ε related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
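The combination of an ill-conditioned linear system, a preconditioner, and conjugate gradients can be illustrated generically as follows. The sketch uses a plain Jacobi (diagonal) preconditioner with SciPy's CG on a synthetic SPD system; it does not reproduce the physics-based preconditioner or the interior-point iteration of the paper.

```python
# Generic sketch: solve an ill-conditioned SPD system with preconditioned
# conjugate gradients, using a simple Jacobi (diagonal) preconditioner.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
# Ill-conditioned SPD test matrix: tridiagonal with widely varying diagonal.
main = 2.0 + np.logspace(0, 6, n)
off = -np.ones(n - 1)
A = diags([main, off, off], [0, -1, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner M ~ A^{-1}, applied as a linear operator.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

x, info = cg(A, b, M=M, maxiter=5000)
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```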
Rojahn, J; Rowe, E W; Sharber, A C; Hastings, R; Matson, J L; Didden, R; Kroes, D B H; Dumont, E L M
2012-05-01
The Behavior Problems Inventory-01 (BPI-01) is an informant-based behaviour rating instrument that was designed to assess maladaptive behaviours in individuals with intellectual disabilities (ID). Its items fall into one of three sub-scales: Self-injurious Behavior (14 items), Stereotyped Behavior (24 items), and Aggressive/Destructive Behavior (11 items). Each item is rated on a frequency scale (0 = never to 4 = hourly), and a severity scale (0 = no problem to 3 = severe problem). The BPI-01 has been successfully used in several studies and has shown acceptable to very good psychometric properties. One concern raised by some investigators was the large number of items on the BPI-01, which has reduced its user friendliness for certain applications. Furthermore, researchers and clinicians were often uncertain how to interpret their BPI-01 data without norms or a frame of reference. The Behavior Problems Inventory-Short Form (BPI-S) was empirically developed, based on an aggregated archival data set of BPI-01 data from individuals with ID from nine locations in the USA, Wales, England, the Netherlands, and Romania (n = 1122). The BPI-S uses the same rating system and the same three sub-scales as the BPI-01, but has fewer items: Self-injurious Behavior (8 items), Stereotyped Behavior (12 items), and Aggressive/Destructive Behavior (10 items). Rating anchors for the severity scales of the Self-injurious Behavior and the Aggressive/Destructive Behavior sub-scales were added in an effort to enhance the objectivity of the ratings. The sensitivity of the BPI-S compared with the BPI-01 was high (0.92 to 0.99), and so were the correlations between the analogous BPI-01 and the BPI-S sub-scales (0.96 to 0.99). Means and standard deviations were generated for both BPI versions in a Sex-by-age matrix, and in a Sex-by-ID Level matrix. Combined sex ranges are also provided by age and level of ID. In summary, the BPI-S is a very useful alternative to the BPI-01, especially for research and evaluation purposes involving groups of individuals. © 2011 The Authors. Journal of Intellectual Disability Research © 2011 Blackwell Publishing Ltd.
Parent–Youth Agreement on Self-Reported Competencies of Youth With Depressive and Suicidal Symptoms
Mbekou, Valentin; MacNeil, Sasha; Gignac, Martin; Renaud, Johanne
2015-01-01
Objective: A multi-informant approach is often used in child psychiatry. The Achenbach System of Empirically Based Assessment uses this approach, gathering parent reports on the Child Behaviour Checklist (CBCL) and youth reports on the Youth Self-Report (YSR), which contain scales assessing both the child’s problems and competencies. Agreement between parent and youth perceptions of their competencies on these forms has not been studied to date. Method: Our study examined the parent–youth agreement of competencies on the CBCL and YSR from a sample of 258 parent–youth dyads referred to a specialized outpatient clinic for depressive and suicidal disorders. Intraclass correlation coefficients were calculated for all competency scales (activity, social, and academic), with further examinations based on youth’s sex, age, and type of problem. Results: Weak-to-moderate parent–youth agreements were reported on the activities and social subscales. For the activities subscale, boys’ ratings had a strong correlation with parents’ ratings, while it was weak for girls. Also, agreement on activities and social subscales was stronger for dyads with the youth presenting externalizing instead of internalizing problems. Conclusion: Agreement on competencies between parents and adolescents varied based on competency and adolescent sex, age, and type of problem. PMID:25886673
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
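The idea of sample-based planning, generating an action on demand from the current state by simulating a generative model rather than enumerating the state space, can be sketched as below. The toy "admissions" dynamics, capacities, and rewards are invented for illustration and are not the model used in the study.

```python
# Minimal sketch of a sample-based planner: pick an action for the current
# state by Monte Carlo rollouts of a generative model, never enumerating
# the full state space. The toy admissions dynamics are illustrative only.

import random

CAPACITY = 10            # beds available for a single resource
ACTIONS = [0, 1, 2, 3]   # number of elective patients to admit today


def step(occupied, action):
    """Generative model: admit `action` electives, random discharges/emergencies."""
    discharges = sum(random.random() < 0.3 for _ in range(occupied))
    arrivals = occupied - discharges + action + random.choice([0, 1, 2])
    overflow = max(0, arrivals - CAPACITY)
    reward = action - 5 * overflow       # reward admissions, penalize overflow
    return min(CAPACITY, arrivals), reward


def rollout_value(state, first_action, horizon=20, n_rollouts=200):
    total = 0.0
    for _ in range(n_rollouts):
        s, ret = step(state, first_action)
        for _ in range(horizon - 1):
            s, r = step(s, random.choice(ACTIONS))   # random rollout policy
            ret += r
        total += ret
    return total / n_rollouts


def plan(state):
    return max(ACTIONS, key=lambda a: rollout_value(state, a))


print("best action from state 7:", plan(7))
```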
Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices
NASA Astrophysics Data System (ADS)
Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.
2017-01-01
Two methods for determining the weber-ampere characteristics of electrical devices are suggested: one based on the solution of the direct harmonic-balance problem and the other on the solution of the inverse harmonic-balance problem by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing-function and simplex optimization methods. Experimental applications of the method demonstrate its efficiency. An advantage of the method is that it can be applied to nondestructive inspection of electrical devices during their production and operation.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
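A sequential sketch of the Cholesky factor-and-solve step described above is shown below using SciPy; the parallel/vector Force implementation is not reproduced, and the stiffness matrix is synthetic.

```python
# Sequential sketch of a Cholesky-based direct solve of K u = f
# (the parallel/vector Force implementation is not reproduced here).

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
n = 500
# Symmetric positive definite "stiffness" matrix for illustration.
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)
f = rng.standard_normal(n)        # load vector

c, low = cho_factor(K)            # K = L L^T factorization
u = cho_solve((c, low), f)        # forward/back substitution

print("residual norm:", np.linalg.norm(K @ u - f))
```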
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
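The SPDE sampling idea, obtaining a Gaussian field sample by solving a reaction-diffusion equation with a white-noise source, can be illustrated with a single-level, 1D finite-difference analogue; the paper's mixed finite element formulation and multilevel hierarchy are not reproduced, and the parameters below are arbitrary.

```python
# Single-level, 1D finite-difference analogue of SPDE-based sampling:
# solve (kappa^2 - d^2/dx^2) u = white noise for one realization of a
# Matern-like Gaussian random field.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n, length, kappa = 400, 1.0, 10.0
h = length / (n - 1)

# Discrete operator kappa^2 * I - Laplacian (Dirichlet boundaries).
main = np.full(n, kappa**2 + 2.0 / h**2)
off = np.full(n - 1, -1.0 / h**2)
A = diags([main, off, off], [0, -1, 1], format="csc")

# White-noise source, scaled by 1/sqrt(h) so the variance is mesh-consistent.
rng = np.random.default_rng(2)
w = rng.standard_normal(n) / np.sqrt(h)

sample = spsolve(A, w)            # one realization of the random field
print("sample mean/std:", sample.mean(), sample.std())
```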
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes using link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
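As background for the approach above, the following sketch shows the standard personalized PageRank power iteration that link-based ranking relies on; the feature parameterization and semi-supervised learning framework of SSP are not shown, and the small graph is invented for illustration.

```python
# Standard personalized PageRank by power iteration, the link-based core on
# which semi-supervised extensions build.

import numpy as np

# Small directed adjacency matrix (rows = source, cols = target).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

out_deg = A.sum(axis=1, keepdims=True)
P = np.divide(A, out_deg, out=np.zeros_like(A), where=out_deg > 0)  # row-stochastic

alpha, n = 0.85, A.shape[0]
teleport = np.full(n, 1.0 / n)     # uniform restart distribution
r = teleport.copy()
for _ in range(100):
    r = alpha * (P.T @ r) + (1 - alpha) * teleport

print("PageRank scores:", r / r.sum())
```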
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
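The priority-based decoding step that underlies priority-based particle swarm optimization can be sketched as follows: a particle's continuous priority vector is decoded into a precedence-feasible activity sequence. The toy precedence network is invented, and modes, resources, and the fuzzy random parameters of the paper are omitted.

```python
# Sketch of priority-based decoding: a particle's priority vector becomes a
# precedence-feasible activity order (modes/resources omitted).

import numpy as np

# Toy precedence relation: predecessors of each activity.
preds = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}


def decode(priorities):
    scheduled, order = set(), []
    while len(order) < len(preds):
        eligible = [a for a in preds
                    if a not in scheduled and all(p in scheduled for p in preds[a])]
        nxt = max(eligible, key=lambda a: priorities[a])  # highest priority first
        order.append(nxt)
        scheduled.add(nxt)
    return order


rng = np.random.default_rng(3)
particle = rng.random(len(preds))     # one particle's position = priority vector
print("priorities:", np.round(particle, 2))
print("decoded activity order:", decode(particle))
```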
NASA Astrophysics Data System (ADS)
Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.
2016-04-01
The article describes the calculation of magnetic fields in diagnostic problems for technical systems based on full-scale modeling experiments. The use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. In addition, much attention is given to calculation accuracy: errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The results of the research allow the authors to recommend this approach within the method of fundamental solutions for full-scale modeling tests of technical systems.
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometric structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can effectively make a tradeoff between the Riemannian geometric structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to state-of-the-art methods in convergence efficiency and numerical performance.
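For orientation, a baseline sketch of fixed-rank matrix completion by alternating least squares is given below; it only illustrates the problem setup and does not implement the proposed Riemannian metric or the Riemannian nonlinear conjugate gradient algorithm.

```python
# Baseline sketch: fixed-rank matrix completion by alternating least squares
# on a synthetic low-rank matrix with randomly observed entries.

import numpy as np

rng = np.random.default_rng(4)
m, n, r = 60, 50, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # true rank-r matrix
mask = rng.random((m, n)) < 0.4                                # observed entries

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
lam = 1e-3
for _ in range(30):
    # Fix V, solve a small ridge problem for each row of U; then swap roles.
    for i in range(m):
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r), Vi.T @ M[i, mask[i]])
    for j in range(n):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r), Uj.T @ M[mask[:, j], j])

err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print("relative reconstruction error:", round(err, 4))
```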
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments at low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
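The modeling stage can be sketched with a regularized extreme learning machine regressor that maps hop-count vectors to physical distances; the synthetic data and parameters below are invented, and the full three-stage distributed localization pipeline is not reproduced.

```python
# Sketch of a regularized extreme learning machine (ELM) regressor mapping
# hop-count vectors (to a set of anchors) to physical distances.

import numpy as np

rng = np.random.default_rng(5)
n_samples, n_anchors, n_hidden, lam = 300, 8, 50, 1e-2

# Synthetic training data: hop counts loosely proportional to distance.
dist = rng.uniform(0, 100, size=(n_samples, 1))
hops = np.clip(np.round(dist / 10 + rng.normal(0, 1, (n_samples, 1))), 1, None)
X = np.tile(hops, (1, n_anchors)) + rng.normal(0, 0.5, (n_samples, n_anchors))
y = dist.ravel()

# Random (fixed) hidden layer, then ridge-regularized output weights.
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = np.tanh(X @ W + b) @ beta
print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 2))
```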
Katz, C M
1991-04-01
Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.
ERIC Educational Resources Information Center
Mcdermott, Paul A.; Watkins, Marley W.; Drogalis, Anna Rhoad; Chao, Jessica L.; Worrell, Frank C.; Hall, Tracey E.
2016-01-01
Contextually based assessments reveal the circumstances accompanying maladjustment (the when, where, and with whom) and supply clues to the motivations underpinning problem behaviors. The Adjustment Scales for Children and Adolescents (ASCA) is a teacher rating scale composed of indicators describing behavior in 24 classroom situational contexts.…
Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity
ERIC Educational Resources Information Center
Volpe, Robert J.; Briesch, Amy M.
2015-01-01
Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…
Mi, Misa; Halalau, Alexandra
2016-07-03
To explore possible relationships between residents' lifelong learning orientation, skills in practicing evidence-based medicine (EBM), and perceptions of the environment for learning and practicing EBM. This was a pilot study with a cross-sectional survey design. Out of 60 residents in a medical residency program, 29 participated in the study. Data were collected using a survey that comprised three sections: the JeffSPLL Scale, EBM Environment Scale, and an EBM skill questionnaire. Data were analyzed using SPSS and were reported with descriptive and inferential statistics (mean, standard deviation, Pearson's correlation, and a two-sample t-test). Mean scores on the JeffSPLL Scale were significantly correlated with perceptions of the EBM Scale and use of EBM resources to keep up to date or solve a specific patient care problem. There was a significant correlation between mean scores on the EBM Scale and hours per week spent in reading medical literature to solve a patient care problem. Two-sample t-tests show that residents with previous training in research methods had significantly higher scores on the JeffSPLL Scale (p = 0.04), EBM Scale (p = 0.006), and self-efficacy scale (p = 0.024). Given the fact that physicians are expected to be lifelong learners over the course of their professional career, developing residents' EBM skills and creating interventions to improve specific areas in the EBM environment would likely foster residents' lifelong learning orientation.
ERIC Educational Resources Information Center
Supplee, Lauren H.; Metz, Allison
2014-01-01
Despite a robust body of evidence of effectiveness of social programs, few evidence-based programs have been scaled for population-level improvement in social problems. Since 2010 the federal government has invested in evidence-based social policy by supporting a number of new evidence-based programs and grant initiatives. These initiatives…
The need for an intermediate mass scale in GUTs
NASA Technical Reports Server (NTRS)
Shafi, Q.
1983-01-01
The minimal SU(5) grand unified field theory (GUT) model fails to resolve the strong charge parity (CP) problem, suffers from the cosmological monopole problem, sheds no light on the nature of the 'dark' mass in the universe, and predicts an unacceptably low value for the baryon asymmetry. All these problems can be overcome in suitable grand unified axion models with an intermediate mass scale of about 10^11 to 10^12 GeV. An example based on the gauge group SO(10) is presented. Among other things, it predicts that the axions comprise the 'dark' mass in the universe, and that there exists a galactic monopole flux of 10^-8 to 10^-7 per sq cm per yr. Other topics that are briefly discussed include proton decay, family symmetry, neutrino masses and the gauge hierarchy problem.
Distance majorization and its applications
Chi, Eric C.; Zhou, Hua; Lange, Kenneth
2014-01-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
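A minimal sketch of the distance-majorization idea, assuming a simple quadratic objective and two convex sets with easy projections, is given below; the quasi-Newton acceleration discussed in the paper is omitted, and the sets and objective are invented for illustration.

```python
# Distance majorization sketch: minimize a smooth convex f over an
# intersection of convex sets using only projections onto the individual
# sets, with an increasing penalty parameter (no acceleration).

import numpy as np

a = np.array([3.0, 2.0])                 # f(x) = 0.5 * ||x - a||^2


def proj_ball(x, radius=1.0):            # C1: Euclidean ball of radius 1
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm


def proj_halfspace(x):                   # C2: half-space x1 + x2 <= 1
    c = np.array([1.0, 1.0])
    excess = c @ x - 1.0
    return x if excess <= 0 else x - excess * c / (c @ c)


x, mu = a.copy(), 1.0
for _ in range(200):
    p1, p2 = proj_ball(x), proj_halfspace(x)
    # The majorization step has a closed form because f is quadratic:
    # x minimizes 0.5||x-a||^2 + (mu/2)(||x-p1||^2 + ||x-p2||^2).
    x = (a + mu * (p1 + p2)) / (1.0 + 2.0 * mu)
    mu *= 1.05                           # classical penalty continuation

print("solution:", np.round(x, 4))
print("distance to ball:", np.linalg.norm(x - proj_ball(x)),
      "distance to half-space:", np.linalg.norm(x - proj_halfspace(x)))
```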
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
Such applications do not fit the computational models assumed by today's conventional architectures. They include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic database/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent execution via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
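The traversal-time estimate used by the Link Transmission Model, the mode of the historical flight-time distribution obtained via kernel density estimation, can be sketched as follows with synthetic flight records.

```python
# Estimate a route's traversal time as the mode of its historical flight-time
# distribution, using a Gaussian kernel density estimate (synthetic data).

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
# Synthetic historical traversal times (minutes) for one origin-destination route.
times = np.concatenate([rng.normal(125, 5, 400), rng.normal(150, 10, 100)])

kde = gaussian_kde(times)
grid = np.linspace(times.min(), times.max(), 1000)
mode = grid[np.argmax(kde(grid))]
print("estimated traversal time (mode):", round(float(mode), 1), "minutes")
```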
Magnitude, moment, and measurement: The seismic mechanism controversy and its resolution.
Miyake, Teru
This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways. Copyright © 2017 Elsevier Ltd. All rights reserved.
Traits Without Borders: Integrating Functional Diversity Across Scales.
Carmona, Carlos P; de Bello, Francesco; Mason, Norman W H; Lepš, Jan
2016-05-01
Owing to the conceptual complexity of functional diversity (FD), a multitude of different methods are available for measuring it, with most being operational at only a small range of spatial scales. This causes uncertainty in ecological interpretations and limits the potential to generalize findings across studies or compare patterns across scales. We solve this problem by providing a unified framework expanding on and integrating existing approaches. The framework, based on trait probability density (TPD), is the first to fully implement the Hutchinsonian concept of the niche as a probabilistic hypervolume in estimating FD. This novel approach could revolutionize FD-based research by allowing quantification of the various FD components from organismal to macroecological scales, and allowing seamless transitions between scales. Copyright © 2016 Elsevier Ltd. All rights reserved.
Managing Network Partitions in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their ability to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. The problem of network partitions and mergers is therefore closely related to fault-tolerance and self-management in large-scale systems, and resilience to network partitions is a crucial requirement for building any structured peer-to-peer system. Despite this, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect handles network partitions, since a partition resembles massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.
A study of patients with spinal disease using Maudsley Personality Inventory.
Kasai, Yuichi; Takegami, Kenji; Uchida, Atsumasa
2004-02-01
We administered the Maudsley Personality Inventory (MPI) preoperatively to 303 patients with spinal diseases about to undergo surgery. Patients younger than 20 years, patients previously treated in the Department of Psychiatry, and patients with poor postoperative results were excluded. Patients with N-scores (neuroticism scale) of 39 points or greater or L-scores (lie scale) of 26 points or greater were regarded as "abnormal." Based on clinical definitions, we identified 24 "problem patients" during the course of treatment and categorized them as "Unsatisfied," "Indecisive," "Doctor shoppers," or "Distrustful." Preoperative MPI categorized 26 patients as abnormal; 22 of the patients categorized as abnormal became problem patients (p<0.001). MPI sensitivity and specificity were 84.6% and 99.3%, respectively. Administering the MPI preoperatively to patients with spinal disease was found to be useful for detecting problem patients.
Finding the strong CP problem at the LHC
NASA Astrophysics Data System (ADS)
D'Agnolo, Raffaele Tito; Hook, Anson
2016-11-01
We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A gridded, Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also presents such problems: the computational cost limits applications to very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the parallel version is called SWATGP) to accelerate grid modeling at the HRU level. This parallel implementation takes better advantage of the computational power of shared-memory computer systems. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
Tilley, Barbara C.; LaPelle, Nancy R.; Goetz, Christopher G.; Stebbins, Glenn T.
2016-01-01
Background Cognitive pretesting, a qualitative step in scale development, precedes field testing and assesses the difficulty of instrument completion for examiners and respondents. Cognitive pretesting assesses respondent interest, attention span, discomfort, and comprehension, and highlights problems with the logical structure of questions/response options that can affect understanding. In the past this approach was not consistently used in the development or revision of movement disorders scales. Methods We applied qualitative cognitive pretesting using testing guides in development of the Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS). The guides were based on qualitative techniques, verbal probing and “think-aloud” interviewing, to identify problems with the scale from the patient and rater perspectives. English-speaking Parkinson’s disease patients and movement disorders specialists (raters) from multiple specialty clinics in the United States, Western Europe and Canada used the MDS-UPDRS and completed the testing guides. Results Two rounds of cognitive pretesting were necessary before proceeding to field testing of the revised scale to assess clinimetric properties. Scale revisions based on cognitive pretesting included changes in phrasing, simplification of some questions, and addition of a reassuring statement explaining that not all PD patients experience the symptoms described in the questions. Conclusions The strategy of incorporating cognitive pretesting into scale development and revision provides a model for other movement disorders scales. Cognitive pretesting is being used in translating the MDS-UPDRS into multiple languages to improve comprehension and acceptance and in the development of a new Unified Dyskinesia Rating Scale for Parkinson’s disease patients. PMID:24613868
Hatzichristou, Dimitris; Kirana, Paraskevi-Sofia; Banner, Linda; Althof, Stanley E; Lonnee-Hoffmann, Risa A M; Dennerstein, Lorraine; Rosen, Raymond C
2016-08-01
A detailed sexual history is the cornerstone for all sexual problem assessments and sexual dysfunction diagnoses. Diagnostic evaluation is based on an in-depth sexual history, including sexual and gender identity and orientation, sexual activity and function, current level of sexual function, overall health and comorbidities, partner relationship and interpersonal factors, and the role of cultural and personal expectations and attitudes. To propose key steps in the diagnostic evaluation of sexual dysfunctions, with special focus on the use of symptom scales and questionnaires. Critical assessment of the current literature by the International Consultation on Sexual Medicine committee. A revised algorithm for the management of sexual dysfunctions, level of evidence, and recommendation for scales and questionnaires. The International Consultation on Sexual Medicine proposes an updated algorithm for diagnostic evaluation of sexual dysfunction in men and women, with specific recommendations for sexual history taking and diagnostic evaluation. Standardized scales, checklists, and validated questionnaires are additional adjuncts that should be used routinely in sexual problem evaluation. Scales developed for specific patient groups are included. Results of this evaluation are presented with recommendations for clinical and research uses. Defined principles, an algorithm and a range of scales may provide coherent and evidence based management for sexual dysfunctions. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Bhaumik, S.; Watson, J. M.; Thorp, C. F.; Tyrer, F.; McGrother, C. W.
2008-01-01
Background: Previous studies of weight problems in adults with intellectual disability (ID) have generally been small or selective and given conflicting results. The objectives of our large-scale study were to identify inequalities in weight problems between adults with ID and the general adult population, and to investigate factors associated…
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real-world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also remains stable while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
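A minimal illustration of aggregating heat kernels over several diffusion times before spectral clustering is given below; it does not reproduce the AHK trace modeling or the LDAT density correction, and the data and time scales are arbitrary.

```python
# Aggregate heat kernels exp(-t L) over several diffusion times, then do a
# plain spectral embedding and k-means. A simplified illustration only.

import numpy as np
from scipy.linalg import eigh, expm
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3,
                  cluster_std=[0.5, 1.0, 2.0], random_state=0)

# Base affinity and symmetric normalized graph Laplacian.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
d_inv_sqrt = 1.0 / np.sqrt(W.sum(1))
L = np.eye(len(X)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

# Aggregate heat kernels over a range of diffusion times.
K = sum(expm(-t * L) for t in (0.5, 1.0, 2.0, 4.0))

# Spectral embedding of the aggregated kernel, then k-means.
_, vecs = eigh(K)
embedding = vecs[:, -3:]               # top eigenvectors of the kernel
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print("cluster sizes:", np.bincount(labels))
```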
An Investigation of the General Abilities Index in a Group of Diagnostically Mixed Patients
ERIC Educational Resources Information Center
Harrison, Allyson G.; DeLisle, Michelle M.; Parker, Kevin C. H.
2008-01-01
The General Ability Index (GAI) was compared with Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) Full Scale Intelligence Quotient (FSIQ) from the WAIS-III in data obtained from 381 adults assessed for reported learning or attention problems between 1998 and 2005. Not only did clients with more neurocognitively based disorders (i.e.,…
ERIC Educational Resources Information Center
Koparan, Timur
2016-01-01
In this study, the effect on the achievement and attitudes of prospective teachers is examined. With this aim ahead, achievement test, attitude scale for statistics and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study…
ERIC Educational Resources Information Center
Camparo, James; Camparo, Lorinda B.
2013-01-01
Though ubiquitous, Likert scaling's traditional mode of analysis is often unable to uncover all of the valid information in a data set. Here, the authors discuss a solution to this problem based on methodology developed by quantum physicists: the state multipole method. The authors demonstrate the relative ease and value of this method by…
Continuum-Kinetic Models and Numerical Methods for Multiphase Applications
NASA Astrophysics Data System (ADS)
Nault, Isaac Michael
This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper elasto-plastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
NASA Astrophysics Data System (ADS)
Çiğdem Özcan, Zeynep
2016-04-01
Studies highlight that using appropriate strategies during problem solving is important to improve problem-solving skills and draw attention to the fact that using these skills is an important part of students' self-regulated learning ability. Studies on this matter view the self-regulated learning ability as key to improving problem-solving skills. The aim of this study is to investigate the relationship between mathematical problem-solving skills and the three dimensions of self-regulated learning (motivation, metacognition, and behaviour), and whether this relationship is of a predictive nature. The sample of this study consists of 323 students from two public secondary schools in Istanbul. In this study, the mathematics homework behaviour scale was administered to measure students' homework behaviours. For metacognition measurements, the mathematics metacognition skills test for students was administered to measure offline mathematical metacognitive skills, and the metacognitive experience scale was used to measure the online mathematical metacognitive experience. The internal and external motivational scales used in the Programme for International Student Assessment (PISA) test were administered to measure motivation. A hierarchic regression analysis was conducted to determine the relationship between the dependent and independent variables in the study. Based on the findings, a model was formed in which 24% of the total variance in students' mathematical problem-solving skills is explained by the three sub-dimensions of the self-regulated learning model: internal motivation (13%), willingness to do homework (7%), and post-problem retrospective metacognitive experience (4%).
2010-01-01
Background: School-based mental health programs are absent in most educational institutions for intellectually disabled children and adolescents in Nigeria, and co-morbid behavioral problems often complicate intellectual disability in children and adolescents receiving special education instruction. Little is known about the prevalence and pattern of behavioral problems existing co-morbidly among sub-Saharan African children with intellectual disability. This study assessed the prevalence and pattern of behavioral problems among Nigerian children with intellectual disability and the associated factors. Method: The teacher-rated Strengths and Difficulties Questionnaire (SDQ) was used to screen for behavioral problems among children with intellectual disability in a special education facility in south-eastern Nigeria. A socio-demographic questionnaire was used to obtain socio-demographic information on the children. Results: A total of forty-four (44) children with intellectual disability were involved in the study. Twenty-one (47.7%) of the children were classified as having behavioral problems in the borderline and abnormal categories on the total difficulties clinical scale of the SDQ using the cut-off point recommended by Goodman. Mild mental retardation, as compared to moderate, severe and profound retardation, was associated with the highest total difficulties mean score. Males were more likely to exhibit conduct and hyperactivity behavioral problems compared to females. The inter-clinical-scale correlations of the teacher-rated SDQ in the studied population also showed good internal consistency (Cronbach's alpha = 0.63). Conclusion: Significant behavioral problems occur co-morbidly among Nigerian children with intellectual disability receiving special education instruction, and this could impact negatively on educational learning and other areas of functioning. There is an urgent need for establishing school-based mental health programs and appropriate screening measures in this environment. These would afford early identification of intellectually disabled children with behavioral problems and appropriate referral for clinical evaluation and interventions. Focusing policy-making attention on the hidden burden of intellectual disability in sub-Saharan African children is essential. PMID:20465841
Innovative mathematical modeling in environmental remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Gour T.; National Central Univ.; Univ. of Central Florida
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to address the computational efficiency required for large-scale simultaneous localization and mapping (SLAM), along with its accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents a combined SLAM, an efficient submap-based solution to the SLAM problem in a large-scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operation and provide robust data association, while EIF-SLAM can improve the overall computational speed and avoid the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computational efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.
Joost, S; Colli, L; Baret, P V; Garcia, J F; Boettcher, P J; Tixier-Boichard, M; Ajmone-Marsan, P
2010-05-01
In livestock genetic resource conservation, decision making about conservation priorities is based on the simultaneous analysis of several different criteria that may contribute to long-term sustainable breeding conditions, such as genetic and demographic characteristics, environmental conditions, and the role of the breed in the local or regional economy. Here we address methods to integrate different data sets and highlight problems related to interdisciplinary comparisons. Data integration is based on the use of geographic coordinates and Geographic Information Systems (GIS). In addition to technical problems related to projection systems, GIS have to face the challenging issue of the non-homogeneous scale of their data sets. We give examples of the successful use of GIS for data integration and examine the risk of obtaining biased results when integrating datasets that have been captured at different scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich
2017-10-28
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of the staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach for the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare with the method of sub-zonal pressure forces on selected numerical problems.
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
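As a hedged illustration of the selection idea (not the authors' implementation), the sketch below fits a small library of candidate response surfaces, here simple polynomials, to a handful of noisy design points and picks one with the Bayesian information criterion standing in for the article's Bayesian metric; the data, candidate degrees, and noise level are all illustrative assumptions.

```python
import numpy as np

# Hypothetical crash-response samples: design variable x -> simulated response y.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)                                     # limited number of design points
y = 3.0 * x**2 - 2.0 * x + 0.5 + rng.normal(0.0, 0.05, x.size)    # noisy "simulation" data

def bic(y_true, y_pred, n_params):
    """Bayesian information criterion under a Gaussian error model."""
    n = y_true.size
    rss = np.sum((y_true - y_pred) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Library of candidate response surfaces: polynomials of increasing degree.
candidates = {}
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    candidates[degree] = bic(y, np.polyval(coeffs, x), degree + 1)

best = min(candidates, key=candidates.get)
print("BIC per candidate degree:", candidates)
print("selected response surface: polynomial of degree", best)
```

The criterion penalizes extra coefficients, so with only a few noisy points the richer candidates are not automatically preferred, which mirrors the data-uncertainty concern raised in the abstract.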
Lessons Learned from Managing a Petabyte
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becla, J
2005-01-20
The amount of data collected and stored by the average business doubles each year. Many commercial databases are already approaching hundreds of terabytes, and at this rate, will soon be managing petabytes. More data enables new functionality and capability, but the larger scale reveals new problems and issues hidden in "smaller" terascale environments. This paper presents some of these new problems along with implemented solutions in the framework of a petabyte dataset for a large High Energy Physics experiment. Through experience with two persistence technologies, a commercial database and a file-based approach, we expose format-independent concepts and issues prevalent at this new scale of computing.
HIV stigma, disclosure and psychosocial distress among Thai youth living with HIV.
Rongkavilit, C; Wright, K; Chen, X; Naar-King, S; Chuenyam, T; Phanuphak, P
2010-02-01
The objective of the present paper is to assess stigma and to create an abbreviated 12-item Stigma Scale based on the 40-item Berger's Stigma Scale for Thai youth living with HIV (TYLH). TYLH aged 16-25 years answered the 40-item Stigma Scale and the questionnaires on mental health, social support, quality of life and alcohol/substance use. Sixty-two (88.6%) of 70 TYLH reported at least one person knowing their serostatus. Men having sex with men were more likely to disclose the diagnosis to friends (43.9% versus 6.1%, P < 0.01) and less likely to disclose to families (47.6% versus 91.8%, P < 0.01). Women were more likely to disclose to families (90.2% versus 62.1%, P < 0.01) and less likely to disclose to friends (7.3% versus 31%, P < 0.05). The 12-item Stigma Scale was reliable (Cronbach's alpha, 0.75) and highly correlated with the 40-item scale (r = 0.846, P < 0.01). Half of TYLH had mental health problems. The 12-item Stigma Scale score was significantly associated with mental health problems (beta = 0.21, P < 0.05). Public attitudes towards HIV were associated with poorer quality of life (beta = -1.41, P < 0.01) and mental health problems (beta = 1.18, P < 0.01). In conclusion, the 12-item Stigma Scale was reliable for TYLH. Increasing public understanding and education could reduce stigma and improve mental health and quality of life in TYLH.
NASA Astrophysics Data System (ADS)
Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.
2018-04-01
Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
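The bound-enforcement idea can be sketched on a toy problem: an unconstrained discrete solve may leave the physical range, while recasting the same system as a bound-constrained energy minimization, which is the variational-inequality view, keeps it inside. The 3x3 system, the [0, 1] bounds, and the use of SciPy's L-BFGS-B solver below are illustrative assumptions, not the paper's RT0/VMS discretizations or its VI solver.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative SPD system K c = f standing in for a discretized flow problem.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([-1.0, 6.0, 5.0])

# Unconstrained Galerkin-type solution may violate the physical bounds (here [0, 1]).
c_unconstrained = np.linalg.solve(K, f)

# Variational-inequality view: minimize the energy 1/2 c^T K c - f^T c
# subject to box constraints, which enforces the discrete maximum principle.
energy = lambda c: 0.5 * c @ K @ c - f @ c
grad = lambda c: K @ c - f
res = minimize(energy, x0=np.zeros_like(f), jac=grad,
               method="L-BFGS-B", bounds=[(0.0, 1.0)] * f.size)

print("unconstrained solution:", c_unconstrained)   # leaves [0, 1]
print("VI (bound-constrained) solution:", res.x)    # respects the bounds
```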
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using FWEM are presented for two propagation problems and comparisons to data computed with analytical and theoretical solutions and show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
Ion beam machining error control and correction for small scale optics.
Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi
2011-09-20
Ion beam figuring (IBF) technology for small scale optical components is discussed. Since the small removal function can be obtained in IBF, it makes computer-controlled optical surfacing technology possible to machine precision centimeter- or millimeter-scale optical components deterministically. Using a small ion beam to machine small optical components, there are some key problems, such as small ion beam positioning on the optical surface, material removal rate, ion beam scanning pitch control on the optical surface, and so on, that must be seriously considered. The main reasons for the problems are that it is more sensitive to the above problems than a big ion beam because of its small beam diameter and lower material ratio. In this paper, we discuss these problems and their influences in machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is deduced for correcting the positioning error of an ion beam with the material removal rate estimated by a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples are made, and the final surface errors are both smaller than λ/100 measured by a Zygo GPI interferometer.
Education and Training for Sustainable Tourism: Problems, Possibilities and Cautious First Steps.
ERIC Educational Resources Information Center
Gough, Stephen; Scott, William
1999-01-01
Advances a possible theoretical approach to education for sustainable tourism and describes a small-scale research project based on this approach. Seeks to integrate education for sustainable tourism into an established management curriculum using an innovative technique based on the idea of an adaptive concept. (Author/CCM)
A keyword approach to finding common ground in community-based definitions of human well-being
Ecosystem-based management involves the integration of ecosystem services and their human beneficiaries into decision making. This can occur at multiple scales; addressing global issues such as climate change down to local problems such as flood protection and maintaining water q...
An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.
Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P
2013-01-01
The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk to base decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.
Rui, Zeng; Rong-Zheng, Yue; Hong-Yu, Qiu; Jing, Zeng; Xue-Hong, Wan; Chuan, Zuo
2015-01-01
Background: Problem-based learning (PBL) is a pedagogical approach based on problems. Specifically, it is a student-centered, problem-oriented teaching method that is conducted through group discussions. The aim of our study is to explore the effects of PBL in diagnostic teaching for Chinese medical students. Methods: A prospective, randomized controlled trial was conducted. Eighty junior clinical medical students were randomly divided into two groups. Forty students were allocated to a PBL group and another 40 students were allocated to a control group using the traditional teaching method. Their scores in the practice skills examination, ability to write and analyze medical records, and results on the stage test and behavior observation scale were compared. A questionnaire was administered in the PBL group after class. Results: There were no significant differences between the two groups in scores for writing medical records, content of interviewing, physical examination skills, and the stage test. However, compared with the control group, the PBL group had significantly higher scores on case analysis, interviewing skills, and behavioral observation scales. Conclusion: The questionnaire survey revealed that PBL could improve interest in learning, cultivate an ability to study independently, improve communication and analytical skills, and foster a good spirit of team cooperation. However, there were some shortcomings in the systematization of imparted knowledge. PBL has an obvious advantage in teaching with regard to diagnostic practice. PMID:25848334
Understanding electrical conduction in lithium ion batteries through multi-scale modeling
NASA Astrophysics Data System (ADS)
Pan, Jie
Silicon (Si) has been considered as a promising negative electrode material for lithium ion batteries (LIBs) because of its high theoretical capacity, low discharge voltage, and low cost. However, the utilization of Si electrode has been hampered by problems such as slow ionic transport, large stress/strain generation, and unstable solid electrolyte interphase (SEI). These problems severely influence the performance and cycle life of Si electrodes. In general, ionic conduction determines the rate performance of the electrode, while electron leakage through the SEI causes electrolyte decomposition and, thus, causes capacity loss. The goal of this thesis research is to design Si electrodes with high current efficiency and durability through a fundamental understanding of the ionic and electronic conduction in Si and its SEI. Multi-scale physical and chemical processes occur in the electrode during charging and discharging. This thesis, thus, focuses on multi-scale modeling, including developing new methods, to help understand these coupled physical and chemical processes. For example, we developed a new method based on ab initio molecular dynamics to study the effects of stress/strain on Li ion transport in amorphous lithiated Si electrodes. This method not only quantitatively shows the effect of stress on ionic transport in amorphous materials, but also uncovers the underlying atomistic mechanisms. However, the origin of ionic conduction in the inorganic components in SEI is different from that in the amorphous Si electrode. To tackle this problem, we developed a model by separating the problem into two scales: 1) atomistic scale: defect physics and transport in individual SEI components with consideration of the environment, e.g., LiF in equilibrium with Si electrode; 2) mesoscopic scale: defect distribution near the heterogeneous interface based on a space charge model. In addition, to help design better artificial SEI, we further demonstrated a theoretical design of multicomponent SEIs by utilizing the synergetic effect found in the natural SEI. We show that the electrical conduction can be optimized by varying the grain size and volume fraction of two phases in the artificial multicomponent SEI.
Validation of Toolkit After-Death Bereaved Family Member Interview.
Teno, J M; Clarridge, B; Casey, V; Edgman-Levitan, S; Fowler, J
2001-09-01
The purpose of this study was to examine the reliability and validity of the Toolkit After-Death Bereaved Family Member Interview to measure quality of care at the end of life from the unique perspective of family members. The survey included proposed problem scores (a count of the opportunity to improve the quality of care) and scales. Data were collected through a retrospective telephone survey with a family member who was interviewed between 3 and 6 months after the death of the patient. The setting was an outpatient hospice service, a consortium of nursing homes, and a hospital in New England. One hundred fifty-six family members from across these settings participated. The 8 proposed domains of care, as represented by problem scores or scales, were based on a conceptual model of patient-focused, family-centered medical care. The survey design emphasized face validity in order to provide actionable information to health care providers. A correlational and factor analysis was undertaken of the 8 proposed problem scores or scales. Cronbach's alpha scores varied from 0.58 to 0.87, with two problem scores (each of which had only 3 survey items) having a low alpha of 0.58. The mean item-to-total correlations for the other problem scores varied from 0.36 to 0.69, and the mean item-to-item correlations were between 0.32 and 0.70. The proposed problem scores or scales, with the exception of closure and advance care planning, demonstrated a moderate correlation (i.e., from 0.44 to 0.52) with the overall rating of satisfaction (as measured by a five-point, "excellent" to "poor" scale). Family members of persons who died with hospice service reported fewer problems in each of the six domains of medical care, gave a higher rating of the quality of care, and reported higher self-efficacy in caring for their loved ones. These results indicate that 7 of the 8 proposed problem scores or scales demonstrated psychometric properties that warrant further testing. The domain of closure demonstrated a poor correlation with overall satisfaction and requires further work. This survey could provide information to help guide quality improvement efforts to enhance the care of the dying.
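For reference, a small sketch of the two reliability statistics quoted above, Cronbach's alpha and corrected item-to-total correlations, computed here on made-up item responses rather than the survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scored survey responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def item_total_correlations(items):
    """Corrected item-total correlation: each item vs the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    corrs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

# Hypothetical responses: 8 family members x 3 survey items scored 0-4 (illustrative only).
responses = np.array([[4, 3, 4],
                      [2, 2, 3],
                      [1, 1, 2],
                      [3, 3, 3],
                      [0, 1, 1],
                      [4, 4, 3],
                      [2, 3, 2],
                      [1, 0, 1]])
print("Cronbach's alpha: %.2f" % cronbach_alpha(responses))
print("item-total correlations:", np.round(item_total_correlations(responses), 2))
```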
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit of algorithms, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Child behaviour problems and childhood illness: development of the Eczema Behaviour Checklist.
Mitchell, A E; Morawska, A; Fraser, J A; Sillar, K
2017-01-01
Children with atopic dermatitis are at increased risk of both general behaviour problems, and those specific to the condition and its treatment. This can hamper the ability of parents to carry out treatment and manage the condition effectively. To date, there is no published instrument available to assess child behaviour difficulties in the context of atopic dermatitis management. Our aim was to develop a reliable and valid instrument to assess atopic dermatitis-specific child behaviour problems, and parents' self-efficacy (confidence) for managing these behaviours. The Eczema Behaviour Checklist (EBC) was developed as a 25-item questionnaire to measure (i) extent of behaviour problems (EBC Extent scale), and (ii) parents' self-efficacy for managing behaviour problems (EBC Confidence scale), in the context of child atopic dermatitis management. A community-based sample of 292 parents completed the EBC, measures of general behaviour difficulties, self-efficacy with atopic dermatitis management and use of dysfunctional parenting strategies. There was satisfactory internal consistency and construct validity for EBC Extent and Confidence scales. There was a negative correlation between atopic dermatitis-specific behaviour problems and parents' self-efficacy for dealing with behaviours (r = -.53, p < .001). Factor analyses revealed a three-factor structure for both scales: (i) treatment-related behaviours; (ii) symptom-related behaviours; and (iii) behaviours related to impact of the illness. Variation in parents' self-efficacy for managing their child's atopic dermatitis was explained by intensity of illness-specific child behaviour problems and parents' self-efficacy for dealing with the behaviours. The new measure of atopic dermatitis-specific child behaviour problems was a stronger predictor of parents' self-efficacy for managing their child's condition than was the measure of general child behaviour difficulties. Results provide preliminary evidence of reliability and validity of the EBC, which has potential for use in clinical and research settings, and warrant further psychometric evaluation. © 2016 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Seiverling, Laura; Hendy, Helen M.; Williams, Keith
2011-01-01
The present study evaluated the 23-item Screening Tool for Feeding Problems (STEP; Matson & Kuhn, 2001) with a sample of children referred to a hospital-based feeding clinic to examine the scale's psychometric characteristics and then demonstrate how a children's revision of the STEP, the STEP-CHILD is associated with child and parent variables.…
ERIC Educational Resources Information Center
Sanders, Matthew R.; Dittman, Cassandra K.; Keown, Louise J.; Farruggia, Sue; Rose, Dennis
2010-01-01
Participants were 933 fathers participating in a large-scale household survey of parenting practices in Queensland Australia. Although the majority of fathers reported having few problems with their children, a significant minority reported behavioral and emotional problems and 5% reported that their child showed a potentially problematic level of…
Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods
NASA Astrophysics Data System (ADS)
Thurin, J.; Brossier, R.; Métivier, L.
2017-12-01
Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not include any systematic uncertainty quantification methods yet, but often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information on the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with ETKF strategies. This scheme combines the efficiency of local optimization for solving large-scale inverse problems and makes the sampling of the local solution space possible thanks to its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
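A hedged sketch of the ETKF analysis step that such a scheme embeds in the FWI workflow, written here in the ensemble-space square-root form; the linear forward operator, noise level, and ensemble size below are illustrative placeholders, not an FWI engine.

```python
import numpy as np
from scipy.linalg import sqrtm

def etkf_update(X, Y, y_obs, obs_var):
    """One ETKF analysis step (square-root form).
    X: model ensemble (n_model x n_ens); Y: predicted data (n_obs x n_ens);
    y_obs: observed data (n_obs,); obs_var: observation error variance (scalar)."""
    n_ens = X.shape[1]
    x_mean, y_mean = X.mean(axis=1), Y.mean(axis=1)
    Xp, Yp = X - x_mean[:, None], Y - y_mean[:, None]       # ensemble anomalies
    Pa = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + Yp.T @ Yp / obs_var)
    w_mean = Pa @ Yp.T @ (y_obs - y_mean) / obs_var         # mean update weights
    W = np.real(sqrtm((n_ens - 1) * Pa))                    # transform for the anomaly update
    return x_mean[:, None] + Xp @ (w_mean[:, None] + W)

# Toy linear stand-in for the forward problem: data = G @ model + noise (all sizes illustrative).
rng = np.random.default_rng(1)
n_model, n_obs, n_ens = 20, 30, 10
G = rng.normal(size=(n_obs, n_model))
m_true = rng.normal(size=n_model)
d_obs = G @ m_true + rng.normal(scale=0.1, size=n_obs)

X_prior = rng.normal(size=(n_model, n_ens))                 # prior ensemble
X_post = etkf_update(X_prior, G @ X_prior, d_obs, obs_var=0.01)

# The analysis ensemble spread gives a low-rank estimate of posterior uncertainty.
print("posterior ensemble std (first 5 parameters):", X_post.std(axis=1, ddof=1)[:5])
```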
Multiobjective immune algorithm with nondominated neighbor-based selection.
Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng
2008-01-01
A Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only a minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well with the number of objectives.
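A hedged sketch of the selection step described above (not the full NNIA): nondominated individuals are identified, crowding distances are computed, only the least-crowded (most isolated) ones are kept, and clones are allocated in proportion to crowding distance. Population size, front size, and clone budget are illustrative.

```python
import numpy as np

def nondominated_mask(F):
    """Boolean mask of nondominated rows of objective matrix F (minimization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

def crowding_distance(F):
    """NSGA-II style crowding distance (larger = more isolated)."""
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = max(F[order[-1], k] - F[order[0], k], 1e-12)
        dist[order[0]] = dist[order[-1]] = np.inf            # keep boundary solutions
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist

rng = np.random.default_rng(2)
F = rng.random((30, 2))                                      # illustrative bi-objective values

front = np.where(nondominated_mask(F))[0]
dist = crowding_distance(F[front])
finite = dist[np.isfinite(dist)]
dist = np.minimum(dist, 2 * finite.max() if finite.size else 1.0)  # cap boundary distances

# Nondominated neighbor-based selection: keep only the least-crowded front members.
n_active = min(5, front.size)
order = np.argsort(dist)[::-1][:n_active]
active, active_dist = front[order], dist[order]

# Proportional cloning: allocate a clone budget in proportion to crowding distance.
clone_budget = 20
clones = np.maximum(1, np.round(clone_budget * active_dist / active_dist.sum())).astype(int)
print("active individuals:", active, "clones per individual:", clones)
```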
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment was designed based on a Bently-RK4 rotor testbed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
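A hedged sketch of the Markov backbone of such a prediction (omitting the fuzzy C-means state division and the multi-scale weighting): a transition matrix is estimated from an observed degradation-state sequence, and the remaining life from each state is the expected hitting time of the failure state. The state sequence and number of states are illustrative.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a Markov transition matrix from a sequence of degradation states."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    for i in range(n_states):
        if P[i].sum() > 0:
            P[i] /= P[i].sum()
        else:
            P[i, i] = 1.0                         # unseen state treated as absorbing
    return P

def expected_steps_to_failure(P, failure_state):
    """Expected hitting time of the failure state, in inspection intervals, from each state."""
    idx = [s for s in range(P.shape[0]) if s != failure_state]
    Q = P[np.ix_(idx, idx)]                       # transitions among non-failure states
    t = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    remaining = np.zeros(P.shape[0])
    remaining[idx] = t
    return remaining

# Hypothetical state sequence from a degradation index divided into 4 states (3 = failure).
sequence = [0, 0, 0, 1, 1, 1, 1, 2, 2, 1, 2, 2, 2, 3]
P = transition_matrix(sequence, n_states=4)
life = expected_steps_to_failure(P, failure_state=3)
print("transition matrix:\n", np.round(P, 2))
print("expected remaining life per current state:", np.round(life, 1))
```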
Information filtering via a scaling-based function.
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of recommendation list length, based on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting high novelty, and solving the key challenge of the cold-start problem.
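A hedged sketch of the underlying heat-conduction/mass-diffusion hybrid on a tiny user-object network, with the SCL idea indicated by letting the hybridization parameter vary with object degree; the functional form of the scaling function below, and the choice to attach it to the target object, are placeholders rather than the relation fitted in the article.

```python
import numpy as np

# Toy user-object adjacency matrix A (users x objects); 1 = user collected object (illustrative).
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 0]], dtype=float)

def hybrid_scores(A, lam):
    """Heat-conduction/mass-diffusion hybrid; lam may be a scalar or a per-object array."""
    k_user = A.sum(axis=1)                               # user degrees
    k_obj = A.sum(axis=0)                                # object degrees
    lam = np.broadcast_to(lam, k_obj.shape)
    overlap = (A / k_user[:, None]).T @ A                # sum_u a_ua * a_ub / k_u
    W = overlap / (k_obj[:, None] ** (1 - lam[:, None]) * k_obj[None, :] ** lam[:, None])
    scores = A @ W.T                                     # users x objects recommendation scores
    return np.where(A > 0, -np.inf, scores)              # mask objects already collected

# Plain hybrid with a single global parameter.
global_scores = hybrid_scores(A, lam=0.5)

# SCL idea (sketch): let the parameter vary with object degree through a scaling function.
k_obj = A.sum(axis=0)
lam_per_object = 0.2 + 0.6 / (1.0 + np.log1p(k_obj))     # placeholder scaling function
scl_scores = hybrid_scores(A, lam=lam_per_object)
print(np.round(scl_scores, 3))
```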
Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review
NASA Astrophysics Data System (ADS)
Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin
2015-12-01
A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R e+e- and Γ(H\\to b\\bar{b}) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders, behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.
Macarie, Hervé; Esquivel, Maricela; Laguna, Acela; Baron, Olivier; El Mamouni, Rachid; Guiot, Serge R; Monroy, Oscar
2017-08-26
Granulation of biomass is at the basis of the operation of the most successful anaerobic systems (UASB, EGSB and IC reactors) applied worldwide for wastewater treatment. Despite decades of studies of the biomass granulation process, it is still not fully understood and controlled. "Degranulation/lack of granulation" is a problem that sometimes occurs in anaerobic systems, often resulting in heavy loss of biomass and poor treatment efficiencies or even complete reactor failure. Such a problem occurred in Mexico in two full-scale UASB reactors treating cheese wastewater. A close follow-up of the plant was performed to try to identify the factors responsible for the phenomenon. Basically, the possible causes of a granulation problem that were investigated can be classified as nutritional, i.e. related to wastewater composition (e.g. deficiency or excess of macronutrients or micronutrients, too high a COD proportion due to proteins or volatile fatty acids, high ammonium, sulphate or fat concentrations), operational (excessive loading rate, sub- or over-optimal water upflow velocity) and structural (poor hydraulic design of the plant). Despite an intensive search, the causes of the granulation problems could not be identified. The present case nevertheless remains an example of the strategy that must be followed to identify these causes and could be used as a guide for plant operators or consultants who are confronted with a similar situation, independently of the type of wastewater. Following a large literature based on successful experiments at lab scale, an attempt to artificially granulate the industrial reactor biomass through the dosage of a cationic polymer was also tested but equally failed. Instead of promoting granulation, the dosage caused heavy sludge flotation. This shows that the scaling of such a procedure from lab to real scale cannot be advised right away unless its operability at such a scale can be demonstrated.
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause at each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-application equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions during the computing processes, such that the proposed method should be useful in dealing with large-scale problems.
Managing distance and covariate information with point-based clustering.
Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul
2016-09-01
Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). Study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, are crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for covariate measures that exhibit spatial clustering, such as deprivation, are crucial when assessing point-based clustering.
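A hedged sketch of the constrained Monte Carlo idea: observed cases occupy a subset of a finite set of candidate addresses, and the count of case pairs within each distance is compared against an envelope obtained by re-drawing the same number of cases from the candidate set. The covariate weighting and rotated Minkowski metric of the study are omitted; coordinates, radii, and sample sizes are illustrative.

```python
import numpy as np

def pair_counts(points, radii):
    """Number of unordered point pairs within each distance in `radii`."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.array([(d[iu] <= r).sum() for r in radii])

rng = np.random.default_rng(3)
# Finite candidate locations (e.g. residential parcels) and observed case addresses.
candidates = rng.uniform(0, 1000, size=(500, 2))          # coordinates in metres, illustrative
n_cases = 40
cases = candidates[rng.choice(500, size=n_cases, replace=False)]

radii = np.array([100.0, 250.0, 500.0])
observed = pair_counts(cases, radii)

# Monte Carlo envelope: repeatedly draw the same number of cases from the candidate set.
n_sim = 999
sims = np.empty((n_sim, radii.size))
for s in range(n_sim):
    sample = candidates[rng.choice(500, size=n_cases, replace=False)]
    sims[s] = pair_counts(sample, radii)

upper = np.percentile(sims, 95, axis=0)                    # one-sided 95% envelope
for r, obs, up in zip(radii, observed, upper):
    flag = "clustered" if obs > up else "not clustered"
    print(f"r = {r:5.0f} m: observed pairs = {obs}, 95% envelope = {up:.0f} -> {flag}")
```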
Priority Scale of Drainage Rehabilitation of Cilacap City
NASA Astrophysics Data System (ADS)
Rudiono, Jatmiko
2018-03-01
The physical terrain of Cilacap City is relatively flat and low-lying (approximately 6 m above sea level), so relatively heavy rainfall results in inundation at several locations. Inundation becomes a serious problem when it occurs in dense residential areas or in publicly used infrastructure, such as roads and settlements. These problems call for improved management, including planning an urban drainage system that is sustainable and environmentally friendly. Cilacap City is developing rapidly, and the drainage system based on the Drainage Masterplan Cilacap prepared in 2006 can no longer accommodate rainwater, so the masterplan needs to be evaluated and the drainage subsequently rehabilitated. A priority scale for rehabilitating the drainage sections serves as a guideline for which rehabilitation is most urgently needed in the coming period.
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
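For orientation, a small state-vector simulation of the Grover baseline that the projection method is compared against (the signal-based emulation and the subspace projections themselves are not reproduced here); register size and marked item are illustrative.

```python
import numpy as np

n_qubits = 4
N = 2 ** n_qubits
marked = 11                                   # index of the sought item (illustrative)

state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition |s>
uniform = state.copy()

n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(n_iters):
    state[marked] *= -1.0                               # oracle: phase flip on the marked item
    state = 2.0 * uniform * (uniform @ state) - state   # diffusion operator 2|s><s| - I

print(f"after {n_iters} Grover iterations, P(success) = {state[marked] ** 2:.3f}")
```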
Solution of second order quasi-linear boundary value problems by a wavelet method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn
2015-03-10
A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second-order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.
Research directions in large scale systems and decentralized control
NASA Technical Reports Server (NTRS)
Tenney, R. R.
1980-01-01
Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
The accurate particle tracer code
NASA Astrophysics Data System (ADS)
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches through the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
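APT's own libraries are not shown in the abstract; as a hedged stand-in, the sketch below implements the classic Boris push, one widely used structure-preserving particle pusher of the kind such codes build on. The field functions and parameters are illustrative assumptions, not APT's code.

```python
# Hedged sketch: the classic Boris pusher for a charged particle in E and B fields
# (a standard volume-preserving scheme; illustrative only, not APT's implementation).
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """Advance position x and velocity v by one time step dt."""
    t = 0.5 * q_over_m * B(x) * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + 0.5 * q_over_m * E(x) * dt       # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)       # magnetic rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation, step 2
    v_new = v_plus + 0.5 * q_over_m * E(x) * dt    # second half electric kick
    return x + v_new * dt, v_new

# Example usage: gyration in a uniform magnetic field along z (no electric field).
E = lambda x: np.zeros(3)
B = lambda x: np.array([0.0, 0.0, 1.0])
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, q_over_m=1.0, E=E, B=B, dt=0.01)
print(np.linalg.norm(v))   # the rotation preserves the speed in a pure magnetic field
```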
NASA Technical Reports Server (NTRS)
Jones, William H.
1985-01-01
The Combined Aerodynamic and Structural Dynamic Problem Emulating Routines (CASPER) is a collection of data-base modification computer routines that can be used to simulate Navier-Stokes flow through realistic, time-varying internal flow fields. The Navier-Stokes equation used involves calculations in all three dimensions and retains all viscous terms. The only term neglected in the current implementation is gravitation. The solution approach is of an iterative, time-marching nature. Calculations are based on Lagrangian aerodynamic elements (aeroelements). It is assumed that the relationships between a particular aeroelement and its five nearest neighbor aeroelements are sufficient to make a valid simulation of Navier-Stokes flow on a small scale and that the collection of all small-scale simulations makes a valid simulation of a large-scale flow. In keeping with these assumptions, it must be noted that CASPER produces an imitation or simulation of Navier-Stokes flow rather than a strict numerical solution of the Navier-Stokes equation. CASPER is written to operate under the Parallel, Asynchronous Executive (PAX), which is described in a separate report.
Image aesthetic quality evaluation using convolution neural network embedded learning
NASA Astrophysics Data System (ADS)
Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng
2017-11-01
An embedded-learning convolutional neural network (ELCNN) based on image content is proposed in this paper to evaluate image aesthetic quality. Our approach can not only cope with small-scale data but also score the image aesthetic quality. First, we compare AlexNet and VGG_S to determine which is more suitable for this image aesthetic quality evaluation task. Second, to further boost aesthetic quality classification performance, we use the image content to train aesthetic quality classification models; however, the training samples become smaller, and a single fine-tuning pass cannot make full use of the small-scale data set. Third, to solve this problem, we propose fine-tuning twice in succession, using the aesthetic quality labels and the content labels respectively, and the classification probabilities of the trained CNN models are used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The experimental results show that the classification accuracy of our approach is higher than that of existing image aesthetic quality evaluation approaches.
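A minimal PyTorch sketch of the two-stage ("twice fine-tuning") idea is given below. The ordering of the stages (content labels first, then aesthetic labels), the dummy data loaders, and the hyper-parameters are assumptions for illustration only, not the authors' setup.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained AlexNet twice, first on content
# labels and then on aesthetic high/low labels (illustrative; loaders are dummies).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def dummy_loader(num_classes, n=8):
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, num_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

def fine_tune(model, loader, num_epochs=1, lr=1e-4):
    """One fine-tuning pass with cross-entropy loss."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(num_epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model

# Stage 0: ImageNet-pretrained backbone (older torchvision uses pretrained=True).
model = models.alexnet(weights="IMAGENET1K_V1")

# Stage 1: re-head and fine-tune on image-content labels (7 classes assumed).
model.classifier[6] = nn.Linear(4096, 7)
model = fine_tune(model, dummy_loader(7))

# Stage 2: re-head and fine-tune again on aesthetic high/low labels.
model.classifier[6] = nn.Linear(4096, 2)
model = fine_tune(model, dummy_loader(2))
print(model.classifier[6])
```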
Investigating Darcy-scale assumptions by means of a multiphysics algorithm
NASA Astrophysics Data System (ADS)
Tomin, Pavel; Lunati, Ivan
2016-09-01
Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative permeability) should account for the fact that they depend not only on the saturation but also on the actual characteristics of the fluid distribution.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in real space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
NASA Astrophysics Data System (ADS)
Habtu, Solomon; Ludi, Eva; Jamin, Jean Yves; Oates, Naomi; Fissahaye Yohannes, Degol
2014-05-01
Practicing various innovations pertinent to irrigated farming at the local field scale is instrumental in increasing productivity and yield for smallholder farmers in Africa. However, the translation of innovations from the local scale to the scale of a jointly operated irrigation scheme is far from trivial. It requires insight into the drivers for adoption of local innovations within the wider farmer communities. Participatory methods are expected not only to improve the acceptance of locally developed innovations within the wider farmer communities, but also to allow an estimation of the extent to which changes will occur within the entire irrigation scheme. On such a basis, more realistic scenarios of future water productivity within an irrigation scheme operated by smallholder farmers can be estimated. An initial participatory problem and innovation appraisal was conducted in the Gumselassa small-scale irrigation scheme, Ethiopia, from 27 February to 3 March 2012 as part of the EC-funded EAU4FOOD project. The objective was to identify and appraise problems that hinder sustainable water management for enhanced production and productivity, and to identify future research strategies. Workshops were conducted both at the local (Community of Practice) and regional (Learning Practice Alliance) level. At the local level, intensive collaboration with farmers using participatory methods produced problem trees, and a "Photo Safari" documented a range of problems that negatively impact productive irrigated farming. A range of participatory methods was also used to identify local innovations. At the regional level a Learning Platform was established that includes a wide range of stakeholders (technical experts from various government ministries, policy makers, farmers, extension agents, researchers). This stakeholder group carried out a similar range of exercises to identify major problems related to irrigated smallholder farming and innovations already identified. Both groups identified similar problems for productive smallholder irrigation: soil nutrient depletion, salinization, disease and pests resulting from inefficient irrigation practices, and infrastructure problems leading to a reduction in the size of the command area and a decrease in reservoir volume. The major causes have been poor irrigation infrastructure, poor on-farm soil and water management, prevalence of various crop pests and diseases, lack of inputs, and reservoir siltation. On-farm participatory research focusing on soil, crop and water management issues, including technical, institutional and managerial aspects, to identify best-performing innovations while taking care of the environment, was recommended. Currently, a range of interlinked activities is being implemented at multiple scales, combining participatory and scientific approaches towards innovation development and up-scaling of promising technologies and institutional and managerial approaches from local to regional scales. Key words: Irrigation scheme, productivity, innovation, participatory method, Gumselassa, Ethiopia
Alduraywish, Abdulrahman Abdulwahab; Mohager, Mazin Omer; Alenezi, Mohammed Jayed; Nail, Abdelsalam Mohammed; Aljafari, Alfatih Saifudinn
2017-12-01
To evaluate students' experience with problem-based learning. This cross-sectional, qualitative study was conducted at the College of Medicine, Al Jouf University, Sakakah, Saudi Arabia, in October 2015, and comprised medical students of the 1st to 5th levels. Interviews were conducted using the Students' Course Experience Questionnaire. The questionnaire contained 37 questions covering six evaluative categories: appropriate assessment, appropriate workload, clear goals and standards, generic skills, good teaching, and overall satisfaction. The questionnaire uses a Likert scale; mean values were interpreted as: below 2.5 = at least disagree, 2.5 to below 3 = neither agree nor disagree (uncertain), and 3 or more = at least agree. Of the 170 respondents, 72 (42.7%) agreed that there was an appropriate assessment accompanying the problem-based learning. Also, 107 (63.13%) students agreed that there was a heavy workload on them. The goals and standards of the course were clear for 71 (42.35%) students, 104 (61.3%) agreed that problem-based learning improved their generic skills, 65 (38.07%) agreed the teaching was good, and 82 (48.08%) students reported overall satisfaction. The students were satisfied with their experience with problem-based learning.
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
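As a minimal, hedged illustration of sample-to-block upscaling (not the specific procedures reviewed in the paper), the sketch below compares the classical arithmetic, geometric, and harmonic means of a heterogeneous conductivity field: for 1-D flow parallel to layering the arithmetic mean is the effective value, while for flow perpendicular to layering it is the harmonic mean. The sample values are made up for illustration.

```python
# Hedged sketch: classical estimates/bounds for upscaled hydraulic conductivity
# of a layered block (illustrative values; not the paper's upscaling procedures).
import numpy as np

k_samples = np.array([1e-6, 5e-5, 2e-4, 8e-6, 3e-5])        # sample-scale K [m/s]

k_arithmetic = k_samples.mean()                              # flow parallel to layers
k_harmonic = len(k_samples) / np.sum(1.0 / k_samples)        # flow across layers
k_geometric = np.exp(np.log(k_samples).mean())               # common estimate for 2-D random fields

print(f"arithmetic {k_arithmetic:.3e}  geometric {k_geometric:.3e}  harmonic {k_harmonic:.3e}")
# Any physically admissible block-scale K lies between the harmonic and arithmetic means.
```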
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
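A hedged sketch of the two ingredients, features taken at several scales and a sparse k-nearest-neighbour affinity matrix in place of a dense similarity matrix, is shown below using NumPy/SciPy and scikit-learn. The toy feature extraction, scale choices, and clustering details do not reproduce the proposed algorithm.

```python
# Hedged sketch: sparse k-NN affinity from multi-scale features, followed by
# standard spectral clustering (illustrative only; not the paper's algorithm).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh

def multiscale_features(pixels, scales=(1.0, 2.0, 4.0)):
    """Toy multi-scale features: pixel values rescaled by each scale (placeholder)."""
    return np.hstack([pixels / s for s in scales])

def sparse_spectral_clustering(features, n_clusters=3, n_neighbors=10, sigma=1.0):
    # Sparse affinity: RBF weights on a k-NN graph instead of a dense matrix.
    dist = kneighbors_graph(features, n_neighbors, mode="distance", include_self=False)
    affinity = dist.copy()
    affinity.data = np.exp(-affinity.data**2 / (2 * sigma**2))
    affinity = 0.5 * (affinity + affinity.T)                  # symmetrise
    lap = csgraph.laplacian(affinity, normed=True)
    # Smallest eigenvectors of the normalised Laplacian capture the cluster structure.
    vals, vecs = eigsh(lap, k=n_clusters, which="SA")
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)

rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(m, 0.3, size=(100, 3)) for m in (0.0, 2.0, 4.0)])
labels = sparse_spectral_clustering(multiscale_features(pixels))
print(np.bincount(labels))
```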
Cerdá, Magdalena; Prins, Seth J.; Galea, Sandro; Howe, Chanelle J.; Pardini, Dustin
2016-01-01
Background and aims: There is a documented link between common psychiatric disorders and substance use in adolescent males. This study addressed two key questions: 1) Is there a within-person association between an increase in psychiatric problems and an increase in substance use among adolescent males?; and 2) Are there sensitive periods during male adolescence when such associations are more evident? Design: Analysis of longitudinal data collected annually on boys randomly selected from schools based on a comprehensive public school enrollment list from the Pittsburgh Board of Education. Setting: Recruitment occurred in public schools in Pittsburgh, Pennsylvania, USA. Participants: 503 boys assessed at ages 13-19, average cooperation rate = 92.1%. Measurements: DSM-oriented affective, anxiety, and conduct disorder problems were measured with items from the caregiver, teacher, and youth version of the Achenbach scales. Scales were converted to T-scores using age- and gender-based national norms and combined by taking the average across informants. Alcohol and marijuana use were assessed semi-annually by a 16-item Substance Use Scale adapted from the National Youth Survey. Findings: When male adolescents experienced a one-unit increase in their conduct problems T-score, their rate of marijuana use subsequently increased by 1.03 (95% confidence interval (CI): 1.01, 1.05), and alcohol quantity increased by 1.01 (95% CI: 1.0002, 1.02). When adolescents experienced a one-unit increase in their average quantity of alcohol use, their anxiety problems T-score subsequently increased by 0.12 (95% CI: 0.05, 0.19). These associations were strongest in early and late adolescence. Conclusions: When adolescent boys experience an increase in conduct disorder problems, they are more likely to exhibit a subsequent escalation in substance use. As adolescent boys increase their intensity of alcohol use, they become more likely to develop subsequent anxiety problems. Developmental turning points such as early and late adolescence appear to be particularly sensitive periods for boys to develop comorbid patterns of psychiatric problems and substance use. PMID:26748766
Cerdá, Magdalena; Prins, Seth J; Galea, Sandro; Howe, Chanelle J; Pardini, Dustin
2016-05-01
There is a documented link between common psychiatric disorders and substance use in adolescent males. This study addressed two key questions: (1) is there a within-person association between an increase in psychiatric problems and an increase in substance use among adolescent males and (2) are there sensitive periods during male adolescence when such associations are more evident? Analysis of longitudinal data collected annually on boys selected randomly from schools based on a comprehensive public school enrollment list from the Pittsburgh Board of Education. Recruitment occurred in public schools in Pittsburgh, Pennsylvania, USA. A total of 503 boys assessed at ages 13-19 years, average cooperation rate = 92.1%. Diagnostic and Statistical Manual (DSM)-oriented affective, anxiety and conduct disorder problems were measured with items from the caregiver, teacher and youth version of the Achenbach scales. Scales were converted to t-scores using age- and gender-based national norms and combined by taking the average across informants. Alcohol and marijuana use were assessed semi-annually by a 16-item Substance Use Scale adapted from the National Youth Survey. When male adolescents experienced a 1-unit increase in their conduct problems t-score, their rate of marijuana use subsequently increased by 1.03 [95% confidence interval (CI) = 1.01, 1.05], and alcohol quantity increased by 1.01 (95% CI = 1.0002, 1.02). When adolescents experienced a 1-unit increase in their average quantity of alcohol use, their anxiety problems t-score subsequently increased by 0.12 (95% CI = 0.05, 0.19). These associations were strongest in early and late adolescence. When adolescent boys experience an increase in conduct disorder problems, they are more likely to exhibit a subsequent escalation in substance use. As adolescent boys increase their intensity of alcohol use, they become more likely to develop subsequent anxiety problems. Developmental turning points such as early and late adolescence appear to be particularly sensitive periods for boys to develop comorbid patterns of psychiatric problems and substance use. © 2016 Society for the Study of Addiction.
Analysis of mathematical problem-solving ability based on metacognition on problem-based learning
NASA Astrophysics Data System (ADS)
Mulyono; Hadiyanti, R.
2018-03-01
Problem-solving is a primary purpose of the mathematics curriculum. Problem-solving abilities are influenced by beliefs and metacognition. Metacognition, as a superordinate capability, can direct and regulate cognition and motivation, and thereby problem-solving processes. This study aims to (1) test and analyze the quality of problem-based learning and (2) investigate problem-solving capabilities based on metacognition. The research uses a mixed-methods design; the subjects are class XI Mathematics and Science students at Kesatrian 2 Senior High School, Semarang, who were classified into tacit use, aware use, strategic use and reflective use levels. Data were collected using a scale, interviews, and tests, and processed with a proportion test, t-test, and paired-samples t-test. The results show that students at the tacit use level were able to complete the whole problem given, but did not understand what strategy was used or why. Students at the aware use level were able to solve the problem and to build new knowledge through problem-solving up to the indicators of understanding the problem and determining the strategies used, although not correctly. Students at the strategic use level could apply and adopt a wide variety of appropriate strategies to solve the problems and achieved the indicators of re-examining the process and outcome. No student at the reflective use level was found in this study. Based on the results, further study of the identification of metacognition in problem-solving with a larger sample is suggested, so that the characteristics of each metacognition level become clearer. Teachers need to know in depth about students' metacognitive activity and its relationship with mathematical problem-solving and other problem resolution.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
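A hedged, scalar illustration of the first-order Taylor interval idea underlying the HIFEM is given below: the response interval of a function of uncertain-but-bounded parameters is approximated from the nominal value and the absolute gradient. The function and bounds are made up for illustration; the actual method operates on the homogenised finite element system, which is not reproduced here.

```python
# Hedged sketch: first-order Taylor interval propagation for a generic response
# y = f(p) with uncertain-but-bounded parameters p (illustrative only).
import numpy as np

def taylor_interval(f, p_nominal, p_radius, h=1e-6):
    """Approximate [y_lower, y_upper] via y(p0) +/- sum_i |df/dp_i| * radius_i."""
    p0 = np.asarray(p_nominal, dtype=float)
    y0 = f(p0)
    grad = np.zeros_like(p0)
    for i in range(len(p0)):
        dp = np.zeros_like(p0)
        dp[i] = h
        grad[i] = (f(p0 + dp) - f(p0 - dp)) / (2 * h)   # central finite difference
    spread = np.sum(np.abs(grad) * np.asarray(p_radius, dtype=float))
    return y0 - spread, y0 + spread

# Example: first natural frequency of a 1-DOF oscillator with uncertain k and m.
f = lambda p: np.sqrt(p[0] / p[1]) / (2 * np.pi)        # p = [stiffness, mass]
print(taylor_interval(f, p_nominal=[1e4, 2.0], p_radius=[500.0, 0.1]))
```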
Aerts, L; Christiaens, M R; Enzlin, P; Neven, P; Amant, F
2014-10-01
Breast cancer (BC) and/or its treatments may affect sexual functioning through physiological and psychosocial mechanisms. The aim of this study was to prospectively investigate the sexual adjustment of BC patients during a follow-up period of one year after mastectomy (ME) or breast conserving therapy (BCT). In this prospective controlled study, women with BC and an age-matched control group of healthy women completed the Beck Depression Inventory Scale, World Health Organization 5 Well-being scale, Body Image Scale, EORTC QLQ questionnaire, Dyadic Adjustment Scale, Short Sexual Functioning Scale and Specific Sexual Problems Questionnaire to assess various aspects of sexual and psychosocial functioning before surgery, six months and one year after surgical treatment. In total, 149 women with BC and 149 age-matched healthy controls completed the survey. Compared with the situation before surgery, significantly more BCT women reported problems with sexual arousal six months after surgery, and significantly more women in the ME group reported problems with sexual desire, arousal and the ability to achieve an orgasm six months and one year after surgery. In comparison with healthy controls, no significant differences in sexual functioning were found after BCT surgery, whereas significantly more women who underwent ME reported problems with sexual desire, arousal, the ability to achieve an orgasm and the intensity of the orgasm. Although few differences were seen in sexual functioning in the BCT group in the prospective analyses and in comparison with healthy controls, the analyses revealed that women who underwent ME were at risk for post-operative sexual dysfunction. Copyright © 2014. Published by Elsevier Ltd.
Dark matter self-interactions and small scale structure
NASA Astrophysics Data System (ADS)
Tulin, Sean; Yu, Hai-Bo
2018-02-01
We review theories of dark matter (DM) beyond the collisionless paradigm, known as self-interacting dark matter (SIDM), and their observable implications for astrophysical structure in the Universe. Self-interactions are motivated, in part, due to the potential to explain long-standing (and more recent) small scale structure observations that are in tension with collisionless cold DM (CDM) predictions. Simple particle physics models for SIDM can provide a universal explanation for these observations across a wide range of mass scales spanning dwarf galaxies, low and high surface brightness spiral galaxies, and clusters of galaxies. At the same time, SIDM leaves intact the success of ΛCDM cosmology on large scales. This report covers the following topics: (1) small scale structure issues, including the core-cusp problem, the diversity problem for rotation curves, the missing satellites problem, and the too-big-to-fail problem, as well as recent progress in hydrodynamical simulations of galaxy formation; (2) N-body simulations for SIDM, including implications for density profiles, halo shapes, substructure, and the interplay between baryons and self-interactions; (3) semi-analytic Jeans-based methods that provide a complementary approach for connecting particle models with observations; (4) merging systems, such as cluster mergers (e.g., the Bullet Cluster) and minor infalls, along with recent simulation results for mergers; (5) particle physics models, including light mediator models and composite DM models; and (6) complementary probes for SIDM, including indirect and direct detection experiments, particle collider searches, and cosmological observations. We provide a summary and critical look for all current constraints on DM self-interactions and an outline for future directions.
ERIC Educational Resources Information Center
Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima
2009-01-01
A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
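The core subproblem in trace (nuclear) norm minimisation solved inside ADMM-type algorithms is a proximal step computed by singular value thresholding. A hedged sketch of that standard operator is shown below; it is not the full TNCP algorithm.

```python
# Hedged sketch: singular value thresholding, the proximal operator of the matrix
# trace (nuclear) norm used inside ADMM-type solvers (not the full TNCP method).
import numpy as np

def svt(M, tau):
    """argmin_X 0.5*||X - M||_F^2 + tau*||X||_* via soft-thresholding of singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
noisy = low_rank + 0.1 * rng.standard_normal((50, 40))
denoised = svt(noisy, tau=2.0)
print(np.linalg.matrix_rank(denoised))   # thresholding typically recovers a low rank
```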
The aggregated unfitted finite element method for elliptic problems
NASA Astrophysics Data System (ADS)
Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.
2018-07-01
Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm, based on nonlinear conjugate gradient with a restart strategy, is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
The Development and Application of the Coping with Bullying Scale for Children
ERIC Educational Resources Information Center
Parris, Leandra N.
2013-01-01
The Multidimensional Model for Coping with Bullying (MMCB; Parris, in development) was conceptualized based on a literature review of coping with bullying and by combining relevant aspects of previous models. Strategies were described based on their focus (problem-focused vs. emotion-focused) and orientation (avoidance, approach-self,…
Large-Scale Constraint-Based Pattern Mining
ERIC Educational Resources Information Center
Zhu, Feida
2009-01-01
We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data formats are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…
A Case of Problem Based Learning for Cross-Institutional Collaboration
ERIC Educational Resources Information Center
Nerantzi, Chrissi
2012-01-01
The idea of moving away from battery-type Academic Development Activities and silo modules and programmes towards open cross-institutional approaches in line with OEP are explored within this paper based on a recent small-scale, fully-online study. This brought together academics and other professionals who support learning, from different…
Dynamic Control of Facts Devices to Enable Large Scale Penetration of Renewable Energy Resources
NASA Astrophysics Data System (ADS)
Chavan, Govind Sahadeo
This thesis focuses on some of the problems caused by large-scale penetration of Renewable Energy Resources within EHV transmission networks, and investigates some approaches to resolving these problems. In chapter 4, a reduced-order model of the 500 kV WECC transmission system is developed by estimating its key parameters from phasor measurement unit (PMU) data. The model was then implemented in RTDS and was investigated for its accuracy with respect to the PMU data. Finally, it was tested for observing the effects of various contingencies, such as transmission line loss, generation loss and large-scale penetration of wind farms, on EHV transmission systems. Chapter 5 introduces Static Series Synchronous Compensators (SSSCs), which are series-connected converters that can control real power flow along a transmission line. A new application of SSSCs in mitigating the Ferranti effect on unloaded transmission lines was demonstrated in PSCAD. A new control scheme for SSSCs based on the Cascaded H-bridge (CHB) converter configuration was proposed and was demonstrated using PSCAD and RTDS. A new centralized controller was developed for the distributed SSSCs based on some of the concepts used in the CHB-based SSSC. The controller's efficacy was demonstrated using RTDS. Finally, chapter 6 introduces the problem of power oscillations induced by renewable sources in a transmission network. A power oscillation damping (POD) controller is designed using distributed SSSCs in NYPA's 345 kV three-bus AC system and its efficacy is demonstrated in PSCAD. A similar POD controller is then designed for the CHB-based SSSC in the IEEE 14 bus system in PSCAD. Both controllers significantly damped power oscillations in the transmission networks.
Development and Initial Psychometric Evaluation of the Sport Interference Checklist
ERIC Educational Resources Information Center
Donohue, Brad; Silver, N. Clayton; Dickens, Yani; Covassin, Tracey; Lancer, Kevin
2007-01-01
The Sport Interference Checklist (SIC) was developed in 141 athletes to assist in the concurrent assessment of cognitive and behavioral problems experienced by athletes in both training (Problems in Sports Training Scale, PSTS) and competition (Problems in Sports Competition Scale, PSCS). An additional scale (Desire for Sport Psychology Scale,…
Behavioral health needs and problem recognition by older adults receiving home-based aging services.
Gum, Amber M; Petkus, Andrew; McDougal, Sarah J; Present, Melanie; King-Kallimanis, Bellinda; Schonfeld, Lawrence
2009-04-01
Older adults' recognition of a behavioral health need is one of the strongest predictors of their use of behavioral health services. Thus, the study aims were to examine behavioral health problems in a sample of older adults receiving home-based aging services, their recognition of behavioral health problems, and covariates of problem recognition. The study design was cross-sectional. Older adults (n = 141) receiving home-based aging services completed interviews that included: Structured Clinical Interview for DSM-IV; Brief Symptom Inventory-18; attitudinal scales of stigma, expectations regarding aging, and thought suppression; behavioral health treatment experience; and questions about recognition of behavioral health problems. Thirty (21.9%) participants received an Axis I diagnosis (depressive, anxiety, or substance); another 17 (12.1%) were diagnosed with an adjustment disorder. Participants were more likely to recognize having a problem if they had an Axis I diagnosis, more distress on the BSI-18, a family member or friend with a behavioral health problem, and greater thought suppression. In logistic regression, participants who identified a family member or friend with a behavioral health problem were more likely to identify having a behavioral health problem themselves. Findings suggest that older adults receiving home-based aging services who recognize behavioral health problems are more likely to have a psychiatric diagnosis or be experiencing significant distress, and they are more familiar with behavioral health problems in others. This familiarity may facilitate treatment planning; thus, older adults with behavioral health problems who do not report familiarity with problems in others likely require additional education. (c) 2008 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
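To make the gPC expansion concrete, the hedged sketch below projects a scalar function of a single standard normal random input onto probabilists' Hermite polynomials using Gauss-Hermite quadrature. It illustrates only the polynomial chaos expansion; the stochastic Galerkin discretisation of the transfer equations and the AP micro-macro decomposition are not reproduced.

```python
# Hedged sketch: generalized polynomial chaos (gPC) coefficients of u(xi) for a
# standard normal input xi, via probabilists' Hermite polynomials (illustrative only).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def gpc_coefficients(u, order, n_quad=40):
    x, w = He.hermegauss(n_quad)                     # nodes/weights for weight exp(-x^2/2)
    coeffs = []
    for n in range(order + 1):
        basis = He.hermeval(x, [0] * n + [1])        # He_n evaluated at the nodes
        # c_n = E[u(xi) He_n(xi)] / n!,  with E[.] = (1/sqrt(2*pi)) * sum_i w_i (.)_i
        c = np.sum(w * u(x) * basis) / (sqrt(2 * pi) * factorial(n))
        coeffs.append(c)
    return np.array(coeffs)

# Example: u(xi) = exp(xi); the exact coefficients are exp(1/2)/n!.
c = gpc_coefficients(np.exp, order=5)
print(np.round(c, 4))
print(np.round(np.exp(0.5) / np.array([factorial(n) for n in range(6)]), 4))
```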
Assessing local instrument reliability and validity: a field-based example from northern Uganda.
Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul
2009-08-01
This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from alpha = 0.84-0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.
Lara, Alvaro R; Galindo, Enrique; Ramírez, Octavio T; Palomares, Laura A
2006-11-01
The presence of spatial gradients in fundamental culture parameters, such as dissolved gases, pH, concentration of substrates, and shear rate, among others, is an important problem that frequently occurs in large-scale bioreactors. This problem is caused by a deficient mixing that results from limitations inherent to traditional scale-up methods and practical constraints during large-scale bioreactor design and operation. When cultured in a heterogeneous environment, cells are continuously exposed to fluctuating conditions as they travel through the various zones of a bioreactor. Such fluctuations can affect cell metabolism, yields, and quality of the products of interest. In this review, the theoretical analyses that predict the existence of environmental gradients in bioreactors and their experimental confirmation are reviewed. The origins of gradients in common culture parameters and their effects on various organisms of biotechnological importance are discussed. In particular, studies based on the scale-down methodology, a convenient tool for assessing the effect of environmental heterogeneities, are surveyed.
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
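As a hedged illustration of the first ingredient, POD of the primary unknown, the sketch below extracts a reduced basis from a snapshot matrix via the SVD; the hyper-reduction and GNAT projection steps compared in the study are not shown, and the snapshot data are synthetic.

```python
# Hedged sketch: Proper Orthogonal Decomposition of solution snapshots
# (illustrative; the hyper-reduction/GNAT machinery of the study is not included).
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Return the leading left singular vectors capturing the requested energy fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s

# Synthetic snapshot matrix: each column is one solution state (e.g. micro-scale DOFs).
rng = np.random.default_rng(0)
modes = rng.standard_normal((1000, 4))
amplitudes = rng.standard_normal((4, 60))
snapshots = modes @ amplitudes + 1e-3 * rng.standard_normal((1000, 60))

basis, s = pod_basis(snapshots)
print(basis.shape)                  # (1000, r): r-dimensional reduced basis
reduced = basis.T @ snapshots       # reduced coordinates of the snapshots
```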
Problem severity and motivation for treatment in incarcerated substance abusers.
Hiller, Matthew L; Narevic, Egle; Webster, J Matthew; Rosen, Paul; Staton, Michele; Leukefeld, Carl; Garrity, Thomas F; Kayo, Rebecca
2009-01-01
Studies of community-based treatment programs for substance users document that motivation for treatment is a consistent predictor of clients remaining under treatment for a longer period of time. Recent research has replicated this in prison-based treatment programs, implying that motivation is clinically important regardless of setting. The current study examines predictors of treatment motivation using data collected from 661 male drug-involved inmates during in-depth interviews that include components of the Addiction Severity Index, TCU Motivation Scale, and the Health Services Research Instrument. Findings showed treatment motivation can be measured effectively in prison-based settings. Motivation scores were not significantly different between individuals in a prison-based treatment program and those in the general prison population. Furthermore, higher motivation for treatment scores were associated with greater levels of problem severity, suggesting that individuals with more drug-use related life problems may recognize this need and desire help for beginning long-term recovery.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
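One of the named techniques, fully stressed design, can be stated in a few lines; the sketch below shows the classical resizing recurrence for a statically determinate truss, where member forces do not depend on the cross-sectional areas. It is a generic illustration with made-up numbers, not the equilibrium programming formulations derived in the report.

```python
# Hedged sketch: fully stressed design resizing for a statically determinate truss,
# A_new = A_old * |stress| / sigma_allow (generic illustration only).
import numpy as np

member_forces = np.array([12e3, -8e3, 5e3])          # member forces [N], fixed for a determinate truss
sigma_allow = 150e6                                  # allowable stress [Pa]
areas = np.full(3, 1e-4)                             # initial cross-sections [m^2]

for _ in range(20):
    stresses = member_forces / areas
    areas = areas * np.abs(stresses) / sigma_allow   # resize each member toward full stress

print(areas)                                          # converges to |F| / sigma_allow per member
print(np.abs(member_forces / areas) / sigma_allow)    # stress ratios approach 1
```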
NASA Astrophysics Data System (ADS)
Wisniewski, Nicholas Andrew
This dissertation is divided into two parts. First we present an exact solution to a generalization of the Behrens-Fisher problem by embedding the problem in the Riemannian manifold of Normal distributions. From this we construct a geometric hypothesis testing scheme. Secondly we investigate the most commonly used geometric methods employed in tensor field interpolation for DT-MRI analysis and cardiac computer modeling. We computationally investigate a class of physiologically motivated orthogonal tensor invariants, both at the full tensor field scale and at the scale of a single interpolation by doing a decimation/interpolation experiment. We show that Riemannian-based methods give the best results in preserving desirable physiological features.
Sex differences and gender-invariance of mother-reported childhood problem behavior.
van der Sluis, Sophie; Polderman, Tinca J C; Neale, Michael C; Verhulst, Frank C; Posthuma, Danielle; Dieleman, Gwen C
2017-09-01
Prevalence and severity of childhood behavioral problems differ between boys and girls, and in psychiatry, testing for gender differences is common practice. Population-based studies show that many psychopathology scales are (partially) measurement invariant (MI) with respect to gender, i.e. are unbiased. It is, however, unclear whether these studies generalize to clinical samples. In a psychiatric outpatient sample, we tested whether the Child Behavior Checklist 6-18 (CBCL) is unbiased with respect to gender. We compared mean scores across gender for all syndrome scales of the CBCL in 3271 patients (63.3% boys) aged 6-18. Second, we tested for MI at both the syndrome-scale and the item level using a stepwise modeling procedure. Six of the eight CBCL syndrome scales included one or more gender-biased items (12.6% of all items), resulting in slight over- or under-estimation of the absolute gender difference in mean scores. Two scales, Somatic Complaints and Rule-breaking Behavior, contained no biased items. The CBCL is a valid instrument to measure gender differences in problem behavior in children and adolescents from a clinical sample; while various gender-biased items were identified, the resulting bias was generally clinically irrelevant, and sufficient items per subscale remained after exclusion of biased items. Copyright © 2016 John Wiley & Sons, Ltd.
Scale-down/scale-up studies leading to improved commercial beer fermentation.
Nienow, Alvin W; Nordkvist, Mikkel; Boulton, Christopher A
2011-08-01
Scale-up/scale-down techniques are vital for successful and safe commercial-scale bioprocess design and operation. An example is given in this review of recent studies related to beer production. Work at the bench scale shows that brewing yeast is not compromised by mechanical agitation up to 4.5 W/kg; and that compared with fermentations mixed by CO(2) evolution, agitation ≥ 0.04 W/kg is able to reduce fermentation time by about 20%. Work at the commercial scale in cylindroconical fermenters shows that, without mechanical agitation, most of the yeast sediments into the cone for about 50% of the fermentation time, leading to poor temperature control. Stirrer mixing overcomes these problems and leads to a similar reduction in batch time as the bench-scale tests and greatly reduces its variability, but is difficult to install in extant fermenters. The mixing characteristics of a new jet mixer, a rotary jet mixer, which overcomes these difficulties, are reported, based on pilot-scale studies. This change enables the advantages of stirring to be achieved at the commercial scale without the problems. In addition, more of the fermentable sugars are converted into ethanol. This review shows the effectiveness of scale-up/scale-down studies for improving commercial operations. Suggestions for further studies are made: one concerning the impact of homogenization on the removal of vicinal diketones and the other on the location of bubble formation at the commercial scale. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
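A hedged sketch of the digital front end only is shown below: packing one chunk of gray-scale pixel data into a QR symbol with the third-party Python `qrcode` package (assumed available). The optical encryption and noise-free readout stages are not modelled, and splitting the full image into multiple QR codes is only implied; the file name and chunk size are illustrative assumptions.

```python
# Hedged sketch: encode one chunk of gray-scale pixel data as a QR code
# (requires the 'qrcode' package; the optical encryption stage is not modelled).
import numpy as np
import qrcode

image = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(np.uint8)
chunk = image.flatten()[:100]                 # one data chunk of the flattened image
payload = chunk.tobytes().hex()               # text payload for the QR symbol

qr_img = qrcode.make(payload)                 # QR symbol with built-in error correction
qr_img.save("chunk_000.png")
print(len(payload), "characters encoded")
```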
Distributed intrusion detection system based on grid security model
NASA Astrophysics Data System (ADS)
Su, Jie; Liu, Yahui
2008-03-01
Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, a distributed, load-balanced intrusion detection system can be realized. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing can be applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that can reduce processing delay and maintain detection rates.
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
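For orientation, a smooth double shear layer of the general form used for such tests can be written down directly; the sketch below sets up illustrative fields on a periodic grid. The exact profiles, diffusivities, and amplitudes of the paper's reference problems are not reproduced here, and all parameter values are assumptions.

```python
# Hedged sketch: a generic smooth Kelvin-Helmholtz initial condition
# (double shear layer plus a localized vertical-velocity perturbation);
# parameter values are illustrative, not those of the paper's reference problems.
import numpy as np

nx = nz = 256
x = np.linspace(0.0, 1.0, nx, endpoint=False)
z = np.linspace(0.0, 2.0, nz, endpoint=False)
X, Z = np.meshgrid(x, z, indexing="ij")

a, sigma, A = 0.05, 0.2, 0.01            # shear width, perturbation width, amplitude
z1, z2 = 0.5, 1.5                        # positions of the two shear layers

ux = np.tanh((Z - z1) / a) - np.tanh((Z - z2) / a) - 1.0          # smooth horizontal flow
uz = A * np.sin(2 * np.pi * X) * (np.exp(-((Z - z1) / sigma) ** 2) +
                                  np.exp(-((Z - z2) / sigma) ** 2))  # seed perturbation
rho = 1.0 + 0.5 * (np.tanh((Z - z1) / a) - np.tanh((Z - z2) / a))    # optional density jump

print(ux.shape, float(uz.max()), float(rho.min()), float(rho.max()))
```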
NASA Astrophysics Data System (ADS)
Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu
2017-06-01
Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet sufficiently understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measure for analysing pixel-based scale; however, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods, MFBM (Modified Windowed Fractal Brownian Motion Based on the Surface Area) and MDBM (Modified Windowed Double Blanket Method), the existing information entropy method for scale effect analysis is used for evaluation. Six sub-regions of building areas and farmland areas were cut out from QuickBird images as the experimental data. The results show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect in remote sensing data and are helpful for analyzing the observation scale from different aspects. This research will ultimately benefit remote sensing data selection and application.
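The information entropy used as the benchmark measure is straightforward to compute; a hedged sketch for an 8-bit image, with simple block averaging to imitate coarser spatial resolution, is shown below. The modified fractal methods themselves are not reproduced, and the synthetic image is an illustrative assumption.

```python
# Hedged sketch: Shannon information entropy of an 8-bit image at several coarsened
# "resolutions" produced by block averaging (illustrative only).
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def coarsen(img, factor):
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512)).astype(float)
for factor in (1, 2, 4, 8, 16):
    print(factor, round(image_entropy(coarsen(img, factor)), 3))
```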
Gust, Nicole; Koglin, Ute; Petermann, Franz
2015-01-01
The present study examines the relation between knowledge of emotion regulation strategies and social behavior in preschoolers. Knowledge of emotion regulation strategies of 210 children (mean age 55 months) was assessed. Teachers rated children's social behavior with the SDQ. Linear regression analysis examined how knowledge of emotion regulation strategies influenced children's social behavior. Significant effects of gender on the SDQ scales "prosocial behavior", "hyperactivity", "behavior problems", and the SDQ total problem scale were identified. Age was a significant predictor of the SDQ scales "prosocial behavior", "hyperactivity", "problems with peers" and the SDQ total problem scale. Knowledge of emotion regulation strategies predicted SDQ total problem scores. Results suggest that deficits in knowledge of emotion regulation strategies are linked with increased problem behavior.
Welch, Brandon; Brinda, FNU
2017-01-01
Background Telemedicine is the use of technology to provide and support health care when distance separates the clinical service and the patient. Home-based telemedicine systems involve the use of such technology for medical support and care, connecting the patient from the comfort of their home with the clinician. In order for such a system to be used extensively, it is necessary to understand not only the issues faced by patients in using it but also those faced by the clinician. Objectives The aim of this study was to conduct a heuristic evaluation of 4 telemedicine software platforms—Doxy.me, Polycom, Vidyo, and VSee—to assess possible problems and limitations that could affect the usability of the system from the clinician’s perspective. Methods Five experts individually evaluated all four systems using Nielsen’s list of heuristics, classifying the issues based on a severity rating scale. Results A total of 46 unique problems were identified by the experts. The heuristics most frequently violated were visibility of system status and error prevention, each accounting for 24% (11/46) of the issues. Esthetic and minimalist design was second, contributing 13% (6/46) of the total errors. Conclusions Heuristic evaluation coupled with a severity rating scale was found to be an effective method for identifying problems with the systems. Prioritization of these problems based on the rating provides a good starting point for resolving the issues affecting these platforms. There is a need for better transparency and a more streamlined approach for how physicians use telemedicine systems. Visibility of the system status and speaking the users’ language are keys for achieving this. PMID:28438724
DESCRIPTION OF THE ENIAC CONVERTER CODE
The report is intended as a working manual for personnel preparing problems for the ENIAC. It should also serve as a guide to those groups who have...computing problems that could be solved on the ENIAC. The report discusses the ENIAC from the point of view of the coder, describing its memory as well...accomplishes as well as how to use each instruction. A few remarks are made on the more general subject of problem preparation for large scale computers in general, based on the experience of operating the ENIAC. (Author)
The Waterfall Model in Large-Scale Development
NASA Astrophysics Data System (ADS)
Petersen, Kai; Wohlin, Claes; Baca, Dejan
Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems include, for example, difficulty in coping with change and defects all too often being detected too late in the software development process. However, many of the problems mentioned in the literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in the literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims to validate or contradict, through empirical research, the beliefs about what the problems are in waterfall development.
SCALE PROBLEMS IN REPORTING LANDSCAPE PATTERN AT THE REGIONAL SCALE
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the g...
ERIC Educational Resources Information Center
van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf
2009-01-01
This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…
Minimax estimation of qubit states with Bures risk
NASA Astrophysics Data System (ADS)
Acharya, Anirudh; Guţă, Mădălin
2018-04-01
The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/\sqrt{n} for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allows us to derive upper and lower bounds to its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n^{-1}\log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the 'standard' rate n^{-1}.
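As a rough illustration of the two-stage local strategy summarized above, the following NumPy sketch estimates a Bloch vector by spending a fixed fraction of the copies on sigma_x, sigma_y, sigma_z measurements to fix the direction, and the remainder on measurements along the estimated direction to fix the length. It is only a simulation sketch under these assumptions; it does not reproduce the paper's estimators, risk bounds, or the collective-measurement scheme.

```python
import numpy as np

def simulate_two_stage_estimate(r_true, n, frac=0.2, rng=None):
    """Two-stage adaptive estimate of a qubit Bloch vector from n copies.

    Stage 1: a fraction `frac` of the copies, split equally among sigma_x,
    sigma_y, sigma_z measurements, estimates the Bloch direction.
    Stage 2: the remaining copies are measured along that direction to
    estimate the Bloch vector length.
    """
    rng = np.random.default_rng() if rng is None else rng
    per_axis = int(frac * n) // 3
    # Stage 1: direction from measurements along the three coordinate axes.
    means = []
    for r_i in r_true:
        p_up = (1.0 + r_i) / 2.0
        outcomes = rng.choice([1.0, -1.0], size=per_axis, p=[p_up, 1.0 - p_up])
        means.append(outcomes.mean())
    direction = np.asarray(means)
    norm = np.linalg.norm(direction)
    direction = direction / norm if norm > 0 else np.array([0.0, 0.0, 1.0])
    # Stage 2: length from measurements in the estimated eigenbasis.
    n2 = n - 3 * per_axis
    p_up = (1.0 + np.clip(r_true @ direction, -1.0, 1.0)) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=n2, p=[p_up, 1.0 - p_up])
    return np.clip(outcomes.mean(), 0.0, 1.0) * direction

# Example: a nearly pure state close to the Bloch sphere boundary.
r_true = 0.999 * np.array([0.0, 0.0, 1.0])
print(simulate_two_stage_estimate(r_true, n=10000))
```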
The Sun at high spatial resolution: The physics of small spatial structures in a magnetized medium
NASA Technical Reports Server (NTRS)
Rosner, R. T.
1986-01-01
An attempt is made to provide a perspective on the problem of spatial structuring on scales smaller than those that can presently be observed directly and regularly from the ground, or that can be anticipated from current space-based instrumentation. There is abundant evidence from both observations and theory that such spatial structuring of the solar outer atmosphere is ubiquitous not only on the observed scales, but also on spatial scales down to (at least) the sub-arcsecond range. This is not to say that the results to be obtained from observations on these small scales can be anticipated: quite the opposite. What is clear instead is that many of the classic problems of coronal and chromospheric activity - involving the basic dissipative nature of magnetized plasmas - will be seen from a novel perspective at these scales, and that there are reasons for believing that dynamical processes of importance to activity on presently-resolved scales will themselves begin to be resolved on the sub-arcsecond level. Since the Sun is the only astrophysical laboratory for which there is any hope of studying these processes in any detail, this observational opportunity is an exciting prospect for any student of magnetic activity in astrophysics.
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment describes a global problem that especially affects the savanna ecosystems of Africa. Livestock is directly affected by shrinking grasslands and the spread of inedible invasive species, which is what defines the process of bush encroachment. For many small-scale farmers in developing countries, livestock represents a type of insurance in times of crop failure or drought. Beyond that, bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information across different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to mapping bush encroachment at the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment information serving as the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working at multiple scales and with the help of field data, the approach shows areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and to identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
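A minimal sketch of the supervised up-scaling step, assuming the expert GPS points have already been linked to per-pixel features from the Landsat/MODIS imagery; all arrays below are random stand-ins for those inputs, and the feature semantics are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for the real inputs: per-pixel features (e.g. seasonal NDVI, band
# reflectances, terrain) sampled at the expert GPS points, plus 0/1 labels.
n_points, n_features = 600, 8
X = rng.normal(size=(n_points, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_points) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)

# Up-scaling step: apply the trained classifier to every pixel of the mosaic
# (here a small random stand-in for the continental feature stack).
rows, cols = 100, 100
mosaic = rng.normal(size=(rows, cols, n_features))
encroachment_prob = clf.predict_proba(mosaic.reshape(-1, n_features))[:, 1].reshape(rows, cols)
```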
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
Large scale systems : a study of computer organizations for air traffic control applications.
DOT National Transportation Integrated Search
1971-06-01
Based on current sizing estimates and tracking algorithms, some computer organizations applicable to future air traffic control computing systems are described and assessed. Hardware and software problem areas are defined and solutions are outlined.
DOT National Transportation Integrated Search
2016-11-28
Intelligent Compaction (IC) is considered to be an innovative technology intended to address some of the problems associated with conventional compaction methods of earthwork (e.g. stiffness-based measurements instead of density-based measurements). I...
Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2014-01-01
Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is on multicriteria decision-making (MCDM) problems in which the criteria are at different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, which is based on the generalized prioritized aggregation operator of HFLNs. Firstly, new operations and a comparison method for HFLNs are provided and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an illustrative example demonstrates the effectiveness and feasibility of the proposed method, which is then compared with the existing approach.
NASA Astrophysics Data System (ADS)
Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang
2016-05-01
For aerospace applications of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, which is among the test conditions closest to a flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative trend of crack propagation, which is mixed into the time-varying influence, can be tracked through the GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the degree of GW-GMM migration and thereby reveal crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.
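A minimal sketch of the mixture-model idea, assuming guided-wave damage-index features have already been extracted: fit one Gaussian mixture to a baseline window and one to the current window, then quantify migration with a Monte Carlo Kullback-Leibler estimate. The paper's best-match KL variant is not reproduced here; the feature arrays are synthetic stand-ins and the sketch uses scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_divergence_mc(gmm_p, gmm_q, n_samples=20000):
    """Monte Carlo estimate of KL(p || q); no closed form exists for two GMMs."""
    x, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x)))

rng = np.random.default_rng(0)
# Stand-ins for guided-wave damage-index features gathered under varying
# load/temperature conditions: a baseline set and a later monitoring window.
baseline_features = rng.normal(size=(500, 2))
current_features = rng.normal(loc=0.3, size=(500, 2))

gmm_baseline = GaussianMixture(n_components=3, covariance_type="full",
                               random_state=0).fit(baseline_features)
gmm_current = GaussianMixture(n_components=3, covariance_type="full",
                              random_state=0).fit(current_features)

# A growing divergence of the current mixture from the baseline mixture is
# the migration indicator used to reveal cumulative crack growth.
print("GMM migration index:", kl_divergence_mc(gmm_current, gmm_baseline))
```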
Anger Expression Types and Interpersonal Problems in Nurses.
Han, Aekyung; Won, Jongsoon; Kim, Oksoo; Lee, Sang E
2015-06-01
The purpose of this study was to investigate the anger expression types in nurses and to analyze the differences between the anger expression types and interpersonal problems. The data were collected from 149 nurses working in general hospitals with 300 beds or more in Seoul or Gyeonggi province, Korea. For anger expression type, the anger expression scale from the Korean State-Trait Anger Expression Inventory was used. For interpersonal problems, the short form of the Korean Inventory of Interpersonal Problems Circumplex Scales was used. Data were analyzed using descriptive statistics, cluster analysis, multivariate analysis of variance, and Duncan's multiple comparisons test. Three anger expression types in nurses were found: low-anger expression, anger-in, and anger-in/control type. From the results of multivariate analysis of variance, there were significant differences between anger expression types and interpersonal problems (Wilks lambda F = 3.52, p < .001). Additionally, anger-in/control type was found to have the most difficulty with interpersonal problems by Duncan's post hoc test (p < .050). Based on this research, the development of an anger expression intervention program for nurses is recommended to establish the means of expressing the suppressed emotions, which would help the nurses experience less interpersonal problems.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on a CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
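The data-layout idea can be sketched on the CPU with SciPy sparse matrices: hold the DDC matrix in COO format, split its columns into four beam-angle sectors, and convert each sector to CSR as it would be pushed to one GPU. This is only a stand-in illustration of the partitioning and of the beamlet-price reduction, not the authors' GPU code; the matrix, angles, and weights are synthetic.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 20000, 4000
ddc = sp.random(n_voxels, n_beamlets, density=0.01, format="coo", random_state=0)

# Hypothetical beam-angle index of every beamlet (column); 180 control points here.
beam_angle = rng.integers(0, 180, size=n_beamlets)

# Split the columns into four angle sectors, one sector per GPU.
sectors = np.array_split(np.argsort(beam_angle), 4)
csr_blocks = [ddc.tocsc()[:, cols].tocsr() for cols in sectors]

# Beamlet-price step (pricing problem): each block contributes a partial
# product with a voxel-space vector; the partial results are gathered as the
# peer-to-peer reduction would do on the multi-GPU platform. Note the prices
# come out in angle-sorted (per-sector) beamlet order.
voxel_weights = rng.normal(size=n_voxels)
prices = np.concatenate([block.T @ voxel_weights for block in csr_blocks])
```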
Preeti, Bajaj; Ashish, Ahuja; Shriram, Gosavi
2013-12-01
As the "Science of Medicine" is getting advanced day-by-day, need for better pedagogies & learning techniques are imperative. Problem Based Learning (PBL) is an effective way of delivering medical education in a coherent, integrated & focused manner. It has several advantages over conventional and age-old teaching methods of routine. It is based on principles of adult learning theory, including student's motivation, encouragement to set goals, think critically about decision making in day-to-day operations. Above all these, it stimulates challenge acceptance and learning curiosity among students and creates pragmatic educational program. To measure the effectiveness of the "Problem Based Learning" as compared to conventional theory/didactic lectures based learning. The study was conducted on 72 medical students from Dayanand Medical College & Hospital, Ludhiana. Two modules of problem based sessions designed and delivered. Pre & Post-test score's scientific statistical analysis was done. Student feed-back received based on questionnaire in the five-point Likert scale format. Significant improvement in overall performance observed. Feedback revealed majority agreement that "Problem-based learning" helped them create interest (88.8 %), better understanding (86%) & promotes self-directed subject learning (91.6 %). Substantial improvement in the post-test scores clearly reveals acceptance of PBL over conventional learning. PBL ensures better practical learning, ability to create interest, subject understanding. It is a modern-day educational strategy, an effective tool to objectively improve the knowledge acquisition in Medical Teaching.
Experimental Replication of an Aeroengine Combustion Instability
NASA Technical Reports Server (NTRS)
Cohen, J. M.; Hibshman, J. R.; Proscia, W.; Rosfjord, T. J.; Wake, B. E.; McVey, J. B.; Lovett, J.; Ondas, M.; DeLaat, J.; Breisacher, K.
2000-01-01
Combustion instabilities in gas turbine engines are most frequently encountered during the late phases of engine development, at which point they are difficult and expensive to fix. The ability to replicate an engine-traceable combustion instability in a laboratory-scale experiment offers the opportunity to economically diagnose the problem (to determine the root cause), and to investigate solutions to the problem, such as active control. The development and validation of active combustion instability control requires that the causal dynamic processes be reproduced in experimental test facilities which can be used as a test bed for control system evaluation. This paper discusses the process through which a laboratory-scale experiment was designed to replicate an instability observed in a developmental engine. The scaling process used physically-based analyses to preserve the relevant geometric, acoustic and thermo-fluid features. The process increases the probability that results achieved in the single-nozzle experiment will be scalable to the engine.
[Job stress and well-being of care providers: development of a standardized survey instrument].
Kivimäki, M; Lindström, K
1992-01-01
The main aim was to develop a standardized survey instrument for measuring job stress and well-being in hospital settings. The actual study group consisted of 349 workers from medical bed wards, first aid unit wards and bed wards for gynecology and obstetrics in a middle-sized hospital in the Helsinki region. Based on the factor analysis of separate questions, the following content areas were chosen for the job stressor scales: haste at work, problems in interpersonal relations at work, problems in occupational collaboration with others, too much responsibility, safety and health risks, lack of appreciation, troublesome patients, and lack of equipment and resources. Content areas for well-being scales and items were general job satisfaction, strain symptoms, perceived mental and physical work load. The reference values of the questionnaire and reliabilities for the scales were calculated. The application and further development of the questionnaire was discussed.
PetIGA: A framework for high-performance isogeometric analysis
Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...
2016-05-25
We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.
ERIC Educational Resources Information Center
Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard
2014-01-01
Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…
Study on the millimeter-wave scale absorber based on the Salisbury screen
NASA Astrophysics Data System (ADS)
Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan
2018-03-01
To address the problem of the millimeter-wave scale absorber, a Salisbury screen absorber is employed and designed on the basis of its reflection loss (RL). By optimizing parameters including the sheet resistance of the surface resistive layer and the permittivity and thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method, in which the Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Meanwhile, plate models and tri-corner reflector (TCR) models are constructed according to the designed result and their scattering properties are simulated with FEKO. The results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. The work in this paper has important theoretical and practical significance for electromagnetic measurement at large scale ratios.
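A transmission-line sketch of how the reflection loss of a Salisbury screen, a resistive sheet over a grounded dielectric spacer, can be swept against frequency at normal incidence, so that the sheet resistance, permittivity and thickness can be tuned toward a target curve. The parameter values below are illustrative and are not those of the paper.

```python
import numpy as np

ETA0 = 376.730313668  # free-space wave impedance, ohms

def salisbury_rl(freq_hz, sheet_resistance, eps_r, thickness_m):
    """Reflection loss (dB) of a resistive sheet over a grounded lossless dielectric."""
    k = 2.0 * np.pi * freq_hz / 3.0e8 * np.sqrt(eps_r)                 # wavenumber in spacer
    z_short = 1j * (ETA0 / np.sqrt(eps_r)) * np.tan(k * thickness_m)   # grounded slab impedance
    z_in = (sheet_resistance * z_short) / (sheet_resistance + z_short) # sheet in parallel
    gamma = (z_in - ETA0) / (z_in + ETA0)                              # reflection coefficient
    return 20.0 * np.log10(np.abs(gamma))

f = np.linspace(26e9, 40e9, 300)   # Ka-band sweep (illustrative)
rl = salisbury_rl(f, sheet_resistance=377.0, eps_r=1.1, thickness_m=2.3e-3)
print("best RL: %.1f dB at %.1f GHz" % (rl.min(), f[np.argmin(rl)] / 1e9))
```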
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
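The least-squares idea can be illustrated with a greedy pixel-flipping sketch: flip a pixel only if doing so reduces the squared error between the filtered binary image and the filtered gray-scale original, with a Gaussian low-pass filter standing in for the eye model and the printer model taken as the identity. This is not the Viterbi solution or the authors' iterative scheme, just a minimal (and slow) illustration of the objective being minimized.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lstsq_halftone(gray, sigma=1.2, sweeps=5):
    """Greedy least-squares halftoning with a Gaussian stand-in eye model."""
    binary = (gray > 0.5).astype(float)
    target = gaussian_filter(gray, sigma)
    for _ in range(sweeps):
        changed = False
        for idx in np.ndindex(gray.shape):
            err_now = np.sum((gaussian_filter(binary, sigma) - target) ** 2)
            binary[idx] = 1.0 - binary[idx]                     # trial flip
            err_flip = np.sum((gaussian_filter(binary, sigma) - target) ** 2)
            if err_flip >= err_now:
                binary[idx] = 1.0 - binary[idx]                 # undo: no improvement
            else:
                changed = True
        if not changed:
            break
    return binary

gray = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # simple gray ramp test image
print(lstsq_halftone(gray).mean())
```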
K-State Problem Identification Rating Scales for College Students
ERIC Educational Resources Information Center
Robertson, John M.; Benton, Stephen L.; Newton, Fred B.; Downey, Ronald G.; Marsh, Patricia A.; Benton, Sheryl A.; Tseng, Wen-Chih; Shin, Kang-Hyun
2006-01-01
The K-State Problem Identification Rating Scales, a new screening instrument for college counseling centers, gathers information about clients' presenting symptoms, functioning levels, and readiness to change. Three studies revealed 7 scales: Mood Difficulties, Learning Problems, Food Concerns, Interpersonal Conflicts, Career Uncertainties,…
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, problems remain in handling scale variations, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to recover the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.
CABINS: Case-based interactive scheduler
NASA Technical Reports Server (NTRS)
Miyashita, Kazuo; Sycara, Katia
1992-01-01
In this paper we discuss the need for interactive factory schedule repair and improvement, and we identify case-based reasoning (CBR) as an appropriate methodology. Case-based reasoning is the problem solving paradigm that relies on a memory for past problem solving experiences (cases) to guide current problem solving. Cases similar to the current case are retrieved from the case memory, and similarities and differences of the current case to past cases are identified. Then a best case is selected, and its repair plan is adapted to fit the current problem description. If a repair solution fails, an explanation for the failure is stored along with the case in memory, so that the user can avoid repeating similar failures in the future. So far we have identified a number of repair strategies and tactics for factory scheduling and have implemented a part of our approach in a prototype system, called CABINS. As a future work, we are going to scale up CABINS to evaluate its usefulness in a real manufacturing environment.
NASA Astrophysics Data System (ADS)
Popov, K. I.; Kovaleva, N. E.; Rudakova, G. Ya.; Kombarova, S. P.; Larchenko, V. E.
2016-02-01
Scale formation is a challenge worldwide. At present, scale inhibitors represent the best solution to this problem. The polyaminocarboxylic acids were the first to be successfully applied in the field, although their efficacy was rather low. The next generation was developed on the basis of polyphosphonic acids, whose main disadvantage is their low level of biodegradation. Polyacrylate-based, phosphorus-free inhibitors proposed as an alternative to phosphonates also had low biodegradability. Thus, the main trend of recent R&D is the development of a new generation of environmentally friendly, biodegradable scale inhibitors. The current state of the world and domestic scale-inhibitor markets is reviewed, and the main industrial inhibitor manufacturers and marketed substances, as well as the general trends of R&D in the field, are characterized. It is demonstrated that most research is focused on biodegradable polymers and on phosphonates with low phosphorus content, as well as on the incorporation of biodegradable fragments into polyacrylate matrices to enhance biodegradability. The problem of the comparability of research results is noted, along with the quality of domestically produced inhibitors and gaps in the understanding of the scale inhibition mechanism. The relevance of incorporating a fluorescent indicator fragment into the scale inhibitor molecule for better monitoring of the reagent in a cooling water system is especially emphasized.
Sourander, Andre; McGrath, Patrick J; Ristkari, Terja; Cunningham, Charles; Huttunen, Jukka; Lingley-Pottie, Patricia; Hinkka-Yli-Salomäki, Susanna; Kinnunen, Malin; Vuorio, Jenni; Sinokki, Atte; Fossum, Sturla; Unruh, Anita
2016-04-01
There is a large gap worldwide in the provision of evidence-based early treatment of children with disruptive behavioral problems. To determine whether an Internet-assisted intervention using whole-population screening that targets the most symptomatic 4-year-old children is effective at 6 and 12 months after the start of treatment. This 2-parallel-group randomized clinical trial was performed from October 1, 2011, through November 30, 2013, at a primary health care clinic in Southwest Finland. Data analysis was performed from August 6, 2015, to December 11, 2015. Of a screened population of 4656 children, 730 met the screening criteria indicating a high level of disruptive behavioral problems. A total of 464 parents of 4-year-old children were randomized into the Strongest Families Smart Website (SFSW) intervention group (n = 232) or an education control (EC) group (n = 232). The SFSW intervention, an 11-session Internet-assisted parent training program that included weekly telephone coaching. Child Behavior Checklist version for preschool children (CBCL/1.5-5) externalizing scale (primary outcome), other CBCL/1.5-5 scales and subscores, Parenting Scale, Inventory of Callous-Unemotional Traits, and the 21-item Depression, Anxiety, and Stress Scale. All data were analyzed by intention to treat and per protocol. The assessments were made before randomization and 6 and 12 months after randomization. Of the children randomized, 287 (61.9%) were male and 79 (17.1%) lived in other than a family with 2 biological parents. At 12-month follow-up, improvement in the SFSW intervention group was significantly greater compared with the control group on the following measures: CBCL/1.5-5 externalizing scale (effect size, 0.34; P < .001), internalizing scale (effect size, 0.35; P < .001), and total scores (effect size, 0.37; P < .001); 5 of 7 syndrome scales, including aggression (effect size, 0.36; P < .001), sleep (effect size, 0.24; P = .002), withdrawal (effect size, 0.25; P = .005), anxiety (effect size, 0.26; P = .003), and emotional problems (effect size, 0.31; P = .001); Inventory of Callous-Unemotional Traits callousness scores (effect size, 0.19; P = .03); and self-reported parenting skills (effect size, 0.53; P < .001). The study reveals the effectiveness and feasibility of an Internet-assisted parent training intervention offered for parents of preschool children with disruptive behavioral problems screened from the whole population. The strategy of population-based screening of children at an early age to offering parent training using digital technology and telephone coaching is a promising public health strategy for providing early intervention for a variety of child mental health problems. clinicaltrials.gov Identifier: NCT01750996.
NASA Astrophysics Data System (ADS)
Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli
2017-05-01
This research has several aims: to determine whether the mathematical problem solving ability of students taught with a Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; to determine the improvement in mathematical problem solving ability of students taught with the Multiple Intelligences based teaching model; to determine the improvement in mathematical problem solving ability of students taught with cooperative learning; and to determine students' attitudes toward the Multiple Intelligences based teaching model. The method employed is a quasi-experiment controlled by pre-test and post-test. The population of this research is all of grade VII in SMP Negeri 14 Bandung in the even term of 2013/2014, from which two classes were taken as samples. One class was taught using the Multiple Intelligences based teaching model and the other was taught using cooperative learning. The data were obtained from a test of mathematical problem solving, an attitude scale questionnaire, and observation. The results show that the mathematical problem solving ability of students taught with the Multiple Intelligences based teaching model is higher than that of students taught with cooperative learning; the mathematical problem solving abilities of both groups are at an intermediate level; and the students showed a positive attitude toward learning mathematics with the Multiple Intelligences based teaching model. As a recommendation for future authors, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.
Triplet supertree heuristics for the tree of life
Lin, Harris T; Burleigh, J Gordon; Eulenstein, Oliver
2009-01-01
Background There is much interest in developing fast and accurate supertree methods to infer the tree of life. Supertree methods combine smaller input trees with overlapping sets of taxa to make a comprehensive phylogenetic tree that contains all of the taxa in the input trees. The intrinsically hard triplet supertree problem takes a collection of input species trees and seeks a species tree (supertree) that maximizes the number of triplet subtrees that it shares with the input trees. However, the utility of this supertree problem has been limited by a lack of efficient and effective heuristics. Results We introduce fast hill-climbing heuristics for the triplet supertree problem that perform a step-wise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. To realize time efficient heuristics we designed the first nontrivial algorithms for two standard search problems, which greatly improve on the time complexity of the best known (naïve) solutions by factors of n and n^2, respectively, where n is the number of taxa in the supertree. These algorithms enable large-scale supertree analyses based on the triplet supertree problem that were previously not possible. We implemented hill-climbing heuristics that are based on our new algorithms, and in analyses of two published supertree data sets, we demonstrate that our new heuristics outperform other standard supertree methods in maximizing the number of triplets shared with the input trees. Conclusion With our new heuristics, the triplet supertree problem is now computationally more tractable for large-scale supertree analyses, and it provides a potentially more accurate alternative to existing supertree methods. PMID:19208181
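The objective being maximized can be made concrete with a brute-force sketch that counts the rooted triplets resolved identically by a candidate supertree and an input tree. Trees are written as nested tuples and the counting is O(n^3), so this is only illustrative, not the paper's algorithms.

```python
from itertools import combinations

def clusters(tree):
    """Leaf clusters (frozensets) of a rooted tree given as nested tuples."""
    out = []
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
        else:
            leaves = frozenset([node])
        out.append(leaves)
        return leaves
    walk(tree)
    return set(out)

def triplet_topology(cl, a, b, c):
    """Resolved rooted triplet induced on {a, b, c} (pair grouped apart from the third), or None."""
    for x, y, z in ((a, b, c), (a, c, b), (b, c, a)):
        if any({x, y} <= s and z not in s for s in cl):
            return frozenset([x, y]), z
    return None

def shared_triplets(supertree, input_tree):
    """Count rooted triplets resolved identically in both trees (brute force)."""
    cl_s, cl_i = clusters(supertree), clusters(input_tree)
    taxa = sorted(set.union(*map(set, cl_i)) & set.union(*map(set, cl_s)))
    count = 0
    for a, b, c in combinations(taxa, 3):
        t = triplet_topology(cl_i, a, b, c)
        if t is not None and t == triplet_topology(cl_s, a, b, c):
            count += 1
    return count

supertree = ((('A', 'B'), 'C'), ('D', 'E'))
input_tree = (('A', 'B'), ('C', 'D'))
print(shared_triplets(supertree, input_tree))   # 2 shared triplets: AB|C and AB|D
```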
The closure problem for turbulence in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1985-01-01
The dependent variables used for computer-based meteorological predictions and in plans for oceanographic predictions are wave number and frequency filtered values that retain only scales resolvable by the model. Scales unresolvable by the grid in use become 'turbulence'. Whether or not properly processed data are used for initial values is important, especially for sparse data. Fickian diffusion with a constant eddy diffusivity is used as a closure for many of the present models. A physically realistic closure based on more modern turbulence concepts, especially one with a reverse cascade at the right times and places, could help improve predictions.
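For reference, the Fickian closure mentioned above models the unresolved turbulent flux of a scalar with a constant eddy diffusivity K acting on the resolved gradient, so that the resolved scalar obeys a simple diffusion equation:

```latex
% Gradient-diffusion (Fickian) closure with constant eddy diffusivity K
\overline{u_i' c'} \;=\; -K \,\frac{\partial \bar{c}}{\partial x_i},
\qquad
\frac{\partial \bar{c}}{\partial t} + \bar{u}_j \,\frac{\partial \bar{c}}{\partial x_j}
\;=\; K \, \nabla^2 \bar{c} .
```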
Study of alumina-trichite reinforcement of a nickel-based matrix by means of powder metallurgy
NASA Technical Reports Server (NTRS)
Walder, A.; Hivert, A.
1982-01-01
Research was conducted on reinforcing nickel based matrices with alumina trichites by using powder metallurgy. Alumina trichites previously coated with nickel are magnetically aligned. The felt obtained is then sintered under a light pressure at a temperature just below the melting point of nickel. The halogenated atmosphere technique makes it possible to incorporate a large number of additive elements such as chromium, titanium, zirconium, tantalum, niobium, aluminum, etc. It does not appear that going from laboratory scale to a semi-industrial scale in production would create any major problems.
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
A tilted cold dark matter cosmological scenario
NASA Technical Reports Server (NTRS)
Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.
1992-01-01
A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.
NASA Astrophysics Data System (ADS)
Faug, Thierry
2017-04-01
The Rankine-Hugoniot jump conditions traditionally describe the theoretical relationship between the equilibrium states on either side of a shock-wave. They are based on the crucial assumption that the length-scale needed to adjust the equilibrium state upstream of the shock to that downstream of it is too small to be of significance to the problem. They are often used with success to describe shock-waves in a number of applications found in both fluid and solid mechanics. However, relations based on jump conditions at singular surfaces may fail to capture some features of the shock-waves formed in complex materials, such as granular matter. This study addresses the particular problem of compressible shock-waves formed in flows of dry granular materials down a slope. This problem is, for instance, relevant to full-scale geophysical granular flows interacting with natural obstacles or man-made structures, such as topographical obstacles or mitigation dams respectively. Steady-state jumps formed in granular flows and travelling shock-waves produced at the impact of a granular avalanche-flow with a rigid wall are considered. For both situations, new analytical relations which do not assume that the granular shock-wave shrinks to a singular surface are derived, by using depth-averaged balance equations for mass and momentum. However, these relations require additional inputs: closure relations for the size and shape of the shock-wave and a relevant constitutive friction law. Small-scale laboratory tests and numerical simulations based on the discrete element method are briefly presented and used to infer the crucial information needed for the closure relations. This allows testing some predictive aspects of the simple analytical approach proposed for both steady-state and travelling shock-waves formed in free-surface flows of dry granular materials down a slope.
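For orientation, the classical zero-width (singular-surface) jump relations for a depth-averaged granular bore on a slope, which the finite-size relations described above generalize, can be written as follows; h is the flow depth, u the depth-averaged velocity, s the bore speed (s = 0 for steady jumps), zeta the slope angle, and subscripts 1 and 2 denote the upstream and downstream states. The notation is ours, not the paper's.

```latex
% Classical depth-averaged jump conditions across a zero-width granular bore
% travelling with speed s; g\cos\zeta is the slope-normal gravity component.
h_1 (u_1 - s) = h_2 (u_2 - s),
\qquad
h_1 (u_1 - s)^2 + \tfrac{1}{2}\, g \cos\zeta \, h_1^2
  = h_2 (u_2 - s)^2 + \tfrac{1}{2}\, g \cos\zeta \, h_2^2 .
```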
NASA Astrophysics Data System (ADS)
Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao
2018-04-01
Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fracture, meso-scale fracture, macro-scale fracture and cave. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain the results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach can well explain the spatial distribution of carbonate reservoirs. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.
Information Filtering via a Scaling-Based Function
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of the recommendation list length, based on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be obtained from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold start problem. PMID:23696829
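The underlying hybrid score on the user-object bipartite network can be sketched as follows, with a tunable parameter lam interpolating between heat conduction (lam = 0) and mass diffusion (lam = 1); SCL's contribution is to pick this parameter per object from a scaling function of the object average degree, which is not reproduced here. The adjacency matrix below is a synthetic stand-in.

```python
import numpy as np

def hybrid_scores(A, user, lam=0.5):
    """Hybrid heat-conduction/mass-diffusion recommendation scores.

    A    : binary user-object adjacency matrix, shape (n_users, n_objects)
    user : index of the target user
    lam  : hybridization parameter (0 -> heat conduction, 1 -> mass diffusion)
    """
    k_obj = np.maximum(A.sum(axis=0), 1.0)    # object degrees
    k_usr = np.maximum(A.sum(axis=1), 1.0)    # user degrees
    # W[b, a] = k_b^{lam-1} k_a^{-lam} * sum_j A[j,b] A[j,a] / k_j
    W = (A / k_usr[:, None]).T @ A
    W = W * (k_obj[:, None] ** (lam - 1.0)) * (k_obj[None, :] ** (-lam))
    scores = W @ A[user]
    scores[A[user] > 0] = -np.inf             # do not re-recommend collected objects
    return scores

rng = np.random.default_rng(0)
A = (rng.random((200, 50)) < 0.1).astype(float)   # toy user-object network
print(np.argsort(hybrid_scores(A, user=0))[::-1][:5])   # top-5 recommended objects
```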
An efficient flexible-order model for 3D nonlinear water waves
NASA Astrophysics Data System (ADS)
Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.
2009-04-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation are presented together with demonstrations of grid independent iteration count and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems for steep nonlinear waves and a shoaling problem which show good agreement with experimental measurements and other calculations from the literature.
Hierarchical coarse-graining transform.
Pancaldi, Vera; King, Peter R; Christensen, Kim
2009-03-01
We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.
Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling
NASA Astrophysics Data System (ADS)
Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.
2014-12-01
Microtomography provides detailed 3D internal structures of rocks at micro- to tens-of-nanometer resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties, since imaging at micron or sub-micron resolution can only be performed on samples of a millimeter or less in size. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Properties are first computed using conventional material science simulations at the micro- to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big-data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from a research problem into a robust computational big-data tool for multi-scale scientific and engineering problems.
Villar, Oscar Armando Esparza-Del; Montañez-Alvarado, Priscila; Gutiérrez-Vega, Marisela; Carrillo-Saucedo, Irene Concepción; Gurrola-Peña, Gloria Margarita; Ruvalcaba-Romero, Norma Alicia; García-Sánchez, María Dolores; Ochoa-Alcaraz, Sergio Gabriel
2017-03-01
Mexico is one of the countries with the highest rates of overweight and obesity around the world, with 68.8% of men and 73% of women being overweight or obese. This is a public health problem, since there are several health-related consequences of not exercising, such as cardiovascular diseases and some types of cancer. These problems can be prevented by promoting exercise, so it is important to evaluate models of health behaviors to achieve this goal. Among several models, the Health Belief Model is one of the most studied for promoting health-related behaviors. This study validates the first exercise scale based on the Health Belief Model (HBM) in Mexicans, with the objective of studying and analyzing this model in Mexico. Items for the scale, called the Exercise Health Belief Model Scale (EHBMS), were developed by a health research team and then administered to a sample of 746 participants, male and female, from five cities in Mexico. The factor structure of the items was analyzed with an exploratory factor analysis and the internal reliability with Cronbach's alpha. The exploratory factor analysis reported the expected factor structure based on the HBM. The KMO index (0.92) and Bartlett's sphericity test (p < 0.01) indicated an adequate and normally distributed sample. Items had adequate factor loadings, ranging from 0.31 to 0.92, and the internal consistencies of the factors were also acceptable, with alpha values ranging from 0.67 to 0.91. The EHBMS is a validated scale that can be used to measure exercise based on the HBM in Mexican populations.
Almuneef, Maha A; Qayad, Mohamed; Noor, Ismail K; Al-Eissa, Majid A; Albuhairan, Fadia S; Inam, Sarah; Mikton, Christopher
2014-03-01
There has been increased awareness of child maltreatment in Saudi Arabia recently. This study assessed the readiness for implementing large-scale evidence-based child maltreatment prevention programs in Saudi Arabia. Key informants, who were key decision makers and senior managers in the field of child maltreatment, were invited to participate in the study. A multidimensional tool, developed by WHO and collaborators from several middle- and low-income countries, was used to assess 10 dimensions of readiness. A group of experts also gave an objective assessment of the 10 dimensions, and key informants' and experts' scores were compared. On a scale of 100, the key informants gave a readiness score of 43% for Saudi Arabia to implement large-scale, evidence-based CM prevention programs, and experts gave an overall readiness score of 40%. Both the key informants and experts agreed that four of the dimensions (attitudes toward child maltreatment prevention, institutional links and resources, material resources, and human and technical resources) had low readiness scores (<5), and that three dimensions (knowledge of child maltreatment prevention, scientific data on child maltreatment prevention, and will to address the child maltreatment problem) had high readiness scores (≥5). There was significant disagreement between key informants and experts on the remaining three dimensions. Overall, Saudi Arabia has a moderate/fair readiness to implement large-scale child maltreatment prevention programs. Capacity building; strengthening of material resources; and improving institutional links, collaborations, and attitudes toward the child maltreatment problem are required to improve the country's readiness to implement such programs. Copyright © 2013 Elsevier Ltd. All rights reserved.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
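A minimal software sketch (in Python/NumPy, not HDL) of the separability trick the hardware design exploits: a 2-D Gaussian convolution computed as a row-wise 1-D pass followed by a column-wise 1-D pass. The kernel radius, scales, and test image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def separable_gaussian(image: np.ndarray, sigma: float) -> np.ndarray:
    """2-D Gaussian smoothing as two 1-D convolutions (row pass, then column pass)."""
    radius = int(3 * sigma)
    k = gaussian_kernel_1d(sigma, radius)
    tmp = convolve1d(image, k, axis=1, mode='nearest')   # horizontal pass
    return convolve1d(tmp, k, axis=0, mode='nearest')    # vertical pass

# Three scales, analogous to the demonstrated three-scale filter bank
image = np.random.rand(256, 256)
pyramid = [separable_gaussian(image, s) for s in (1.0, 2.0, 4.0)]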
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should reduce the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented, and used to provide new insight into radiation belt dynamics.
An Effective Evolutionary Approach for Bicriteria Shortest Path Routing Problems
NASA Astrophysics Data System (ADS)
Lin, Lin; Gen, Mitsuo
Routing is one of the important research issues in the communication network field. In this paper, we consider a bicriteria shortest path routing (bSPR) model dedicated to calculating nondominated paths for (1) the minimum total cost and (2) the minimum transmission delay. To solve this bSPR problem, we propose a new multiobjective genetic algorithm (moGA) with: (1) an efficient chromosome representation using the priority-based encoding method; (2) a new operator for auto-tuning the GA parameters, which adaptively regulates exploration and exploitation based on the change in the average fitness of parents and offspring at each generation; and (3) an interactive adaptive-weight fitness assignment mechanism that assigns weights to each objective and combines the weighted objectives into a single objective function. Numerical experiments with network design problems of various scales show the effectiveness and efficiency of our approach in comparison with recent research.
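As a sketch of one common adaptive-weight fitness assignment for two minimization objectives (the paper's exact weighting and auto-tuning rules may differ), the following Python fragment recomputes the weights each generation from the extreme objective values in the current population.

```python
def adaptive_weight_fitness(costs, delays):
    """Adaptive-weight fitness for two minimization objectives (cost, delay).

    Weights are recomputed each generation from the extreme objective values
    in the current population; larger fitness means a better candidate path.
    """
    zmax_c, zmin_c = max(costs), min(costs)
    zmax_d, zmin_d = max(delays), min(delays)
    w_c = 1.0 / (zmax_c - zmin_c) if zmax_c > zmin_c else 0.0
    w_d = 1.0 / (zmax_d - zmin_d) if zmax_d > zmin_d else 0.0
    # Weighted distance from the worst point in each objective
    return [w_c * (zmax_c - c) + w_d * (zmax_d - d) for c, d in zip(costs, delays)]

# Illustrative population of four candidate paths
print(adaptive_weight_fitness(costs=[10, 14, 9, 12], delays=[30, 22, 35, 25]))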
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
NASA Technical Reports Server (NTRS)
Baluja, Shumeet
1995-01-01
This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem
Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...
2015-10-02
The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; ...
2017-07-20
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.
Simor, Péter; Zavecz, Zsófia; Pálosi, Vivien; Török, Csenge; Köteles, Ferenc
2015-02-01
A great body of research indicates that eveningness is associated with negative psychological outcomes, including depressive and anxiety symptoms, behavioral dyscontrol and different health impairing behaviors. Impaired subjective sleep quality, increased circadian misalignment and daytime sleepiness were also reported in evening-type individuals in comparison with morning-types. Although sleep problems were consistently reported to be associated with poor psychological functioning, the effects of sleep disruption on the relationship between eveningness preference and negative emotionality have scarcely been investigated. Here, based on questionnaire data of 756 individuals (25.5% males, age range = 18-43 years, mean = 25.3 ± 5.8 years), as well as of the evening-type (N = 211) and morning-type (N = 189) subgroups, we examined the relationship among sleep problems, eveningness and negative emotionality. Subjects completed the Hungarian version of the Horne and Östberg Morningness-Eveningness Questionnaire (MEQ-14), the Athens Insomnia Scale (AIS) and the Epworth Sleepiness Scale (ESS). Moreover, a composite score of Negative Emotionality (NE) was computed based on the scores of the Short Beck Depression Inventory (BDI-9), the Perceived Stress Scale (PSS-4) and the General Health Questionnaire (GHQ-12). Morning and evening circadian misalignment was calculated based on the difference between preferred and real wake- and bedtimes. Two possible models were tested, hypothesizing that sleep problems (circadian misalignment, insomniac symptoms and daytime sleepiness) moderate or mediate the association between eveningness and negative emotionality. Eveningness preference was correlated with increased NE and increased AIS, ESS and circadian misalignment scores. Our results indicate that eveningness preference is an independent risk factor for higher negative emotionality regardless of the effects of age, gender, circadian misalignment and sleep complaints. Nevertheless, while chronotype explained ∼6%, sleep problems (AIS and ESS) accounted for a much larger proportion (∼28%) of the variance of NE. We did not find a significant effect of interaction (moderation) between chronotype and sleep problems. In contrast, insomniac symptoms (AIS) emerged as a partial mediator between chronotype and NE. These findings argue against the assumption that indicators of mental health problems in evening-type individuals can be explained exclusively on the basis of disturbed sleep. Nevertheless, negative psychological outcomes seem to be partially attributable to increased severity of insomniac complaints in evening-types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popolo, A. Del; Delliou, M. Le, E-mail: adelpopolo@oact.inaf.it, E-mail: delliou@ift.unesp.br
2014-12-01
We continue the study of the impact of baryon physics on the small scale problems of the ΛCDM model, based on a semi-analytical model (Del Popolo, 2009). With such model, we show how the cusp/core, missing satellite (MSP), Too Big to Fail (TBTF) problems and the angular momentum catastrophe can be reconciled with observations, adding parent-satellite interaction. Such interaction between dark matter (DM) and baryons through dynamical friction (DF) can sufficiently flatten the inner cusp of the density profiles to solve the cusp/core problem. Combining, in our model, a Zolotov et al. (2012)-like correction, similarly to Brooks et al. (2013), and effects of UV heating and tidal stripping, the number of massive, luminous satellites, as seen in the Via Lactea 2 (VL2) subhaloes, is in agreement with the numbers observed in the MW, thus resolving the MSP and TBTF problems. The model also produces a distribution of the angular spin parameter and angular momentum in agreement with observations of the dwarfs studied by van den Bosch, Burkert, and Swaters (2001).
AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT: A GIS-BASED HYDROLOGIC MODELING TOOL
Planning and assessment in land and water resource management are evolving toward complex, spatially explicit regional assessments. These problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and temporal scales. The extens...
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Stephen M
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of using SCALE/KENO-VI for criticality analyses; the SCALE/KENO-VI manual provides information on the use of SCALE/KENO-VI and all its modules. The primer also contains an appendix with sample input files.
Undergraduate medical student's perceptions on traditional and problem based curricula: pilot study.
Meo, Sultan Ayoub
2014-07-01
To evaluate and compare students' perceptions about teaching and learning, knowledge and skills, outcomes of course materials and their satisfaction in traditional lecture-based learning versus problem-based learning curricula in two different medical schools. The comparative cross-sectional questionnaire-based study was conducted in the Department of Physiology, College of Medicine, King Saud University, Riyadh, Saudi Arabia, from July 2009 to January 2011. Two different undergraduate medical schools were selected; one followed the traditional curriculum, while the other followed the problem-based learning curriculum. Two equal groups of first-year medical students were selected. They were taught respiratory physiology and the lung function lab according to their curriculum for a period of two weeks. At the completion of the study period, a five-point Likert scale was used to assess students' perceptions of satisfaction, academic environment, teaching and learning, knowledge and skills and outcomes of course materials regarding the effectiveness of problem-based learning compared to traditional methods. SPSS 19 was used for statistical analysis. Students following the problem-based learning curriculum obtained marginally higher perception scores (24.10 +/- 3.63) than those following the traditional curriculum (22.67 +/- 3.74). However, the difference in perceptions did not reach statistical significance. Students following the problem-based learning curriculum had more positive perceptions of teaching and learning, knowledge and skills, outcomes of their course materials and satisfaction than students belonging to the traditional style of medical school. However, the difference between the two groups was not statistically significant.
Signature detection and matching for document image retrieval.
Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2009-11-01
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
Toward information management in corporations (2)
NASA Astrophysics Data System (ADS)
Shibata, Mitsuru
If the construction of in-house information management systems in an advanced information society is to be positioned alongside societal information management, the groundwork begins with a review of current paper filing systems. Since the problems inherent in in-house information management systems using OA equipment are also inherent in paper filing systems, the first step toward full-scale in-house information management should be to identify and solve the fundamental problems in current filing systems. This paper describes an analysis of fundamental problems in filing systems, the creation of new types of offices, an analysis of improvement needs in filing systems, and some points to consider in improving filing systems.
Portable parallel portfolio optimization in the Aurora Financial Management System
NASA Astrophysics Data System (ADS)
Laure, Erwin; Moritsch, Hans
2001-07-01
Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we chose the programming language Java for our implementation and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.
Dimensions of problem gambling behavior associated with purchasing sports lottery.
Li, Hai; Mao, Luke Lunhua; Zhang, James J; Wu, Yin; Li, Anmin; Chen, Jing
2012-03-01
The purpose of this study was to identify and examine the dimensions of problem gambling behaviors associated with purchasing sports lottery in China. This was accomplished through the development and validation of the Scale of Assessing Problem Gambling (SAPG). The SAPG was initially developed through a comprehensive qualitative research process. Research participants (N = 4,982) were Chinese residents who had purchased sports lottery tickets and responded to a survey packet, representing a response rate of 91.4%. Data were split into two halves, one for conducting an EFA and the other for a CFA. A five-factor model with 19 items (Social Consequence, Financial Consequence, Harmful Behavior, Compulsive Disorder, and Depression Sign) showed good measurement properties for assessing problem gambling among sports lottery consumers in China, including good fit to the data (RMSEA = 0.050, TLI = 0.978, and CFI = 0.922), convergent and discriminant validity, and reliability. Regression analyses revealed that, except for Depression Sign, the SAPG factors were significantly (P < 0.05) predictive of purchase behaviors of sports lottery. This study represents an initial effort to understand the dimensions of problem gambling associated with Chinese sports lottery. The developed scale may be adopted by researchers and practitioners to examine problem gambling behaviors and develop effective prevention and intervention procedures based on tangible evidence.
NASA Astrophysics Data System (ADS)
Kirchhoff, C.; Dilling, L.
2011-12-01
Water managers have long experienced the challenges of managing water resources in a variable climate. However, climate change has the potential to reshape the experiential landscape by, for example, increasing the intensity and duration of droughts, shifting precipitation timing and amounts, and changing sea levels. Given the uncertainty in evaluating potential climate risks as well as future water availability and water demands, scholars suggest water managers employ more flexible and adaptive science-based management to manage uncertainty (NRC 2009). While such an approach is appropriate, for adaptive science-based management to be effective both governance and information must be concordant across three measures: fit, interplay and scale (Young 2002)(Note 1). Our research relies on interviews of state water managers and related experts (n=50) and documentary analysis in five U.S. states to understand the drivers and constraints to improving water resource planning and decision-making in a changing climate using an assessment of fit, interplay and scale as an evaluative framework. We apply this framework to assess and compare how water managers plan and respond to current or anticipated water resource challenges within each state. We hypothesize that better alignment between the data and management framework and the water resource problem improves water managers' facility to understand (via available, relevant, timely information) and respond appropriately (through institutional response mechanisms). In addition, better alignment between governance mechanisms (between the scope of the problem and identified appropriate responses) improves water management. Moreover, because many of the management challenges analyzed in this study concern present day issues with scarcity brought on by a combination of growth and drought, better alignment of fit, interplay, and scale today will enable and prepare water managers to be more successful in adapting to climate change impacts in the long-term. Note 1: For the purposes of this research, the problem of fit deals with the level of concordance between the natural and human systems while interplay involves how institutional arrangements interact both horizontally and vertically. Lastly, scale considers both spatial and temporal alignment of the physical systems and management structure. For example, to manage water resources effectively in a changing climate suggests having information that informs short-term and long-term changes and having institutional arrangements that seek understanding across temporal scales and facilitate responses based on information available (Young 2002).
Full-scale Dynamic Testing of Soft-Story Retrofitted and Un-Retrofitted Woodframe Buildings
John W. van de Lindt; George T. Abell; Pouria Bahmani; Mikhail Gershfeld; Xiaoyun Shao; Weichiang Pang; Michael D. Symans; Ershad Ziaei; Steven E. Pryor; Douglas Rammer; Jingjing Tian
2013-01-01
The existence of thousands of soft-story woodframe buildings in California has been recognized as a disaster preparedness problem with concerted mitigation efforts underway in many cities throughout the state. The vast majority of those efforts are based on numerical modeling, often with half-century old data in which assumptions have to be made based on best...
ERIC Educational Resources Information Center
Mayr, Toni; Ulich, Michaela
2009-01-01
Compared with the traditional focus on developmental problems, research on positive development is relatively new. Empirical research in children's well-being has been scarce. The aim of this study was to develop a theoretically and empirically based instrument for practitioners to observe and assess preschool children's well-being in early…
McConaughy, Stephanie H; Ivanova, Masha Y; Antshel, Kevin; Eiraldi, Ricardo B; Dumenci, Levent
2009-07-01
Trained classroom observers used the Direct Observation Form (DOF; McConaughy & Achenbach, 2009) to rate observations of 163 6- to 11-year-old children in their school classrooms. Participants were assigned to four groups based on a parent diagnostic interview and parent and teacher rating scales: Attention Deficit Hyperactivity Disorder (ADHD)-Combined type (n = 64); ADHD-Inattentive type (n = 22); clinically referred without ADHD (n = 51); and nonreferred control children (n = 26). The ADHD-Combined group scored significantly higher than the referred without ADHD group and controls on the DOF Intrusive and Oppositional syndromes, Attention Deficit Hyperactivity Problems scale, Hyperactivity-Impulsivity subscale, and Total Problems; and significantly lower on the DOF On-Task score. The ADHD-Inattentive group scored significantly higher than controls on the DOF Sluggish Cognitive Tempo and Attention Problems syndromes, Inattention subscale, and Total Problems; and significantly lower on the DOF On-Task score. Implications are discussed regarding the discriminative validity of standardized classroom observations for identifying children with ADHD and differentiating between the two ADHD subtypes.
NASA Astrophysics Data System (ADS)
Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng
2018-04-01
One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test. That is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled. That is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time --- one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performance of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
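A toy Python illustration of option 2, Hamming-distance-based diagnosis: an incomplete and possibly corrupted test result vector is matched against a small test dictionary. The dictionary, vector sizes, and handling of missing results are illustrative assumptions, not the TEAMS-RT implementation.

```python
import numpy as np

def hamming_diagnosis(observed, dictionary):
    """Match an inaccurate/incomplete test result vector to the closest
    dictionary row by Hamming distance, ignoring missing results (None).

    observed   : list of 0/1/None, length = number of tests
    dictionary : (n_faults, n_tests) array of expected 0/1 test outcomes
    returns    : index of the best-matching fault signature
    """
    obs = np.array([-1 if v is None else v for v in observed])
    available = obs >= 0                        # only a fraction of tests may have reported
    distances = (dictionary[:, available] != obs[available]).sum(axis=1)
    return int(np.argmin(distances))

# Toy dictionary: 3 fault signatures over 6 tests; two results missing, one corrupted
D = np.array([[1, 0, 1, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]])
print(hamming_diagnosis([1, 0, 0, 1, None, None], D))   # -> 0 despite the flipped third bit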
Adding intelligence to scientific data management
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas M., Jr.; Treinish, Lloyd A.
1989-01-01
NASA's plans to solve some of the problems of handling large-scale scientific databases by turning to artificial intelligence (AI) are discussed. The growth of the information glut and the ways that AI can help alleviate the resulting problems are reviewed. The employment of the Intelligent User Interface prototype, in which the user generates a natural language query with the assistance of the system, is examined. Spatial data management, scientific data visualization, and data fusion are discussed.
Robust large-scale parallel nonlinear solvers for simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
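A minimal Python sketch of Broyden's "good" method, the Jacobian-free quasi-Newton model discussed above; the test system, starting point, and stopping rule are illustrative, and the report's limited-memory, globalized variant is considerably more involved.

```python
import numpy as np

def broyden(F, x0, max_iter=50, tol=1e-10):
    """Broyden's 'good' method: solve F(x) = 0 without forming the true Jacobian.

    The Jacobian approximation B is updated from secant information only,
    so F never needs analytic or finite-difference derivatives.
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                          # initial Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)             # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        y = f_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, fx = x_new, f_new
    return x

# Mildly nonlinear test system: x0 + 0.25*x1^2 = 1, x1 + 0.25*x0^2 = 1
F = lambda x: np.array([x[0] + 0.25 * x[1]**2 - 1.0, x[1] + 0.25 * x[0]**2 - 1.0])
print(broyden(F, [0.0, 0.0]))                   # converges to approx [0.8284, 0.8284]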
NASA Astrophysics Data System (ADS)
Kalghatgi, Suparna Kishore
Real-world surfaces typically have geometric features at a range of spatial scales. At the microscale, opaque surfaces are often characterized by bidirectional reflectance distribution functions (BRDF), which describe how a surface scatters incident light. At the mesoscale, surfaces often exhibit visible texture -- stochastic or patterned arrangements of geometric features that provide visual information about surface properties such as roughness, smoothness, softness, etc. These textures also affect how light is scattered by the surface, but the effects are at a different spatial scale than those captured by the BRDF. Through this research, we investigate how microscale and mesoscale surface properties interact to contribute to overall surface appearance. This behavior is also the cause of the well-known "touch-up problem" in the paint industry, where two regions coated with exactly the same paint look different in color, gloss and/or texture because of differences in application methods. First, samples were created by applying latex paint to standard wallboard surfaces. Two application methods, spraying and rolling, were used. The BRDF and texture properties of the samples were measured, which revealed differences at both the microscale and mesoscale. This data was then used as input for a physically-based image synthesis algorithm to generate realistic images of the surfaces under different viewing conditions. In order to understand the factors that govern touch-up visibility, psychophysical tests were conducted using calibrated digital photographs of the samples as stimuli. Images were presented in pairs and a two-alternative forced choice design was used for the experiments. These judgments were then used as data for a Thurstonian scaling analysis to produce psychophysical scales of visibility, which helped determine the effect of paint formulation, application methods, and viewing and illumination conditions on the touch-up problem. The results can be used as base data towards development of a psychophysical model that relates physical differences in paint formulation and application methods to visual differences in surface appearance.
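A short Python sketch of Thurstone Case V scaling applied to two-alternative forced-choice proportions, the kind of analysis described above; the proportion matrix is hypothetical and the study's actual stimuli and analysis details may differ.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(prop):
    """Thurstone Case V scale values from a pairwise-proportion matrix.

    prop[i, j] = proportion of 2AFC trials in which stimulus i was judged
    to show more touch-up visibility than stimulus j.
    """
    p = np.clip(prop, 0.01, 0.99)            # avoid infinite z-scores at 0 or 1
    np.fill_diagonal(p, 0.5)
    z = norm.ppf(p)                           # unit-normal deviates
    scale = z.mean(axis=1)
    return scale - scale.min()                # anchor the lowest sample at zero

# Hypothetical proportions for three paint samples
P = np.array([[0.5, 0.8, 0.9],
              [0.2, 0.5, 0.7],
              [0.1, 0.3, 0.5]])
print(thurstone_case_v(P))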
Anestis, Joye C; Gottfried, Emily D; Joiner, Thomas E
2015-02-01
This study examined the utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) substantive scales in the prediction of premature termination and therapy no-shows while controlling for other relevant predictors in a university-based community mental health center, a sample at high risk of both premature termination and no-show appointments. Participants included 457 individuals seeking services from a university-based psychology clinic. Results indicated that Juvenile Conduct Problems (JCP) predicted premature termination and Behavioral/Externalizing Dysfunction and JCP predicted number of no-shows, when accounting for initial severity of illness, personality disorder diagnosis, therapist experience, and other related MMPI-2-RF scales. The MMPI-2-RF Aesthetic-Literary Interests scale also predicted number of no-shows. Recommendations for applying these findings in clinical practice are discussed. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei
2018-06-01
Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles this problem based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and the local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray-level histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than traditional algorithms.
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential topic for practical tasks, e.g. real-time image matching. Previous studies have shown that the superposition and entanglement of quantum states can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to a template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
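A classical (non-quantum) Python sketch of the gray-scale-difference decision rule: a region is declared a fuzzy match when every pixelwise difference from the template is within the threshold. The image sizes, threshold, and perturbation are assumptions; the paper implements this test on NEQR-encoded quantum registers.

```python
import numpy as np

def fuzzy_match(reference, template, threshold):
    """Classical analogue of the gray-scale-difference criterion:
    a region matches if every pixelwise |difference| is <= threshold."""
    H, W = reference.shape
    h, w = template.shape
    matches = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            diff = np.abs(reference[r:r + h, c:c + w].astype(int) - template.astype(int))
            if diff.max() <= threshold:
                matches.append((r, c))
    return matches

# Toy 8-bit images: the template appears (slightly perturbed) at row 2, col 3
ref = np.random.randint(0, 256, size=(16, 16))
tpl = ref[2:6, 3:7].copy()
tpl[0, 0] = min(255, tpl[0, 0] + 3)          # small gray-scale perturbation
print(fuzzy_match(ref, tpl, threshold=5))    # includes (2, 3)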
The planes of satellite galaxies problem, suggested solutions, and open questions
NASA Astrophysics Data System (ADS)
Pawlowski, Marcel S.
2018-02-01
Satellite galaxies of the Milky Way and of the Andromeda galaxy have been found to preferentially align in significantly flattened planes of satellite galaxies, and available velocity measurements are indicative of a preference of satellites in those structures to co-orbit. There is increasing evidence that such kinematically correlated satellite planes are also present around more distant hosts. Detailed comparisons show that similarly anisotropic phase-space distributions of sub-halos are exceedingly rare in cosmological simulations based on the ΛCDM paradigm. Analogs to the observed systems have frequencies of ≤ 0.5% in such simulations. In contrast to other small-scale problems, the satellite planes issue is not strongly affected by baryonic processes because the distribution of sub-halos on scales of hundreds of kpc is dominated by gravitational effects. This makes the satellite planes one of the most serious small-scale problems for ΛCDM. This review summarizes the observational evidence for planes of satellite galaxies in the Local Group and beyond, and provides an overview of how they compare to cosmological simulations. It also discusses scenarios which aim at explaining the coherence of satellite positions and orbits, and why they all are currently unable to satisfactorily resolve the issue.
[Continuity and discontinuity of the geomerida: the bionomic and biotic aspects].
Kafanov, A I
2005-01-01
The view of the spatial structure of the geomerida (Earth's life cover) as a continuum, which prevails in modern phytocoenology, is mostly determined by a physiognomic (landscape-bionomic) discrimination of vegetation components. In this connection, the geography of life forms appears as the subject of landscape-bionomic biogeography. In zoocoenology there is a tendency toward a synthesis of alternative concepts, based on the assumption that neither an absolute continuum nor an absolute discontinuum exists in organic nature. The problem of the continuum and discontinuum of the living cover is a problem of scale, arising from the fractal nature of the spatial structure of the geomerida. The continuum mainly belongs to regularities of the topological order; at the regional and subregional scales the continuum of biochores is rather rare. The objective evidence of the relative discontinuity of the living cover comes from significant changes in species diversity at the regional, subregional and even topological scales. In contrast to units conventionally discriminated within physiognomically continuous vegetation, the same biotic complexes, represented as operational units of biogeographical and biocoenological zoning, are distinguished repeatedly and independently by different researchers. An area occupied by a certain flora (fauna, biota) can be considered the elementary unit of biotic diversity (an elementary biotic complex).
Tethys – A Python Package for Spatial and Temporal Downscaling of Global Water Withdrawals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.
Downscaling of water withdrawals from regional/national to local scale is a fundamental step and also a common problem when integrating large scale economic and integrated assessment models with high-resolution detailed sectoral models. Tethys, an open-access software written in Python, is developed with statistical downscaling algorithms, to spatially and temporally downscale water withdrawal data to a finer scale. The spatial resolution will be downscaled from region/basin scale to grid (0.5 geographic degree) scale and the temporal resolution will be downscaled from year to month. Tethys is used to produce monthly global gridded water withdrawal products based on estimates from the Global Change Assessment Model (GCAM).
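Tethys' actual proxies and algorithms are not detailed in the abstract. As a rough illustration of proportional spatial-then-temporal downscaling, the following Python sketch splits one region's annual withdrawal across grid cells by a weight map and across months by a fixed profile; the weight map, monthly profile, and grid size are assumptions.

```python
import numpy as np

def downscale_withdrawal(annual_total, spatial_weight, monthly_profile):
    """Proportional downscaling of one region's annual withdrawal.

    annual_total    : scalar regional withdrawal for one year
    spatial_weight  : 2-D array of proxy weights (e.g. gridded population)
    monthly_profile : length-12 array of monthly fractions summing to 1
    returns         : array of shape (12, *grid) of monthly gridded withdrawals
    """
    w = spatial_weight / spatial_weight.sum()             # spatial shares per grid cell
    annual_grid = annual_total * w                        # region -> grid
    return monthly_profile[:, None, None] * annual_grid   # year -> months

# Hypothetical region: 4x5 grid of 0.5-degree cells, uniform monthly profile
weights = np.random.rand(4, 5)
monthly = np.full(12, 1 / 12)
grids = downscale_withdrawal(100.0, weights, monthly)
print(grids.shape, grids.sum())                           # (12, 4, 5), total preserved at 100.0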
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure, with structural scales ranging from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems; one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern 3D imagers, it is almost impossible to manually track the complex multiscale parameters from such large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, unsupervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
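A simplified 2-D Python sketch of a multi-scale Hessian-based tubularity response (the actual system uses 3-D ITK filters and a richer vesselness measure): at each scale the Hessian is estimated from scale-normalized Gaussian second derivatives, and the response is driven by the more negative eigenvalue, which is large in magnitude for bright tubular structures.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity_2d(image, sigmas=(1.0, 2.0, 4.0)):
    """Minimal multi-scale Hessian-based tubularity response for a 2-D slice."""
    response = np.zeros_like(image, dtype=float)
    for s in sigmas:
        Hxx = gaussian_filter(image, s, order=(0, 2)) * s**2   # scale-normalized second derivatives
        Hyy = gaussian_filter(image, s, order=(2, 0)) * s**2
        Hxy = gaussian_filter(image, s, order=(1, 1)) * s**2
        tmp = np.sqrt(((Hxx - Hyy) / 2.0)**2 + Hxy**2)
        lam1 = (Hxx + Hyy) / 2.0 - tmp                          # more negative eigenvalue
        response = np.maximum(response, np.maximum(-lam1, 0.0)) # keep the best scale per pixel
    return response

# Synthetic bright tube on a dark background
img = np.zeros((64, 64))
img[30:33, :] = 1.0
print(tubularity_2d(img).max() > 0)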
An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales
NASA Astrophysics Data System (ADS)
Karsznia, Izabela
2014-12-01
The paper covers one of the most important problems concerning context-sensitive settlement selection for the purpose of small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified; hence the problem is an important and innovative challenge. It is also crucial from the practical point of view, as it is necessary to develop appropriate generalization algorithms for the generalization of the General Geographic Objects Database, which is an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas - in Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection that would be useful in automation processes and would make it possible to keep the specifics of the generalized objects unchanged.
On a Game of Large-Scale Projects Competition
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2009-09-01
The paper is devoted to game-theoretical control problems motivated by economic decision-making situations arising in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type, for which standard existence theorems and algorithms for finding Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em
2018-02-01
Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36-1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness ( χ 2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
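A much-simplified Python sketch of the sparse-reconstruction step for a single target superpixel, using an off-the-shelf Lasso solver and omitting the paper's locality-consistency regularizer and edge-preserving filtering; the feature dimensions, alpha parameter, and toy dictionary are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def colorize_superpixel(target_feature, ref_features, ref_chroma, alpha=0.01):
    """Colorize one target superpixel by sparse reconstruction over a
    reference dictionary of superpixel descriptors.

    target_feature : (d,) descriptor of the gray-scale target superpixel
    ref_features   : (n_ref, d) dictionary built from the reference color image
    ref_chroma     : (n_ref, 2) mean chrominance per reference superpixel
    """
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(ref_features.T, target_feature)      # sparse code over dictionary atoms
    w = lasso.coef_
    if w.sum() == 0:                               # degenerate case: fall back to nearest atom
        w[np.argmin(np.linalg.norm(ref_features - target_feature, axis=1))] = 1.0
    w = w / w.sum()
    return w @ ref_chroma                          # chrominance from the dominant atoms

# Toy dictionary of 5 reference superpixels with 8-D features
rng = np.random.default_rng(0)
refs = rng.random((5, 8))
chroma = rng.random((5, 2))
print(colorize_superpixel(refs[2] * 0.98, refs, chroma))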
Health problems among low-income parents in the aftermath of Hurricane Katrina.
Lowe, Sarah R; Willis, Margaret; Rhodes, Jean E
2014-08-01
Although the mental health consequences of disasters have been well documented, relatively less is known about their effects on survivors' physical health. Disaster studies have also generally lacked predisaster data, limiting researchers' ability to determine whether postdisaster physical health problems were influenced by disaster exposure, or whether they would have emerged even if the disaster had not occurred. The current study aimed to fill this gap. Participants were low-income, primarily non-Hispanic Black mothers (N = 334) who survived Hurricane Katrina and completed 4 survey assessments, 2 predisaster and 2 postdisaster. In each assessment, participants reported on whether they had experienced 3 common health problems (frequent headaches or migraines, back problems, and digestive problems) and completed 2 mental health measures (the K6 scale and the Perceived Stress Scale). The descriptive results suggested that the hurricane led to at least short-term increases in the 3 health outcomes. Fixed effects modeling was conducted to explore how changes in various predictor variables related to changes in each health condition over the study. Bereavement and increases in psychological distress were significant predictors of increases in health problems. Based on these results, further research that explores the processes through which disasters lead to both physical and mental health problems, postdisaster screenings for common health conditions and psychological distress, and interventions that boost survivors' stress management skills are suggested.
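A minimal sketch of the fixed-effects ("within") idea mentioned above, assuming a hypothetical long-format panel with invented variable names: each variable is demeaned within participant, and the demeaned outcome is regressed on the demeaned predictors.

```python
# Illustrative fixed-effects (within) estimator; data file and columns are hypothetical.
import pandas as pd
import numpy as np

df = pd.read_csv("katrina_panel.csv")                 # hypothetical long-format panel
vars_ = ["headaches", "distress_k6", "perceived_stress", "bereavement"]
within = df[vars_] - df.groupby("participant_id")[vars_].transform("mean")

X = np.column_stack([np.ones(len(within)),
                     within[["distress_k6", "perceived_stress", "bereavement"]]])
y = within["headaches"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # within-person OLS coefficients
print(dict(zip(["const", "distress_k6", "perceived_stress", "bereavement"], beta)))
```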
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for the large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing is achieved by the azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. Great improvements in focus depth and imaging accuracy are obtained with the GCS-BAS algorithm. PMID:28555057
Wrinkle-free design of thin membrane structures using stress-based topology optimization
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-05-01
Thin membrane structures would experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with the adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain, larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding to which domains the core nodes should belong. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency compared with conventional algorithms that were proposed for a single-domain network environment.
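A toy sketch can make the core-node selection concrete; the weighting used here (degree minus closeness to the domain's members) is an assumption for illustration and is not the authors' NV-WCA algorithm.

```python
# Toy per-domain core-node selection via a simple node weight. Not NV-WCA.
import networkx as nx

def pick_core_nodes(G, domains, alpha=0.5):
    cores = {}
    for name, members in domains.items():
        best, best_w = None, float("-inf")
        for n in members:
            closeness = sum(nx.shortest_path_length(G, n, m) for m in members)
            weight = alpha * G.degree(n) - (1 - alpha) * closeness
            if weight > best_w:
                best, best_w = n, weight
        cores[name] = best
    return cores

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)])
domains = {"D1": [0, 1, 2], "D2": [3, 4, 5]}
print(pick_core_nodes(G, domains))   # e.g. {'D1': 1, 'D2': 4}
```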
Ogawa, Takeshi; Aihara, Takatsugu; Shimokawa, Takeaki; Yamashita, Okito
2018-04-24
Creative insight occurs with an "Aha!" experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21-69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.
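The seed-based RSFC analysis can be sketched with synthetic arrays: compute each subject's seed-to-voxel connectivity, Fisher-transform it, and correlate each connection with the insight score across subjects. All data and the choice of seed are placeholders, not the study's pipeline.

```python
# Synthetic sketch of seed-based RSFC correlated with a behavioural score.
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_time, n_vox = 30, 240, 500
bold = rng.normal(size=(n_sub, n_time, n_vox))   # resting-state time series (placeholder)
seed = bold[:, :, :10].mean(axis=2)              # hypothetical VBM-derived seed region
itb = rng.normal(size=n_sub)                     # insight test battery scores (placeholder)

def corr(a, b):
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Fisher z-transformed seed-to-voxel connectivity for each subject
rsfc = np.arctanh(np.stack([corr(bold[s], seed[s][:, None]) for s in range(n_sub)]))
# Across-subject correlation of every connection with the ITB score
insight_map = corr(rsfc, itb[:, None])
print("most insight-positive connection:", insight_map.argmax())
```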
Rescorla, Leslie A; Achenbach, Thomas M; Ivanova, Masha Y; Harder, Valerie S; Otten, Laura; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S W; Dias, Pedro; Dobrean, Anca; Döpfner, Manfred; Duyme, Michel; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Fung, Daniel S S; Gonçalves, Miguel; Guðmundsson, Halldór; Jeng, Suh-Fang; Jusiené, Roma; Ah Kim, Young; Kristensen, Solvejg; Liu, Jianghong; Lecannelier, Felipe; Leung, Patrick W L; Machado, Bárbara César; Montirosso, Rosario; Ja Oh, Kyung; Ooi, Yoon Phaik; Plück, Julia; Pomalima, Rolando; Pranvera, Jetishi; Schmeck, Klaus; Shahini, Mimoza; Silva, Jaime R; Simsek, Zeynep; Sourander, Andre; Valverde, José; van der Ende, Jan; Van Leeuwen, Karla G; Wu, Yen-Tzu; Yurdusen, Sema; Zubrick, Stephen R; Verhulst, Frank C
2011-01-01
International comparisons were conducted of preschool children's behavioral and emotional problems as reported on the Child Behavior Checklist for Ages 1½-5 by parents in 24 societies (N = 19,850). Item ratings were aggregated into scores on syndromes; Diagnostic and Statistical Manual of Mental Disorders-oriented scales; a Stress Problems scale; and Internalizing, Externalizing, and Total Problems scales. Effect sizes for scale score differences among the 24 societies ranged from small to medium (3-12%). Although societies differed greatly in language, culture, and other characteristics, Total Problems scores for 18 of the 24 societies were within 7.1 points of the omnicultural mean of 33.3 (on a scale of 0-198). Gender and age differences, as well as gender and age interactions with society, were all very small (effect sizes < 1%). Across all pairs of societies, correlations between mean item ratings averaged .78, and correlations between internal consistency alphas for the scales averaged .92, indicating that the rank orders of mean item ratings and internal consistencies of scales were very similar across diverse societies.
Chiang, Michael F; Starren, Justin B
2002-01-01
The successful implementation of clinical information systems is difficult. In examining the reasons and potential solutions for this problem, the medical informatics community may benefit from the lessons of a rich body of software engineering and management literature about the failure of software projects. Based on previous studies, we present a conceptual framework for understanding the risk factors associated with large-scale projects. However, the vast majority of existing literature is based on large, enterprise-wide systems, and it is unclear whether those results may be scaled down and applied to smaller projects such as departmental medical information systems. To examine this issue, we discuss the case study of a delayed electronic medical record implementation project in a small specialty practice at Columbia-Presbyterian Medical Center. While the factors contributing to the delay of this small project share some attributes with those found in larger organizations, there are important differences. The significance of these differences for groups implementing small medical information systems is discussed.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
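For orientation, the landmark-based standardization that these methods build on can be sketched as a piecewise-linear remapping of histogram landmarks onto a standard scale; the percentile landmarks and standard values below are assumptions, and the g-scale/gB-scale methods differ precisely in how the tissue regions and landmarks are derived.

```python
# Minimal landmark-based intensity standardization: map an image's histogram
# landmarks onto a standard scale by piecewise-linear interpolation.
import numpy as np

def standardize(img, standard_landmarks, pcts=(1, 25, 50, 75, 99)):
    src = np.percentile(img, pcts)                    # landmarks of this acquisition
    return np.interp(img, src, standard_landmarks)    # piecewise-linear remapping

rng = np.random.default_rng(0)
scan_a = rng.gamma(2.0, 120.0, size=(64, 64))         # synthetic scans with
scan_b = rng.gamma(2.0, 300.0, size=(64, 64))         # different intensity ranges
standard = np.array([0, 200, 400, 600, 1000], float)  # hypothetical standard scale

a_std, b_std = standardize(scan_a, standard), standardize(scan_b, standard)
print(np.percentile(a_std, 50), np.percentile(b_std, 50))  # medians now comparable
```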
Hall, Brian J.; Puffer, Eve; Murray, Laura K.; Ismael, Abdulkadir; Bass, Judith K.; Sim, Amanda; Bolton, Paul A.
2014-01-01
Assessing mental health problems cross-culturally for children exposed to war and violence presents a number of unique challenges. One of the most important issues is the lack of validated symptom measures to assess these problems. The present study sought to evaluate the psychometric properties of two measures to assess mental health problems: the Achenbach Youth Self-Report and the Child Posttraumatic Stress Disorder Symptom Scale. We conducted a validity study in three refugee camps in Eastern Ethiopia in the outskirts of Jijiga, the capital of the Somali region. A total of 147 child and caregiver pairs were assessed, and scores obtained were submitted to rigorous psychometric evaluation. Excellent internal consistency reliability was obtained for symptom measures for children and their caregivers. Validation of study instruments based on local case definitions was obtained for the caregivers but not consistently for the children. Sensitivity and specificity of study measures were generally low, indicating that these scales would not perform adequately as screening instruments. Combined test-retest and inter-rater reliability was low for all scales. This study illustrates the need for validation and testing of existing measures cross-culturally. Methodological implications for future cross-cultural research studies in low- and middle-income countries are discussed. PMID:24955147
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits
NASA Astrophysics Data System (ADS)
Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas
2018-04-01
Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.
NASA Astrophysics Data System (ADS)
Ijjas, Anna; Steinhardt, Paul J.
2015-10-01
We introduce ``anamorphic'' cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the usefulness of the three-dimensional phase-field model. Thus, seeking high-performance computation methods to improve efficiency and expand the problem scale is of great significance for research on the microstructure of materials. A high-performance computation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of the three-dimensional phase-field model in binary alloy under the condition of multi-physical process coupling. The acceleration effect of different numbers of GPU nodes on different calculation scales is explored. On the basis of the introduced multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI communication with GPU computation. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field simulation, achieving a 13-fold speed-up over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI communication with GPU computation performs better, being 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
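The overlap scheme can be illustrated with a one-dimensional halo-exchange sketch using mpi4py: non-blocking sends and receives are posted, the interior update runs while the messages are in flight, and the boundary cells are finished afterwards. NumPy stands in for the CUDA kernels, and the stencil and sizes are arbitrary; this is not the paper's code.

```python
# 1-D halo exchange with non-blocking MPI overlapped with the "GPU" update.
# NumPy stands in for the CUDA kernels; run with e.g. `mpiexec -n 4 python demo.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n = 1024
phi = np.random.rand(n + 2)                          # local field with 2 ghost cells
left, right = (rank - 1) % size, (rank + 1) % size

for step in range(10):
    recv_l, recv_r = np.empty(1), np.empty(1)
    reqs = [comm.Isend(phi[1:2],   dest=left,  tag=0),   # post halo exchange
            comm.Isend(phi[-2:-1], dest=right, tag=1),
            comm.Irecv(recv_l, source=left,  tag=1),
            comm.Irecv(recv_r, source=right, tag=0)]

    # interior update proceeds while the messages are in flight
    # (in the paper's setting this is where the CUDA phase-field kernel runs)
    phi[2:-2] += 0.1 * (phi[1:-3] - 2 * phi[2:-2] + phi[3:-1])

    MPI.Request.Waitall(reqs)                        # complete the communication
    phi[0], phi[-1] = recv_l[0], recv_r[0]           # fill ghost cells
    phi[1]  += 0.1 * (phi[0]  - 2 * phi[1]  + phi[2])   # finish boundary cells
    phi[-2] += 0.1 * (phi[-3] - 2 * phi[-2] + phi[-1])
```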
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
The accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large areas abroad or with large volumes of imagery. In this paper, addressing the geometric characteristics of optical satellite imagery and building on the Alternating Direction Method of Multipliers (ADMM), a widely used method for constrained optimization, together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis of block adjustment without ground control. The test results show that the horizontal and vertical accuracies for multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments with GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
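The ADMM machinery named above can be illustrated with a toy consensus least-squares problem in which several data blocks (loosely analogous to groups of images) are driven to agree on shared parameters; this is a generic sketch, not the GISIBA adjustment.

```python
# Toy consensus ADMM: four data blocks agree on shared parameters.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0, 0.5])
blocks = []
for _ in range(4):                                 # four "image blocks" with local data
    A = rng.normal(size=(40, 3))
    blocks.append((A, A @ x_true + 0.01 * rng.normal(size=40)))

rho = 1.0
z = np.zeros(3)                                    # shared (consensus) parameters
x = [np.zeros(3) for _ in blocks]                  # local copies
u = [np.zeros(3) for _ in blocks]                  # scaled dual variables

for _ in range(50):
    for i, (A, b) in enumerate(blocks):            # local x-updates (closed form)
        x[i] = np.linalg.solve(A.T @ A + rho * np.eye(3),
                               A.T @ b + rho * (z - u[i]))
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)   # consensus update
    u = [ui + xi - z for xi, ui in zip(x, u)]                # dual update

print("recovered parameters:", np.round(z, 3))     # close to x_true
```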
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
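One way to picture the storage parameterisation is the sketch below: topographic-index values inside a grid cell are binned into classes, each class receives a capacity that scales with its position in the index range through a single parameter, and the saturated (fast-runoff) fraction follows from the cell-average storage. The numbers and the exact capacity rule are assumptions for illustration, not the TRG code.

```python
# Illustrative storage-capacity distribution from topographic-index classes.
import numpy as np

rng = np.random.default_rng(0)
topo_index = rng.gamma(4.0, 2.0, size=100_000)     # synthetic 90 m topographic indices

def storage_distribution(ti, s_max_param=150.0, n_classes=20):
    edges = np.quantile(ti, np.linspace(0, 1, n_classes + 1))
    frac = np.array([((ti >= lo) & (ti < hi)).mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    # capacity decreases for wetter (higher-index) classes, scaled by one parameter
    rel = (edges[:-1] - ti.min()) / (ti.max() - ti.min())
    capacity = s_max_param * (1.0 - rel)
    return frac, capacity

frac, cap = storage_distribution(topo_index)
mean_storage = 60.0                                 # mm, hypothetical cell-average storage
saturated = frac[cap <= mean_storage].sum()         # fraction generating fast runoff
print(f"saturated fraction of the cell: {saturated:.2f}")
```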
NASA Astrophysics Data System (ADS)
Hussain, Nur Farahin Mee; Zahid, Zalina
2014-12-01
In today's job market, graduates are expected not only to perform well academically but also to excel in soft skills. Problem-Based Learning (PBL) has a number of distinct advantages as a learning method, as it can deliver graduates who will be highly prized by industry. This study attempts to determine the satisfaction level of engineering students with the PBL approach and to evaluate its determinant factors. Structural Equation Modeling (SEM) was used to investigate how the factors Good Teaching Scale, Clear Goals, Student Assessment and Levels of Workload affected student satisfaction with the PBL approach.
Applications of Support Vector Machines In Chemo And Bioinformatics
NASA Astrophysics Data System (ADS)
Jayaraman, V. K.; Sundararajan, V.
2010-10-01
Conventional linear & nonlinear tools for classification, regression & data driven modeling are being replaced on a rapid scale by newer techniques & tools based on artificial intelligence and machine learning. While the linear techniques are not applicable for inherently nonlinear problems, newer methods serve as attractive alternatives for solving real life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward network based classification algorithms that have been formulated from statistical learning theory and structural risk minimization principle. SVM regression closely follows the classification methodology. In this work recent applications of SVM in Chemo & Bioinformatics will be described with suitable illustrative examples.
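A minimal example of the kind of SVM classification described above, using the scikit-learn implementation on a synthetic data set standing in for a chemo-/bio-informatics descriptor matrix:

```python
# Minimal nonlinear SVM classification example on synthetic descriptors.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # nonlinear (RBF) decision surface
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```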
Cosmological signatures of a UV-conformal standard model.
Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel
2014-09-19
Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal in possible reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.
ASDF: A New Adaptable Data Format for Seismology Suitable for Large-Scale Workflows
NASA Astrophysics Data System (ADS)
Krischer, L.; Smith, J. A.; Spinuso, A.; Tromp, J.
2014-12-01
Increases in the amount of available data as well as in computational power open the possibility to tackle ever larger and more complex problems. This comes with a slew of new problems, two of which are the need for a more efficient use of available resources and a sensible organization and storage of the data. Both need to be satisfied in order to properly scale a problem and both are frequent bottlenecks in large seismic inversions using ambient noise or more traditional techniques. We present recent developments and ideas regarding a new data format, named ASDF (Adaptable Seismic Data Format), for all branches of seismology, aiding with the aforementioned problems. The key idea is to store all information necessary to fully understand a set of data in a single file. This enables the construction of self-explaining and exchangeable data sets, facilitating collaboration on large-scale problems. We incorporate the existing metadata standards FDSN StationXML and QuakeML together with waveform and auxiliary data into a common container based on the HDF5 standard. A further critical component of the format is the storage of provenance information as an extension of W3C PROV, meaning information about the history of the data, assisting with the general problem of reproducibility. Applications of the proposed new format are numerous. In the context of seismic tomography it enables the full description and storage of synthetic waveforms, including information about the used model, the solver, the parameters, and other variables that influenced the final waveforms. Furthermore, intermediate products like adjoint sources, cross correlations, and receiver functions can be described and, most importantly, exchanged with others. Usability and tool support are crucial for any new format to gain acceptance, and we additionally present a fully functional implementation of this format based on Python and ObsPy. It offers a convenient way to discover and analyze data sets as well as making it trivial to execute processing functionality on modern high performance machines utilizing parallel I/O, even for users not familiar with the details. An open-source development and design model as well as a wiki aim to involve the community.
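The "everything in one self-describing file" idea can be sketched with plain h5py, storing a waveform, station metadata and a provenance note side by side; the group layout and attribute names below are invented for illustration and are not the actual ASDF specification (the real format and the pyasdf library define their own).

```python
# Rough illustration of a self-describing HDF5 container; NOT the ASDF layout.
import h5py
import numpy as np

with h5py.File("example_dataset.h5", "w") as f:
    wf = f.create_group("Waveforms/XX.STA01")
    trace = wf.create_dataset("BHZ", data=np.random.randn(3600), compression="gzip")
    trace.attrs["sampling_rate_hz"] = 20.0
    trace.attrs["starttime"] = "2014-01-01T00:00:00"

    f.create_dataset("StationXML/XX.STA01",
                     data=b"<FDSNStationXML>...</FDSNStationXML>")
    f.create_dataset("Provenance/note",
                     data=b"bandpassed 0.01-0.1 Hz with ObsPy")
```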
NASA Astrophysics Data System (ADS)
Ali-Akbari, H. R.; Ceballes, S.; Abdelkefi, A.
2017-10-01
A nonlocal continuum-based model is derived to simulate the dynamic behavior of bridged carbon nanotube-based nano-scale mass detectors. The carbon nanotube (CNT) is modeled as an elastic Euler-Bernoulli beam considering von-Kármán type geometric nonlinearity. In order to achieve better accuracy in characterization of the CNTs, the geometrical properties of an attached nano-scale particle are introduced into the model by its moment of inertia with respect to the central axis of the beam. The inter-atomic long-range interactions within the structure of the CNT are incorporated into the model using Eringen's nonlocal elastic field theory. In this model, the mass can be deposited along an arbitrary length of the CNT. After deriving the full nonlinear equations of motion, the natural frequencies and corresponding mode shapes are extracted based on a linear eigenvalue problem analysis. The results show that the geometry of the attached particle has a significant impact on the dynamic behavior of the CNT-based mechanical resonator, especially for those with small aspect ratios. The developed model and analysis are beneficial for nano-scale mass identification when a CNT-based mechanical resonator is utilized as a small-scale bio-mass sensor and the deposited particles are proteins, enzymes, cancer cells, DNA or other nano-scale biological objects with different and complex shapes.
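The linear eigenvalue step can be illustrated with a classical (local, linear) finite-difference model of a bridged beam carrying a point mass at mid-span; the CNT-like numbers are toy assumptions, and the nonlocal and geometrically nonlinear terms of the paper's model are omitted.

```python
# Toy clamped-clamped Euler-Bernoulli beam with a lumped particle at mid-span,
# discretised by finite differences and solved as a generalised eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

E, Imom, rhoA, L = 1.0e12, 1.0e-38, 1.0e-15, 100e-9   # Pa, m^4, kg/m, m (toy values)
m_added = 1.0e-22                                      # attached particle mass (kg)
N = 200                                                # interior grid points
h = L / (N + 1)

# stiffness: E*I * d^4/dx^4 with clamped-clamped boundary conditions
K = np.zeros((N, N))
for i in range(N):
    for j, c in zip(range(i - 2, i + 3), (1, -4, 6, -4, 1)):
        if 0 <= j < N:
            K[i, j] += c
K[0, 0] += 1.0      # ghost-node reflection from u'(0) = 0
K[-1, -1] += 1.0    # ghost-node reflection from u'(L) = 0
K *= E * Imom / h**4

# mass: distributed beam mass plus the lumped particle at mid-span
M = rhoA * np.eye(N)
M[N // 2, N // 2] += m_added / h

omega2, _ = eigh(K, M)
print("fundamental frequency (GHz):", np.sqrt(omega2[0]) / (2 * np.pi) / 1e9)
```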
Web-Based Virtual Laboratory for Food Analysis Course
NASA Astrophysics Data System (ADS)
Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.
2018-02-01
Implementation of learning in the food analysis course in the Program Study of Agro-industrial Technology Education faced problems. These problems include the availability of space and tools in the laboratory, which is not commensurate with the number of students, as well as a lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high and the internet network is easily accessible on campus. This is a challenge as well as an opportunity in the development of learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. This research is R & D (research and development), referring to the Borg & Gall model. Expert assessment of the developed web-based virtual laboratory, in terms of software engineering, visual communication, material relevance, usefulness and language used, showed that it is feasible as a learning medium. The results of the small-scale and wide-scale tests show that students strongly agree with the development of the web-based virtual laboratory. The response of students to this virtual laboratory was positive. Suggestions from students provide further opportunities for improvement of the web-based virtual laboratory and should be considered in further research.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves the system's fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth drawbacks of using multicast. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. Finally, in the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
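The general framework can be illustrated with a toy distributed-gradient sketch: each agent holds a private cost, mixes its estimate with its neighbours' and takes a local gradient step, and all agents converge to the common minimiser without a central coordinator. This is a generic illustration, not one of the dissertation's algorithms.

```python
# Toy distributed gradient descent with consensus over a ring of agents.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 2
targets = rng.normal(size=(n_agents, dim))        # agent i minimises ||x - targets[i]||^2

# ring communication topology, doubly-stochastic mixing weights
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i], W[i, (i - 1) % n_agents], W[i, (i + 1) % n_agents] = 0.5, 0.25, 0.25

x = np.zeros((n_agents, dim))                     # each agent's local estimate
step = 0.1
for k in range(300):
    grads = 2 * (x - targets)                     # local gradients only
    x = W @ x - step / (k + 1) ** 0.5 * grads     # mix with neighbours, then descend

print("agents (approximately) agree on:", x.round(3))   # near the mean of targets
```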
Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman
2017-07-27
Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising and providing appropriate support and management of depression. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) with doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the revised depression attitude questionnaire (R-DAQ). A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using Parallel Analysis from polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3 Item analysis and test scoring with binary logistic models. Mooresville: Scientific Software, 55) and the McDonald's Omega statistic. Findings using this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. 601 (86%) of the doctors approached consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3 factor solution accounted for 49.8% of the common variance. Scale reliability and internal consistency were adequate: total scale standardised alpha was 0.694; subscale reliability for professional confidence was 0.732, therapeutic optimism/pessimism was 0.638, and generalist perspective was 0.769. The R-DAQ was developed with a predominantly UK-based sample of health professionals. This study indicates that this scale functions adequately and provides a valid measure of depression attitudes for medical practitioners in Pakistan, with the same factor structure as in the scale development sample. However, optimal scale function necessitated removal of several items, with a 15-item scale enabling the most parsimonious factor solution for this population.
A homogenization-based quasi-discrete method for the fracture of heterogeneous materials
NASA Astrophysics Data System (ADS)
Berke, P. Z.; Peerlings, R. H. J.; Massart, T. J.; Geers, M. G. D.
2014-05-01
The understanding and the prediction of the failure behaviour of materials with pronounced microstructural effects is of crucial importance. This paper presents a novel computational methodology for the handling of fracture on the basis of the microscale behaviour. The basic principles presented here allow the incorporation of an adaptive discretization scheme of the structure as a function of the evolution of strain localization in the underlying microstructure. The proposed quasi-discrete methodology bridges two scales: the scale of the material microstructure, modelled with a continuum type description; and the structural scale, where a discrete description of the material is adopted. The damaging material at the structural scale is divided into unit volumes, called cells, which are represented as a discrete network of points. The scale transition is inspired by computational homogenization techniques; however it does not rely on classical averaging theorems. The structural discrete equilibrium problem is formulated in terms of the underlying fine scale computations. Particular boundary conditions are developed on the scale of the material microstructure to address damage localization problems. The performance of this quasi-discrete method with the enhanced boundary conditions is assessed using different computational test cases. The predictions of the quasi-discrete scheme agree well with reference solutions obtained through direct numerical simulations, both in terms of crack patterns and load versus displacement responses.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
Singularities in Free Surface Flows
NASA Astrophysics Data System (ADS)
Thete, Sumeet Suresh
Free surface flows where the shape of the interface separating two or more phases or liquids are unknown apriori, are commonplace in industrial applications and nature. Distribution of drop sizes, coalescence rate of drops, and the behavior of thin liquid films are crucial to understanding and enhancing industrial practices such as ink-jet printing, spraying, separations of chemicals, and coating flows. When a contiguous mass of liquid such as a drop, filament or a film undergoes breakup to give rise to multiple masses, the topological transition is accompanied with a finite-time singularity . Such singularity also arises when two or more masses of liquid merge into each other or coalesce. Thus the dynamics close to singularity determines the fate of about-to-form drops or films and applications they are involved in, and therefore needs to be analyzed precisely. The primary goal of this thesis is to resolve and analyze the dynamics close to singularity when free surface flows experience a topological transition, using a combination of theory, experiments, and numerical simulations. The first problem under consideration focuses on the dynamics following flow shut-off in bottle filling applications that are relevant to pharmaceutical and consumer products industry, using numerical techniques based on Galerkin Finite Element Methods (GFEM). The second problem addresses the dual flow behavior of aqueous foams that are observed in oil and gas fields and estimates the relevant parameters that describe such flows through a series of experiments. The third problem aims at understanding the drop formation of Newtonian and Carreau fluids, computationally using GFEM. The drops are formed as a result of imposed flow rates or expanding bubbles similar to those of piezo actuated and thermal ink-jet nozzles. The focus of fourth problem is on the evolution of thinning threads of Newtonian fluids and suspensions towards singularity, using computations based on GFEM and experimental techniques. The aim of fifth problem is to analyze the coalescence dynamics of drops through a combination of GFEM and scaling theory. Lastly, the sixth problem concerns the thinning and rupture dynamics of thin films of Newtonian and power-law fluids using scaling theory based on asymptotic analysis and the predictions of this theory are corroborated using computations based on GFEM.
Model and controller reduction of large-scale structures based on projection methods
NASA Astrophysics Data System (ADS)
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent in the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, the reduced-order controller implemented with the full-order plant. A controller reduction approach is proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
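Of the projection methods discussed, balanced truncation is compact enough to sketch: solve the two Lyapunov equations for the Gramians, balance via the square-root method, and keep the states with the largest Hankel singular values. The system below is a random stable toy example, not one of the structural benchmark models.

```python
# Square-root balanced truncation of a random stable state-space model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(0)
n, r = 20, 4                                       # full and reduced orders
A = rng.normal(size=(n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)   # shift to make A stable
B, C = rng.normal(size=(n, 1)), rng.normal(size=(1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)        # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability Gramian

Lc, Lo = cholesky(Wc, lower=True), cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                          # Hankel singular values in s
T = Lc @ Vt.T[:, :r] / np.sqrt(s[:r])              # balancing/truncating projections
S = (U[:, :r].T @ Lo.T) / np.sqrt(s[:r])[:, None]

Ar, Br, Cr = S @ A @ T, S @ B, C @ T               # reduced-order model
print("kept Hankel singular values:", s[:r].round(4))
```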
Hanisch, Charlotte; Freund-Braier, Inez; Hautmann, Christopher; Jänen, Nicola; Plück, Julia; Brix, Gabriele; Eichelberger, Ilka; Döpfner, Manfred
2010-01-01
Behavioural parent training is effective in improving child disruptive behavioural problems in preschool children by increasing parenting competence. The indicated Prevention Programme for Externalizing Problem behaviour (PEP) is a group training programme for parents and kindergarten teachers of children aged 3-6 years with externalizing behavioural problems. To evaluate the effects of PEP on child problem behaviour, parenting practices, parent-child interactions, and parental quality of life. Parents and kindergarten teachers of 155 children were randomly assigned to an intervention group (n = 91) and a nontreated control group (n = 64). They rated children's problem behaviour before and after PEP training; parents also reported on their parenting practices and quality of life. Standardized play situations were video-taped and rated for parent-child interactions, e.g. parental warmth. In the intention to treat analysis, mothers of the intervention group described less disruptive child behaviour and better parenting strategies, and showed more parental warmth during a standardized parent-child interaction. Dosage analyses confirmed these results for parents who attended at least five training sessions. Children were also rated to show less behaviour problems by their kindergarten teachers. Training effects were especially positive for parents who attended at least half of the training sessions. CBCL: Child Behaviour Checklist; CII: Coder Impressions Inventory; DASS: Depression anxiety Stress Scale; HSQ: Home-situation Questionnaire; LSS: Life Satisfaction Scale; OBDT: observed behaviour during the test; PCL: Problem Checklist; PEP: prevention programme for externalizing problem behaviour; PPC: Parent Problem Checklist; PPS: Parent Practices Scale; PS: Parenting Scale; PSBC: Problem Setting and Behaviour checklist; QJPS: Questionnaire on Judging Parental Strains; SEFS: Self-Efficacy Scale; SSC: Social Support Scale; TRF: Caregiver-Teacher Report Form.
Goulardins, Juliana B; Rigoli, Daniela; Loh, Pek Ru; Kane, Robert; Licari, Melissa; Hands, Beth; Oliveira, Jorge A; Piek, Jan
2018-06-01
This study investigated the relationship between motor performance; attentional, hyperactive, and impulsive symptoms; and social problems. Correlations between parents' versus teachers' ratings of social problems and ADHD symptomatology were also examined. A total of 129 children aged 9 to 12 years were included. ADHD symptoms and social problems were identified based on Conners' Rating Scales-Revised: L, and the McCarron Assessment of Neuromuscular Development was used to assess motor skills. After controlling for ADHD symptomatology, motor skills remained a significant predictor of social problems in the teacher model but not in the parent model. After controlling for motor skills, inattentive (not hyperactive-impulsive) symptoms were a significant predictor of social problems in the parent model, whereas hyperactive-impulsive (not inattentive) symptoms were a significant predictor of social problems in the teacher model. The findings suggested that intervention strategies should consider the interaction between symptoms and environmental contexts.
Quellmalz, Edys S; Pellegrino, James W
2009-01-02
Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as development, administration, and scoring of tests, as well as reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used in both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.
[Autism Spectrum Disorder and DSM-5: Spectrum or Cluster?].
Kienle, Xaver; Freiberger, Verena; Greulich, Heide; Blank, Rainer
2015-01-01
In the new DSM-5, the previously differentiated subgroups of "Autistic Disorder" (299.0), "Asperger's Disorder" (299.80) and "Pervasive Developmental Disorder" (299.80) are replaced by the more general "Autism Spectrum Disorder". With a view to patient-oriented counselling and therapy planning, however, the question remains whether an empirically reproducible and clinically feasible differentiation into subgroups is possible. Based on two autism rating scales (ASDS and FSK), an exploratory two-step cluster analysis was conducted with N=103 children (age: 5-18) seen in our social-pediatric health care centre for assessment of possible autistic symptoms. In the two-cluster solution of both rating scales, problems in social communication primarily grouped the children into a cluster "with communication problems" (51% and 41%) and a cluster "without communication problems". In the three-cluster solution of the ASDS, sensory hypersensitivity, adherence to routines and social-communicative problems defined an "autistic" subgroup (22%). The children of the second cluster ("communication problems", 35%) were characterized only by social-communicative problems, and the third group showed no problems (38%). In the three-cluster solution of the FSK, the "autistic cluster" of the two-cluster solution split into a subgroup with mainly social-communicative problems (cluster 1) and a second subgroup characterized by restrictive, repetitive behavior. The different cluster solutions are discussed with a view to the new DSM-5 diagnostic criteria; for future studies, further specification of some of the ASDS and FSK items could be helpful.
NASA Technical Reports Server (NTRS)
Boyle, A. R.; Dangermond, J.; Marble, D.; Simonett, D. S.; Tomlinson, R. F.
1983-01-01
Problems and directions for large scale geographic information system development were reviewed and the general problems associated with automated geographic information systems and spatial data handling were addressed.
The social competence and behavioral problem substrate of new- and recent-onset childhood epilepsy.
Almane, Dace; Jones, Jana E; Jackson, Daren C; Seidenberg, Michael; Hermann, Bruce P
2014-02-01
This study examined patterns of syndrome-specific problems in behavior and competence in children with new- or recent-onset epilepsy compared with healthy controls. Research participants consisted of 205 children aged 8-18, including youth with recent-onset epilepsy (n=125, 64 localization-related epilepsy [LRE] and 61 idiopathic generalized epilepsy [IGE]) and healthy first-degree cousin controls (n=80). Parents completed the Child Behavior Checklist for children aged 6-18 (CBCL/6-18) from the Achenbach System of Empirically Based Assessment (ASEBA). Dependent variables included Total Competence, Total Problems, Total Internalizing, Total Externalizing, and Other Problems scales. Comparisons of children with LRE and IGE with healthy controls were examined followed by comparisons of healthy controls with those having specific epilepsy syndromes of LRE (BECTS, Frontal/Temporal Lobe, and Focal NOS) and IGE (Absence, Juvenile Myoclonic, and IGE NOS). Children with LRE and/or IGE differed significantly (p<0.05) from healthy controls, but did not differ from each other, across measures of behavior (Total Problems, Total Internalizing, Total Externalizing, and Other Problems including Thought and Attention Problems) or competence (Total Competence including School and Social). Similarly, children with specific syndromes of LRE and IGE differed significantly (p<0.05) from controls across measures of behavior (Total Problems, Total Internalizing, and Other Problems including Attention Problems) and competence (Total Competence including School). Only on the Thought Problems scale were there syndrome differences. In conclusion, children with recent-onset epilepsy present with significant behavioral problems and lower competence compared with controls, with little syndrome specificity whether defined broadly (LRE and IGE) or narrowly (specific syndromes of LRE and IGE). Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bai, Danyu
2015-08-01
This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.
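To make the dispatching idea concrete, the sketch below applies the Shortest-Processing-Time-among-Available-jobs rule to a single machine with release dates and reports the total quadratic completion time; this is only a simplified illustration (the job data and helper name are made up), not the flow shop algorithms analysed in the paper.

```python
# Minimal single-machine illustration of the Shortest-Processing-Time-among-
# Available-jobs (SPTA) dispatching rule with release dates; helper name and
# job data are hypothetical.

def spta_schedule(jobs):
    """jobs: list of (release_date, processing_time); returns completion times."""
    remaining = sorted(jobs, key=lambda j: j[0])          # order by release date
    t, completions = 0, []
    while remaining:
        available = [j for j in remaining if j[0] <= t]
        if not available:                                  # idle until next release
            t = remaining[0][0]
            continue
        job = min(available, key=lambda j: j[1])           # shortest processing time
        remaining.remove(job)
        t += job[1]
        completions.append(t)
    return completions

jobs = [(0, 5), (1, 2), (2, 8), (3, 1)]                    # (release, processing)
C = spta_schedule(jobs)
tqct = sum(c ** 2 for c in C)                              # total quadratic completion time
print(C, tqct)
```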
Quantifying uncertainty and computational complexity for pore-scale simulations
NASA Astrophysics Data System (ADS)
Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.
2016-12-01
Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. In practice, however, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying processes render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits data/sample collection. To address those challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
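As a rough illustration of the surrogate idea, the sketch below fits a non-intrusive (regression-based) polynomial chaos expansion for a single standard-normal uncertain parameter; the forward model, expansion order and sample counts are placeholders, not the pore-scale solvers or settings used in the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def forward_model(xi):
    """Hypothetical cheap stand-in for an expensive pore-scale simulation."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(0)
order = 4                                    # gPC expansion order (assumed)
xi_train = rng.standard_normal(50)           # a few "simulation" input samples
y_train = forward_model(xi_train)

# Non-intrusive (regression) gPC: least-squares fit of probabilists' Hermite
# polynomials, which are orthogonal with respect to the standard normal density.
Psi = hermevander(xi_train, order)           # design matrix of He_0 ... He_order
coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# The surrogate is then cheap to sample for uncertainty propagation.
xi_test = rng.standard_normal(100_000)
y_surrogate = hermevander(xi_test, order) @ coeffs
print("surrogate mean and std:", y_surrogate.mean(), y_surrogate.std())
```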
Modeling Framework for Fracture in Multiscale Cement-Based Material Structures
Qian, Zhiwei; Schlangen, Erik; Ye, Guang; van Breugel, Klaas
2017-01-01
Multiscale modeling for cement-based materials, such as concrete, is a relatively young subject, but there are already a number of different approaches to study different aspects of these classical materials. In this paper, the parameter-passing multiscale modeling scheme is established and applied to address the multiscale modeling problem for the integrated system of cement paste, mortar, and concrete. The block-by-block technique is employed to solve the length scale overlap challenge between the mortar level (0.1–10 mm) and the concrete level (1–40 mm). The microstructures of cement paste are simulated by the HYMOSTRUC3D model, and the material structures of mortar and concrete are simulated by the Anm material model. Afterwards the 3D lattice fracture model is used to evaluate their mechanical performance by simulating a uniaxial tensile test. The simulated output properties at a lower scale are passed to the next higher scale to serve as input local properties. A three-level multiscale lattice fracture analysis is demonstrated, including cement paste at the micrometer scale, mortar at the millimeter scale, and concrete at centimeter scale. PMID:28772948
Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin
2015-12-01
The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is not only a requirement for mass but also for calcification CAD systems which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelet and gray level co-occurrence matrix. The second problem addressed in this paper is the parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from database of Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO based model selection for optimizing both classifier hyper-parameters and parameters, respectively. Furthermore, the obtained results indicate the promising performance of the proposed textural features and more specifically, those based on co-occurrence matrix of wavelet image representation technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
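A minimal sketch of the general idea of PSO-driven model selection for an SVM is given below, assuming an RBF kernel and a cross-validated accuracy objective; the dataset, swarm constants and search ranges are illustrative, and this is not the exact PSO variant or feature-selection step used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy data standing in for the textural feature vectors of suspicious regions.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(params):
    """Cross-validated accuracy of an RBF-SVM given (log2 C, log2 gamma)."""
    C, gamma = 2.0 ** params[0], 2.0 ** params[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, n_iters, dim = 10, 15, 2
lo, hi = np.array([-5.0, -10.0]), np.array([10.0, 3.0])     # log2 search box
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                                    # common PSO constants
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log2 C, log2 gamma):", gbest, "CV accuracy:", pbest_val.max())
```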
On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving
NASA Astrophysics Data System (ADS)
Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.
2018-01-01
The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically based on a high-precision method developed for solving fourth-order differential equations. It is established that there is an eigenvalue in the spectrum of eigenvalues that corresponds to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.
Big Data Analytics with Datalog Queries on Spark.
Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo
2016-01-01
There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
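The kind of recursive Datalog query such a system targets, e.g. transitive closure over an edge relation, can be emulated in plain Spark by iterating joins to a fixpoint; the PySpark sketch below is a hand-rolled illustration of that semantics, not the BigDatalog compiler or its optimizations.

```python
from pyspark.sql import SparkSession, functions as F

# Fixpoint evaluation of the Datalog rules
#   tc(X, Y) <- edge(X, Y).
#   tc(X, Y) <- tc(X, Z), edge(Z, Y).
spark = SparkSession.builder.appName("tc-sketch").getOrCreate()
edges = spark.createDataFrame([(1, 2), (2, 3), (3, 4)], ["src", "dst"])

tc = edges
while True:
    # One join per fixpoint iteration: extend known paths by one edge.
    new_paths = (tc.alias("t")
                   .join(edges.alias("e"), F.col("t.dst") == F.col("e.src"))
                   .select(F.col("t.src").alias("src"), F.col("e.dst").alias("dst")))
    updated = tc.union(new_paths).distinct()
    if updated.count() == tc.count():          # no new facts derived: fixpoint reached
        break
    tc = updated

tc.orderBy("src", "dst").show()
spark.stop()
```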
Allen, Stephanie L.; Duku, Eric; Vaillancourt, Tracy; Szatmari, Peter; Bryson, Susan; Fombonne, Eric; Volden, Joanne; Waddell, Charlotte; Zwaigenbaum, Lonnie; Roberts, Wendy; Mirenda, Pat; Bennett, Teresa; Elsabbagh, Mayada; Georgiades, Stelios
2015-01-01
Objective The factor structure and validity of the Behavioral Pediatrics Feeding Assessment Scale (BPFAS; Crist & Napier-Phillips, 2001) were examined in preschoolers with autism spectrum disorder (ASD). Methods Confirmatory factor analysis was used to examine the original BPFAS five-factor model, the fit of each latent variable, and a rival one-factor model. None of the models was adequate, thus a categorical exploratory factor analysis (CEFA) was conducted. Correlations were used to examine relations between the BPFAS and concurrent variables of interest. Results The CEFA identified an acceptable three-factor model. Correlational analyses indicated that feeding problems were positively related to parent-reported autism symptoms, behavior problems, sleep problems, and parenting stress, but largely unrelated to performance-based indices of autism symptom severity, language, and cognitive abilities, as well as child age. Conclusion These results provide evidence supporting the use of the identified BPFAS three-factor model for samples of young children with ASD. PMID:25725217
Natural Scherk-Schwarz theories of the weak scale
García, Isabel Garcia; Howe, Kiel; March-Russell, John
2015-12-01
Natural supersymmetric theories of the weak scale are under growing pressure given present LHC constraints, raising the question of whether untuned supersymmetric (SUSY) solutions to the hierarchy problem are possible. In this paper, we explore a class of 5-dimensional natural SUSY theories in which SUSY is broken by the Scherk-Schwarz mechanism. We pedagogically explain how Scherk-Schwarz elegantly solves the traditional problems of 4-dimensional SUSY theories (based on the MSSM and its many variants) that usually result in an unsettling level of fine-tuning. The minimal Scherk-Schwarz set up possesses novel phenomenology, which we briefly outline. In this study, we show that achieving the observed physical Higgs mass motivates extra structure that does not significantly affect the level of tuning (always better than ~10%) and we explore three qualitatively different extensions: the addition of extra matter that couples to the Higgs, an extra U(1)' gauge group under which the Higgs is charged and an NMSSM-like solution to the Higgs mass problem.
Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2017-11-01
The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. The code was validated with the classic problems of rigid ellipsoid particle orbit in shear flow, blood cell stretching, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling was given for the transport of flexible filaments (drug carriers) in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.
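The core IBM transfer operations, interpolating fluid velocity to Lagrangian markers and spreading marker forces back to the lattice, can be sketched in one dimension with Peskin's 4-point regularized delta function, as below; the grid, marker positions and penalty force are illustrative and unrelated to the Palabos-LAMMPS implementation.

```python
import numpy as np

def delta_peskin(r):
    """Peskin's 4-point regularized delta function (1D, grid spacing = 1)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    out[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1] ** 2)) / 8
    out[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return out

nx = 32
x_grid = np.arange(nx)                          # Eulerian (lattice) nodes, spacing 1
u_grid = np.sin(2 * np.pi * x_grid / nx)        # some fluid velocity field
markers = np.array([5.3, 10.7, 20.1])           # Lagrangian marker positions (solid)

# Interpolation: fluid velocity at each marker, U_k = sum_i u_i * delta(x_i - X_k).
W = delta_peskin(x_grid[None, :] - markers[:, None])    # (n_markers, nx) weights
u_markers = W @ u_grid

# Spreading: distribute (here, a toy penalty-type) marker force back to the grid.
f_markers = -1.0 * u_markers
f_grid = W.T @ f_markers
print(u_markers, f_grid.sum())
```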
Implementation of an effective hybrid GA for large-scale traveling salesman problems.
Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi
2007-02-01
This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
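For reference, the double-bridge move mentioned above is the standard 4-opt perturbation used in iterated LK-style methods; a minimal sketch with randomly chosen cut points is shown below (the crossover operator and parallel GA machinery of the paper are not reproduced).

```python
import random

def double_bridge(tour, rng=random):
    """Classic double-bridge 4-opt move: cut the tour into segments A|B|C|D and
    reconnect them as A|C|B|D, a perturbation that 2-opt cannot undo in one step."""
    n = len(tour)
    i, j, k = sorted(rng.sample(range(1, n), 3))   # three distinct cut points
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

tour = list(range(10))
print(double_bridge(tour, random.Random(42)))
```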
Lenarda, P; Paggi, M
A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature and crack opening dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through Silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.
Determining erosion relevant soil characteristics with a small-scale rainfall simulator
NASA Astrophysics Data System (ADS)
Schindewolf, M.; Schmidt, J.
2009-04-01
The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparatively low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid-1990s on the basis of large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. A new campaign of experiments is therefore essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating Veejet 80/100 rainfall nozzles (Spraying Systems Co., Wheaton, IL), used to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives for the small-scale rainfall simulator are efficient application and provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is the restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the impact of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water to the plot from upstream. It is thus possible to overcome the plot length limitation of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator was extensively applied to plots of different soil types, crop types and management systems. The comparison with existing data sets obtained by large-scale rainfall simulations shows that results can be adequately reproduced by the applied combination of a small-scale rainfall simulator and sediment-loaded water influx.
A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
Performance of Grey Wolf Optimizer on large scale problems
NASA Astrophysics Data System (ADS)
Gupta, Shubham; Deep, Kusum
2017-01-01
Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, including real-life problems where conventional techniques cannot be applied. The Grey Wolf Optimizer is one such technique, and it has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
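A compact sketch of the standard Grey Wolf Optimizer update equations, run here on the Sphere function, is given below; the population size, iteration count and bounds are illustrative rather than the settings used in the paper.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def gwo(obj, dim=50, n_wolves=30, n_iters=500, lb=-100.0, ub=100.0, seed=0):
    """Standard Grey Wolf Optimizer: positions are pulled toward the three best
    wolves (alpha, beta, delta) with a coefficient a decreasing from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(n_iters):
        order = np.argsort(obj(X))
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / n_iters
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2.0 * a * r1 - a                   # exploration/exploitation coefficient
            C = 2.0 * r2
            D = np.abs(C * leader - X)             # distance to this leader
            X_new += leader - A * D
        X = np.clip(X_new / 3.0, lb, ub)           # average of the three guided moves
    best = np.argmin(obj(X))
    return X[best], obj(X)[best]

best_x, best_f = gwo(sphere)
print("best Sphere value found:", best_f)
```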
Individual-level factors associated with mental health in Rwandan youth affected by HIV/AIDS.
Scorza, Pamela; Duarte, Cristiane S; Stevenson, Anne; Mushashi, Christine; Kanyanganzi, Fredrick; Munyana, Morris; Betancourt, Theresa S
2017-07-01
Prevention of mental disorders worldwide requires a greater understanding of protective processes associated with lower levels of mental health problems in children who face pervasive life stressors. This study aimed to identify culturally appropriate indicators of individual-level protective factors in Rwandan adolescents where risk factors, namely poverty and a history of trauma, have dramatically shaped youth mental health. The sample included 367 youth aged 10-17 in rural Rwanda. An earlier qualitative study of the same population identified the constructs "kwihangana" (patience/perseverance) and "kwigirira ikizere" (self-esteem) as capturing local perceptions of individual-level characteristics that helped reduce risks of mental health problems in youth. Nine items from the locally derived constructs were combined with 25 items from an existing scale that aligned well with local constructs-the Connor-Davidson Resilience Scale (CD-RISC). We assessed the factor structure of the CD-RISC expanded scale using exploratory factor analysis and determined the correlation of the expanded CD-RISC with depression and functional impairment. The CD-RISC expanded scale displayed high internal consistency (α = 0.93). Six factors emerged, which we labeled: perseverance, adaptability, strength/sociability, active engagement, self-assuredness, and sense of self-worth. Protective factor scale scores were significantly and inversely correlated with depression and functional impairment (r = -0.49 and r = - 0.38, respectively). An adapted scale displayed solid psychometric properties for measuring protective factors in Rwandan youth. Identifying culturally appropriate protective factors is a key component of research associated with the prevention of mental health problems and critical to the development of cross-cultural strength-based interventions for children and families.
Motion-based prediction is sufficient to solve the aperture problem
Perrinet, Laurent U; Masson, Guillaume S
2012-01-01
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away while coherent information diffuses progressively to the global scale. Most previous models included ad-hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights in the role of prediction underlying a large class of sensory computations. PMID:22734489
Assimilation approach to measuring organizational change from pre- to post-intervention
Moore, Scott C; Osatuke, Katerine; Howe, Steven R
2014-01-01
AIM: To present a conceptual and measurement strategy that allows to objectively, sensitively evaluate intervention progress based on data of participants’ perceptions of presenting problems. METHODS: We used as an example an organization development intervention at a United States Veterans Affairs medical center. Within a year, the intervention addressed the hospital’s initially serious problems and multiple stakeholders (employees, management, union representatives) reported satisfaction with progress made. Traditional quantitative outcome measures, however, failed to capture the strong positive impact consistently reported by several types of stakeholders in qualitative interviews. To address the paradox, full interview data describing the medical center pre- and post- intervention were examined applying a validated theoretical framework from another discipline: Psychotherapy research. The Assimilation model is a clinical-developmental theory that describes empirically grounded change levels in problematic experiences, e.g., problems reported by participants. The model, measure Assimilation of Problematic Experiences Scale (APES), and rating procedure have been previously applied across various populations and problem types, mainly in clinical but also in non-clinical settings. We applied the APES to the transcribed qualitative data of intervention participants’ interviews, using the method closely replicating prior assimilation research (the process whereby trained clinicians familiar with the Assimilation model work with full, transcribed interview data to assign the APES ratings). The APES ratings summarized levels of progress which was defined as participants’ assimilation level of problematic experiences, and compared from pre- to post-intervention. RESULTS: The results were consistent with participants’ own reported perceptions of the intervention impact. Increase in APES levels from pre- to post-intervention suggested improvement, missed in the previous quantitative measures (the Maslach Burnout Inventory and the Work Environment Scale). The progress specifically consisted of participants’ moving from the APES stages where the problematic experience was avoided, to the APES stages where awareness and attention to the problems were steadily sustained, although the problems were not yet fully processed or resolved. These results explain why the conventional outcome measures failed to reflect the intervention progress; they narrowly defined progress as resolution of the presenting problems and alleviation of symptomatic distress. In the Assimilation model, this definition only applies to a sub-segment of the change continuum, specifically the latest APES stages. The model defines progress as change in psychological processes used in response to the problem, i.e., a growing ability to deal with problematic issues non-defensively, manifested differently depending on APES stages. At early stages, progress is an increased ability to face the problem rather than turning away. At later APES stages, progress involves naming, understanding and successfully addressing the problem. The assimilation approach provides a broader developmental context compared to exclusively symptom, problem-, or behavior- focused approaches that typically inform outcome measurement in interpersonally based interventions. 
In our data, this made the difference between reflecting (APES) vs missing (Maslach Burnout Inventory, Work Environment Scale) the pre-post change that was strongly perceived by the intervention recipients. CONCLUSION: The results illustrated a working solution to the challenge of objectively evaluating progress in subjectively experienced problems. This approach informs measuring change in psychologically based interventions. PMID:24660141
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Modal resonant dynamics of cables with a flexible support: A modulated diffraction problem
NASA Astrophysics Data System (ADS)
Guo, Tieding; Kang, Houjun; Wang, Lianhua; Liu, Qijian; Zhao, Yueyu
2018-06-01
Modal resonant dynamics of cables with a flexible support is defined as a modulated (wave) diffraction problem, and investigated by asymptotic expansions of the cable-support coupled system. The support-cable mass ratio, which is usually very large, turns out to be the key parameter for characterizing cable-support dynamic interactions. By treating the mass ratio's inverse as a small perturbation parameter and scaling the cable tension properly, both cable's modal resonant dynamics and the flexible support dynamics are asymptotically reduced by using multiple scale expansions, leading finally to a reduced cable-support coupled model (i.e., on a slow time scale). After numerical validations of the reduced coupled model, cable-support coupled responses and the flexible support induced coupling effects on the cable, are both fully investigated, based upon the reduced model. More explicitly, the dynamic effects on the cable's nonlinear frequency and force responses, caused by the support-cable mass ratio, the resonant detuning parameter and the support damping, are carefully evaluated.
Scale-Up: Improving Large Enrollment Physics Courses
NASA Astrophysics Data System (ADS)
Beichner, Robert
1999-11-01
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
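The generic building blocks referenced above, SIFT keypoints plus a robust (RANSAC-type) fit of a local linear transformation, can be sketched with OpenCV as below; the image paths are placeholders and this is not the paper's M-estimator sample consensus or block growing implementation.

```python
import cv2
import numpy as np

# Load a gallery palmprint and a deformed probe image (paths are placeholders).
img1 = cv2.imread("palm_gallery.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("palm_probe.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors to obtain putative correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly fit a linear (affine) transformation; the inlier set approximates one
# piecewise-linear "stable region" candidate in the sense described above.
M, inlier_mask = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
print("estimated 2x3 affine:\n", M)
print("inliers:", int(inlier_mask.sum()), "of", len(matches))
```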
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to x minimizing ||Ax - b||, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A obtained by truncating the rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
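For orientation, the basic randomized SVD with oversampling and rank-k truncation that these methods build on can be sketched in a few lines of numpy, as below; the MTRSVD-specific parts (the regularization matrix L and the LSQR inner solves) are omitted, and the test matrix is a toy stand-in.

```python
import numpy as np

def truncated_randomized_svd(A, k, q=10, seed=0):
    """Rank-k TRSVD: compute a rank-(k + q) randomized SVD of A and truncate."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))     # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis for the range of A*Omega
    B = Q.T @ A                                 # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]           # truncate to rank k

# Tiny ill-conditioned test matrix (placeholder for a discretized ill-posed operator).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 150)) @ np.diag(0.5 ** np.arange(150)) \
    @ rng.standard_normal((150, 150))
U, s, Vt = truncated_randomized_svd(A, k=20)
print("relative error of rank-20 approximation:",
      np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```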
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops · day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microseconds).
Agnisarman, Sruthy; Narasimha, Shraddhaa; Chalil Madathil, Kapil; Welch, Brandon; Brinda, Fnu; Ashok, Aparna; McElligott, James
2017-04-24
Telemedicine is the use of technology to provide and support health care when distance separates the clinical service and the patient. Home-based telemedicine systems use such technology for medical support and care, connecting the patient, from the comfort of their home, with the clinician. For such systems to be used extensively, it is necessary to understand not only the issues faced by patients in using them but also those faced by clinicians. The aim of this study was to conduct a heuristic evaluation of 4 telemedicine software platforms-Doxy.me, Polycom, Vidyo, and VSee-to assess possible problems and limitations that could affect the usability of the system from the clinician's perspective. Five experts individually evaluated all four systems using Nielsen's list of heuristics, classifying the issues based on a severity rating scale. A total of 46 unique problems were identified by the experts. The heuristics most frequently violated were visibility of system status and error prevention, each accounting for 24% (11/46) of the issues. Esthetic and minimalist design was second, contributing 13% (6/46) of the issues. Heuristic evaluation coupled with a severity rating scale was found to be an effective method for identifying problems with the systems. Prioritization of these problems based on the rating provides a good starting point for resolving the issues affecting these platforms. There is a need for better transparency and a more streamlined approach for how physicians use telemedicine systems. Visibility of the system status and speaking the users' language are keys for achieving this. ©Sruthy Agnisarman, Shraddhaa Narasimha, Kapil Chalil Madathil, Brandon Welch, FNU Brinda, Aparna Ashok, James McElligott. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 24.04.2017.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means for providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low SNR microseismic signals often cannot effectively be detected by routine methods. To solve this problem, this paper presents permutation entropy and a support vector machine to detect low SNR microseismic events. First, an extraction method of signal features based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, the detection model of low SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals, constructing a feature vector set of signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed by using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and fast real-time algorithms, can meet the requirements of online, real-time extractions of microseismic events.
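A minimal sketch of the multi-scale permutation entropy features is given below, using a coarse-graining step per scale and a synthetic signal; the embedding dimension, scales and signal are illustrative, and the least squares support vector machine classification stage is omitted.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=4, delay=1):
    """Normalized permutation entropy of a 1-D signal, embedding dimension m."""
    n = len(x) - (m - 1) * delay
    patterns = np.array([np.argsort(x[i:i + m * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def multiscale_pe(x, m=4, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the signal at each scale, then compute PE (feature vector)."""
    feats = []
    for s in scales:
        trimmed = x[: len(x) // s * s]
        coarse = trimmed.reshape(-1, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, m=m))
    return np.array(feats)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2048)                                    # noise-like record
event = noise + 2 * np.sin(2 * np.pi * 0.02 * np.arange(2048))       # weak "event"
print("noise PE features:", multiscale_pe(noise))
print("event PE features:", multiscale_pe(event))
```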
The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale
ERIC Educational Resources Information Center
Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff
2004-01-01
The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instruments assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…
Statistical mechanics of competitive resource allocation using agent-based models
NASA Astrophysics Data System (ADS)
Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.
2015-01-01
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agent with non-linear interaction, they provide a prospective unifying paradigm for many scientific disciplines.
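As a concrete example of the models reviewed, the sketch below simulates a basic Minority Game with random strategy tables and a binary public history; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 301, 3, 2, 2000          # agents (odd), memory, strategies per agent, rounds

# Each strategy maps each of the 2^M possible histories to an action in {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, S, 2 ** M))
scores = np.zeros((N, S))             # virtual points of each strategy
history = rng.integers(0, 2 ** M)     # encoded recent history of minority outcomes
attendance = []

for _ in range(T):
    best = scores.argmax(axis=1)                        # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    minority = -np.sign(A)                              # the minority side wins
    attendance.append(A)
    # Reward every strategy that would have chosen the minority side.
    scores += (strategies[:, :, history] == minority).astype(float)
    history = ((history << 1) | (1 if minority > 0 else 0)) & (2 ** M - 1)

attendance = np.array(attendance)
print("mean attendance:", attendance.mean(), "volatility sigma^2/N:",
      attendance.var() / N)
```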
A Spatial Framework to Map Heat Health Risks at Multiple Scales.
Ho, Hung Chak; Knudby, Anders; Huang, Wei
2015-12-18
In the last few decades extreme heat events have led to substantial excess mortality, most dramatically in Central Europe in 2003, in Russia in 2010, and even in typically cool locations such as Vancouver, Canada, in 2009. Heat-related morbidity and mortality are expected to increase over the coming centuries as the result of climate-driven global increases in the severity and frequency of extreme heat events. Spatial information on heat exposure and population vulnerability may be combined to map the areas of highest risk and focus mitigation efforts there. However, a mismatch in spatial resolution between heat exposure and vulnerability data can cause spatial scale issues such as the Modifiable Areal Unit Problem (MAUP). We used a raster-based model to integrate heat exposure and vulnerability data in a multi-criteria decision analysis, and compared it to the traditional vector-based model. We then used the Getis-Ord G(i) index to generate spatially smoothed heat risk hotspot maps from fine to coarse spatial scales. The raster-based model allowed production of maps at finer spatial resolution, a more detailed description of local-scale heat risk variability, and identification of heat-risk areas not identified with the vector-based approach. Spatial smoothing with the Getis-Ord G(i) index produced heat risk hotspots from local to regional spatial scales. The approach is a framework for reducing spatial scale issues in future heat risk mapping, and for identifying heat risk hotspots at spatial scales ranging from the block level to the municipality level.
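A rough numpy sketch of the Getis-Ord Gi* hotspot statistic on a raster, using a binary 3x3 neighbourhood, is given below; the weighting scheme, edge handling and synthetic risk surface are simplifications and not the exact configuration used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def getis_ord_gi_star(risk, size=3):
    """Getis-Ord Gi* z-scores on a raster with binary square-neighbourhood weights.
    Edge cells are only approximated (the filter pads with nearest values)."""
    n = risk.size
    xbar, s = risk.mean(), risk.std()
    w_sum = size * size                                   # sum of binary weights per cell
    local_sum = uniform_filter(risk, size=size, mode="nearest") * w_sum
    num = local_sum - xbar * w_sum
    den = s * np.sqrt((n * w_sum - w_sum ** 2) / (n - 1))
    return num / den

rng = np.random.default_rng(0)
risk = rng.random((50, 50))
risk[10:15, 30:35] += 2.0                                 # an artificial high-risk patch
gi = getis_ord_gi_star(risk)
print("hotspot cells (z > 1.96):", int((gi > 1.96).sum()))
```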
Designing Cognitive Complexity in Mathematical Problem-Solving Items
ERIC Educational Resources Information Center
Daniel, Robert C.; Embretson, Susan E.
2010-01-01
Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…
A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing
USDA-ARS?s Scientific Manuscript database
A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...
Lead (Pb) in tap water (released from Pb-based plumbing materials) poses a serious public health concern. Water utilities experiencing Pb problems often use orthophosphate treatment, with the theory of forming insoluble Pb(II)-orthophosphate compounds on the pipe wall to inhibit ...
The Natural Resources Conservation Service land resource hierarchy and ecological sites
USDA-ARS?s Scientific Manuscript database
Resource areas of the NRCS have long been important to soil geography. At both regional and landscape scales, resource areas are used to stratify programs and practices based on geographical areas where resource concerns, problems, or treatment needs are similar. However, the inability to quantifiab...
Shen, Minxue; Hu, Ming; Sun, Zhenqiu
2017-01-01
Objectives To develop and validate brief scales to measure common emotional and behavioural problems among adolescents in the examination-oriented education system and collectivistic culture of China. Setting Middle schools in Hunan province. Participants 5442 middle school students aged 11–19 years were sampled. 4727 valid questionnaires were collected and used for validation of the scales. The final sample included 2408 boys and 2319 girls. Primary and secondary outcome measures The tools were assessed by the item response theory, classical test theory (reliability and construct validity) and differential item functioning. Results Four scales to measure anxiety, depression, study problem and sociality problem were established. Exploratory factor analysis showed that each scale had two solutions. Confirmatory factor analysis showed acceptable to good model fit for each scale. Internal consistency and test–retest reliability of all scales were above 0.7. Item response theory showed that all items had acceptable discrimination parameters and most items had appropriate difficulty parameters. 10 items demonstrated differential item functioning with respect to gender. Conclusions Four brief scales were developed and validated among adolescents in middle schools of China. The scales have good psychometric properties with minor differential item functioning. They can be used in middle school settings, and will help school officials to assess the students’ emotional/behavioural problems. PMID:28062469
An allometric scaling relation based on logistic growth of cities
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2014-08-01
The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the abovementioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.
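A short sketch of how a power-law relation between replacement quotients follows from logistic growth is given below; the symbols and rate constants are chosen here for illustration and the notation may differ from the paper's.

```latex
% Logistic growth of population P(t) and urban area A(t):
\[
P(t)=\frac{P_{\max}}{1+\bigl(\tfrac{P_{\max}}{P_0}-1\bigr)e^{-k_p t}},\qquad
A(t)=\frac{A_{\max}}{1+\bigl(\tfrac{A_{\max}}{A_0}-1\bigr)e^{-k_a t}} .
\]
% Replacement quotients:
\[
U_P=\frac{P}{P_{\max}-P},\qquad U_A=\frac{A}{A_{\max}-A}.
\]
% Under logistic growth each quotient grows exponentially,
% U_P(t)=U_P(0)e^{k_p t} and U_A(t)=U_A(0)e^{k_a t}, so eliminating t yields
% the logistic-allometric (power-law) relation between the quotients:
\[
U_A = c\,U_P^{\,b},\qquad b=\frac{k_a}{k_p},
\]
% which reduces to the familiar exponential allometry A \propto P^b when
% P << P_max and A << A_max (so that U_P ~ P/P_max and U_A ~ A/A_max).
```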
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
Steiner, Naomi J; Sheldrick, Radley Christopher; Gotthelf, David; Perrin, Ellen C
2011-07-01
Objective. This study examined the efficacy of 2 computer-based training systems to teach children with attention deficit/hyperactivity disorder (ADHD) to attend more effectively. Design/methods. A total of 41 children with ADHD from 2 middle schools were randomly assigned to receive 2 sessions a week at school of either neurofeedback (NF) or attention training through a standard computer format (SCF), either immediately or after a 6-month wait (waitlist control group). Parents, children, and teachers completed questionnaires pre- and postintervention. Results. Primary parents in the NF condition reported significant (P < .05) change on Conners's Rating Scales-Revised (CRS-R) and Behavior Assessment Scales for Children (BASC) subscales; and in the SCF condition, they reported significant (P < .05) change on the CRS-R Inattention scale and ADHD index, the BASC Attention Problems Scale, and on the Behavioral Rating Inventory of Executive Functioning (BRIEF). Conclusion. This randomized control trial provides preliminary evidence of the effectiveness of computer-based interventions for ADHD and supports the feasibility of offering them in a school setting.
Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations
NASA Astrophysics Data System (ADS)
Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean
2017-10-01
Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
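A minimal scipy sketch of the segregation idea, a block upper-triangular preconditioner with an approximated Schur complement applied inside GMRES, is given below; the blocks are small dense stand-ins rather than a plasma discretization, and direct factorizations replace the multilevel subsolves.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n1, n2 = 80, 40
A = np.eye(n1) * 4 + 0.1 * rng.standard_normal((n1, n1))   # "fluid" block (stand-in)
B = 0.1 * rng.standard_normal((n1, n2))                     # coupling blocks
C = 0.1 * rng.standard_normal((n2, n1))
D = np.eye(n2) * 2 + 0.1 * rng.standard_normal((n2, n2))    # "field" block (stand-in)
K = np.block([[A, B], [C, D]])
b = rng.standard_normal(n1 + n2)

# Approximate Schur complement S ~ D - C diag(A)^{-1} B (cheap diagonal approximation).
S = D - C @ (B / np.diag(A)[:, None])
A_lu, S_lu = lu_factor(A), lu_factor(S)

def apply_prec(r):
    """Block upper-triangular preconditioner solve: [A B; 0 S]^{-1} r."""
    r1, r2 = r[:n1], r[n1:]
    y2 = lu_solve(S_lu, r2)
    y1 = lu_solve(A_lu, r1 - B @ y2)
    return np.concatenate([y1, y2])

M = LinearOperator(K.shape, matvec=apply_prec)
x, info = gmres(K, b, M=M)
print("GMRES flag:", info, "residual:", np.linalg.norm(K @ x - b))
```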
Estimation of Time Scales in Unsteady Flows in a Turbomachinery Rig
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.
2004-01-01
Time scales in turbulent and transitional flow provide a link between experimental data and modeling, both in terms of physical content and for quantitative assessment. The problem of interest here is the definition of time scales in an unsteady flow. Using representative samples of data from a GEAE low-pressure turbine experiment in a low-speed research turbine facility with wake-induced transition, we document several methods to extract dominant frequencies and compare the results. We show that conventional methods of time scale evaluation (based on autocorrelation functions and on Fourier spectra) and wavelet-based methods provide similar information when applied to stationary signals. We also show the greater flexibility of the wavelet-based methods when dealing with intermittent or strongly modulated data, as are encountered in transitioning boundary layers and in flows with unsteady forcing associated with wake passing. We define phase-averaged dominant frequencies that characterize the turbulence associated with freestream conditions and with the passing wakes downstream of a rotor. The relevance of these results for modeling is discussed in the paper.
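As a small, self-contained illustration of the conventional (Fourier and autocorrelation) side of this kind of time-scale estimation, the sketch below recovers a dominant frequency from a synthetic modulated signal; the sampling rate and the signal are invented stand-ins, not the turbine-rig data.

```python
import numpy as np

fs = 2000.0                          # sampling rate [Hz], assumed for the example
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic "wake-passing" signal: 80 Hz carrier, slow amplitude modulation, noise.
x = np.sin(2 * np.pi * 80 * t) * (1 + 0.3 * np.sin(2 * np.pi * 5 * t))
x += 0.5 * np.random.default_rng(0).normal(size=t.size)

# Fourier estimate of the dominant frequency.
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
f_fft = freqs[np.argmax(spec)]

# Autocorrelation estimate: strongest peak after skipping the zero-lag region.
xc = x - x.mean()
ac = np.correlate(xc, xc, mode="full")[x.size - 1:]
ac /= ac[0]
lag = np.argmax(ac[10:]) + 10
f_ac = fs / lag

print(f"FFT dominant frequency: {f_fft:.1f} Hz, autocorrelation estimate: {f_ac:.1f} Hz")
```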
2017-01-01
The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate its correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
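For readers who want to reproduce the kind of NMI and modularity evaluation described above, here is a small sketch using networkx and scikit-learn on Zachary's karate club network; greedy modularity maximization is used purely as a placeholder for the HAM algorithm, which is not implemented here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity
from sklearn.metrics import normalized_mutual_info_score

# Zachary's karate club: a well-known social network with two ground-truth factions.
G = nx.karate_club_graph()
truth = [G.nodes[v]["club"] for v in G]

# Any community detection method could be plugged in here; greedy modularity
# maximization is only a stand-in for the HAM algorithm of the abstract.
communities = greedy_modularity_communities(G)
labels = {v: i for i, com in enumerate(communities) for v in com}
pred = [labels[v] for v in G]

print("modularity:", round(modularity(G, communities), 3))
print("NMI vs. ground truth:", round(normalized_mutual_info_score(truth, pred), 3))
```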
Corona, Giovanni; Mannucci, Edoardo; Petrone, Luisa; Ricca, Valdo; Balercia, Giancarlo; Giommi, Roberta; Forti, Gianni; Maggi, Mario
2006-01-01
Anxiety has a relevant impact on everyday life, including sexual life, and therefore is considered the final common pathway by which social, psychological, and biological stressors negatively affect sexual functioning. The aim of this study is to define the psycho-biological correlates of free-floating anxiety in a large sample of patients complaining of erectile dysfunction (ED)-based sexual problems. We studied a consecutive series of 882 ED patients using SIEDY, a 13-item structured interview composed of 3 scales that identify and quantify organic, relational, and intrapsychic domains. The MHQ-A score from the Middlesex Hospital Questionnaire (MHQ) was used as a putative marker of free-floating anxiety symptoms (AS). Metabolic and hormonal parameters, the nocturnal penile tumescence (NPT) test, and penile Doppler ultrasound (PDU) examination were also performed. The MHQ-A score was significantly higher in patients complaining of difficulties in maintaining an erection and in those reporting premature ejaculation (6.5 +/- 3.3 vs 5.8 +/- 3.3 and 6.6 +/- 3.3 vs 6.1 +/- 3.3, respectively; both P < .05). Moreover, ASs were significantly correlated with life stressors quantified by the SIEDY scale 2 (relational component) and scale 3 (intrapsychic component) scores, such as dissatisfaction at work or within family or couple relationships. Among the physical, biochemical, and instrumental parameters tested, only the end-diastolic velocity at PDU was significantly (P < .05) related to ASs. In conclusion, in patients with ED-based sexual problems, ASs are correlated with many relational and life stressors. Conversely, organic problems are not necessarily associated with the MHQ-A score.
Lithospheric Strength and Stress State: Persistent Challenges and New Directions in Geodynamics
NASA Astrophysics Data System (ADS)
Hirth, G.
2017-12-01
The strength of the lithosphere controls a broad array of geodynamic processes ranging from earthquakes, the formation and evolution of plate boundaries and the thermal evolution of the planet. A combination of laboratory, geologic and geophysical observations provides several independent constraints on the rheological properties of the lithosphere. However, several persistent challenges remain in the interpretation of these data. Problems related to extrapolation in both scale and time (rate) need to be addressed to apply laboratory data. Nonetheless, good agreement between extrapolation of flow laws and the interpretation of microstructures in viscously deformed lithospheric mantle rocks demonstrates a strong foundation to build on to explore the role of scale. Furthermore, agreement between the depth distribution of earthquakes and predictions based on extrapolation of high temperature friction relationships provides a basis to understand links between brittle deformation and stress state. In contrast, problems remain for rationalizing larger scale geodynamic processes with these same rheological constraints. For example, at face value the lab derived values for the activation energy for creep are too large to explain convective instabilities at the base of the lithosphere, but too low to explain the persistence of dangling slabs in the upper mantle. In this presentation, I will outline these problems (and successes) and provide thoughts on where new progress can be made to resolve remaining inconsistencies, including discussion of the role of the distribution of volatiles and alteration on the strength of the lithosphere, new data on the influence of pressure on friction and fracture strength, and links between the location of earthquakes, thermal structure, and stress state.
Leatemia, Lukas D; Susilo, Astrid P; van Berkel, Henk
2016-12-03
To identify students' readiness to perform self-directed learning and the underlying factors influencing it in a hybrid problem-based learning curriculum. A combination of quantitative and qualitative studies was conducted in five medical schools in Indonesia. In the quantitative study, the Self-Directed Learning Readiness Scale was distributed to all students in all batches who had experience with the hybrid problem-based curriculum. They were categorized into low and high levels based on the questionnaire score. In the qualitative study, three focus group discussions (low-, high-, and mixed-level) were conducted with six to twelve students chosen randomly from each group to find the factors influencing their self-directed learning readiness. Two researchers analysed the qualitative data as a measure of triangulation. The quantitative study showed that only half of the students had a high level of self-directed learning readiness, and a similar trend occurred in each batch. The proportion of students with a high level of self-directed learning readiness was lower among senior students than among more junior students. The qualitative study showed that problem-based learning processes, assessments, the learning environment, students' lifestyles, students' perceptions of the topics, and mood were factors influencing their self-directed learning. A hybrid problem-based curriculum may not fully support students' self-directed learning. The curriculum system, teachers' experience, students' backgrounds, and cultural factors might contribute to students' difficulties in conducting self-directed learning.
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss EM
2017-01-01
Background: Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. Aims: The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Design and setting: Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Results: Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4–68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3–17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36–1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01–1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Conclusion: Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation. PMID:28812945
Attention Problems and Stability of WISC-IV Scores Among Clinically Referred Children.
Green Bartoi, Marla; Issner, Jaclyn Beth; Hetterscheidt, Lesley; January, Alicia M; Kuentzel, Jeffrey Garth; Barnett, Douglas
2015-01-01
We examined the stability of Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) scores among 51 diverse, clinically referred 8- to 16-year-olds (M(age) = 11.24 years, SD = 2.36). Children were referred to and tested at an urban, university-based training clinic; 70% of eligible children completed follow-up testing 12 months to 40 months later (M = 22.05, SD = 5.94). Stability for index scores ranged from .58 (Processing Speed) to .81 (Verbal Comprehension), with a stability of .86 for Full-Scale IQ. Subtest score stability ranged from .35 (Letter-Number Sequencing) to .81 (Vocabulary). Indexes believed to be more susceptible to concentration (Processing Speed and Working Memory) had lower stability. We also examined attention problems as a potential moderating factor of WISC-IV index and subtest score stability. Children with attention problems had significantly lower stability for Digit Span and Matrix Reasoning subtests compared with children without attention problems. These results provide support for the temporal stability of the WISC-IV and also provide some support for the idea that attention problems contribute to children producing less stable IQ estimates when completing the WISC-IV. We hope our report encourages further examination of this hypothesis and its implications.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
A KPI-based process monitoring and fault detection framework for large-scale processes.
Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang
2017-05-01
Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
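A minimal sketch of the static, least-squares flavor of KPI-based monitoring described above (not the paper's exact statistics): regress the KPI on the process variables from fault-free data, then flag samples whose squared KPI prediction error exceeds an empirical threshold. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "normal operation" data: process variables X and a KPI y = X w + noise.
n, p = 500, 6
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Static KPI model by ordinary least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Monitoring statistic: squared KPI prediction error, with a simple empirical
# 99% threshold taken from the fault-free training residuals.
res_train = y - X @ w_hat
threshold = np.quantile(res_train ** 2, 0.99)

# A new sample with a simulated KPI-relevant fault: an unmeasured disturbance
# shifts the KPI by 2.0 while the measured process variables look normal.
x_new = rng.normal(size=p)
y_new = x_new @ w_true + 2.0
spe = (y_new - x_new @ w_hat) ** 2
print("fault detected:", bool(spe > threshold))
```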
Cascade phenomenon against subsequent failures in complex networks
NASA Astrophysics Data System (ADS)
Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng
2018-06-01
Cascade phenomena may lead to catastrophic disasters that severely imperil network safety or security in various complex systems such as communication networks, power grids, and social networks. In some flow-based networks, the load of a failed node can be redistributed locally to its neighboring nodes to avoid, as far as possible, traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked in sequence can cause the network to collapse. Effectively finding this set of key nodes is therefore a critical problem. To the best of our knowledge, this work is the first to study this problem comprehensively. We first introduce extra capacity for every node to absorb flow fluctuations from neighbors, and two extra-capacity distributions, degree-based and average (uniform), are employed. Four heuristic key-node discovery methods, High-Degree-First (HDF), Low-Degree-First (LDF), Random, and a Greedy Algorithm (GA), are presented. Extensive simulations are carried out on both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a novel perspective, and the methods and results are useful for network robustness evaluation and protection.
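The sketch below illustrates, under simplified assumptions, the two ingredients discussed above: a local flow-redistribution cascade (degree-proportional initial load, a uniform extra-capacity margin alpha) and a greedy search for key nodes whose subsequent failures maximize the cascade. It is a toy model on a Barabasi-Albert graph, not the paper's exact formulation.

```python
import networkx as nx

def cascade_size(G, initial_failures, alpha=0.2):
    """Local flow-redistribution cascade: a failed node's load is split equally
    among its surviving neighbors; a node fails once its load exceeds capacity.
    Initial load ~ degree, capacity = (1 + alpha) * initial load (simplified)."""
    load = {v: float(G.degree(v)) for v in G}
    cap = {v: (1.0 + alpha) * load[v] for v in G}
    failed, queue = set(), list(initial_failures)
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        alive = [u for u in G.neighbors(v) if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)
            if load[u] > cap[u]:
                queue.append(u)
    return len(failed)

def greedy_key_nodes(G, k):
    """Greedily pick k nodes whose sequential attack yields the largest cascade."""
    chosen = []
    for _ in range(k):
        best = max((v for v in G if v not in chosen),
                   key=lambda v: cascade_size(G, chosen + [v]))
        chosen.append(best)
    return chosen

G = nx.barabasi_albert_graph(200, 3, seed=0)
keys = greedy_key_nodes(G, 3)
print("greedy key nodes:", keys, "-> cascade size:", cascade_size(G, keys))
```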
Dimensionless Analysis Applied to Bacterial Chemotaxis towards NAPL Contaminants
NASA Astrophysics Data System (ADS)
Wang, X.; GAO, B.; Zhong, W.; Kihaule, K. S.; Ford, R.
2017-12-01
The use of chemotactic bacteria in bioremediation may improve the efficiency and decrease the cost of restoration, giving it the potential to address environmental problems caused by oil spills. However, most previous studies have focused on the laboratory scale, and a formalism is lacking that can use these laboratory-scale results as input to evaluate the relative importance of chemotaxis at the field scale. In this study, a dimensionless equation is formulated to solve this problem. First, the main influential factors were extracted based on previous research in environmental bioremediation, and five sets of dimensionless numbers were then obtained according to the Buckingham π theorem. After collecting basic parameter values and performing supplementary calculations to determine the concentration gradient of the chemoattractant, all dimensionless numbers were calculated and categorized into two types: those sensitive to chemotaxis and those sensitive to groundwater velocity. The bacteria ratio (BR), defined as the ratio of the maximum bacteria concentration to its original value, was correlated with a combination of dimensionless numbers to yield BR = c·P1^(-0.085)·P2^(0.329)·P3^(0.1)·P4^(-0.098). For a bacteria ratio greater than one, a bioremediation strategy based on chemotaxis is expected to be effective, and chemotactic bacteria are expected to accumulate efficiently around NAPL contaminant sources.
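As a tiny worked example of the fitted correlation, the snippet below evaluates BR for one set of dimensionless numbers; the constant c and the values of P1 through P4 are placeholders, since the study's actual values are not reproduced here.

```python
# Evaluate the fitted correlation BR = c * P1^-0.085 * P2^0.329 * P3^0.1 * P4^-0.098.
# The constant c and the dimensionless numbers P1..P4 below are placeholder values
# for illustration; the actual values come from the study's data.
c = 1.0
P1, P2, P3, P4 = 0.5, 2.0, 10.0, 0.8
BR = c * P1**-0.085 * P2**0.329 * P3**0.1 * P4**-0.098
print(f"bacteria ratio BR = {BR:.3f}  ->  chemotaxis expected to help: {BR > 1}")
```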
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With its ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has hindered applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement techniques based on Retinex theory, a process that automatically restores visual realism to images, are studied. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm, and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements and visual inspection were compared, and the results showed that the Retinex-based algorithms are able to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
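A minimal single-/multi-scale Retinex sketch in the spirit of the algorithms compared above, using NumPy and SciPy on a synthetic low-contrast image; the Gaussian scales and the test image are assumptions for illustration, and the color-restoration step of MSRCR is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30.0, eps=1e-6):
    """Single-scale Retinex: log of the image minus log of its Gaussian-blurred
    illumination estimate. The result is rescaled to [0, 1]."""
    img = img.astype(float) + eps
    r = np.log(img) - np.log(gaussian_filter(img, sigma) + eps)
    return (r - r.min()) / (r.max() - r.min() + eps)

def multi_scale_retinex(img, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: equal-weight average of single-scale outputs."""
    return sum(single_scale_retinex(img, s) for s in sigmas) / len(sigmas)

# Synthetic low-contrast "infrared" image: a warm blob on a dim background.
y, x = np.mgrid[0:256, 0:256]
ir = 0.2 + 0.05 * np.exp(-((x - 128) ** 2 + (y - 100) ** 2) / (2 * 40 ** 2))
enhanced = multi_scale_retinex(ir)
print("contrast (std) before/after:", round(ir.std(), 4), round(enhanced.std(), 4))
```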
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died from thymic leukemia. This failure is a result of the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
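A simplified sketch of the idea of minimizing an average distance between quantile processes under a location-scale model, ignoring the censoring that the article handles: estimate the shift mu and scale sigma that best map one sample's quantiles onto the other's. The samples below are simulated, not the mouse leukemia data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=300)     # sample 1
y = rng.normal(loc=0.7, scale=1.5, size=250)     # sample 2 = location shift + scale change

probs = np.linspace(0.05, 0.95, 50)
qx, qy = np.quantile(x, probs), np.quantile(y, probs)

def avg_distance(params):
    """Average squared distance between sample 2's quantile process and the
    location-scale transformed quantiles of sample 1."""
    mu, sigma = params
    return np.mean((qy - (mu + sigma * qx)) ** 2)

res = minimize(avg_distance, x0=[0.0, 1.0], method="Nelder-Mead")
print("estimated location shift and scale change:", np.round(res.x, 2))
```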
Investigating gender differences in alcohol problems: a latent trait modeling approach.
Nichol, Penny E; Krueger, Robert F; Iacono, William G
2007-05-01
Inconsistent results have been found in research investigating gender differences in alcohol problems. Previous studies of gender differences used a wide range of methodological techniques, as well as limited assortments of alcohol problems. Parents (1,348 men and 1,402 women) of twins enrolled in the Minnesota Twin Family Study answered questions about a wide range of alcohol problems. A latent trait modeling technique was used to evaluate gender differences in the probability of endorsement at the problem level and for the overall 105-problem scale. Of the 34 problems that showed significant gender differences, 29 were more likely to be endorsed by men than women with equivalent overall alcohol problem levels. These male-oriented symptoms included measures of heavy drinking, duration of drinking, tolerance, and acting out behaviors. Nineteen symptoms were denoted for removal to create a scale that favored neither gender in assessment. Significant gender differences were found in approximately one-third of the symptoms assessed and in the overall scale. Further examination of the nature of gender differences in alcohol problem symptoms should be undertaken to investigate whether a gender-neutral scale should be created or if men and women should be assessed with separate criteria for alcohol dependence and abuse.
Student reactions to problem-based learning in photonics technician education
NASA Astrophysics Data System (ADS)
Massa, Nicholas M.; Donnelly, Judith; Hanes, Fenna
2014-07-01
Problem-based learning (PBL) is an instructional approach in which students learn problem-solving and teamwork skills by collaboratively solving complex real-world problems. Research shows that PBL improves student knowledge and retention, motivation, problem-solving skills, and the ability to skillfully apply knowledge in new and novel situations. One of the challenges faced by students accustomed to traditional didactic methods, however, is acclimating to the PBL process in which problem parameters are often ill-defined and ambiguous, often leading to frustration and disengagement with the learning process. To address this problem, the New England Board of Higher Education (NEBHE), funded by the National Science Foundation Advanced Technological Education (NSF-ATE) program, has created and field tested a comprehensive series of industry-based multimedia PBL "Challenges" designed to scaffold the development of students' problem solving and critical thinking skills. In this paper, we present the results of a pilot study conducted to examine student reactions to the PBL Challenges in photonics technician education. During the fall 2012 semester, students (n=12) in two associate degree level photonics courses engaged in PBL using the PBL Challenges. Qualitative and quantitative methods were used to assess student motivation, self-efficacy, critical thinking, metacognitive self-regulation, and peer learning using selected scales from the Motivated Strategies for Learning Questionnaire (MSLQ). Results showed positive gains in all variables. Follow-up focus group interviews yielded positive themes supporting the effectiveness of PBL in developing the knowledge, skills and attitudes of photonics technicians.
Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
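For orientation, here is a plain particle swarm optimizer in Python, sketched only as the exploration component of a methodology like the one described above; the Trust-Tech stage, the consensus mechanism, and local refinement are omitted, and the coefficients and the Rastrigin test function are standard choices rather than values from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """A plain particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                    # standard inertia/acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Rastrigin function, a common multimodal benchmark (global optimum 0 at the origin).
rastrigin = lambda z: 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso(rastrigin, dim=5)
print("best value found:", round(best_f, 3))
```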
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kuo -Ling; Mehrotra, Sanjay
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving
Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.
2016-01-01
We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844
Internet use and video gaming predict problem behavior in early adolescence.
Holtz, Peter; Appel, Markus
2011-02-01
In early adolescence, the time spent using the Internet and video games is higher than in any other present-day age group. Due to age-inappropriate web and gaming content, the impact of new media use on teenagers is a matter of public and scientific concern. Based on current theories on inappropriate media use, a study was conducted that comprised 205 adolescents aged 10-14 years (Md = 13). Individuals were identified who showed clinically relevant problem behavior according to the problem scales of the Youth Self Report (YSR). Online gaming, communicational Internet use, and playing first-person shooters were predictive of externalizing behavior problems (aggression, delinquency). Playing online role-playing games was predictive of internalizing problem behavior (including withdrawal and anxiety). Parent-child communication about Internet activities was negatively related to problem behavior. Copyright © 2010 The Association for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Innovative mathematical modeling in environmental remediation.
Yeh, Gour-Tsyh; Gwo, Jin-Ping; Siegel, Malcolm D; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steve B
2013-05-01
There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. The theoretical bases are described in sufficient detail. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation. Copyright © 2011 Elsevier Ltd. All rights reserved.
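As a one-formula illustration of what the ad hoc Kd simplification mentioned above actually does, the snippet computes the standard linear-sorption retardation factor and the resulting retarded solute velocity; all parameter values are invented for the example.

```python
# The ad hoc Kd approach folds adsorption into a single retardation factor
#   R = 1 + (rho_b / theta) * Kd,
# so the solute front migrates at v / R. Values below are illustrative only.
rho_b = 1.6    # bulk density [kg/L]
theta = 0.35   # volumetric water content (porosity) [-]
Kd = 2.0       # distribution coefficient [L/kg]
v = 0.1        # groundwater velocity [m/day]

R = 1 + (rho_b / theta) * Kd
print(f"retardation factor R = {R:.2f}, retarded velocity = {v / R:.4f} m/day")
```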
A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu
Today, remanufacturing is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Reverse logistics is therefore gaining importance and has great potential for winning consumers in an increasingly competitive future context. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) with the objective of minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening disassembly centers and processing centers. In this study, we first formulate the m-rLNP as a three-stage logistics network model. To solve this problem, we then propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with m-rLNP models of various scales demonstrate the effectiveness and efficiency of our approach in comparison with recent research.
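The sketch below decodes a priority-based chromosome for a single transportation stage in the Gen/Cheng style that this line of work builds on; the full three-stage model, the WMX operator, and the GA loop itself are not shown, and the supply, demand, and cost data are invented and assumed balanced.

```python
import numpy as np

def decode_priority(chrom, supply, demand, cost):
    """Decode a priority-based chromosome into a transportation plan for one
    stage (sources -> destinations). chrom holds one priority value per source
    and per destination; the highest-priority node with remaining supply/demand
    is served along its cheapest open arc."""
    m, n = len(supply), len(demand)
    supply, demand = supply.copy(), demand.copy()
    plan = np.zeros((m, n))
    chrom = list(chrom)
    while sum(demand) > 0:
        k = max(range(m + n), key=lambda i: chrom[i])      # highest-priority node
        if k < m:                                          # a source: cheapest open destination
            i = k
            j = min((j for j in range(n) if demand[j] > 0), key=lambda j: cost[i][j])
        else:                                              # a destination: cheapest open source
            j = k - m
            i = min((i for i in range(m) if supply[i] > 0), key=lambda i: cost[i][j])
        amount = min(supply[i], demand[j])
        plan[i, j] += amount
        supply[i] -= amount
        demand[j] -= amount
        if supply[i] == 0:
            chrom[i] = -1                                  # exhausted nodes drop out
        if demand[j] == 0:
            chrom[m + j] = -1
    return plan

supply = [30, 25, 20]                 # e.g. disassembly centers (illustrative)
demand = [20, 35, 20]                 # e.g. processing centers (illustrative)
cost = [[4, 6, 5], [3, 8, 7], [6, 4, 2]]
chrom = np.random.default_rng(0).permutation(len(supply) + len(demand)) + 1
print(decode_priority(chrom, supply, demand, cost))
```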
Developing an African youth psychosocial assessment: an application of item response theory.
Betancourt, Theresa S; Yang, Frances; Bolton, Paul; Normand, Sharon-Lise
2014-06-01
This study aimed to refine a dimensional scale for measuring psychosocial adjustment in African youth using item response theory (IRT). A 60-item scale derived from qualitative data was administered to 667 war-affected adolescents (55% female). Exploratory factor analysis (EFA) determined the dimensionality of items based on goodness-of-fit indices. Items with loadings less than 0.4 were dropped. Confirmatory factor analysis (CFA) was used to confirm the scale's dimensionality found under the EFA. Item discrimination and difficulty were estimated using a graded response model for each subscale using weighted least squares means and variances. Predictive validity was examined through correlations between IRT scores (θ) for each subscale and ratings of functional impairment. All models were assessed using goodness-of-fit and comparative fit indices. Fisher's Information curves examined item precision at different underlying ranges of each trait. Original scale items were optimized and reconfigured into an empirically-robust 41-item scale, the African Youth Psychosocial Assessment (AYPA). Refined subscales assess internalizing and externalizing problems, prosocial attitudes/behaviors and somatic complaints without medical cause. The AYPA is a refined dimensional assessment of emotional and behavioral problems in African youth with good psychometric properties. Validation studies in other cultures are recommended. Copyright © 2014 John Wiley & Sons, Ltd.
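For readers unfamiliar with the graded response model used to estimate item discrimination and difficulty for ordered subscale items, here is a small sketch of its category probability curves; the discrimination and threshold parameters are hypothetical, not estimates from the AYPA data.

```python
import numpy as np

def graded_response_probs(theta, a, b):
    """Category probabilities under Samejima's graded response model.
    a: item discrimination; b: ordered category thresholds (length K-1).
    P(Y >= k | theta) = 1 / (1 + exp(-a (theta - b_k))); category probabilities
    are successive differences of these cumulative curves."""
    theta = np.atleast_1d(theta)[:, None]
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b)[None, :])))
    cum = np.hstack([np.ones((theta.shape[0], 1)), cum, np.zeros((theta.shape[0], 1))])
    return cum[:, :-1] - cum[:, 1:]

# Hypothetical parameters for a 4-category item (e.g. "never" ... "often");
# not estimates from the African Youth Psychosocial Assessment.
a, b = 1.8, [-1.0, 0.2, 1.3]
thetas = np.array([-2.0, 0.0, 2.0])
print(np.round(graded_response_probs(thetas, a, b), 3))   # each row sums to 1
```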
Rohrbaugh, Michael J
2014-09-01
Social cybernetic (systemic) ideas from the early Family Process era, though emanating from qualitative clinical observation, have underappreciated heuristic potential for guiding quantitative empirical research on problem maintenance and change. The old conceptual wines we have attempted to repackage in new, science-friendly bottles include ironic processes (when "solutions" maintain problems), symptom-system fit (when problems stabilize relationships), and communal coping (when we-ness helps people change). Both self-report and observational quantitative methods have been useful in tracking these phenomena, and together the three constructs inform a team-based family consultation approach to working with difficult health and behavior problems. In addition, a large-scale, quantitatively focused effectiveness trial of family therapy for adolescent drug abuse highlights the importance of treatment fidelity and qualitative approaches to examining it. In this sense, echoing the history of family therapy research, our experience with juxtaposing quantitative and qualitative methods has gone full circle-from qualitative to quantitative observation and back again. © 2014 FPI, Inc.
A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.
Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.
1998-01-01
One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to the human visual response; to this end, the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity of every pixel within a given scale, which serves as the local intensity component of the image, and the detail intensity component is then derived. The third problem is to adjust the overall image intensity; an S-shaped curve obtained from the original image information is used to adjust the local intensity component. The fourth problem is to enhance detail information; the detail intensity component is adjusted according to a curve designed in advance. The final intensity is the weighted sum of the adjusted local intensity component and the adjusted detail intensity component. Converting the synthesized intensity together with the other two dimensions to the output color space yields the final processed image.
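The second step above relies on an integral image (summed-area table) so that the local mean intensity can be computed in constant time per pixel regardless of window size. Below is a minimal NumPy sketch of that step on a synthetic intensity channel; the window radius and the test image are assumptions.

```python
import numpy as np

def local_mean_integral(img, radius):
    """Box-filter local mean via an integral image (summed-area table), so the
    cost per pixel is constant regardless of the window size."""
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)                  # integral image
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = np.clip(ys - radius, 0, h)[:, None], np.clip(ys + radius + 1, 0, h)[:, None]
    x0, x1 = np.clip(xs - radius, 0, w)[None, :], np.clip(xs + radius + 1, 0, w)[None, :]
    area = (y1 - y0) * (x1 - x0)
    total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return total / area

# Synthetic intensity (I) channel; local and detail components as in the abstract.
rng = np.random.default_rng(0)
I = rng.random((64, 64))
local = local_mean_integral(I, radius=8)
detail = I - local
print("local mean range:", round(local.min(), 3), round(local.max(), 3))
```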
de la Osa, Nuria; Granero, Roser; Trepat, Esther; Domenech, Josep Maria; Ezpeleta, Lourdes
2016-01-01
This paper studies the discriminative capacity of the CBCL/1½-5 (Manual for the ASEBA Preschool-Age Forms & Profiles, University of Vermont, Research Center for Children, Youth, & Families, Burlington, 2000) DSM5 scales for attention deficit and hyperactivity disorder (ADHD), oppositional defiant disorder (ODD), anxiety, and depressive problems in detecting the presence of DSM5 (Diagnostic and Statistical Manual of Mental Disorders, APA, Arlington, 2013) disorders (ADHD, ODD, anxiety, and mood disorders), assessed through diagnostic interview, in children aged 3-5. Additionally, we compare the clinical utility of the CBCL/1½-5-DSM5 scales with that of the analogous CBCL/1½-5 syndrome scales. A large community sample of 616 preschool children was assessed longitudinally across this age range. Statistical analysis was based on ROC procedures and binary logistic regressions. The ADHD and ODD CBCL/1½-5-DSM5 scales achieved good discriminative ability for identifying ADHD and ODD interview diagnoses at any age. The discriminative capacity of the CBCL/1½-5-DSM5 Anxiety scale was fair for unspecific anxiety disorders in all age groups. The CBCL/1½-5-DSM5 depressive problems scale showed the poorest discriminative capacity for mood disorders (including depressive episode with insufficient symptoms), oscillating in the poor-to-fair range. As a whole, the DSM5-oriented scales generally did not provide better evidence of discriminative capacity than the syndrome scales in identifying DSM5 diagnoses. The CBCL/1½-5-DSM5 scales discriminate externalizing disorders better than internalizing disorders for ages 3-5. Scores on the ADHD and ODD CBCL/1½-5-DSM5 scales can be used to screen for DSM5 ADHD and ODD disorders in general populations of preschool children.
Small-Scale Hydroelectric Power in the Southwest: New Impetus for an old Energy Source
NASA Astrophysics Data System (ADS)
1980-06-01
A forum was provided for state legislators and other interested persons to discuss the problems facing small scale hydro developers, and to recommend appropriate solutions to resolve those problems. Alternative policy options were recommended for consideration by both state and federal agencies. Emphasis was placed on the legal, institutional, environmental and economic barriers at the state level, as well as the federal delays associated with licensing small scale hydro projects. Legislative resolution of the problems and delays in small scale hydro licensing and development were also stressed.
1984-06-01
Aquatic Plant Control Research Program: Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 5, Synthesis Report (Technical Report A-78-2), by Andrew ... Army Engineer Waterways Experiment Station, Vicksburg, MS; U.S. Army Corps of Engineers, Washington, DC 20314.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, the GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
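To illustrate the data layout step described in the Methods (host-side COO storage split into per-beam-angle CSR blocks, one per GPU), here is a CPU-only SciPy sketch; the matrix, its size, and the round-robin angle grouping are invented, and the actual GPU transfers are omitted.

```python
import numpy as np
import scipy.sparse as sp

# A stand-in for the dose-deposition coefficient matrix: rows = voxels,
# columns = beamlets, stored in COO format on the host as in the abstract.
n_voxels, n_beamlets = 5000, 1200
ddc = sp.random(n_voxels, n_beamlets, density=0.01, format="coo", random_state=0)

# Hypothetical mapping of each beamlet column to one of four beam-angle groups
# (one group per GPU in the paper's setup); here it is a simple round-robin.
angle_group = np.arange(n_beamlets) % 4

# Split into four CSR submatrices, one per group, ready to be shipped to a device.
blocks = [ddc.tocsc()[:, angle_group == g].tocsr() for g in range(4)]
for g, blk in enumerate(blocks):
    print(f"group {g}: shape {blk.shape}, nnz {blk.nnz}")
```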
Neurocognitive dysfunction in problem gamblers with co-occurring antisocial personality disorder.
Blum, Austin W; Leppink, Eric W; Grant, Jon E
2017-07-01
Problem gamblers with symptoms of antisocial personality disorder (ASPD) may represent a distinct problem gambling subtype, but the neurocognitive profile of individuals affected by both disorders is poorly characterized. Non-treatment-seeking young adults (18-29years) who gambled ≥5 times in the preceding year were recruited from the general community. Problem gamblers (defined as those meeting ≥1 DSM-5 diagnostic criteria for gambling disorder) with a lifetime history of ASPD (N=26) were identified using the Mini International Neuropsychiatric Interview (MINI) and compared with controls (N=266) using questionnaire-based impulsivity scales and objective computerized neuropsychological tasks. Findings were uncorrected for multiple comparisons. Effect sizes were calculated using Cohen's d. Problem gambling with ASPD was associated with significantly elevated gambling disorder symptoms, lower quality of life, greater psychiatric comorbidity, higher impulsivity questionnaire scores on the Barratt Impulsiveness Scale (d=0.4) and Eysenck Impulsivity Questionnaire (d=0.5), and impaired cognitive flexibility (d=0.4), executive planning (d=0.4), and an aspect of decision-making (d=0.6). Performance on measures of response inhibition, risk adjustment, and quality of decision making did not differ significantly between groups. These preliminary findings, though in need of replication, support the characterization of problem gambling with ASPD as a subtype of problem gambling associated with higher rates of impulsivity and executive function deficits. Taken together, these results may have treatment implications. Copyright © 2017 Elsevier Inc. All rights reserved.
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
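A bare-bones sketch of clustering one affinity (connectivity) matrix at several scales with spectral clustering, in the spirit of the multi-scale grouping described above; the synthetic affinity matrix stands in for DICCCOL-based connectivity, and the correspondence-preserving refinements of the paper are not implemented.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Synthetic symmetric "connectivity strength" matrix among 60 landmark nodes
# with three planted groups, standing in for DICCCOL-based connectivity.
n = 60
A = 0.8 * np.kron(np.eye(3), np.ones((n // 3, n // 3))) + 0.2 * rng.random((n, n))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)

# Cluster the same affinity matrix at several scales (numbers of sub-networks).
for k in (3, 6, 12):
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(A)
    print(f"scale k={k}: sub-network sizes {np.bincount(labels)}")
```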
Behavior analytic approaches to problem behavior in intellectual disabilities.
Hagopian, Louis P; Gregory, Meagan K
2016-03-01
The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.
Wang, Su-Chin; Yu, Ching-Len; Chang, Su-Hsien
2017-02-01
The purpose was to examine the effectiveness of music care on cognitive function, depression, and behavioral problems among elderly people with dementia in long-term care facilities in Taiwan. The study had a quasi-experimental, longitudinal research design and used two groups of subjects. Subjects were not randomly assigned to the experimental group (n = 90) or the comparison group (n = 56). Based on Bandura's social cognitive theory, subjects in the experimental group received Kagayashiki music care (KMC) twice per week for 24 weeks. Subjects in the comparison group were provided with activities as usual. Results showed that, using the baseline score of the Clifton Assessment Procedures for the Elderly Behavior Rating Scale and the time spent attending KMC activities as covariates, the two groups of subjects differed significantly on the Mini-Mental State Examination (MMSE). Results also showed that, using the baseline scores of the Cornell Scale for Depression in Dementia and the MMSE as covariates, the two groups of subjects differed significantly on the Clifton Assessment Procedures for the Elderly Behavior Rating Scale. These findings provide information for staff caregivers in long-term care facilities to develop a non-invasive care model for elderly people with dementia to deal with depression, anxiety, and behavioral problems.
Huang, Jin; Vaughn, Michael G.
2016-01-01
This study examined the association between household food insecurity (insufficient access to adequate and nutritious food) and trajectories of externalising and internalising behaviour problems in children from kindergarten to fifth grade using longitudinal data from the Early Childhood Longitudinal Study—Kindergarten Cohort (ECLS-K), a nationally representative study in the USA. Household food insecurity was assessed using the eighteen-item standard food security scale, and children's behaviour problems were reported by teachers. Latent growth curve analysis was conducted on 7,348 children in the ECLS-K, separately for boys and girls. Following adjustment for an extensive array of confounding variables, results suggest that food insecurity generally was not associated with developmental change in children's behaviour problems. The impact of food insecurity on behaviour problems may be episodic or interact with certain developmental stages. PMID:27559210
Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J
2011-01-01
Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available at http://bioinformatics.uhasselt.be.
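As a hedged illustration of a chi-square-based support measure, the sketch below scores a candidate motif pair by comparing how often interacting protein pairs contain both motifs against the expectation under independence. The counts are invented for illustration and the snippet is not the SLIDER implementation.

```python
# Sketch: chi-square support for a candidate motif pair (X, Y) in a
# protein-protein interaction network. Counts are illustrative.
from scipy.stats import chi2_contingency

# Contingency table over interacting protein pairs:
# rows = motif X present in one partner (yes/no)
# cols = motif Y present in the other partner (yes/no)
observed = [[40, 60],    # X present: Y present / Y absent
            [70, 830]]   # X absent:  Y present / Y absent

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square support = {chi2:.1f}, p = {p_value:.2e}")
# A chi-square value far above the independence expectation marks the motif
# pair as over-represented among interacting proteins.
```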
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
Orthogonalizing EM: A design-based least squares algorithm.
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p . Supplementary materials for this article are available online.
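A minimal sketch of the OEM idea for ordinary least squares follows. It assumes the standard reduction in which, once the design has been augmented so that the columns are orthogonal with a common norm, each EM step collapses to a residual correction scaled by a constant d at least as large as the top eigenvalue of X^T X; the data are synthetic and the sketch is not the authors' implementation.

```python
# Sketch: orthogonalizing EM (OEM) for ordinary least squares.
# With an augmented, column-orthogonalized design, each EM step becomes
#     beta <- beta + X^T (y - X beta) / d,   d >= lambda_max(X^T X).
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

d = np.linalg.eigvalsh(X.T @ X).max()    # orthogonalizing constant
beta = np.zeros(p)
for _ in range(500):                     # EM iterations
    beta = beta + X.T @ (y - X @ beta) / d

print("OEM estimate :", np.round(beta, 3))
print("direct lstsq :", np.round(np.linalg.lstsq(X, y, rcond=None)[0], 3))
```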
NASA Astrophysics Data System (ADS)
Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela
2015-10-01
An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance according to different measurement availability scenarios and rain events. All simulations have been carried out using a detailed physically based model of a real case-study network as the virtual reality.
Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.
Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk
2015-01-01
Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators, who must plan their networks carefully. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in severely degraded network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms a random-grouping-based EA, as well as an EA that detects interacting variables by monitoring changes in the objective function, in terms of system throughput performance.
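The grouping step can be sketched as follows: cells whose mutual interference exceeds a threshold are placed in the same group, and each group is then optimized by its own evolutionary search. The interference matrix, threshold, and grouping rule are illustrative assumptions rather than the paper's exact criterion.

```python
# Sketch: correlation grouping of HetNet cells by mutual interference before
# running a separate evolutionary search per group.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)
n_cells = 12
# Symmetric pairwise interference levels between cells (synthetic).
interference = rng.random((n_cells, n_cells))
interference = (interference + interference.T) / 2.0
np.fill_diagonal(interference, 0.0)

threshold = 0.8                      # keep only strongly interacting pairs
adjacency = csr_matrix(interference > threshold)
n_groups, group_id = connected_components(adjacency, directed=False)

for g in range(n_groups):
    members = np.flatnonzero(group_id == g)
    print(f"group {g}: cells {members.tolist()}")
# Each group can now be optimized by its own EA, shrinking the search space
# compared with evolving all cell parameters jointly.
```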
Ertekin Pinar, Sukran; Yildirim, Gulay; Sayin, Neslihan
2018-05-01
High levels of psychological resilience, self-confidence and problem-solving skills in midwife candidates play an important role in increasing the quality of health care and in fulfilling their responsibilities towards patients. This study was conducted to investigate the psychological resilience, self-confidence and problem-solving skills of midwife candidates. It is a descriptive quantitative study with a convenience sample of midwife candidates (N = 270) studying at a Health Sciences Faculty in Turkey's Central Anatolia Region. For data collection, the Personal Information Form, the Psychological Resilience Scale for Adults (PRSA), the Self-Confidence Scale (SCS), and the Problem Solving Inventory (PSI) were used. There was a moderate, negative, significant relationship between the Problem Solving Inventory scores and the Psychological Resilience Scale for Adults scores (r = -0.619; p = 0.000), and between the Problem Solving Inventory scores and the Self-Confidence Scale scores (r = -0.524; p = 0.000). There was a moderate, positive, significant relationship between the Psychological Resilience Scale for Adults scores and the Self-Confidence Scale scores (r = 0.583; p = 0.000). There was a statistically significant difference (p < 0.05) between the Problem Solving Inventory and the Psychological Resilience Scale for Adults scores according to whether candidates received support in difficult situations. As psychological resilience and self-confidence levels increase, problem-solving skills increase; additionally, as self-confidence increases, psychological resilience increases too. Psychological resilience, self-confidence, and problem-solving skills of midwife candidates in their first year of studies are higher than those of candidates in their fourth year. Self-confidence and psychological resilience are high among midwife candidates aged between 17 and 21; self-confidence and problem-solving skills are high among residents of city centers; and psychological resilience is high among those who perceive their monthly income as sufficient. Psychological resilience and problem-solving skills are also high among midwife candidates who receive social support. The finding that fourth-year students have lower levels of self-confidence, problem-solving skills and psychological resilience should be taken into consideration. Copyright © 2018 Elsevier Ltd. All rights reserved.
The effects of psychotherapy on behavior problems of sexually abused deaf children.
Sullivan, P M; Scanlan, J M; Brookhouser, P E; Schulte, L E; Knutson, J F
1992-01-01
This study assessed the effectiveness of a broad based psychotherapeutic intervention with a sample of 72 children sexually abused at a residential school for the deaf. An untreated comparison group emerged when about half of their parents refused the offer for psychotherapy provided by the school. Treated and untreated children were randomly assigned to two assessment groups: those who participated in a pretreatment assessment and those who did not. Houseparents at the residential school used the Child Behavior Checklist (CBC) to rate the pretreatment assessment children before treatment and all 72 children one year after the implementation of psychotherapy. Children receiving therapy had significantly fewer behavior problems than children not receiving therapy. There was a differential response to therapy on the basis of sex. Boys receiving therapy had significantly lower scores on the following CBC scales than the no treatment group: Total, Internal, External, Somatic, Uncommunicative, Immature, Hostile, Delinquent, Aggressive, and Hyperactive. There were no differences on the Schizoid and Obsessive scales. Girls receiving therapy had significantly lower scores than the no treatment group on the following CBC scales: Total, External, Depressed, Aggressive, and Cruel. There were no differences on the Internal, Anxious, Schizoid, Immature, Somatic, and Delinquent scales.
Social Media Use and Episodic Heavy Drinking Among Adolescents.
Brunborg, Geir Scott; Andreas, Jasmina Burdzovic; Kvaavik, Elisabeth
2017-06-01
Objectives: Little is known about the consequences of adolescent social media use. The current study estimated the association between the amount of time adolescents spend on social media and the risk of episodic heavy drinking. Methods: A school-based self-report cross-sectional study including 851 Norwegian middle and high school students (46.1% boys). The exposure was the frequency and quantity of social media use, and the outcome was the frequency of drinking four or six (girls and boys, respectively) alcoholic drinks during a single day (episodic heavy drinking). Covariates were the MacArthur Scale of Subjective Social Status, the Barratt Impulsiveness Scale - Brief, the Brief Sensation Seeking Scale, the Patient Health Questionnaire-9 items for Adolescents, the Strengths and Difficulties Questionnaire Peer Relationship Problems scale, gender, and school grade. Results: A greater amount of time spent on social media was associated with a greater likelihood of episodic heavy drinking among adolescents (OR = 1.12, 95% CI (1.05, 1.19), p = 0.001), even after adjusting for school grade, impulsivity, sensation seeking, symptoms of depression, and peer relationship problems. Conclusion: The results from the current study indicate that more time spent on social media is related to greater likelihood of episodic heavy drinking among adolescents.
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
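A minimal sketch of a forward block Gauss-Seidel sweep on a block tridiagonal system is given below. The blocks are synthetic and made diagonally dominant so the sweep converges on its own; in the setting above, GS would mainly serve as a preconditioner rather than a standalone solver.

```python
# Sketch: forward block Gauss-Seidel sweeps for a block tridiagonal system
#   A_i x_{i-1} + D_i x_i + C_i x_{i+1} = b_i ,  i = 0..N-1,
# with invertible diagonal blocks D_i.
import numpy as np

rng = np.random.default_rng(3)
N, m = 6, 4                               # number of blocks, block size
D = [np.eye(m) * 5 + rng.normal(scale=0.1, size=(m, m)) for _ in range(N)]
A = [rng.normal(scale=0.2, size=(m, m)) for _ in range(N)]   # sub-diagonal
C = [rng.normal(scale=0.2, size=(m, m)) for _ in range(N)]   # super-diagonal
b = [rng.normal(size=m) for _ in range(N)]

x = [np.zeros(m) for _ in range(N)]
for sweep in range(50):
    for i in range(N):
        rhs = b[i].copy()
        if i > 0:
            rhs -= A[i] @ x[i - 1]        # uses the freshly updated neighbor
        if i < N - 1:
            rhs -= C[i] @ x[i + 1]
        x[i] = np.linalg.solve(D[i], rhs)

residual = sum(
    np.linalg.norm(
        (A[i] @ x[i - 1] if i > 0 else 0)
        + D[i] @ x[i]
        + (C[i] @ x[i + 1] if i < N - 1 else 0)
        - b[i]
    )
    for i in range(N)
)
print("residual after 50 sweeps:", residual)
```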
Abdulmalik, Jibril; Ani, Cornelius; Ajuwon, Ademola J; Omigbodun, Olayinka
2016-01-01
Aggressive patterns of behavior often start early in childhood and tend to remain stable into adulthood. The negative consequences include poor academic performance, disciplinary problems and encounters with the juvenile justice system. Early school intervention programs can alter this trajectory for aggressive children. However, there are no studies evaluating the feasibility of such interventions in Africa. This study therefore assessed the effect of group-based problem-solving interventions on aggressive behaviors among primary school pupils in Ibadan, Nigeria. This was an intervention study with treatment and wait-list control groups. Two public primary schools in Ibadan, Nigeria, were randomly allocated to an intervention group and a waiting list control group. Teachers rated male Primary Five pupils in the two schools on aggressive behaviors, and the top 20 highest scorers in each school were selected. Pupils in the intervention school received 6 twice-weekly sessions of group-based intervention, which included problem-solving skills, calming techniques and attribution retraining. Outcome measures were: teacher-rated aggressive behaviour (TRAB), the self-rated aggression scale (SRAS), the strengths and difficulties questionnaire (SDQ), the attitude towards aggression questionnaire (ATAQ), and the social cognition and attribution scale (SCAS). The participants had a mean age of 12 years (SD = 1.2, range 9-14 years). Both groups had similar socio-demographic backgrounds and baseline measures of aggressive behaviors. Controlling for baseline scores, the intervention group had significantly lower scores on the TRAB and SRAS one week post-intervention, with large Cohen's effect sizes of 1.2 and 0.9, respectively. The other outcome measures were not significantly different between the groups post-intervention. The group-based problem-solving intervention for aggressive behaviors among primary school students produced significant reductions in both teacher- and student-rated aggressive behaviours, with large effect sizes. This was a small exploratory trial whose findings may not be generalizable, but it demonstrates that psychological interventions for children with high levels of aggressive behaviour are feasible and potentially effective in Nigeria.
Gyrodampers for large space structures
NASA Technical Reports Server (NTRS)
Aubrun, J. N.; Margulies, G.
1979-01-01
The problem of controlling the vibrations of large space structures by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single-gimbal control moment gyros that are actively controlled to extract structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results from one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.
[Methods of high-throughput plant phenotyping for large-scale breeding and genetic experiments].
Afonnikov, D A; Genaev, M A; Doroshkov, A V; Komyshev, E G; Pshenichnikova, T A
2016-07-01
Phenomics is a field of science at the junction of biology and informatics that addresses the rapid, accurate estimation of plant phenotypes; it has developed rapidly because of the need to analyze phenotypic characteristics in large-scale genetic and breeding experiments in plants. It is based on methods of computer image analysis and the integration of biological data. Owing to automation, the new approaches make it possible to considerably accelerate the process of estimating the characteristics of a phenotype, to increase its accuracy, and to remove the subjectivity inherent in human assessment. The review presents the main technologies of high-throughput plant phenotyping in both controlled and field conditions, their advantages and disadvantages, and the prospects of their use for the efficient solution of problems in plant genetics and breeding.
NASA Astrophysics Data System (ADS)
Miyazaki, Kazuteru; Tsuboi, Sougo; Kobayashi, Shigenobu
The purpose of reinforcement learning is, in general, to learn an optimal policy. However, in two-player games such as Othello, it is important to acquire a penalty-avoiding policy. In this paper, we focus on the formation of a penalty-avoiding policy based on the Penalty Avoiding Rational Policy Making algorithm [Miyazaki 01]. In applying it to large-scale problems, we are confronted with the curse of dimensionality. We introduce several ideas and heuristics to overcome the combinatorial explosion in large-scale problems. First, we propose an algorithm that saves memory by calculating state transitions. Second, we describe how to restrict exploration using two types of knowledge: a KIFU (game record) database and an evaluation function. We show that our learning player can always defeat the well-known Othello program KITTY.
Scaling cosmology with variable dark-energy equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, David R.; Velten, Hermano; Zimdahl, Winfried, E-mail: drodriguez-ufes@hotmail.com, E-mail: velten@physik.uni-bielefeld.de, E-mail: winfried.zimdahl@pq.cnpq.br
2012-06-01
Interactions between dark matter and dark energy which result in a power-law behavior (with respect to the cosmic scale factor) of the ratio between the energy densities of the dark components (thus generalizing the ΛCDM model) have been considered as an attempt to alleviate the cosmic coincidence problem phenomenologically. We generalize this approach by allowing for a variable equation of state for the dark energy within the CPL-parametrization. Based on analytic solutions for the Hubble rate and using the Constitution and Union2 SNIa sets, we present a statistical analysis and classify different interacting and non-interacting models according to the Akaike (AIC) and the Bayesian (BIC) information criteria. We do not find noticeable evidence for an alleviation of the coincidence problem with the mentioned type of interaction.
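The model-ranking step can be illustrated with a short numerical sketch. For Gaussian errors, minus twice the log-likelihood equals the chi-square up to a constant, so AIC = chi^2 + 2k and BIC = chi^2 + k ln(n). The chi-square values, parameter counts, and sample size below are invented for illustration, not the paper's fit results.

```python
# Sketch: ranking cosmological models with AIC and BIC from chi-square fits.
import math

n_sn = 557                      # assumed size of an SNIa compilation
models = {                      # name: (chi-square at best fit, k parameters)
    "LambdaCDM":            (562.0, 2),
    "scaling, constant w":  (560.5, 3),
    "scaling, CPL w(a)":    (559.8, 4),
}

for name, (chi2, k) in models.items():
    aic = chi2 + 2 * k
    bic = chi2 + k * math.log(n_sn)
    print(f"{name:22s}  AIC = {aic:7.1f}   BIC = {bic:7.1f}")
# Smaller AIC/BIC is preferred; BIC penalizes the extra interaction and
# CPL parameters more heavily than AIC does.
```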
NASA Astrophysics Data System (ADS)
Yang, Xue; Sun, Hao; Fu, Kun; Yang, Jirui; Sun, Xian; Yan, Menglong; Guo, Zhi
2018-01-01
Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. To solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN), which can effectively detect ships in different scenes, including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving the problem resulting from the narrow width of ships. Compared with previous multi-scale detectors such as the Feature Pyramid Network (FPN), DFPN builds high-level semantic feature maps for all scales by means of dense connections, which enhances feature propagation and encourages feature reuse. Additionally, to handle ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object, so as to reduce the redundant detection region and improve recall. Furthermore, we also propose multi-scale ROI Align to maintain the completeness of semantic and spatial information. Experiments on remote sensing images from Google Earth show that our detection method based on the R-DFPN representation achieves state-of-the-art performance.
The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.
2013-07-01
The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)
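To give a feel for the rendezvous idea, the sketch below is a serial analogue: both grids are mapped onto a common partition of space, and search and interpolation are done per partition cell. Plain spatial bins and nearest-neighbor lookup stand in for DTK's parallel repartitioning and mapping machinery; all names and data are illustrative, not the DTK API.

```python
# Sketch (serial analogue): a rendezvous-style transfer between two grids.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
src_pts = rng.random((2000, 3))                 # source mesh centroids
src_field = np.sin(4 * np.pi * src_pts[:, 0])   # field to transfer
tgt_pts = rng.random((500, 3))                  # target mesh centroids

# Common "rendezvous" partition: uniform bins over the shared bounding box.
n_bins = 4
src_bin = np.floor(src_pts * n_bins).astype(int).clip(0, n_bins - 1)
tgt_bin = np.floor(tgt_pts * n_bins).astype(int).clip(0, n_bins - 1)
src_key = np.ravel_multi_index(src_bin.T, (n_bins,) * 3)
tgt_key = np.ravel_multi_index(tgt_bin.T, (n_bins,) * 3)

tgt_field = np.empty(len(tgt_pts))
for key in np.unique(tgt_key):
    src_idx = np.flatnonzero(src_key == key)
    tgt_idx = np.flatnonzero(tgt_key == key)
    if src_idx.size == 0:                       # no sources here: global fallback
        tree = cKDTree(src_pts)
        _, nearest = tree.query(tgt_pts[tgt_idx])
        tgt_field[tgt_idx] = src_field[nearest]
        continue
    # Local search only among source points sharing the rendezvous cell.
    tree = cKDTree(src_pts[src_idx])
    _, nearest = tree.query(tgt_pts[tgt_idx])
    tgt_field[tgt_idx] = src_field[src_idx[nearest]]

print("transferred field on", len(tgt_pts), "target points")
```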
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos-connection-based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated with the finite element method.
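A minimal sketch of Hager-style 1-norm condition estimation is given below. The preconditioned operator is touched only through products and solves, so the dense preconditioned matrix never has to be formed; the small dense test matrix is only there to allow an exact check, and the sketch is not the authors' parallel implementation.

```python
# Sketch: Hager-style 1-norm condition estimation for a preconditioned
# matrix B = M^-1 A, using only operator applications.
import numpy as np

def hager_onenorm(matvec, rmatvec, n, maxiter=10):
    """Hager's estimator for the 1-norm of a linear operator."""
    x = np.ones(n) / n
    est = 0.0
    for _ in range(maxiter):
        y = matvec(x)
        est = np.abs(y).sum()
        z = rmatvec(np.sign(y))
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:       # no better vertex of the unit ball
            break
        x = np.zeros(n)
        x[j] = 1.0
    return est

n = 200
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)          # SPD tridiagonal test matrix
d = np.diag(A)                                   # diagonal (Jacobi) scaling M

B   = lambda v: (A @ v) / d                      # B     = M^-1 A
Bt  = lambda v: A @ (v / d)                      # B^T   = A M^-1 (A symmetric)
Bi  = lambda v: np.linalg.solve(A, d * v)        # B^-1  = A^-1 M
Bit = lambda v: d * np.linalg.solve(A, v)        # B^-T  = M A^-1

cond_est = hager_onenorm(B, Bt, n) * hager_onenorm(Bi, Bit, n)
print("estimated 1-norm condition number:", round(cond_est, 1))
print("exact 1-norm condition number    :",
      round(np.linalg.cond(A / d[:, None], 1), 1))
```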
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
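The multilevel idea can be shown with a toy estimator: each level l resolves the problem more finely, and the estimate telescopes the coarsest-level mean plus corrections E[P_l - P_{l-1}] computed on coupled samples, so that fewer expensive fine-level samples are needed. The toy problem below is a stochastic differential equation rather than a pore-scale flow solve, and all numbers are illustrative.

```python
# Sketch: a multilevel Monte Carlo (MLMC) estimator on a toy SDE problem.
# Level l uses 2**l Euler steps; fine and coarse paths share Brownian
# increments so the corrections P_l - P_{l-1} have small variance.
import numpy as np

rng = np.random.default_rng(5)

def level_sample(l, n_samples):
    """Return P_l and, for l > 0, the coupled P_{l-1} for n_samples paths."""
    n_fine = 2 ** l
    dt = 1.0 / n_fine
    x_f = np.ones(n_samples)
    x_c = np.ones(n_samples)
    dw = rng.normal(scale=np.sqrt(dt), size=(n_fine, n_samples))
    for k in range(n_fine):
        x_f += 0.05 * x_f * dt + 0.2 * x_f * dw[k]           # fine path
    if l > 0:
        for k in range(0, n_fine, 2):                        # coarse path reuses
            x_c += 0.05 * x_c * (2 * dt) + 0.2 * x_c * (dw[k] + dw[k + 1])
    payoff_f = np.maximum(x_f - 1.0, 0.0)
    payoff_c = np.maximum(x_c - 1.0, 0.0) if l > 0 else np.zeros(n_samples)
    return payoff_f, payoff_c

levels = 5
samples_per_level = [20000 // (2 ** l) + 200 for l in range(levels)]
estimate = 0.0
for l in range(levels):
    pf, pc = level_sample(l, samples_per_level[l])
    estimate += np.mean(pf - pc)          # level-l correction E[P_l - P_{l-1}]
print("MLMC estimate of E[P]:", round(estimate, 4))
```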
The evolutionary origin of feathers.
Regal, P J
1975-03-01
Previous theories relating the origin of feathers to flight or to heat conservation are considered to be inadequate. There is need for a model of feather evolution that gives attention to the function and adaptive advantage of intermediate structures. The present model attempts to reveal and to deal with, the spectrum of complex questions that must be considered. In several genera of modern lizards, scales are elongated in warm climates. It is argued that these scales act as small shields to solar radiation. Experiments are reported that tend to confirm this. Using lizards as a conceptual model, it is argued that feathers likewise arose as adaptations to intense solar radiation. Elongated scales are assumed to have subdivided into finely branched structures that produced a heat-shield, flexible as well as long and broad. Associated muscles had the function of allowing the organism fine control over rates of heat gain and loss: the specialized scales or early feathers could be moved to allow basking in cool weather or protection in hot weather. Subdivision of the scales also allowed a close fit between the elements of the insulative integument. There would have been mechanical and thermal advantages to having branches that interlocked into a pennaceous structure early in evolution, so the first feathers may have been pennaceous. A versatile insulation of movable, branched scales would have been a preadaptation for endothermy. As birds took to the air they faced cooling problems despite their insulative covering because of high convective heat loss. Short glides may have initially been advantageous in cooling an animal under heat stress, but at some point the problem may have shifted from one of heat exclusion to one of heat retention. Endothermy probably evolved in conjunction with flight. If so, it is an unnecessary assumption to postulate that the climate cooled and made endothermy advantageous. The development of feathers is complex and a model is proposed that gives attention to the fundamental problems of deriving a branched structure with a cylindrical base from an elongated scale.
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
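As a hedged illustration of the Jacobian-free idea, the sketch below uses SciPy's newton_krylov, which only needs residual evaluations and approximates the Jacobian action by finite differences inside the Krylov iteration. The one-dimensional steady reaction-diffusion problem is a stand-in chosen for brevity, not a FLASH system, and no preconditioner is applied.

```python
# Sketch: a Jacobian-free Newton-Krylov solve with SciPy's newton_krylov.
import numpy as np
from scipy.optimize import newton_krylov

n = 200
h = 1.0 / (n + 1)

def residual(u):
    # -u'' + 50 * (exp(u) - 1) = 1  with homogeneous Dirichlet boundaries.
    u_pad = np.concatenate(([0.0], u, [0.0]))
    lap = (u_pad[:-2] - 2 * u_pad[1:-1] + u_pad[2:]) / h**2
    return -lap + 50.0 * (np.exp(u) - 1.0) - 1.0

u0 = np.zeros(n)
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10, verbose=False)
print("max residual:", np.abs(residual(u)).max())
print("solution midpoint value:", u[n // 2])
```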
Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System
NASA Astrophysics Data System (ADS)
Yue, Liu; Hang, Mend
2018-01-01
With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to deliver wind power and to improve wind power availability and grid stability, but the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding this SSO problem caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.
Emotion dysregulation, problem-solving, and hopelessness.
Vatan, Sevginar; Lester, David; Gunn, John F
2014-04-01
A sample of 87 Turkish undergraduate students was administered scales to measure hopelessness, problem-solving skills, emotion dysregulation, and psychiatric symptoms. All of the scores from these scales were strongly associated. In a multiple regression, hopelessness scores were predicted by poor problem-solving skills and emotion dysregulation.
Scaling of Attitudes Toward Population Problems
ERIC Educational Resources Information Center
Watkins, George A.
1975-01-01
This study related population problem attitudes and socioeconomic variables. Six items concerned with number of children, birth control, family, science, economic depression, and overpopulation were selected for a Guttman scalogram. Education, occupation, and number of children were correlated with population problems scale scores; marital status,…
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities that focus on different periods. By constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
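The three steps above can be sketched compactly: decompose the series into time scales, build a correlation network per scale, and prune edges at several thresholds. In the sketch, a moving-average coarse-graining stands in for the paper's decomposition, and the synthetic return series are assumptions rather than the coal-company data.

```python
# Sketch: multi-scale, multi-threshold correlation networks from price
# fluctuation series.
import numpy as np

rng = np.random.default_rng(6)
n_stocks, n_days = 15, 500
common = rng.normal(size=n_days)
returns = 0.5 * common + rng.normal(size=(n_stocks, n_days))   # fluctuations

def smooth(x, window):
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, x)

for window in (1, 5, 20):                 # time scales (days)
    series = returns if window == 1 else smooth(returns, window)
    corr = np.corrcoef(series)            # fully connected correlation network
    for threshold in (0.2, 0.4, 0.6):     # prune weak edges
        adj = (np.abs(corr) > threshold) & ~np.eye(n_stocks, dtype=bool)
        n_edges = adj.sum() // 2
        density = n_edges / (n_stocks * (n_stocks - 1) / 2)
        print(f"scale={window:2d}d  threshold={threshold:.1f}  "
              f"edges={n_edges:3d}  density={density:.2f}")
```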
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary exponents, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
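A minimal sketch of the underlying technique, evaluating a matrix function through a Chebyshev expansion, is shown below for a Fermi operator, which is one way a density matrix can be written. Dense NumPy arrays keep the example short; the toy Hamiltonian, temperature, and expansion degree are assumptions, and the sketch omits the sparsity handling and truncation that give CheSS its linear scaling.

```python
# Sketch: Chebyshev-polynomial evaluation of a matrix function,
# here f(H) = 1 / (1 + exp((H - mu)/kT)).
import numpy as np

rng = np.random.default_rng(7)
n = 80
H = rng.normal(size=(n, n)) / np.sqrt(n)
H = (H + H.T) / 2                                # toy Hamiltonian, spectrum O(1)
mu, kT = 0.0, 0.5
fermi = lambda x: 1.0 / (1.0 + np.exp((x - mu) / kT))

# Scale the spectrum into [-1, 1] for the Chebyshev recursion.
emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
a, b = (emax - emin) / 2, (emax + emin) / 2
Hs = (H - b * np.eye(n)) / a

# Chebyshev coefficients of the scaled Fermi function.
n_cheb = 60
theta = np.pi * (np.arange(n_cheb) + 0.5) / n_cheb
fvals = fermi(a * np.cos(theta) + b)
coeffs = 2.0 / n_cheb * np.cos(np.outer(np.arange(n_cheb), theta)) @ fvals
coeffs[0] *= 0.5

# Three-term recursion T_{k+1} = 2 Hs T_k - T_{k-1}.
T_prev, T_curr = np.eye(n), Hs.copy()
F = coeffs[0] * T_prev + coeffs[1] * T_curr
for k in range(2, n_cheb):
    T_prev, T_curr = T_curr, 2 * Hs @ T_curr - T_prev
    F += coeffs[k] * T_curr

# Compare against direct diagonalization.
w, V = np.linalg.eigh(H)
exact = V @ np.diag(fermi(w)) @ V.T
print("max error vs. diagonalization:", np.abs(F - exact).max())
```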
Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
The problem of a correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales i.e. limited granularity and inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing to predict fall risk. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).
Using the PORS Problems to Examine Evolutionary Optimization of Multiscale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinhart, Zachary; Molian, Vaelan; Bryden, Kenneth
2013-01-01
Nearly all systems of practical interest are composed of parts assembled across multiple scales. For example, an agrodynamic system is composed of flora and fauna on one scale; soil types, slope, and water runoff on another scale; and management practice and yield on another scale. Or consider an advanced coal-fired power plant: combustion and pollutant formation occur on one scale, the plant components on another scale, and the overall performance of the power system is measured on another. In spite of this, there are few practical tools for the optimization of multiscale systems. This paper examines multiscale optimization of systems composed of discrete elements using the plus-one-recall-store (PORS) problem as a test case or study problem for multiscale systems. From this study, it is found that by recognizing the constraints and patterns present in discrete multiscale systems, the solution time can be significantly reduced and much more complex problems can be optimized.
Cross-cultural adaptation and validation to Brazil of the Obesity-related Problems Scale.
Brasil, Andreia Mara Brolezzi; Brasil, Fábio; Maurício, Angélica Aparecida; Vilela, Regina Maria
2017-01-01
To validate a reliable version of the Obesity-related Problems Scale in Portuguese to use it in Brazil. The Obesity-related Problems Scale was translated and transculturally adapted. Later it was simultaneously self-applied with a 12-item version of the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), to 50 obese patients and 50 non-obese individuals, and applied again to half of them after 14 days. The Obesity-related Problems scale was able to differentiate obese from non-obese individuals with higher accuracy than WHODAS 2.0, correlating with this scale and with body mass index. The factor analysis determined a two-dimensional structure, which was confirmed with χ2/df=1.81, SRMR=0.05, and CFI=0.97. The general α coefficient was 0.90 and the inter-item intra-class correlation, in the reapplication, ranged from 0.75 to 0.87. The scale proved to be valid and reliable for use in the Brazilian population, without the need to exclude items.
Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh P.; Kohrt, Brandon A.; Jordans, Mark J. D.; Haroz, Emily; Wissow, Lawrence
2017-01-01
Systematic processes are needed to develop valid measurement instruments for disruptive behavior disorders (DBDs) in cross-cultural settings. We employed a four-step process in Nepal to identify and select items for a culturally valid assessment instrument: 1) We extracted items from validated scales and local free-list interviews. 2) Parents, teachers, and peers (n=30) rated the perceived relevance and importance of behavior problems. 3) Highly rated items were piloted with children (n=60) in Nepal. 4) We evaluated internal consistency of the final scale. We identified 49 symptoms from 11 scales, and 39 behavior problems from free-list interviews (n=72). After dropping items for low ratings of relevance and severity and for poor item-test correlation, low frequency, and/or poor acceptability in pilot testing, 16 items remained for the Disruptive Behavior International Scale—Nepali version (DBIS-N). The final scale had good internal consistency (α=0.86). A 4-step systematic approach to scale development including local participation yielded an internally consistent scale that included culturally relevant behavior problems. PMID:28093575
García-Tornel Florensa, S; Calzada, E J; Eyberg, S M; Mas Alguacil, J C; Vilamala Serra, C; Baraza Mendoza, C; Villena Collado, H; González García, M; Calvo Hernández, M; Trinxant Doménech, A
1998-05-01
Taking into account the high prevalence of behavioral problems in the pediatric outpatient clinic, a need for a useful and easy to administer tool for the evaluation of this problem arises. The psychometric characteristics of the Spanish version of the Eyberg Child Behavior Inventory (ECBI), [in Spanish Inventario de Eyberg para el Comportamiento de Niño (IECN)], a 36-item questionnaire, were established. The ECBI inventory/questionnaire was translated into Spanish. The basis of the ECBI is the evaluation of the child's behavior through the parents' answers to the questionnaire. Healthy children between 2 and 12 years of age were included and were taken from pediatric outpatient clinics from urban and suburban areas of Barcelona and from our hospital's own ambulatory clinic. The final sample included 518 subjects. The mean score on the intensity scale was 96.8 and on the problem scale 3.9. Internal consistency (Cronbach's alpha) was 0.73 and the test-retest had an r of 0.89 (p < 0.001) for the intensity scale and r = 0.93 (p < 0.001) for the problem scale. Interrater reliability for the intensity scale was r = 0.58 (p < 0.001) and r = 0.32 (p < 0.001) for the problem scale. Concurrent validity between both scales was r = 0.343 (p < 0.001). The IECN is a useful and easy tool to apply in the pediatrician's office as a method for early detection of behavior problems.
Quantitative analysis of nano-pore geomaterials and representative sampling for digital rock physics
NASA Astrophysics Data System (ADS)
Yoon, H.; Dewers, T. A.
2014-12-01
Geomaterials containing nano-pores (e.g., shales and carbonate rocks) have become increasingly important for emerging problems such as unconventional gas and oil resources, enhanced oil recovery, and geologic storage of CO2. Accurate prediction of coupled geophysical and chemical processes at the pore scale requires realistic representation of pore structure and topology. This is especially true for chalk materials, where pore networks are small and complex, and require characterization at sub-micron scale. In this work, we apply laser scanning confocal microscopy to characterize pore structures and microlithofacies at micron- and greater scales and dual focused ion beam-scanning electron microscopy (FIB-SEM) for 3D imaging of nanometer-to-micron scale microcracks and pore distributions. With imaging techniques advanced for nano-pore characterization, a problem of scale with FIB-SEM images is how to take nanometer scale information and apply it to the thin-section or larger scale. In this work, several texture characterization techniques including graph-based spectral segmentation, support vector machine, and principal component analysis are applied for segmentation clusters represented by 1-2 FIB-SEM samples per each cluster. Geometric and topological properties are analyzed and lattice-Boltzmann method (LBM) is used to obtain permeability at several different scales. Upscaling of permeability to the Darcy scale (e.g., the thin-section scale) with image dataset will be discussed with emphasis on understanding microfracture-matrix interaction, representative volume for FIB-SEM sampling, and multiphase flow and reactive transport. Funding from the DOE Basic Energy Sciences Geosciences Program is gratefully acknowledged. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
SNR enhancement for downhole microseismic data based on scale classification shearlet transform
NASA Astrophysics Data System (ADS)
Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili
2018-06-01
The shearlet transform (ST) can be effective in 2D signal processing, due to its parabolic scaling, high directional sensitivity, and optimal sparsity. ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain. As a result, it is difficult to use for denoising. In this paper, we present a scale classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales, which contain less useful signal, and high-frequency scales, which contain more useful signal. After classification, we use two different methods to deal with the coefficients on different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. As for the high-frequency scales, we propose to use a non-local means filter based on a generalized Gaussian distribution model, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.
Yoon, Bo Young; Choi, Ikseon; Choi, Seokjin; Kim, Tae-Hee; Roh, Hyerin; Rhee, Byoung Doo; Lee, Jong-Tae
2016-06-01
The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypotheses generation. SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges
2012-05-01
The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the cost of transportation. Reducing these variables decreases the total cost and increases the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is an important logistic concern for a company. Service times governed by a stochastic variable vary from tour to tour, yet this variation is ignored in classical routing problems. This paper addresses increasing service times by modeling a stochastic time for each tour, such that the total traveling time of the vehicles stays within a specified limit with a defined probability. Since the vehicle routing problem belongs to the category of NP-hard problems and exact solutions are not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
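To illustrate how simulated annealing can be hybridized with a genetic operator, the sketch below runs a Metropolis-accepted search over a small single-vehicle (TSP-like) tour, occasionally replacing the usual swap mutation with an order-crossover against the best tour found so far. The instance, cooling schedule, and move mix are illustrative assumptions, not the paper's exact algorithm, and stochastic service times are omitted for brevity.

```python
# Sketch: simulated annealing with a genetic-style recombination move for a
# small single-vehicle routing (TSP-like) instance.
import numpy as np

rng = np.random.default_rng(8)
n_cities = 25
coords = rng.random((n_cities, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)

def tour_length(t):
    return dist[t, np.roll(t, -1)].sum()

def order_crossover(p1, p2):
    """Classic OX operator: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(rng.choice(n_cities, size=2, replace=False))
    child = -np.ones(n_cities, dtype=int)
    child[i:j + 1] = p1[i:j + 1]
    fill = [c for c in p2 if c not in set(p1[i:j + 1])]
    child[child == -1] = fill
    return child

current = rng.permutation(n_cities)
best = current.copy()
temperature = 1.0
for step in range(20000):
    if rng.random() < 0.2:                       # genetic operator: recombine
        cand = order_crossover(current, best)    # with the best tour so far
    else:                                        # mutation: swap two cities
        cand = current.copy()
        a, b = rng.choice(n_cities, size=2, replace=False)
        cand[a], cand[b] = cand[b], cand[a]
    delta = tour_length(cand) - tour_length(current)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        current = cand
        if tour_length(current) < tour_length(best):
            best = current.copy()
    temperature *= 0.9997                        # geometric cooling

print("best tour length found:", round(tour_length(best), 3))
```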