Ergül, Özgür
2011-11-01
Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
Large variable conductance heat pipe. Transverse header
NASA Technical Reports Server (NTRS)
Edelstein, F.
1975-01-01
The characteristics of gas-loaded, variable conductance heat pipes (VCHP) are discussed. The difficulties involved in developing a large VCHP header are analyzed. The construction of the large capacity VCHP is described. A research project to eliminate some of the problems involved in large capacity VCHP operation is explained.
Enhancing Large-Group Problem-Based Learning in Veterinary Medical Education.
ERIC Educational Resources Information Center
Pickrell, John A.
This project for large-group, problem-based learning at Kansas State University College of Veterinary Medicine developed 47 case-based videotapes that are used to model clinical conditions and also involved veterinary practitioners to formulate true practice cases into student learning opportunities. Problem-oriented, computer-assisted diagnostic…
OPPORTUNITY COSTS OF RESIDENTIAL BEST MANAGEMENT PRACTICES FOR STORMWATER RUNOFF CONTROL
Excess stormwater runoff is a serious problem in a large number of urban areas, causing flooding, water pollution, groundwater recharge deficits and ecological damage to urban streams. Solutions currently proposed to deal with this problem often involve large centralized infrastr...
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
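As a minimal illustration of the iterative approach mentioned above, the following sketch implements the method of conjugate gradients for a symmetric positive-definite system in plain NumPy; the test matrix (a tridiagonal system of the kind that arises in cubic spline interpolation) and the tolerance are illustrative assumptions, not taken from the text.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b iteratively for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: converged
            break
        p = r + (rs_new / rs) * p      # next A-conjugate search direction
        rs = rs_new
    return x

# Small SPD tridiagonal test system (1D Laplacian)
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
assert np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b))
```

In exact arithmetic the method terminates in at most n steps; in practice it is valued because a good approximation is often reached after far fewer iterations on large sparse systems.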
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1986-01-01
Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the nonlinear aspects of hydrodynamic stability. The problem, whether fine-grained turbulence is present or absent, lacks a small parameter. The problem is presented on the basis of conservation principles, the dynamics of the problem being directed toward extracting the most physical information; however, it is emphasized that the approach must also involve approximations.
Comments on the problem of turbulence in aviation
NASA Technical Reports Server (NTRS)
Mclean, James C., Jr.
1987-01-01
The problem of turbulence since the beginning of aviation is traced. The problem was not cured by high-altitude flight and was exacerbated by the downbursts associated with thunderstorms. The accidents that occurred during the period 1982 to 1984 are listed, and from these the weather-related accidents are extracted. Turbulence accounts for 24% of the accidents involving large commercial carriers and 54% of the weather-related accidents. In spite of all the efforts to improve the forecasting and detection of turbulence, the problem remains a large one.
Planning meals: Problem-solving on a real data-base
ERIC Educational Resources Information Center
Byrne, Richard
1977-01-01
Planning the menu for a dinner party, which involves problem-solving with a large body of knowledge, is used to study the daily operation of human memory. Verbal protocol analysis, a technique devised to investigate formal problem-solving, is examined theoretically and adapted for analysis of this task. (Author/MV)
Introduction to bioinformatics.
Can, Tolga
2014-01-01
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph-theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
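As a concrete instance of the sequence-analysis problems listed above, here is a minimal sketch of global pairwise alignment scoring by Needleman-Wunsch dynamic programming; the scoring scheme and the two sequences are illustrative assumptions.

```python
import numpy as np

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score of sequences a and b (dynamic programming)."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)        # prefix of a against gaps
    F[0, :] = gap * np.arange(m + 1)        # prefix of b against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1, j] + gap,     # gap in b
                          F[i, j - 1] + gap)     # gap in a
    return F[n, m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```

The full algorithm adds a traceback over F to recover the alignment itself; multiple sequence alignment generalizes this recurrence to more than two sequences.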
Current scientific and practical problems in restricting the growth of large cities in the USSR.
Khorev, B
1984-12-01
Current problems involved in restricting the growth of large cities in the USSR are examined using the example of Moscow and its suburban areas. Government policies concerning urbanization and the distribution of the labor force throughout the country are discussed. The use of various types of incentives to regulate the economy and the labor force is suggested.
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Finding common ground in large carnivore conservation: mapping contending perspectives
Mattson, D.J.; Byrd, K.L.; Rutherford, M.B.; Brown, S.R.; Clark, T.W.
2006-01-01
Reducing current conflict over large carnivore conservation and designing effective strategies that enjoy broad public support depend on a better understanding of the values, beliefs, and demands of those who are involved or affected. We conducted a workshop attended by diverse participants involved in conservation of large carnivores in the northern U.S. Rocky Mountains, and used Q methodology to elucidate participant perspectives regarding "problems" and "solutions". Q methodology employs qualitative and quantitative techniques to reveal the subjectivity in any situation. We identified four general perspectives for both problems and solutions, three of which (Carnivore Advocates, Devolution Advocates, and Process Reformers) were shared by participants across domains. Agency Empathizers (problems) and Economic Pragmatists (solutions) were not clearly linked. Carnivore and Devolution Advocates expressed diametrically opposed perspectives that legitimized different sources of policy-relevant information ("science" for Carnivore Advocates and "local knowledge" for Devolution Advocates). Despite differences, we identified potential common ground focused on respectful, persuasive, and creative processes that would build understanding and tolerance. © 2006 Elsevier Ltd. All rights reserved.
Finite element modeling of electromagnetic fields and waves using NASTRAN
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.; Schroeder, Erwin
1989-01-01
The various formulations of Maxwell's equations are reviewed with emphasis on those formulations which most readily form analogies with Navier's equations. Analogies involving scalar and vector potentials and electric and magnetic field components are presented. Formulations allowing for media with dielectric and conducting properties are emphasized. It is demonstrated that many problems in electromagnetism can be solved using the NASTRAN finite element code. Several fundamental problems involving time harmonic solutions of Maxwell's equations with known analytic solutions are solved using NASTRAN to demonstrate convergence and mesh requirements. Mesh requirements are studied as a function of frequency, conductivity, and dielectric properties. Applications in both low frequency and high frequency are highlighted. The low frequency problems demonstrate the ability to solve problems involving media inhomogeneity and unbounded domains. The high frequency applications demonstrate the ability to handle problems with large boundary to wavelength ratios.
ERIC Educational Resources Information Center
Dolby, J. L.; And Others
The study is concerned with the linguistic problem involved in text compression--extracting, indexing, and the automatic creation of special-purpose citation dictionaries. In spite of early success in using large-scale computers to automate certain human tasks, these problems remain among the most difficult to solve. Essentially, the problem is to…
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
Space Industrialization: The Mirage of Abundance.
ERIC Educational Resources Information Center
Deudney, Daniel
1982-01-01
Large-scale space industrialization is not a viable solution to the population, energy, and resource problems of earth. The expense and technological difficulties involved in the development and maintenance of space manufacturing facilities, space colonies, and large-scale satellites for solar power are discussed. (AM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu
We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells, enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed, enhanced by the Aitken acceleration technique, to ensure robust coupling and fast convergence, especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted-flag problems and demonstrate the convergence of the CURVIB-FE-FSI method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet, prosthetic heart valve in an anatomic aorta and under physiologic pulsatile conditions.
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
Computers as an Instrument for Data Analysis. Technical Report No. 11.
ERIC Educational Resources Information Center
Muller, Mervin E.
A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…
Cyberbullying, depression, and problem alcohol use in female college students: a multisite study.
Selkie, Ellen M; Kota, Rajitha; Chan, Ya-Fen; Moreno, Megan
2015-02-01
Cyberbullying and its effects have been studied largely in middle and high school students, but less is known about cyberbullying in college students. This cross-sectional study investigated the relationship between involvement in cyberbullying and depression or problem alcohol use among college females. Two hundred and sixty-five female students from four colleges completed online surveys assessing involvement in cyberbullying behaviors. Participants also completed the Patient Health Questionnaire-9 (PHQ-9) to assess depressive symptoms and the Alcohol Use Disorder Identification Test (AUDIT) to assess problem drinking. Logistic regression tested associations between involvement in cyberbullying and either depression or problem drinking. Results indicated that 27% of participants had experienced cyberbullying in college; 17.4% of all participants met the criteria for depression (PHQ-9 score ≥10), and 37.5% met the criteria for problem drinking (AUDIT score ≥8). Participants with any involvement in cyberbullying had increased odds of depression. Those involved in cyberbullying as bullies had increased odds of both depression and problem alcohol use. Bully/victims had increased odds of depression. The four most common cyberbullying behaviors were also associated with increased odds for depression, with the highest odds among those who had experienced unwanted sexual advances online or via text message. Findings indicate that future longitudinal study of cyberbullying and its effects into late adolescence and young adulthood could contribute to the prevention of associated comorbidities in this population.
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
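The block LU idea can be sketched generically. For a linearized coupled system with fluid unknowns Δq and structural unknowns Δs (the symbols below are an illustrative sketch, not the authors' notation):

\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} \Delta q \\ \Delta s \end{bmatrix}
=
\begin{bmatrix} r_f \\ r_s \end{bmatrix},
\qquad
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
=
\begin{bmatrix} A & 0 \\ C & S \end{bmatrix}
\begin{bmatrix} I & A^{-1}B \\ 0 & I \end{bmatrix},
\qquad
S = D - C A^{-1} B .
\]

Each iteration then requires solves with the fluid operator A and with the much smaller Schur complement S on the interface unknowns, avoiding assembly of the full Jacobian.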
The Association of DRD2 with Insight Problem Solving.
Zhang, Shun; Zhang, Jinghuan
2016-01-01
Although the insight phenomenon has attracted great attention from psychologists, it is still largely unknown whether its variation in well-functioning human adults has a genetic basis. Several lines of evidence suggest that genes involved in dopamine (DA) transmission might be potential candidates. The present study explored for the first time the association of the dopamine D2 receptor gene (DRD2) with insight problem solving. Fifteen single-nucleotide polymorphisms (SNPs) covering DRD2 were genotyped in 425 unrelated healthy Chinese undergraduates, and were further tested for association with insight problem solving. Both single-SNP and haplotype analyses revealed several associations of DRD2 SNPs and haplotypes with insight problem solving. In conclusion, the present study provides the first evidence for the involvement of DRD2 in insight problem solving; future studies are necessary to validate these findings.
NASA Technical Reports Server (NTRS)
Nakajima, Yukio; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element methodology is formulated to handle traveling load problems involving large deformation fields in structures composed of viscoelastic media. The main thrust of this paper is to develop an overall finite element methodology and associated solution algorithms to handle the transient aspects of moving problems involving contact-impact type loading fields. Based on the methodology and algorithms formulated, several numerical experiments are considered. These include the rolling/sliding impact of tires with road obstructions.
The NASA/Baltimore Applications Project: An experiment in technology transfer
NASA Technical Reports Server (NTRS)
Golden, T. S.
1981-01-01
Conclusions drawn from the experiment thus far are presented. The problems of a large city most often do not require highly sophisticated solutions; the simpler the solution, the better. A problem-focused approach is a greater help to the city than a product-focused approach. Most problem situations involve several individuals or organized groups within the city. Mutual trust and good interpersonal relationships between the technologist and the administrator are as important for solving problems as technological know-how.
ERIC Educational Resources Information Center
Schlenker, Richard M.; And Others
Information is presented about the problems involved in using sea water in the steam propulsion systems of large, modern ships. Discussions supply background chemical information concerning the problems of corrosion, scale buildup, and sludge production. Suggestions are given for ways to maintain a good water treatment program to effectively deal…
Getting Along: Negotiating Authority in High Schools. Final Report.
ERIC Educational Resources Information Center
Farrar, Eleanor; Neufeld, Barbara
Appropriate responses to the authority problem in schools can be informed by a more complex understanding of the issue. Also of importance is knowledge of the ways in which schools and society at large are involved with both the creation of and the solution to the problem of student/teacher authority relations. School people are referring…
ERIC Educational Resources Information Center
Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente
2009-01-01
This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…
Newton Methods for Large Scale Problems in Machine Learning
ERIC Educational Resources Information Center
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Elliott, Luther; Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise
2012-12-01
AIMS: To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. DESIGN: Nationally representative survey. SETTING: Online. PARTICIPANTS: Large sample (n=3,380) of adult video gamers in the US. MEASUREMENTS: Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. FINDINGS: Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. CONCLUSIONS: Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of "addictive" video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play.
Efficient Control Law Simulation for Multiple Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.
1998-10-06
In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
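A minimal sketch of this kind of tree-accelerated evaluation, assuming a k-d tree in place of the paper's specific hierarchical method and an assumed separation-regulating control law (the gain and target distance are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def control_step(pos, gain=0.1, d_star=1.0):
    """One control update: each robot reacts to its closest neighbor.
    Tree build plus N nearest-neighbor queries cost O(N log N) overall."""
    tree = cKDTree(pos)
    dist, idx = tree.query(pos, k=2)        # k=1 is the robot itself
    d = dist[:, 1:2]                        # distance to closest neighbor
    bearing = (pos[idx[:, 1]] - pos) / d    # unit vector toward that neighbor
    return pos + gain * (d - d_star) * bearing  # regulate separation to d_star

pos = np.random.rand(100_000, 2) * 100.0    # 100k robots on a 100 x 100 field
pos = control_step(pos)
```

Per-robot query cost is O(log N) once the tree is built, which matches the scaling claimed in the abstract.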
Human factors in air traffic control: problems at the interfaces.
Shouksmith, George
2003-10-01
The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers were randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface centered training.
Reformulation of the covering and quantizer problems as ground states of interacting particles.
Torquato, S
2010-11-01
It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R^d interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R^d that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
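For reference, the two problems can be stated compactly. Given a point configuration z_1, z_2, ... in R^d with number density ρ, a standard formulation (sketched here from common definitions, not quoted from the paper) is

\[
R_c \;=\; \sup_{x \in \mathbb{R}^d}\; \min_i \,\lVert x - z_i \rVert
\quad\text{(covering radius)},
\qquad
\mathcal{G} \;=\; \frac{\rho^{2/d}}{d}\;
\mathbb{E}\!\left[\, \min_i \,\lVert X - z_i \rVert^{2} \right]
\quad\text{(scaled quantizer error)},
\]

where X is a uniformly random point: covering minimizes the worst-case distance to the nearest point of the configuration, while the quantizer minimizes the mean squared distance.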
DEP : a computer program for evaluating lumber drying costs and investments
Stewart Holmes; George B. Harpole; Edward Bilek
1983-01-01
The DEP computer program is a modified discounted cash flow program designed for the economic analysis of wood drying processes. Wood drying processes differ from other processes because of the large amounts of working capital required to finance inventories, and because of the relatively large shares of costs charged to inventory...
Approaches to eliminate waste and reduce cost for recycling glass.
Chao, Chien-Wen; Liao, Ching-Jong
2011-12-01
In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem arising in the manufacturing operations of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.
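The abstract does not reproduce the authors' IP model; the following is a hypothetical, minimal integer program of the same flavor (jobs assigned to time slots, with per-period holding costs and a loss cost for finishing after a due slot), written with PuLP. All data and names are illustrative assumptions.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

jobs, slots = range(4), range(4)     # tiny illustrative instance
hold = [2, 1, 3, 2]                  # holding cost per period of waiting
loss = [10, 8, 12, 9]                # loss cost if finished after the due slot
due = [1, 2, 0, 3]

prob = LpProblem("recycling_schedule", LpMinimize)
x = {(j, t): LpVariable(f"x_{j}_{t}", cat="Binary") for j in jobs for t in slots}

# Objective: total holding cost plus loss cost
prob += lpSum(x[j, t] * (hold[j] * t + (loss[j] if t > due[j] else 0))
              for j in jobs for t in slots)

for j in jobs:                       # each job occupies exactly one slot
    prob += lpSum(x[j, t] for t in slots) == 1
for t in slots:                      # each slot processes at most one job
    prob += lpSum(x[j, t] for j in jobs) <= 1

prob.solve()
print(value(prob.objective))
```

Heuristics of the kind the paper proposes would replace the exact solve with rules exploiting the model's structure once instances grow to the 50-100 job range.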
A survey of automated methods for sensemaking support
NASA Astrophysics Data System (ADS)
Llinas, James
2014-05-01
Complex, dynamic problems in general present a challenge for the design of analysis support systems and tools, largely because there is limited reliable a priori procedural knowledge descriptive of the dynamic processes in the environment. Problem domains that are non-cooperative or adversarial introduce added difficulties involving suboptimal observational data and/or data containing the effects of deception or covertness. The fundamental nature of analysis in these environments is based on composite approaches involving mining or foraging over the evidence, discovery and learning processes, and the synthesis of fragmented hypotheses; together, these can be labeled as sensemaking procedures. This paper reviews and analyzes the features, benefits, and limitations of a variety of automated techniques that offer possible support to sensemaking processes in these problem domains.
Genetics Home Reference: familial thoracic aortic aneurysm and dissection
... and dissection (familial TAAD) involves problems with the aorta, which is the large blood vessel that distributes ... Familial TAAD affects the upper part of the aorta, near the heart. This part of the aorta ...
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
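In the consensus form described above, the decomposition can be sketched as follows (the notation is illustrative):

\[
\min_{m}\ \sum_{k=1}^{K} \phi_k(m)
\;\;\Longleftrightarrow\;\;
\min_{m_1,\dots,m_K,\; m}\ \sum_{k=1}^{K} \phi_k(m_k)
\quad \text{s.t.} \quad m_k = m,
\]

with augmented Lagrangian

\[
L_\rho \;=\; \sum_{k=1}^{K}\Big[\, \phi_k(m_k)
\;+\; \lambda_k^{\top} (m_k - m)
\;+\; \tfrac{\rho}{2}\,\lVert m_k - m \rVert^{2} \Big],
\]

minimized by alternating separate solves over each m_k, the consensus update m ← (1/K) Σ_k (m_k + λ_k/ρ), and multiplier updates λ_k ← λ_k + ρ (m_k - m), which is the steering step described in the abstract.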
Multiple cranial neuropathy: a common diagnostic problem.
Garg, R K; Karak, B
1999-10-01
The syndrome of multiple cranial nerve palsies is a common clinical problem routinely encountered in neurological practice. Anatomical patterns of cranial nerve involvement help in localizing the lesion. Various infections, malignant neoplasms, and autoimmune vasculitis are common disorders leading to various syndromes of multiple cranial nerve palsies. A large number of diffuse neurological disorders (e.g., Guillain-Barré syndrome, myopathies) may also present with a syndrome of multiple cranial nerve palsies. Despite extensive biochemical and radiological work-up, the accurate diagnosis may not be established. A few such patients represent an "idiopathic" variety of multiple cranial nerve involvement and show a good response to corticosteroids. Widespread and sequential involvement of cranial nerves frequently suggests the possibility of malignant infiltration of the meninges; however, confirmation of the diagnosis may not be possible before autopsy.
The neural bases of the multiplication problem-size effect across countries
Prado, Jérôme; Lu, Jiayan; Liu, Li; Dong, Qi; Zhou, Xinlin; Booth, James R.
2013-01-01
Multiplication problems involving large numbers (e.g., 9 × 8) are more difficult to solve than problems involving small numbers (e.g., 2 × 3). Behavioral research indicates that this problem-size effect might be due to different factors across countries and educational systems. However, there is no neuroimaging evidence supporting this hypothesis. Here, we compared the neural correlates of the multiplication problem-size effect in adults educated in China and the United States. We found a greater neural problem-size effect in Chinese than American participants in bilateral superior temporal regions associated with phonological processing. However, we found a greater neural problem-size effect in American than Chinese participants in right intra-parietal sulcus (IPS) associated with calculation procedures. Therefore, while the multiplication problem-size effect might be a verbal retrieval effect in Chinese as compared to American participants, it may instead stem from the use of calculation procedures in American as compared to Chinese participants. Our results indicate that differences in educational practices might affect the neural bases of symbolic arithmetic. PMID:23717274
Improving insight and non-insight problem solving with brief interventions.
Wen, Ming-Ching; Butler, Laurie T; Koutstaal, Wilma
2013-02-01
Developing brief training interventions that benefit different forms of problem solving is challenging. In earlier research, Chrysikou (2006) showed that engaging in a task requiring generation of alternative uses of common objects improved subsequent insight problem solving. These benefits were attributed to a form of implicit transfer of processing involving enhanced construction of impromptu, on-the-spot or 'ad hoc' goal-directed categorizations of the problem elements. Following this, it is predicted that the alternative uses exercise should benefit abilities that govern goal-directed behaviour, such as fluid intelligence and executive functions. Similarly, an indirect intervention - self-affirmation (SA) - that has been shown to enhance cognitive and executive performance after self-regulation challenge and when under stereotype threat, may also increase adaptive goal-directed thinking and likewise should bolster problem-solving performance. In Experiment 1, brief single-session interventions, involving either alternative uses generation or SA, significantly enhanced both subsequent insight and visual-spatial fluid reasoning problem solving. In Experiment 2, we replicated the finding of benefits of both alternative uses generation and SA on subsequent insight problem-solving performance, and demonstrated that the underlying mechanism likely involves improved executive functioning. Even brief cognitive- and social-psychological interventions may substantially bolster different types of problem solving and may exert largely similar facilitatory effects on goal-directed behaviours. © 2012 The British Psychological Society.
Direct heuristic dynamic programming for damping oscillations in a large power system.
Lu, Chao; Si, Jennie; Xie, Xiaorong
2008-08-01
This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.
Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard
2013-04-01
The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
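A minimal sketch of the two kinds of limits for proportion data, using Laney's correction for between-subgroup variation; the counts and subgroup sizes are made-up illustrative data.

```python
import numpy as np

counts = np.array([520, 498, 545, 530, 601, 565, 540, 512])     # monthly events
sizes = np.array([10000, 9800, 10400, 10100, 11000, 10600, 10300, 9900])

p = counts / sizes
p_bar = counts.sum() / sizes.sum()
sigma_p = np.sqrt(p_bar * (1 - p_bar) / sizes)   # within-subgroup (binomial) sd

# Traditional p-chart: limits shrink as n grows, flagging nearly every point
p_out = (p > p_bar + 3 * sigma_p) | (p < p_bar - 3 * sigma_p)

# Laney p'-chart: rescale by the between-subgroup variation of the z-scores
z = (p - p_bar) / sigma_p
sigma_z = np.mean(np.abs(np.diff(z))) / 1.128    # moving-range estimate of sd
pp_out = (p > p_bar + 3 * sigma_z * sigma_p) | (p < p_bar - 3 * sigma_z * sigma_p)

print(p_out.sum(), "signals on the p-chart;", pp_out.sum(), "on the p'-chart")
```

When subgroups are very large, sigma_p becomes tiny and the classical limits collapse onto the center line; the factor sigma_z widens them in proportion to the variation actually present between subgroups.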
Equation solvers for distributed-memory computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.
Martinez, Haley S; Klanecky, Alicia K; McChargue, Dennis E
2018-02-06
Little research has examined the combined effect of mental health difficulties and demographic risk factors such as freshman status and Greek affiliation in understanding college problem drinking. The current study examines the interaction among freshman status, Greek affiliation, and mental health difficulties. Undergraduate students (N = 413) from a private and a public Midwestern university completed a large online survey battery between January 2009 and April 2013. Data from both schools were aggregated for the analyses. After accounting for gender, age, and school type, the three-way interaction indicated that the highest drinking levels were reported by freshman students who reported a history of mental health problems but were not involved in Greek life. Findings are discussed in the context of perceived social norms, as well as alcohol-related screenings and intervention opportunities on college campuses.
ERIC Educational Resources Information Center
Rollings, Meda Janeen
2010-01-01
The study addressed the problem of campus safety and the extent to which faculty and administrators are aware of institutional security policies. Further, the research compared perceptions of administrators and faculty regarding faculty awareness of and involvement in campus safety policy initiatives. The research sought to determine if the…
Wind-induced vibration of stay cables
DOT National Transportation Integrated Search
2007-08-01
Cable-stayed bridges have become the form of choice over the past several decades for bridges in the medium- to long-span range. In some cases, serviceability problems involving large amplitude vibrations of stay cables under certain wind and rain co...
Boeninger, Daria K.; Masyn, Katherine E.; Conger, Rand D.
2012-01-01
Although studies have established associations between parenting characteristics and adolescent suicidality, the strength of the evidence for these links remains unclear, largely because of methodological limitations, including lack of accounting for possible child effects on parenting. This study addresses these issues by using autoregressive cross-lag models with data on 802 adolescents and their parents across 5 years. Observed parenting behaviors predicted change in adolescent suicidal problems across one-year intervals even after controlling for adolescents’ effects on parenting. Nurturant-involved parenting continued to demonstrate salutary effects after controlling for adolescent and parent internalizing psychopathology: over time, observed nurturant-involved parenting reduced the likelihood of adolescent suicidal problems. This study increases the empirical support implicating parenting behaviors in the developmental course of adolescent suicidality. PMID:24244079
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1982-01-01
The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solution (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.
Numerical Optimization Algorithms and Software for Systems Biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
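Optimization over a stoichiometric matrix typically takes the flux-balance form: maximize a flux of interest subject to steady-state mass balance S v = 0 and capacity bounds. A minimal sketch with an invented three-reaction toy network (all data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass (columns = reactions)
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, 100), (0, 100)]     # uptake flux capped at 10

# Maximize biomass flux v3 (minimize -v3) subject to S v = 0
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds,
              method="highs")
print(res.x)                               # -> [10. 10. 10.]
```

Genome-scale versions have the same shape, with S containing thousands of sparse rows and columns; the reliability concerns mentioned above arise from the scale and conditioning of such systems.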
Solving Power Tool Problems in the School Shop
ERIC Educational Resources Information Center
Irvin, Daniel W.
1976-01-01
The school shop instructor is largely responsible for the preventive maintenance of power tools. These preventive measures primarily involve proper alignment, good lubrication, a reasonable maintenance program, and good operating procedures. Suggestions for the maintenance of specific equipment are provided. (Author/BP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, at the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. At the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function, which depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.
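The hierarchy can be sketched abstractly (symbols are illustrative, not the paper's notation): the coordinator chooses a pricing function π anticipating the agents' best responses, while each agent's optimal decision depends on the population average through the price:

\[
\max_{\pi}\;\; \sum_{i=1}^{N} U_i\big(u_i^{*}\big) \;-\; C\Big(\tfrac{1}{N}\textstyle\sum_{i} u_i^{*}\Big)
\qquad \text{s.t.} \qquad
u_i^{*} \in \arg\max_{u_i}\; U_i(u_i) - \pi(\bar{u})\, u_i,
\quad
\bar{u} = \tfrac{1}{N}\sum_{j} u_j^{*} .
\]

The mean-field structure enters because each agent's problem is coupled to all the others only through the aggregate ū.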
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental design involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, which involve the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
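One randomized ingredient in this setting can be sketched concretely: Hutchinson-type estimators replace exact traces of matrix inverses, which appear in D-optimality objective and gradient evaluations, with averages over random probe vectors. The matrix below is an illustrative stand-in, not the paper's operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def hutchinson_trace_inv(A, n_probes=200):
    """Estimate tr(A^{-1}) using only solves against random Rademacher probes."""
    n = A.shape[0]
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe, E[z z^T] = I
        est += z @ np.linalg.solve(A, z)       # z^T A^{-1} z
    return est / n_probes

n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # well-conditioned SPD test matrix
print(hutchinson_trace_inv(A), np.trace(np.linalg.inv(A)))
```

Each probe costs one linear solve, so for PDE-scale operators the dense determinant and inverse never have to be formed explicitly.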
NASA Technical Reports Server (NTRS)
Valinia, Azita; Moe, Rud; Seery, Bernard D.; Mankins, John C.
2013-01-01
We present a concept for an ISS-based optical system assembly demonstration designed to advance technologies related to future large in-space optical facilities deployment, including space solar power collectors and large-aperture astronomy telescopes. The large solar power collector problem is not unlike the large astronomical telescope problem, but at least conceptually it should be easier in principle, given the tolerances involved. We strive in this application to leverage heavily the work done on the NASA Optical Testbed Integration on ISS Experiment (OpTIIX) effort to erect a 1.5 m imaging telescope on the International Space Station (ISS). Specifically, we examine a robotic assembly sequence for constructing a large (meter diameter) slightly aspheric or spherical primary reflector, comprised of hexagonal mirror segments affixed to a lightweight rigidizing backplane structure. This approach, together with a structured robot assembler, will be shown to be scalable to the area and areal densities required for large-scale solar concentrator arrays.
Engineering data management: Experience and projections
NASA Technical Reports Server (NTRS)
Jefferson, D. K.; Thomson, B.
1978-01-01
Experiences in developing a large engineering data management system are described. Problems which were encountered are presented and projected to future systems. Business applications involving similar types of data bases are described. A data base management system architecture proposed by the business community is described and its applicability to engineering data management is discussed. It is concluded that the most difficult problems faced in engineering and business data management can best be solved by cooperative efforts.
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Levine, H.
1975-01-01
The theoretical analysis background for the STARS-2P nonlinear inelastic program is discussed. The theory involved is amenable to the analysis of large deflection inelastic behavior in axisymmetric shells of revolution subjected to axisymmetric loadings. The analysis is capable of considering such effects as those involved in nonproportional and cyclic loading conditions. The following are also discussed: orthotropic nonlinear kinematic hardening theory; shell wall cross sections and discrete ring stiffeners; the coupled axisymmetric large deflection elasto-plastic torsion problem; and the provision for the inelastic treatment of smeared stiffeners, isogrid, and waffle wall constructions.
Control and structural optimization for maneuvering large spacecraft
NASA Technical Reports Server (NTRS)
Chun, H. M.; Turner, J. D.; Yu, C. C.
1990-01-01
Presented here are the results of an advanced control design as well as a discussion of the requirements for automating both the structures and control design efforts for maneuvering a large spacecraft. The advanced control application addresses a general three dimensional slewing problem, and is applied to a large geostationary platform. The platform consists of two flexible antennas attached to the ends of a flexible truss. The control strategy involves an open-loop rigid body control profile which is derived from a nonlinear optimal control problem and provides the main control effort. A perturbation feedback control reduces the response due to the flexibility of the structure. Results are shown which demonstrate the usefulness of the approach. Software issues are considered for developing an integrated structures and control design environment.
Studying marine stratus with large eddy simulation
NASA Technical Reports Server (NTRS)
Moeng, Chin-Hoh
1990-01-01
Data sets from field experiments over the stratocumulus regime may include complications from larger-scale variations, decoupled cloud layers, the diurnal cycle, entrainment instability, etc. On top of the already complicated turbulence-radiation-condensation processes within the cloud-topped boundary layer (CTBL), these complexities may sometimes make interpretation of the data sets difficult. To study these processes, a better understanding is needed of the basic processes involved in the prototype CTBL. For example, is cloud-top radiative cooling the primary source of the turbulent kinetic energy (TKE) within the CTBL? Historically, laboratory measurements have played an important role in addressing turbulence problems. The CTBL, however, is a turbulent field which is probably impossible to generate in laboratories. Large eddy simulation (LES) is an alternative way of 'measuring' the turbulent structure under controlled environments, which allows the systematic examination of the basic physical processes involved. However, there are problems with the LES approach for the CTBL. The LES data need to be consistent with the observed data. The LES approach is discussed, and results are given which provide some insights into the simulated turbulent flow field. Problems with this approach for the CTBL and information from the FIRE experiment needed to justify the LES results are discussed.
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
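The preference machinery is the paper's contribution and is not reproduced here; the sketch below shows only the standard single-source shortest-path computation on the distance graph of a simple temporal network, which the authors generalize. The event indices and constraint bounds are made up for illustration.

    def bellman_ford(n, edges, source=0):
        # Distance-graph view of a simple temporal network: an edge (u, v, w)
        # encodes the constraint t_v - t_u <= w. Shortest paths from a source
        # give the tightest implied bounds; a negative cycle means the
        # constraint set is inconsistent.
        INF = float('inf')
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        if any(dist[u] + w < dist[v] for u, v, w in edges):
            raise ValueError('inconsistent temporal constraints (negative cycle)')
        return dist

    # events 0..2: event 1 occurs 5..10 after event 0, event 2 at most 3 after event 1
    edges = [(0, 1, 10), (1, 0, -5), (1, 2, 3), (2, 1, 0)]
    print(bellman_ford(3, edges))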
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting perspective on the successes in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9 where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of…
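As a concrete illustration of the Davidson method mentioned above, here is a minimal sketch for the lowest eigenpair of a symmetric, diagonally dominant matrix; production computational-chemistry codes add restarts, blocking, and better preconditioners, none of which are shown here.

    import numpy as np

    def davidson_lowest(A, tol=1e-8, max_iter=60):
        # Minimal Davidson iteration for the lowest eigenpair of a symmetric,
        # diagonally dominant matrix (the regime where the diagonal
        # preconditioner below works well).
        n = A.shape[0]
        d = np.diag(A)
        V = np.zeros((n, 0))
        t = np.zeros(n)
        t[np.argmin(d)] = 1.0                     # start from the smallest diagonal
        theta, u = d.min(), t
        for _ in range(max_iter):
            for _ in range(2):                    # re-orthogonalize twice for stability
                t = t - V @ (V.T @ t)
            nt = np.linalg.norm(t)
            if nt < 1e-12:
                break
            V = np.column_stack([V, t / nt])
            H = V.T @ A @ V                       # Rayleigh-Ritz projection
            w, S = np.linalg.eigh(H)
            theta, u = w[0], V @ S[:, 0]
            r = A @ u - theta * u                 # residual
            if np.linalg.norm(r) < tol:
                break
            denom = d - theta
            denom[np.abs(denom) < 1e-10] = 1e-10
            t = r / denom                         # diagonal (Davidson) preconditioner
        return theta, u

    rng = np.random.default_rng(1)
    M = rng.standard_normal((200, 200))
    A = (M + M.T) * 0.01 + np.diag(np.arange(1.0, 201.0))   # diagonally dominant
    print(davidson_lowest(A)[0], np.linalg.eigvalsh(A)[0])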
Some Issues in Programming Multi-Mini-Processors
1975-01-01
Hardware and software are to be combined optimally to perform that specialized task. This in essence is the strategy followed by the BBN group in...large memory is directly addressable. MIXED SOLUTIONS The most promising approach appears to involve mixing several of the previous solutions...mini- or micro-computers. Possibly the problem will be solved by avoiding it. Some new minis are appearing on the market now with large physical
NASA Technical Reports Server (NTRS)
Basili, V. R.; Zelkowitz, M. V.
1978-01-01
In a brief evaluation of software-related considerations, it is found that suitable approaches for software development depend to a large degree on the characteristics of the particular project involved. An analysis is conducted of development problems in an environment in which ground support software is produced for spacecraft control. The amount of work involved is in the range from 6 to 10 man-years. Attention is given to a general project summary, a programmer/analyst survey, a component summary, a component status report, a resource summary, a change report, a computer program run analysis, aspects of data collection on a smaller scale, progress forecasting, problems of overhead, and error analysis.
Large Nc equivalence and baryons
NASA Astrophysics Data System (ADS)
Blake, Mike; Cherman, Aleksey
2012-09-01
In the large Nc limit, gauge theories with different gauge groups and matter content sometimes turn out to be “large Nc equivalent,” in the sense of having a set of coincident correlation functions. Large Nc equivalence has mainly been explored in the glueball and meson sectors. However, a recent proposal to dodge the fermion sign problem of QCD with a quark number chemical potential using large Nc equivalence motivates investigating the applicability of large Nc equivalence to correlation functions involving baryon operators. Here we present evidence that large Nc equivalence extends to the baryon sector, under the same type of symmetry realization assumptions as in the meson sector, by adapting the classic Witten analysis of large Nc baryons.
Miller, Helen E; Thomas, Samantha L; Robinson, Priscilla
2018-04-06
Previous research has shown that government and industry discussions of gambling may focus on personal responsibility for gambling harm. In Australia, these discussions have largely excluded people with lived experience of problem gambling, including those involved in peer support and advocacy. We conducted 26 in-depth interviews with people with current or previous experience of problem gambling on electronic gaming machines (EGMs) who were involved in peer support and advocacy activities, using an approach informed by Interpretive Policy Analysis and Constructivist Grounded Theory. Participants perceived that government and industry discussed gambling as safe and entertaining, with a focus on personal responsibility for problem gambling. This focus on personal responsibility was perceived to increase the stigma associated with problem gambling. In contrast, participants described gambling as risky, addictive and harmful, with problem gambling resulting from the design of EGMs. As a result of their different perspectives, participants proposed different interventions to reduce gambling harm, including reducing accessibility and making products safer. Challenging the discourses used by governments and industry to describe gambling, using the lived experience of people with experience of gambling harm, may result in reduced stigma associated with problem gambling and more effective public policy approaches to reducing harm.
Analytic Theory and Control of the Motion of Spinning Rigid Bodies
NASA Technical Reports Server (NTRS)
Tsiotras, Panagiotis
1993-01-01
Numerical simulations are often resorted to in order to understand the attitude response and control characteristics of a rigid body. However, this approach to performing sensitivity and/or error analyses may be prohibitively expensive and time consuming, especially when a large number of problem parameters are involved. Thus, there is an important role for analytical models in obtaining an understanding of the complex dynamical behavior. In this dissertation, new analytic solutions are derived for the complete attitude motion of spinning rigid bodies, under minimal assumptions. Hence, we obtain the most general solutions reported in the literature so far. Specifically, large external torques and large asymmetries are included in the problem statement. Moreover, problems involving large angular excursions are treated in detail. A new tractable formulation of the kinematics is introduced which proves to be extremely helpful in the search for analytic solutions of the attitude history of such kinds of problems. The main utility of the new formulation becomes apparent, however, when searching for feedback control laws for stabilization and/or reorientation of spinning spacecraft. This is an inherently nonlinear problem, where standard linear control techniques fail. We derive a class of control laws for spin axis stabilization of symmetric spacecraft using only two pairs of gas jet actuators. Practically, this could correspond to a spacecraft operating in failure mode, for example. Theoretically, it is also an important control problem which, because of its difficulty, has received little, if any, attention in the literature. The proposed control laws are especially simple and elegant. A feedback control law that achieves arbitrary reorientation of the spacecraft is also derived, using ideas from invariant manifold theory. The significance of this research is twofold. First, it provides a deeper understanding of the fundamental behavior of rigid bodies subject to body-fixed torques. Assessment of the analytic solutions reveals that they are very accurate; for symmetric bodies the solutions of Euler's equations of motion are, in fact, exact. Second, the results of this research have a fundamental impact on practical scientific and mechanical applications in terms of the analysis and control of all finite-sized rigid bodies ranging from nanomachines to very large bodies, both man-made and natural. After all, Euler's equations of motion apply to all physical bodies, barring only the extreme limits of quantum mechanics and relativity.
Publication Bias in Special Education Meta-Analyses
ERIC Educational Resources Information Center
Gage, Nicholas A.; Cook, Bryan G.; Reichow, Brian
2017-01-01
Publication bias involves the disproportionate representation of studies with large and significant effects in the published research. Among other problems, publication bias results in inflated omnibus effect sizes in meta-analyses, giving the impression that interventions have stronger effects than they actually do. Although evidence suggests…
A Problem Solving Active-Learning Course in Pharmacotherapy.
ERIC Educational Resources Information Center
Delafuente, Jeffrey C.; And Others
1994-01-01
A third-year pharmacology course in a doctoral pharmacy program that is case based and intended for a large class is described. Aspects discussed include learning objectives, course organization, classroom activities, case selection and design, faculty involvement, grading, and areas identified for improvement. (MSE)
Institutional Structure: An Impediment to Professionalism.
ERIC Educational Resources Information Center
Palardy, J. Michael.
1988-01-01
Large schools have a tall organizational structure with long chains of command and limited control for "low-level" staff, including teachers and principals. To resolve this problem, two alternative structures are suggested: a dual structure involving spheres of administrative and professional responsibility and a flat structure featuring…
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
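DQQ itself is built on the authors' MINOS solver, which is not reproduced here; as a stand-in illustration of the double-then-higher-precision idea, below is one step of iterative refinement for a square linear system. This assumes NumPy, whose longdouble is typically 80-bit extended precision rather than true quad.

    import numpy as np

    def refine(A, b, x):
        # A double-precision solution x is used as a warm start; the residual
        # is evaluated in extended precision and a cheap double-precision
        # correction solve improves the answer -- the flavor of the DQQ idea,
        # not the authors' actual procedure.
        A_hi = A.astype(np.longdouble)
        r = b.astype(np.longdouble) - A_hi @ x.astype(np.longdouble)
        dx = np.linalg.solve(A, np.asarray(r, dtype=np.float64))
        return x.astype(np.longdouble) + dx

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) + 50 * np.eye(50)
    b = rng.standard_normal(50)
    x0 = np.linalg.solve(A, b)                    # double-precision warm start
    x1 = refine(A, b, x0)
    print(np.linalg.norm(b - A @ np.asarray(x1, dtype=np.float64)))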
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
Patients subject to high levels of coercion: staff's understanding.
Bowers, Len; Wright, Steve; Stewart, Duncan
2014-05-01
Measures to keep staff and patients safe (containment) frequently involve coercion. A small proportion of patients is subject to a large proportion of containment use. To reduce the use of containment, we need a better understanding of the circumstances in which it is used and the understandings of patients and staff. Two sweeps were made of all the wards, spread over four hospital sites, in one large London mental health organization to identify patients who had been subject to high levels of containment in the previous two weeks. Data were then extracted from their case notes about their past history, current problem behaviours, and how they were understood by the patients involved and the staff. Nurses and consultant psychiatrists were interviewed to supplement the information from the case records. Twenty-six heterogeneous patients were identified, with many ages, genders, diagnoses, and psychiatric specialities represented. The main problem behaviours giving rise to containment use were violence and self-harm. The roots of the problem behaviours were to be found in severe psychiatric symptoms, cognitive difficulties, personality traits, and the implementation of the internal structure of the ward by staff. The range and depth of staff's understandings were limited and did not include functional analysis, defence mechanisms, specific cognitive assessment, and other potential frameworks. There is a need for more in-depth assessment and understanding of patients' problems, which may lead to additional ways to reduce containment use.
Object Transportation by Two Mobile Robots with Hand Carts.
Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun
2014-01-01
This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50-60%. In addition, we demonstrated the efficacy of this task in real environments where errors occur in robot sensing and movement.
On-Line Systems: Promise and Pitfalls
ERIC Educational Resources Information Center
Cuadra, Carlos A.
1971-01-01
The virtues of interactive systems are speed, intimacy, and - if time-sharing is involved - economy. The major problems are the cost of the large computers and files necessary for bibliographic data, the still-high cost of communications, and the generally poor design of the user-system interfaces. (Author)
NASA Technical Reports Server (NTRS)
Housner, J. M.; Mcgowan, P. E.; Abrahamson, A. L.; Powell, M. G.
1986-01-01
The LATDYN User's Manual presents the capabilities and instructions for the LATDYN (Large Angle Transient DYNamics) computer program. The LATDYN program is a tool for analyzing the controlled or uncontrolled dynamic transient behavior of interconnected deformable multi-body systems which can undergo large angular motions of each body relative to other bodies. The program accommodates large structural deformation as well as large rigid body rotations and is applicable to, but not limited to, the following areas: (1) development of large flexible space structures; (2) slewing of large space structure components; (3) mechanisms with rigid or elastic components; and (4) robotic manipulations of beam members. Presently the program is limited to two-dimensional problems, but in many cases, three-dimensional problems can be exactly or approximately reduced to two dimensions. The program uses convected finite elements to handle the large angular motions involved in the analysis. General geometry is permitted. Detailed user input and output specifications are provided and discussed with example runstreams. To date, LATDYN has been configured for CDC/NOS and DEC VAX/VMS machines. All coding is in ANSI-77 FORTRAN. Detailed instructions regarding interfaces with particular computer operating systems and file structures are provided.
Large Osteoarthritic Cyst Presenting as Soft Tissue Tumour – A Case Report
Kosuge, DD; Park, DH; Cannon, SR; Briggs, TW; Pollock, RC; Skinner, JA
2007-01-01
Large osteoarthritic cysts can sometimes be difficult to distinguish from primary osseous and soft tissue tumours. We present such a case involving a cyst arising from the hip joint and eroding the acetabulum which presented as a soft tissue malignancy referred to a tertiary bone and soft tissue tumour centre. We discuss the diagnostic problems it may pose, and present a literature review of the subject. PMID:17535605
History-Dependent Problems with Applications to Contact Models for Elastic Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartosz, Krzysztof; Kalita, Piotr; Migórski, Stanisław
We prove an existence and uniqueness result for a class of subdifferential inclusions which involve a history-dependent operator. Then we specialize this result in the study of a class of history-dependent hemivariational inequalities. Problems of this kind arise in a large number of mathematical models which describe quasistatic processes of contact. To provide an example we consider an elastic beam in contact with a reactive obstacle. The contact is modeled with a new and nonstandard condition which involves both the subdifferential of a nonconvex and nonsmooth function and a Volterra-type integral term. We derive a variational formulation of the problem which is in the form of a history-dependent hemivariational inequality for the displacement field. Then, we use our abstract result to prove its unique weak solvability. Finally, we consider a numerical approximation of the model, solve the approximate problems effectively, and provide numerical simulations.
Powell, Douglas R; Son, Seung-Hee; File, Nancy; San Juan, Robert R
2010-08-01
Two dimensions of parent-school relationships, parental school involvement and parents' perceptions of teacher responsiveness to child/parent, were examined in state-funded pre-kindergarten classrooms in a large urban school district. Children's social and academic outcomes were individually assessed in the fall and spring. Hierarchical Linear Modeling analyses revealed that parental school involvement positively predicted children's social skills (d=.55) and mathematics skills (d=.36), and negatively predicted problem behaviors (d=.47). Perceived teacher responsiveness to child/parent was positively related to children's early reading (d=.43) and social skills (d=.43), and negatively to problem behaviors (d=.61). All analyses controlled for quality of teacher interaction with children in the classroom, parental home involvement, parental education level, and child race/ethnicity.
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operating conditions. A novel method for clustering MTS based on a combination of SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large-scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets.
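The paper's exact rescaling and fuzzy-clustering machinery are not reproduced in this summary; the sketch below shows only the two ingredients being combined, the PCA similarity factor between two multivariate time series and the average-based Euclidean distance, with an illustrative (hypothetical) blend weight lam.

    import numpy as np

    def pca_similarity(X, Y, k=2):
        # S_PCA: average squared cosine between the k leading principal
        # directions of two MTS (rows = time samples, columns = variables).
        def dirs(Z):
            _, _, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
            return Vt[:k].T
        U, W = dirs(X), dirs(Y)
        return np.sum((U.T @ W) ** 2) / k

    def average_based_distance(X, Y):
        # AED: Euclidean distance between the per-variable means, which
        # captures magnitude differences that S_PCA is blind to.
        return np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))

    def similarity(X, Y, lam=0.5):
        # hypothetical blend; the paper's actual combination may differ
        return lam * pca_similarity(X, Y) + (1 - lam) * np.exp(-average_based_distance(X, Y))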
Childhood antecedents of incarceration and criminal justice involvement among homeless veterans.
Tsai, Jack; Rosenheck, Robert A
2013-10-01
Although criminal justice involvement and incarceration are common problems for homeless veterans, few studies have examined childhood risk factors for criminal justice involvement among veterans. This study examined the association between three types of childhood problems (family instability, conduct disorder behaviors, and childhood abuse) and criminal justice involvement and incarceration in adulthood. Data from 1,161 homeless veterans across 19 sites participating in the Housing and Urban Development-Veterans Affairs Supportive Housing program were examined. After controlling for sociodemographics and mental health diagnoses, veterans who reported more conduct disorder behaviors during childhood tended to report more criminal charges of all types, more convictions, and longer periods of incarceration during adulthood. However, the variance in criminal behavior explained by these childhood factors was not large, suggesting that there are other factors that affect the trajectory by which homeless veterans become involved in the criminal justice system. Further research is needed to intervene in the pathway to the criminal justice system and guide efforts to prevent incarceration among veterans.
Dependency visualization for complex system understanding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, J. Allison Cory
1994-09-01
With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed graph based concepts in software tool design has been demonstrated in literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
PROBLEM OF FORMING IN A MAN-OPERATOR A HABIT OF TRACKING A MOVING TARGET,
Cybernetics stimulated the large-scale use of the method of functional analogy which makes it possible to compare technical and human activity systems...interesting and highly efficient human activity because of the psychological control factor involved in its operation. The human tracking system is
A Comparison of Missing-Data Procedures for Arima Time-Series Analysis
ERIC Educational Resources Information Center
Velicer, Wayne F.; Colby, Suzanne M.
2005-01-01
Missing data are a common practical problem for longitudinal designs. Time-series analysis is a longitudinal method that involves a large number of observations on a single unit. Four different missing-data methods (deletion, mean substitution, mean of adjacent observations, and maximum likelihood estimation) were evaluated. Computer-generated…
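Two of the four methods compared, mean substitution and the mean of adjacent observations, are simple enough to sketch; in the code below NaN marks a missing observation, and each missing point is assumed to have at least one observed neighbor.

    import numpy as np

    def mean_substitution(x):
        # replace every missing value with the overall series mean
        out = x.astype(float).copy()
        out[np.isnan(out)] = np.nanmean(x)
        return out

    def adjacent_mean(x):
        # replace each missing value with the mean of the nearest observed
        # neighbors on either side (one side suffices at the series edges)
        out = x.astype(float).copy()
        for i in np.flatnonzero(np.isnan(out)):
            left = x[:i][~np.isnan(x[:i])]
            right = x[i + 1:][~np.isnan(x[i + 1:])]
            vals = ([left[-1]] if left.size else []) + ([right[0]] if right.size else [])
            out[i] = np.mean(vals)
        return out

    x = np.array([1.0, np.nan, 3.0, 4.0, np.nan, 6.0])
    print(mean_substitution(x), adjacent_mean(x))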
Electronic Journal Delivery in Academic Libraries
ERIC Educational Resources Information Center
Crothers, Stephen; Prabhu, Margaret; Sullivan, Shirley
2007-01-01
The authors recount experiences of the variety of problems and issues involved in providing access to electronic journals in a large academic library. The paper excludes concerns emanating from decisions to subscribe to aggregations such as those produced by vendors like EBSCO, but concentrates on scholarly journals ordered individually, or as…
ERIC Educational Resources Information Center
Bork, Alfred M.
An introduction to the problems involved in conversion of computer dialogues from one computer language to another is presented. Conversion of individual dialogues by complete rewriting is straightforward, if tedious. To make a general conversion of a large group of heterogeneous dialogue material from one language to another at one step is more…
NASA Technical Reports Server (NTRS)
Wegener, P. P.
1980-01-01
A cryogenic wind tunnel is based on the twofold idea of lowering drive power and increasing Reynolds number by operating with nitrogen near its boiling point. There are two possible types of condensation problems involved in this mode of wind tunnel operation. The first concerns the expansion from the nozzle supply to the test section at relatively low cooling rates; the second, the expansion around models in the test section, which involves higher cooling rates and shorter time scales. In addition to these two condensation problems, it is not certain what purity of nitrogen can be achieved in a large facility. Therefore, one cannot rule out condensation processes other than those of homogeneous nucleation.
Lang, E; Mattson, M
1985-01-01
A structured, goal-oriented format for enhancing the involvement of activity therapy disciplines in the multidisciplinary treatment planning process has been developed in a large private psychiatric teaching hospital. The format, an adaptation of the problem-oriented record, encompasses formal procedures for identifying and recording relevant problems, goals, methods, and objectives for activity therapy treatment. The benefits of this approach include the development of specific, measurable, attainable functional goals; increased accountability in treatment planning and delivery; less time spent in documentation; and education of other staff about the role and function of activities therapy. Patients have a better understanding of their goals and the steps needed to achieve them and show increased participation in the therapy process.
An information geometric approach to least squares minimization
NASA Astrophysics Data System (ADS)
Transtrum, Mark; Machta, Benjamin; Sethna, James
2009-03-01
Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
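In the spirit of the abstract, here is a minimal sketch of a damped Gauss-Newton (Levenberg-Marquardt) step augmented with a geodesic-acceleration correction estimated by finite differences; damping updates and step-size control, which a practical solver needs, are omitted, and the step-size h is an illustrative choice.

    import numpy as np

    def lm_geodesic_step(residual, jacobian, x, lam=1e-3, h=0.1):
        # Standard LM "velocity": solve (J^T J + lam I) v = -J^T r.
        r = residual(x)
        J = jacobian(x)
        M = J.T @ J + lam * np.eye(x.size)
        v = np.linalg.solve(M, -J.T @ r)
        # Second directional derivative of the residuals along v, by central
        # difference; it drives the geodesic-acceleration correction a.
        rpp = (residual(x + h * v) - 2.0 * r + residual(x - h * v)) / h**2
        a = 0.5 * np.linalg.solve(M, -J.T @ rpp)
        return x + v + a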
NASA Technical Reports Server (NTRS)
Macneal, R. H.; Harder, R. L.; Mason, J. B.
1973-01-01
A development for NASTRAN which facilitates the analysis of structures made up of identical segments symmetrically arranged with respect to an axis is described. The key operation in the method is the transformation of the degrees of freedom for the structure into uncoupled symmetrical components, thereby greatly reducing the number of equations which are solved simultaneously. A further reduction occurs if each segment has a plane of reflective symmetry. The only required assumption is that the problem be linear. The capability, as developed, will be available in level 16 of NASTRAN for static stress analysis, steady state heat transfer analysis, and vibration analysis. The paper includes a discussion of the theory, a brief description of the data supplied by the user, and the results obtained for two example problems. The first problem concerns the acoustic modes of a long prismatic cavity embedded in the propellant grain of a solid rocket motor. The second problem involves the deformations of a large space antenna. The latter example is the first application of the NASTRAN Cyclic Symmetry capability to a really large problem.
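The transformation into uncoupled symmetric components can be illustrated on a block-circulant system, the algebraic structure produced by N identical segments arranged around an axis: a DFT over the segments uncouples one large solve into N small ones. This is a generic sketch of the idea, not NASTRAN's implementation.

    import numpy as np

    def solve_cyclic(blocks, f):
        # K is block-circulant: (K u)_j = sum_s blocks[s] @ u_{(j - s) mod N}.
        # A DFT across the N segments block-diagonalizes K, so one large
        # solve becomes N independent m x m solves.
        N, m = len(blocks), blocks[0].shape[0]
        F_hat = np.fft.fft(f.reshape(N, m), axis=0)
        K_hat = np.fft.fft(np.stack(blocks), axis=0)     # shape (N, m, m)
        U_hat = np.stack([np.linalg.solve(K_hat[k], F_hat[k]) for k in range(N)])
        return np.real(np.fft.ifft(U_hat, axis=0)).ravel()

    # 8 identical segments, each with 3 DOF, nearest-neighbor coupling
    N, m = 8, 3
    diag, off = 4.0 * np.eye(m), -1.0 * np.eye(m)
    blocks = [diag, off] + [np.zeros((m, m))] * (N - 3) + [off]
    f = np.ones(N * m)
    print(solve_cyclic(blocks, f))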
Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3
NASA Technical Reports Server (NTRS)
Shivarama, Ravishankar; Fahrenthold, Eric P.
2004-01-01
A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.
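The Hamiltonian side of the chapter is not reproduced here; the sketch below shows only the singularity-free Euler-parameter (unit quaternion) kinematic relation the formulation builds on, with the scalar component first and the angular velocity expressed in the body frame.

    import numpy as np

    def quaternion_rate(q, omega):
        # Euler-parameter kinematics: qdot = 0.5 * E(q) @ omega. Unlike Euler
        # angles, this is free of singularities for arbitrarily large
        # rotations; the price is one redundant coordinate with |q| = 1.
        q0, q1, q2, q3 = q
        E = np.array([[-q1, -q2, -q3],
                      [ q0, -q3,  q2],
                      [ q3,  q0, -q1],
                      [-q2,  q1,  q0]])
        return 0.5 * E @ omega

    q = np.array([1.0, 0.0, 0.0, 0.0])      # identity attitude
    omega = np.array([0.0, 0.0, 0.1])       # slow spin about the body z axis
    print(quaternion_rate(q, omega))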
On the decay of solutions to the 2D Neumann exterior problem for the wave equation
NASA Astrophysics Data System (ADS)
Secchi, Paolo; Shibata, Yoshihiro
We consider the exterior problem in the plane for the wave equation with a Neumann boundary condition and study the asymptotic behavior of the solution for large times. With possible applications in mind, we are interested in showing a decay estimate which does not involve weighted norms of the initial data. In the paper we prove such an estimate, by a combination of the estimate of the local energy decay and decay estimates for the free space solution.
Robust Radio Broadcast Monitoring Using a Multi-Band Spectral Entropy Signature
NASA Astrophysics Data System (ADS)
Camarena-Ibarrola, Antonio; Chávez, Edgar; Tellez, Eric Sadit
Monitoring media broadcast content has received a lot of attention lately from both academia and industry due to the technical challenge involved and its economic importance (e.g. in advertising). The problem poses a unique challenge from the pattern recognition point of view because a very high recognition rate is needed under non-ideal conditions. The problem consists of comparing a small audio sequence (the commercial ad) with a large audio stream (the broadcast), searching for matches.
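The paper's exact band layout and matching procedure are not given in this summary; the following sketch computes a per-frame multi-band spectral entropy vector of the general kind described, with illustrative band edges and sample rate.

    import numpy as np

    def multiband_spectral_entropy(frame, fs, bands):
        # Shannon entropy of the normalized power spectrum within each band;
        # stacking these vectors over frames yields the audio signature.
        w = frame * np.hanning(len(frame))
        p_all = np.abs(np.fft.rfft(w)) ** 2
        freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
        sig = []
        for lo, hi in bands:
            p = p_all[(freqs >= lo) & (freqs < hi)]
            p = p / (p.sum() + 1e-12)
            sig.append(float(-(p * np.log2(p + 1e-12)).sum()))
        return np.array(sig)

    fs = 8000
    t = np.arange(1024) / fs
    frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 1800 * t)
    print(multiband_spectral_entropy(frame, fs, bands=[(0, 500), (500, 1000), (1000, 2000)]))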
Klegeris, Andis; Hurren, Heather
2011-12-01
Problem-based learning (PBL) can be described as a learning environment where the problem drives the learning. This technique usually involves learning in small groups, which are supervised by tutors. It is becoming evident that PBL in a small-group setting has a robust positive effect on student learning and skills, including better problem-solving skills and an increase in overall motivation. However, very little research has been done on the educational benefits of PBL in a large classroom setting. Here, we describe a PBL approach (using tutorless groups) that was introduced as a supplement to standard didactic lectures in University of British Columbia Okanagan undergraduate biochemistry classes consisting of 45-85 students. PBL was chosen as an effective method to assist students in learning biochemical and physiological processes. By monitoring student attendance and using informal and formal surveys, we demonstrated that PBL has a significant positive impact on student motivation to attend and participate in the course work. Student responses indicated that PBL is superior to traditional lecture format with regard to the understanding of course content and retention of information. We also demonstrated that student problem-solving skills are significantly improved, but additional controlled studies are needed to determine how much PBL exercises contribute to this improvement. These preliminary data indicated several positive outcomes of using PBL in a large classroom setting, although further studies aimed at assessing student learning are needed to further justify implementation of this technique in courses delivered to large undergraduate classes.
Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng
2015-01-09
The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are highly accurate, but they are too expensive and inefficient to identify numerous interactomes. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, where the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, which drives the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.
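The rescaled cosine coefficient is not defined in this summary, so the sketch below uses a plain cosine on interaction-weight vectors as a stand-in for the neighborhood-similarity step of the CF framework described; the matrix and scoring function are illustrative.

    import numpy as np

    def cosine(u, v, eps=1e-12):
        # plain cosine similarity; the paper rescales this score, but the
        # rescaling is not reproduced here
        return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

    def score_pair(W, i, j):
        # Neighborhood-based CF on an interactome weight matrix W: rate the
        # candidate pair (i, j) by how similarly the two proteins interact
        # with the rest of the network, excluding the pair itself.
        mask = np.ones(W.shape[0], dtype=bool)
        mask[[i, j]] = False
        return cosine(W[i, mask], W[j, mask])

    W = np.array([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    print(score_pair(W, 0, 1))                 # proteins 0 and 1 share neighbor 2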
Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R
2017-01-01
We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer, and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving the performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences, such as metabolic engineering, synthetic biology, and drug scheduling.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
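The hospital system itself is not reproduced in this summary; below is a generic forward-checking skeleton of the kind the rota compiler builds on, with the local-improvement pass omitted. The variable names and the consistent(...) predicate, which encodes the binary constraints between tasks, are illustrative assumptions.

    def forward_check_solve(domains, consistent, assignment=None):
        # domains: {variable: set of candidate values}
        # consistent(v1, a, v2, b): may assignments v1=a and v2=b coexist?
        if assignment is None:
            assignment = {}
        if len(assignment) == len(domains):
            return assignment
        var = next(v for v in domains if v not in assignment)
        for val in sorted(domains[var]):
            assignment[var] = val
            pruned, ok = {}, True
            for other in domains:
                if other in assignment:
                    continue
                keep = {w for w in domains[other] if consistent(var, val, other, w)}
                pruned[other] = domains[other] - keep
                domains[other] = keep
                if not keep:                       # wipeout: this value cannot work
                    ok = False
                    break
            if ok:
                result = forward_check_solve(domains, consistent, assignment)
                if result is not None:
                    return result
            for other, removed in pruned.items():  # undo the pruning
                domains[other] |= removed
            del assignment[var]
        return None

    # toy rota: three tasks, two anaesthetists; t1 and t2 overlap in time
    domains = {'t1': {'A', 'B'}, 't2': {'A', 'B'}, 't3': {'A', 'B'}}
    clash = {('t1', 't2')}
    consistent = lambda v1, a, v2, b: not ((v1, v2) in clash or (v2, v1) in clash) or a != b
    print(forward_check_solve(domains, consistent))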
Explicit solution techniques for impact with contact constraints
NASA Technical Reports Server (NTRS)
Mccarty, Robert E.
1993-01-01
Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.
Control of Flexible Structures (COFS) Flight Experiment Background and Description
NASA Technical Reports Server (NTRS)
Hanks, B. R.
1985-01-01
A fundamental problem in designing and delivering large space structures to orbit is to provide sufficient structural stiffness and static configuration precision to meet performance requirements. These requirements are directly related to control requirements and the degree of control system sophistication available to supplement the as-built structure. Background and rationale are presented for a research study in structures, structural dynamics, and controls using a relatively large, flexible beam as a focus. This experiment would address fundamental problems applicable to large, flexible space structures in general and would involve a combination of ground tests, flight behavior prediction, and instrumented orbital tests. Intended to be multidisciplinary but basic within each discipline, the experiment should provide improved understanding and confidence in making design trades between structural conservatism and control system sophistication for meeting static shape and dynamic response/stability requirements. Quantitative results should be obtained for use in improving the validity of ground tests for verifying flight performance analyses.
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is addressed. Leading eigenvalues of large matrices that arise from discretization are calculated, and an efficient multigrid method for solving these problems is presented. The resulting grid functions are used as initial approximations for appropriate eigenvalue problems. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a nonstandard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved on the coarse grid. This leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem are presented which demonstrate the effectiveness of the method.
Equity from a Large City Director's Perspective. Research and Development Series No. 214P.
ERIC Educational Resources Information Center
Campbell-Thrane, Lucille
Equity in vocational education cannot be addressed until the question of urban cultural pluralism has been fully analyzed. This question involves problems of minorities, the disadvantaged, and those with limited English proficiency. Barriers facing urban youths enrolling in vocational education include close-knit ethnic pockets attempting to…
Possibilities of Particle Finite Element Methods in Industrial Forming Processes
NASA Astrophysics Data System (ADS)
Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.
2007-04-01
The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and the generation of new boundaries. A description of the most distinctive aspects of the PFEM and its application to the simulation of representative forming processes illustrate the proposed methodology.
Waves of Hope: The U.S. Navy’s Response to the Tsunami in Northern Indonesia
2007-02-01
mountain of rice, instant noodles, and crackers sat waiting on the airfield, their delivery hampered by the small size of the airport and limited...Miscommunication and rumor were still rampant. One incident that exemplifies this problem involved a large box of dried noodles that accidentally fell
Understanding Mathematics: Some Key Factors
ERIC Educational Resources Information Center
Ali, Asma Amanat; Reid, Norman
2012-01-01
Mathematics is well known as a subject area where there can be problems in terms of understanding as well as retaining positive attitudes. In a large study involving 813 school students (ages approximately 10-12) drawn from two different school systems in Pakistan, the effect of limited working memory capacity on performance in mathematics was…
Sparse matrix methods based on orthogonality and conjugacy
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1973-01-01
A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least squares problems involving large sparse matrices, significant savings in computing cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
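For reference, a minimal conjugate gradient solver for a symmetric positive definite sparse system is sketched below using SciPy sparse matrices; this is the textbook algorithm, not a reconstruction of the specific variants in the report.

```python
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A (dense or sparse)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Sparse tridiagonal test matrix: only the nonzeros are stored.
n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

Only matrix-vector products with A are required, which is exactly where sparsity pays off: each iteration costs on the order of the number of nonzero elements rather than n squared.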
For Mole Problems, Call Avogadro: 602-1023.
ERIC Educational Resources Information Center
Uthe, R. E.
2002-01-01
Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…
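The estimate mentioned in the abstract (counting a mole of sand grains at one grain per second) takes only a few lines to reproduce:

```python
N_A = 6.022e23          # Avogadro's number, particles per mole
rate = 1.0              # grains counted per second
seconds = N_A / rate
years = seconds / (60 * 60 * 24 * 365.25)
print(f"Counting a mole of sand grains would take about {years:.2e} years")
# ~1.9e16 years, roughly a million times the age of the universe
```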
A Convenient Storage Rack for Graduated Cylinders
ERIC Educational Resources Information Center
Love, Brian
2004-01-01
An attempt is made to solve the occasional problem of storing large numbers of graduated cylinders in many teaching and research laboratories. A design, which involves the creation of a series of parallel channels that are used to suspend inverted graduated cylinders by their bases, is proposed.
Supporting Beginning Rural Teachers: Lessons from Successful Schools
ERIC Educational Resources Information Center
White, Simone; Lock, Graeme; Hastings, Wendy; Reid, Jo-Anne; Green, Bill; Cooper, Maxine
2009-01-01
Across Australia and internationally, the vexed problem of staffing rural schools remains a major issue affecting the educational outcomes of many rural students and their families. TERRAnova (New Ground in Teacher Education for Rural and Regional Australia) is the name of a large Australian Research Council funded (2008-2010) project involving:…
Single-Parent Families: Policy Recommendations for Child Support.
ERIC Educational Resources Information Center
Haskins, Ron; And Others
Results of a review of problems associated with divorce indicate that not only are very large numbers of children involved, but divorce seems to be associated with serious effects for children and adults. A very substantial number of children of divorced parents live in poverty and nearly all experience substantial reductions in family income. One…
Decision net, directed graph, and neural net processing of imaging spectrometer data
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki; Barnard, Etienne
1989-01-01
A decision-net solution involving a novel hierarchical classifier and a set of multiple directed graphs, as well as a neural-net solution, are respectively presented for large-class problem and mixture problem treatments of imaging spectrometer data. The clustering method for hierarchical classifier design, when used with multiple directed graphs, yields an efficient decision net. New directed-graph rules for reducing local maxima as well as the number of perturbations required, and the new starting-node rules for extending the reachability and reducing the search time of the graphs, are noted to yield superior results, as indicated by an illustrative 500-class imaging spectrometer problem.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
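A minimal sketch of an optimality-criteria fixed point update follows, for the separable toy case of minimizing weight subject to a single displacement constraint; the resizing form, the damping exponent eta, and the two-variable data are illustrative assumptions, not the paper's general algorithm. The update ratio tends to one at the optimum, which is the strength/trim feature the abstract alludes to.

```python
import numpy as np

def oc_resize(x, c, w, u_max, eta=0.5, n_iter=50):
    """Fixed point resizing for: min sum(w_i x_i)
       s.t. sum(c_i / x_i) <= u_max,  x_i > 0 (separable toy problem)."""
    for _ in range(n_iter):
        # Lagrange multiplier that makes the displacement constraint active
        lam = (np.sum(np.sqrt(c * w)) / u_max) ** 2
        # Resizing ratio -> 1 at the optimum (strength/trim feature)
        ratio = lam * c / (w * x**2)
        x = x * ratio**eta
    return x

w = np.array([1.0, 2.0])      # weight coefficients (density * length)
c = np.array([4.0, 1.0])      # flexibility coefficients
x = oc_resize(np.array([1.0, 1.0]), c, w, u_max=3.0)
print(x, np.sum(c / x))       # constraint should be active: sum equals u_max
```

Note that the cost of one update is a handful of vector operations, independent of how many variables there are, which is the property the abstract emphasizes.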
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large scale manufacture and deployment are discussed, including how safety issues in some nanosystems will be managed and avoided, a process long established in nature.
Towards a Methodology for Identifying Program Constraints During Requirements Analysis
NASA Technical Reports Server (NTRS)
Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo
1997-01-01
Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Among the problems that exist in defining requirements in large scale software projects are synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provides for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.
Using sound to solve syntactic problems: the role of phonology in grammatical category assignments.
Kelly, M H
1992-04-01
One ubiquitous problem in language processing involves the assignment of words to the correct grammatical category, such as noun or verb. In general, semantic and syntactic cues have been cited as the principal information for grammatical category assignment, to the neglect of possible phonological cues. This neglect is unwarranted, and the following claims are made: (a) Numerous correlations between phonology and grammatical class exist, (b) some of these correlations are large and can pervade the entire lexicon of a language and hence can involve thousands of words, (c) experiments have repeatedly found that adults and children have learned these correlations, and (d) explanations for how these correlations arose can be proposed and evaluated. Implications of these phenomena for language representation and processing are discussed.
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
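The elimination idea behind such schemes can be sketched compactly: each diagonal block is factored independently (the step that parallelizes), the local unknowns are eliminated, and a small reduced least squares problem in the shared unknowns remains. The following is a serial toy illustration of that structure, not the Cedar implementation.

```python
import numpy as np

def block_angular_lstsq(blocks):
    """Solve min sum_i || A_i x_i + B_i y - b_i ||^2, where each block
    couples local unknowns x_i to the shared unknowns y."""
    reduced_B, reduced_b, factors = [], [], []
    for A, B, b in blocks:
        Q, R = np.linalg.qr(A, mode="complete")   # independent per block
        n = A.shape[1]
        QtB, Qtb = Q.T @ B, Q.T @ b
        factors.append((R[:n], QtB[:n], Qtb[:n])) # rows defining x_i
        reduced_B.append(QtB[n:])                 # rows involving y only
        reduced_b.append(Qtb[n:])
    # Small dense least squares problem in the shared unknowns y
    y = np.linalg.lstsq(np.vstack(reduced_B), np.concatenate(reduced_b),
                        rcond=None)[0]
    # Backsubstitution phase: recover each x_i independently (parallelizable)
    xs = [np.linalg.solve(R, tb - TB @ y) for R, TB, tb in factors]
    return xs, y

rng = np.random.default_rng(0)
blocks = [(rng.standard_normal((8, 3)), rng.standard_normal((8, 2)),
           rng.standard_normal(8)) for _ in range(4)]
xs, y = block_angular_lstsq(blocks)
print(y)
```

Because the per-block QR factorizations and the back-substitutions touch no shared data, they map directly onto the multiple levels of parallelism the abstract mentions; only the small reduced problem in y is a serial bottleneck.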
Kunst, Maarten; Van Wilsem, Johan
2013-05-01
Violent crime victimization can have serious mental health consequences, but what places victims at risk of mental health problems or of delayed recovery from such problems is largely unknown. Previous research has focused on, amongst other things, the disabling impact of personality factors involved in the regulation of emotions. Using data from the Dutch Longitudinal Internet Studies for the Social Sciences (LISS) panel (n = 2628), this study explored whether the association between violent crime victimization and change in mental health problems over a 1-year time span also varies by trait impulsivity (TI), a personality factor involved in regulating behavior. TI may serve as a risk factor for mental health problems, but research into this topic is scarce and inconsistent. Results suggested that low TI subjects are prone to experience an increase in mental health problems following victimization. As a possible explanation for this finding, it was speculated that subjects with low TI do not perceive themselves to be at risk of victimization and thus see this positive assumption shattered when victimization does occur. Results were further discussed in terms of study limitations and strengths and implications for future research.
NASA Astrophysics Data System (ADS)
Maries, Alexandru; Singh, Chandralekha
2018-06-01
Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.
A new asymptotic method for jump phenomena
NASA Technical Reports Server (NTRS)
Reiss, E. L.
1980-01-01
Physical phenomena involving rapid and sudden transitions, such as snap buckling of elastic shells, explosions, and earthquakes, are characterized mathematically as a small disturbance causing a large-amplitude response. Because of this, standard asymptotic and perturbation methods are ill-suited to these problems. In the present paper, a new method of analyzing jump phenomena is proposed. The principal feature of the method is the representation of the response in terms of rational functions. For illustration, the method is applied to the snap buckling of an elastic arch and to a simple combustion problem.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
Trinification, the hierarchy problem, and inverse seesaw neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cauet, Christophe; Paes, Heinrich; Wiesenfeldt, Soeren
2011-05-01
In minimal trinification models light neutrino masses can be generated via a radiative seesaw mechanism, where the masses of the right-handed neutrinos originate from loops involving Higgs and fermion fields at the unification scale. This mechanism is absent in models aiming at solving or ameliorating the hierarchy problem, such as low-energy supersymmetry, since the large seesaw scale disappears. In this case, neutrino masses need to be generated via a TeV-scale mechanism. In this paper, we investigate an inverse seesaw mechanism and discuss some phenomenological consequences.
Une formulation variationnelle du problème de contact avec frottement de Coulomb
NASA Astrophysics Data System (ADS)
Le van, Anh; Nguyen, Tai H. T.
2008-07-01
A variational relationship is proposed as the weak form of the large deformation contact problem with Coulomb friction. It is a mixed relationship involving both the displacements and the multipliers; the weighting functions are the virtual displacements and the virtual multipliers. It is shown that the proposed weak form is equivalent to the strong form of the initial/boundary value contact problem and the multipliers are equal to the contact tractions. To cite this article: A. Le van, T.H.T. Nguyen, C. R. Mecanique 336 (2008).
The Effect of Normalization in Violence Video Classification Performance
NASA Astrophysics Data System (ADS)
Ali, Ashikin; Senan, Norhalina
2017-08-01
Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many problem types, especially in video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore aids the robustness of classification performance. Normalization scales all numeric variables into a certain range to make them more meaningful for subsequent phases of the available data mining techniques. This paper examines the effect of two normalization techniques, Min-max normalization and Z-score, on the classification rate in violence video classification using a Multi-layer perceptron (MLP) classifier. With Min-max normalization to the range [0,1] the accuracy is almost 98%, with Min-max normalization to the range [-1,1] the accuracy is 59%, and with Z-score the accuracy is 50%.
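The two rescalings compared in the paper are standard; a minimal NumPy sketch of both follows, with a hypothetical feature matrix.

```python
import numpy as np

def min_max(X, lo=0.0, hi=1.0):
    """Rescale each column linearly to the range [lo, hi]."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return lo + (X - xmin) * (hi - lo) / (xmax - xmin)

def z_score(X):
    """Center each column and scale it to unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 600.0]])  # toy features
print(min_max(X))          # columns mapped into [0, 1]
print(min_max(X, -1, 1))   # columns mapped into [-1, 1]
print(z_score(X))          # zero-mean, unit-variance columns
```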
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
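In the linear-Gaussian special case, a construction in this spirit reduces to a singular value decomposition of the prior-whitened forward operator: directions with large singular values are data-informed and span the low-dimensional parameter subspace, while the prior is kept on its complement. A minimal sketch under those simplifying assumptions (linear model, Gaussian prior and noise; all data invented) follows.

```python
import numpy as np

def lis_posterior(G, L, y, sigma, rank):
    """Low-rank posterior approximation for y = G m + noise, with
    prior m ~ N(0, L L^T) and noise ~ N(0, sigma^2 I).
    Work in whitened coordinates u = L^{-1} m, so H = G L / sigma."""
    H = (G @ L) / sigma
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank].T
    # Exact Gaussian algebra, truncated to the data-informed subspace:
    # cov_u ~= I - Vr diag(sr^2/(1+sr^2)) Vr^T; prior kept on the complement
    mean_u = Vr @ ((sr / (1.0 + sr**2)) * (Ur.T @ (y / sigma)))
    cov_u = np.eye(L.shape[0]) - Vr @ np.diag(sr**2 / (1.0 + sr**2)) @ Vr.T
    return L @ mean_u, L @ cov_u @ L.T   # back to original coordinates

rng = np.random.default_rng(1)
G = rng.standard_normal((50, 20))   # hypothetical forward operator
L = np.eye(20)                      # prior factor (identity for simplicity)
m_true = rng.standard_normal(20)
y = G @ m_true + 0.1 * rng.standard_normal(50)
mean, cov = lis_posterior(G, L, y, sigma=0.1, rank=5)
print(mean[:5])
```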
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc., have been available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically unmanageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in the analysis of each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem of the manufacturing work cell.
NASA aviation safety reporting system
NASA Technical Reports Server (NTRS)
1978-01-01
A sample of reports relating to operations during winter weather is presented. Several reports involving problems of judgment and decisionmaking have been selected from the numerous reports representative of this area. Problems related to aeronautical charts are discussed in a number of reports. An analytic study of reports involving potential conflicts in the immediate vicinity of uncontrolled airports was performed; the results are discussed in this report. It was found that in three-fourths of 127 such conflicts, neither pilot, or only one of the pilots, was communicating position and intentions on the appropriate frequency. The importance of providing aural transfer of information, as a backup to the visual see and avoid mode of information transfer is discussed. It was also found that a large fraction of pilots involved in potential conflicts on final approach had executed straight-in approaches, rather than the recommended traffic pattern entries, prior to the conflicts. A selection of alert bulletins and responses to them by various segments of the aviation community is presented.
Gambling in Sweden: the cultural and socio-political context.
Binde, Per
2014-02-01
To provide an overview, with respect to Sweden, of the cultural history of gambling, the commercialization of gambling, problem gambling research, the prevalence of problem gambling and its prevention and treatment. A review of the literature and official documents relating to gambling in Sweden; involvement in gambling research and regulation. Gambling has long been part of Swedish culture. Since about 1980 the gambling market, although still largely monopolistic, has been commercialized. At the same time, problem gambling has emerged as a concept in the public health paradigm. Debate regarding whether or not Sweden's national restrictions on the gambling market are compliant with European Community legislation has helped to put problem gambling on the political agenda. Despite expanded gambling services, the extent of problem gambling on the population level has not changed significantly over the past decade. The stability of problem gambling in Sweden at the population level suggests a homeostatic system involving the gambling market, regulation, prevention and treatment and adaption to risk and harm by gamblers. We have relatively good knowledge of the extent and characteristics of problem gambling in Sweden and of how to treat it, but little is known of how to prevent it effectively. Knowledge is needed of the effectiveness of regulatory actions and approaches, and of responsible gambling measures implemented by gambling companies. © 2013 The Author, Addiction © 2013 Society for the Study of Addiction.
NASA Astrophysics Data System (ADS)
Morrev, P. G.; Gordon, V. A.
2018-03-01
Surface hardening by deep rolling can be considered as an axially symmetric problem in certain special cases (namely, when the large radius R and the small radius r of the deforming roller satisfy R >> r). An axisymmetric nodal averaged stabilized finite element is formulated. The formulation is based on a variational principle with a penalty (stabilizing) term in order to handle large elastic-plastic strains and nearly incompressible materials. The deep rolling process for a steel rod is analyzed. Axial residual stress, yield stress, and Odqvist's parameter are calculated. The residual stress is compared with the data obtained by other authors using a three-dimensional statement of the problem. The results obtained demonstrate essential advantages of the newly developed finite element.
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
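The preprocessing step is generic enough to sketch: decompose the mode kernel matrix with a singular value decomposition, keep the dominant directions, and rotate the data into that reduced basis before the expensive inversion. The kernel matrix and truncation threshold below are hypothetical.

```python
import numpy as np

def reduce_mode_set(K, d, rel_tol=1e-6):
    """Compress an inverse problem d = K a + noise by SVD truncation.
    K: (n_modes x n_grid) kernel matrix, d: (n_modes,) mode data.
    Returns the reduced kernel and the data rotated into its basis."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    r = int(np.sum(s > rel_tol * s[0]))   # effective rank of the mode set
    K_red = np.diag(s[:r]) @ Vt[:r]       # (r x n_grid), with r << n_modes
    d_red = U[:, :r].T @ d                # data in the reduced basis
    return K_red, d_red

rng = np.random.default_rng(2)
n_modes, n_grid = 2000, 100
# Rank-deficient kernel: many modes, few independent pieces of information
K = rng.standard_normal((n_modes, 5)) @ rng.standard_normal((5, n_grid))
a_true = rng.standard_normal(n_grid)
d = K @ a_true
K_red, d_red = reduce_mode_set(K, d)
print(K_red.shape)   # the subsequent OLA inversion works on ~5 rows, not 2000
```

Since OLA-type inversions require factorizations whose cost grows rapidly with the number of modes, shrinking the mode set from thousands to its effective rank is where the savings arise.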
The Challenges, Rewards and Pitfalls in Teaching Online Searching.
ERIC Educational Resources Information Center
Huq, A. M. Abdul
Online searching is now a recognized specialization in library schools, but there are certain problems involved with offering online courses. There is a large amount of material to be covered, especially in one course. Students must learn new concepts and make them work in a machine environment. It is difficult to offer suggestions for a fruitful…
Attaining a steady air stream in wind tunnels
NASA Technical Reports Server (NTRS)
Prandtl, L
1933-01-01
Many experimental arrangements of varying kind involve the problems of assuring a large, steady air stream both as to volume and to time. For this reason a separate discussion of the methods by which this is achieved should prove of particular interest. Motors and blades receive special attention and a review of existent wind tunnels is also provided.
Sex Differences and the Factor of Time in Solving Vandenberg and Kuse Mental Rotation Problems
ERIC Educational Resources Information Center
Peters, M.
2005-01-01
In accounting for the well-established sex differences on mental rotation tasks that involve cube stimuli of the Shepard and Metzler (Shepard & Metzler, 1971) kind, performance factors are frequently invoked. Three studies are presented that examine performance factors. In Study 1, analyses of the performance of a large number of subjects…
ERIC Educational Resources Information Center
Grueneich, Royal; Trabasso, Tom
This review of research involving children's moral judgment of literature indicates that such research has been plagued by serious methodological problems stemming largely from the fact that the stimulus materials used to assess children's comprehension and evaluations have tended to be poorly constructed. It contends that this forces children to…
Bowel obstruction: Differential diagnosis and clinical management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welch, J.P.
1987-01-01
This book presents a practical guide to the diagnosis and management of obstruction, both mechanical and organic, of the large and small bowel. Obstruction is a common problem for surgeons, and this text emphasizes differential diagnosis and the use of all radiologic modalities. It presents the surgical and medical considerations involved with gallstones, bezoars, parasites, tumors, inflammation, trauma, intussusception, and more.
AFRL/Cornell Information Assurance Institute
2007-03-01
Collaborations involving Cornell and AFRL researchers, with AFRL researchers able to participate in Cornell research projects, facilitating technology ...approach to developing a science base and technology for supporting large-scale reliable distributed systems. First, solutions to core problems were
Superpowers Influence in the Horn of Africa
1989-05-19
Greenfield: In its extent, its government and its problems, present day Ethiopia is largely the creation of the Emperor Menelik II. The process dating...conflict or cooperation between those involved during the next ten to fifteen years or so.
de Vreede, Gert-Jan; Briggs, Robert O; Reiter-Palmon, Roni
2010-04-01
The aim of this study was to compare the results of two different modes of using multiple groups (instead of one large group) to identify problems and develop solutions. Many of the complex problems facing organizations today require the use of very large groups or collaborations of groups from multiple organizations. There are many logistical problems associated with the use of such large groups, including the ability to bring everyone together at the same time and location. A field study involved two different organizations and compared the productivity and satisfaction of groups. The approaches included (a) multiple small groups, each completing the entire process from start to end and combining the results at the end (parallel mode); and (b) multiple subgroups, each building on the work provided by previous subgroups (serial mode). Groups using the serial mode produced more elaborations compared with parallel groups, whereas parallel groups produced more unique ideas compared with serial groups. No significant differences were found related to satisfaction with process and outcomes between the two modes. Preferred mode depends on the type of task facing the group. Parallel groups are more suited for tasks for which a variety of new ideas are needed, whereas serial groups are best suited when elaboration and in-depth thinking on the solution are required. Results of this research can guide the development of facilitated sessions of large groups or "teams of teams."
Large Terrain Continuous Level of Detail 3D Visualization Tool
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan
2012-01-01
This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
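The building block of such a hierarchical classifier, a Fisher linear discriminant splitting the classes at each tree node, can be sketched as follows; the two-group data and the midpoint threshold are illustrative assumptions.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher linear discriminant: w maximizes between-class separation
    relative to within-class scatter, w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, threshold

rng = np.random.default_rng(3)
X0 = rng.standard_normal((100, 4))                               # group 0
X1 = rng.standard_normal((100, 4)) + np.array([2.0, 1.0, 0, 0])  # group 1
w, t = fisher_direction(X0, X1)
# At each tree node, samples with x @ w > t go to one subtree, else the other
print(((X1 @ w) > t).mean(), ((X0 @ w) > t).mean())
```

Applied recursively to groups of classes, this yields the automatically generated tree structure the abstract describes; the directed-graph refinements operate on top of such a tree.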
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
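The cooperation scheme, independent searches that periodically share their best solutions, can be sketched in a simplified serial form. The random perturbation search standing in for eSS, the sharing rule, and the test function are illustrative assumptions, not the CeSS implementation.

```python
import numpy as np

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))

def cooperative_search(f, dim=10, n_threads=4, rounds=50, steps=200, seed=0):
    """Each 'thread' runs an independent perturbation search; every round
    the globally best solution is shared with all threads (cooperation)."""
    rng = np.random.default_rng(seed)
    xs = [rng.uniform(-2, 2, dim) for _ in range(n_threads)]
    fs = [f(x) for x in xs]
    for r in range(rounds):
        for i in range(n_threads):
            step = 0.5 * 0.95**r          # each thread could use its own
            for _ in range(steps):        # settings; identical here
                trial = xs[i] + rng.normal(0, step, dim)
                ft = f(trial)
                if ft < fs[i]:
                    xs[i], fs[i] = trial, ft
        best = int(np.argmin(fs))         # information sharing step:
        for i in range(n_threads):        # pull the others toward the
            if i != best:                 # best-known solution
                xs[i] = 0.5 * (xs[i] + xs[best])
                fs[i] = f(xs[i])
    return xs[int(np.argmin(fs))], min(fs)

x_best, f_best = cooperative_search(rosenbrock)
print(f_best)
```

In a real implementation each thread would run on its own processor and only the sharing step would require communication, which is why the strategy scales well.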
A cooperative strategy for parameter estimation in large scale systems biology models
2012-01-01
Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
Tick-borne encephalitis carries a high risk of incomplete recovery in children.
Fowler, Åsa; Forsman, Lea; Eriksson, Margareta; Wickström, Ronny
2013-08-01
To examine long-term outcome after tick-borne encephalitis (TBE) in children. In this population-based cohort, 55 children with TBE with central nervous system involvement infected during 2004-2008 were evaluated 2-7 years later using the Rivermead post-concussion symptoms questionnaire (n = 42) and the Behavior Rating Inventory of Executive Functioning for parents and teachers (n = 32, n = 22, respectively). General cognitive ability was investigated in a subgroup (n = 20) using the Wechsler Intelligence Scale for Children, 4th edition. At long-term follow-up, two-thirds of the children experienced residual problems, the main complaints being cognitive problems, headache, fatigue, and irritability. More than one-third of the children were reported by parents or teachers to have problems with executive functioning on the Behavior Rating Inventory of Executive Functioning, mainly in areas involving initiating and organizing activities and working memory. Children who underwent Wechsler Intelligence Scale for Children, 4th edition testing had a significantly lower working memory index compared with reference norms. A large proportion of children experience an incomplete recovery after TBE with central nervous system involvement. Cognitive problems in areas of executive function and working memory are the most prevalent. Even if mortality and severe sequelae are low in children after TBE, all children should be followed after TBE to detect cognitive deficits. Copyright © 2013 Mosby, Inc. All rights reserved.
Application of computational aero-acoustics to real world problems
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
The application of computational aeroacoustics (CAA) to real problems is discussed, with the aim of assessing the various techniques available. Applications are considered to be limited by the inability of computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. Problems remain to be solved in the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.; McKee, P.; Corby, R.
2009-12-01
One of the major challenges in data assimilation (DA) into distributed hydrologic models is to reduce the large degrees of freedom involved in the inverse problem in order to avoid overfitting. To assess the sensitivity of the performance of DA to the dimensionality of the inverse problem, we design and carry out real-world experiments in which the control vector in variational DA (VAR) is solved at different scales in space and time, e.g., lumped, semi-distributed, and fully-distributed in space, and hourly, 6-hourly, etc., in time. The size of the control vector is related to the degrees of freedom in the inverse problem. For the assessment, we use the prototype 4-dimensional variational data assimilator (4DVAR) that assimilates streamflow, precipitation and potential evaporation data into the NWS Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM). In this talk, we present the initial results for a number of basins in Oklahoma and Texas.
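The design variable in such experiments, the resolution at which the control vector is defined, can be illustrated with a toy example: estimating multiplicative precipitation adjustments at hourly versus coarser resolution against the same observations. The linear unit-hydrograph model and all data below are invented for illustration; coarser windows shrink the control vector (fewer degrees of freedom, less risk of overfitting) at the cost of a larger residual.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 48                                    # hours
uh = np.array([0.1, 0.3, 0.3, 0.2, 0.1])  # toy unit hydrograph
precip = rng.gamma(0.5, 2.0, T)           # 'observed' (biased) precipitation
true_mult = 1.0 + 0.5 * np.sin(np.arange(T) / 8.0)
q_obs = np.convolve(true_mult * precip, uh)[:T]  # synthetic streamflow

def fit_multipliers(window):
    """Solve for one precipitation multiplier per time window (least squares).
    Larger windows -> smaller control vector -> fewer degrees of freedom."""
    n_ctl = T // window
    # Design matrix: column j = hydrograph response to precip in window j
    A = np.zeros((T, n_ctl))
    for j in range(n_ctl):
        p = np.zeros(T)
        p[j * window:(j + 1) * window] = precip[j * window:(j + 1) * window]
        A[:, j] = np.convolve(p, uh)[:T]
    m = np.linalg.lstsq(A, q_obs, rcond=None)[0]
    return np.linalg.norm(A @ m - q_obs)

for w in (1, 6, 24):
    print(f"window {w:2d} h: residual {fit_multipliers(w):.3e}")
```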
Large-angle slewing maneuvers for flexible spacecraft
NASA Technical Reports Server (NTRS)
Chun, Hon M.; Turner, James D.
1988-01-01
A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.
Implementation of an effective hybrid GA for large-scale traveling salesman problems.
Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi
2007-02-01
This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
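Of the ingredients listed, the double-bridge mutation is simple enough to state exactly: it cuts the tour at three random points and reconnects the four segments in a new fixed order, a perturbation that 2-opt-style local search cannot easily undo. The list-based tour representation below is an assumption.

```python
import random

def double_bridge(tour, rng=random):
    """4-opt 'double bridge' move: split the tour into four segments
    A|B|C|D and reconnect them as A|C|B|D."""
    n = len(tour)
    i, j, k = sorted(rng.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

rng = random.Random(42)
tour = list(range(10))
print(double_bridge(tour, rng))   # segments reconnected in A|C|B|D order
```

In the hybrid scheme the abstract outlines, a move of this kind kicks the search out of the local optimum found by the Lin-Kernighan-style local search, which then re-optimizes the perturbed tour.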
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; discuss the developments and applications of computational phantoms; know the promises and limitations of computational phantoms in solving complex problems.
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
Flight control systems properties and problems, volume 1
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Johnston, D. E.
1975-01-01
This volume contains a delineation of fundamental and mechanization-specific flight control characteristics and problems gleaned from many sources and spanning a period of over two decades. It is organized to present and discuss first some fundamental, generic problems of closed-loop flight control systems involving numerator characteristics (quadratic dipoles, non-minimum phase roots, and intentionally introduced zeros). Next the principal elements of the largely mechanical primary flight control system are reviewed with particular emphasis on the influence of nonlinearities. The characteristics and problems of augmentation (damping, stability, and feel) system mechanizations are then dealt with. The particular idiosyncrasies of automatic control actuation and command augmentation schemes are stressed, because they constitute the major interfaces with the primary flight control system and an often highly variable vehicle response.
AI tools in computer based problem solving
NASA Technical Reports Server (NTRS)
Beane, Arthur J.
1988-01-01
The use of computers to solve value oriented, deterministic, algorithmic problems has evolved a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low cost, high performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples in both development and deployment of applications involving a blending of AI and traditional techniques are given.
Scaling up to address data science challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne R.
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
Scaling up to address data science challenges
Wendelberger, Joanne R.
2017-04-27
Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
NASA Astrophysics Data System (ADS)
Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.
2017-02-01
Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. To serve as an internal optimization engine, currently used multi-objective algorithms are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, allowing to faster exploit the impact of such updates on the objectives at hand through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
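The efficiency mechanism described, re-evaluating only the contribution of the small solution part that changed, can be sketched generically. The sum-of-local-terms objective below is an illustrative assumption; in the registration setting the local terms would be per-region similarity and deformation-magnitude contributions.

```python
import numpy as np

def local_term(x_i):
    """Objective contribution of one variable (illustrative)."""
    return (x_i - 1.0)**2

def full_evaluation(x):
    return sum(local_term(v) for v in x)

def partial_update(x, f_x, idx, new_value):
    """Update variable idx and adjust the objective in O(1) instead of
    re-evaluating all n terms (the partial evaluation idea in GOMEA)."""
    f_new = f_x - local_term(x[idx]) + local_term(new_value)
    x[idx] = new_value
    return f_new

rng = np.random.default_rng(5)
x = list(rng.uniform(-2, 2, 100000))
f = full_evaluation(x)              # full cost paid once
for _ in range(1000):               # cheap incremental improvements
    i = rng.integers(len(x))
    trial = x[i] + rng.normal(0, 0.1)
    if local_term(trial) < local_term(x[i]):
        f = partial_update(x, f, i, trial)
print(f, abs(f - full_evaluation(x)) < 1e-5)  # bookkeeping stays consistent
```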
Large space structures testing
NASA Technical Reports Server (NTRS)
Waites, Henry; Worley, H. Eugene
1987-01-01
There is considerable interest in the development of testing concepts and facilities that accurately simulate the pathologies believed to exist in future spacecraft. Both the Government and Industry have participated in the development of facilities over the past several years. The progress and problems associated with the development of the Large Space Structure Test Facility at the Marshall Space Flight Center are presented. This facility was in existence for a number of years, and its utilization has run the gamut from total in-house involvement and third party contractor testing to the mutual participation of other government agencies in joint endeavors.
2008-08-12
revive reforms, especially in the banking and insurance sectors. Yet it is considered likely that more urgent economic problems and impending...the world, trailing only the United States, Germany, and Russia. The bounty of India's newly-super-wealthy is traced largely to phenomenal gains in the... deregulation, privatization, and tariff-reducing measures. Once seen as favoring domestic business and diffident about foreign involvement, New Delhi
Research and Theory on Predecision Processes.
1983-11-30
The problem detection process ... A problem detection taxonomy ... Examples of the taxonomy gained from...large, and can be managed. A hierarchical tree structure is also necessary for distinguishing minor variations of ideas from major variations. Second...construct a scenario that involves forming a company to manufacture and market widgets. Widgets catch on, and soon every household has one, and the
ERIC Educational Resources Information Center
Batiste, Monica Lynn
2014-01-01
Of all the work that occurs within the P-12 education institutions, the interaction involving the teacher and pupil is the most significant contributing factor of student success (United States Department of Education, 2013). Yet, the problem of teacher absenteeism persists in schools throughout the United States. The accumulated results of…
ERIC Educational Resources Information Center
Hansson, Finn; Monsted, Mette
2012-01-01
Peer review of research programmes is changing. The problem is discussed through detailed study of a selection process to a call for collaborations in the energy sector for the European Institute of Innovation and Technology. The authors were involved in the application for a Knowledge Innovation Community. Through the analysis of the case the…
ERIC Educational Resources Information Center
Johnston, Lloyd D.; O'Malley, Patrick M.; Bachman, Jerald G.; Schulenberg, John E.
2011-01-01
The Monitoring the Future (MTF) study involves an ongoing series of national surveys of American adolescents and adults that has provided the nation with a vital window into the important, but largely hidden, problem behaviors of illegal drug use, alcohol use, tobacco use, anabolic steroid use, and psychotherapeutic drug use. For more than a third…
Politics of heat: the energy movement in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seelman, K.D.
This dissertation explores the claim that the energy problem involves a basic conflict of values between representatives of economic interests and of the public. Economic interests define the energy problem in narrow economic and technological terms, while public interests define the problem in broad societal and environmental terms. Ideological and policy positions have polarized around the following trans-scientific problems: equity for present and future generations; the level of risk a society should take; the tradeoff between laissez-faire freedom and human rights to health, safety, and environmental integrity; and, finally, who should decide, citizen or expert. The study is based, in large part, on original materials - correspondence, interviews, reports, and questionnaires - from a 4-year study on energy and values conducted by the National Council of Churches in which the author was a participant observer. The study process involved major US institutions - industry, government, labor, university, religion - and related spokespersons in the public-interest community. Findings indicate that both sides, frustrated by government bureaucracy, have designed alternative structures and techniques to resolve conflicts and expedite their goals, e.g., flexible ad hoc groups and mediation.
Machine Learning Approaches in Cardiovascular Imaging.
Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan
2017-10-01
Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.
Operational momentum in large-number addition and subtraction by 9-month-olds.
McCrink, Koleen; Wynn, Karen
2009-08-01
Recent studies on nonsymbolic arithmetic have illustrated that under conditions that prevent exact calculation, adults display a systematic tendency to overestimate the answers to addition problems and underestimate the answers to subtraction problems. It has been suggested that this operational momentum results from exposure to a culture-specific practice of representing numbers spatially; alternatively, the mind may represent numbers in spatial terms from early in development. In the current study, we asked whether operational momentum is present during infancy, prior to exposure to culture-specific representations of numbers. Infants (9-month-olds) were shown videos of events involving the addition or subtraction of objects with three different types of outcomes: numerically correct, too large, and too small. Infants looked significantly longer only at those incorrect outcomes that violated the momentum of the arithmetic operation (i.e., at too-large outcomes in subtraction events and too-small outcomes in addition events). The presence of operational momentum during infancy indicates developmental continuity in the underlying mechanisms used when operating over numerical representations.
Lower Sensitivity to Happy and Angry Facial Emotions in Young Adults with Psychiatric Problems
Vrijen, Charlotte; Hartman, Catharina A.; Lodder, Gerine M. A.; Verhagen, Maaike; de Jonge, Peter; Oldehinkel, Albertine J.
2016-01-01
Many psychiatric problem domains have been associated with emotion-specific biases or general deficiencies in facial emotion identification. However, both within and between psychiatric problem domains, large variability exists in the types of emotion identification problems that were reported. Moreover, since the domain-specificity of the findings was often not addressed, it remains unclear whether patterns found for specific problem domains can be better explained by co-occurrence of other psychiatric problems or by more generic characteristics of psychopathology, for example, problem severity. In this study, we aimed to investigate associations between emotion identification biases and five psychiatric problem domains, and to determine the domain-specificity of these biases. Data were collected as part of the ‘No Fun No Glory’ study and involved 2,577 young adults. The study participants completed a dynamic facial emotion identification task involving happy, sad, angry, and fearful faces, and filled in the Adult Self-Report Questionnaire, from which we used the scales for depressive problems, anxiety problems, avoidance problems, Attention-Deficit Hyperactivity Disorder (ADHD) problems, and antisocial problems. Our results suggest that participants with antisocial problems were significantly less sensitive to happy facial emotions, participants with ADHD problems were less sensitive to angry emotions, and participants with avoidance problems were less sensitive to both angry and happy emotions. These effects could not be fully explained by co-occurring psychiatric problems. Whereas this seems to indicate domain-specificity, inspection of the overall pattern of effect sizes regardless of statistical significance reveals generic patterns as well, in that for all psychiatric problem domains the effect sizes for happy and angry emotions were larger than the effect sizes for sad and fearful emotions. As happy and angry emotions are strongly associated with approach and avoidance mechanisms in social interaction, these mechanisms may hold the key to understanding the associations between facial emotion identification and a wide range of psychiatric problems. PMID:27920735
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banks, J.W., E-mail: banksj3@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Kapila, A.K., E-mail: kapila@rpi.edu
We describe an added-mass partitioned (AMP) algorithm for solving fluid–structure interaction (FSI) problems involving inviscid compressible fluids interacting with nonlinear solids that undergo large rotations and displacements. The computational approach is a mixed Eulerian–Lagrangian scheme that makes use of deforming composite grids (DCG) to treat large changes in the geometry in an accurate, flexible, and robust manner. The current work extends the AMP algorithm developed in Banks et al. [1] for linear elasticity to the case of nonlinear solids. To ensure stability for the case of light solids, the new AMP algorithm embeds an approximate solution of a nonlinear fluid–solid Riemann (FSR) problem into the interface treatment. The solution to the FSR problem is derived and shown to be of a similar form to that derived for linear solids: the state on the interface is fundamentally an impedance-weighted average of the fluid and solid states. Numerical simulations demonstrate that the AMP algorithm is stable even for light solids when added-mass effects are large. The accuracy and stability of the AMP scheme is verified by comparison to an exact solution using the method of analytical solutions and to a semi-analytical solution that is obtained for a rotating solid disk immersed in a fluid. The scheme is applied to the simulation of a planar shock impacting a light elliptical-shaped solid, and comparisons are made between solutions of the FSI problem for a neo-Hookean solid, a linearly elastic solid, and a rigid solid. The ability of the approach to handle large deformations is demonstrated for a problem of a high-speed flow past a light, thin, and flexible solid beam.
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
Physics in "Real Life": Accelerator-based Research with Undergraduates
NASA Astrophysics Data System (ADS)
Klay, J. L.
All undergraduates in physics and astronomy should have access to significant research experiences. When given the opportunity to tackle challenging open-ended problems outside the classroom, students build their problem-solving skills in ways that better prepare them for the workplace or future research in graduate school. Accelerator-based research on fundamental nuclear and particle physics can provide a myriad of opportunities for undergraduate involvement in hardware and software development as well as "big data" analysis. The collaborative nature of large experiments exposes students to scientists of every culture and helps them begin to build their professional network even before they graduate. This paper presents an overview of my experiences - the good, the bad, and the ugly - engaging undergraduates in particle and nuclear physics research at the CERN Large Hadron Collider and the Los Alamos Neutron Science Center.
Progress in developing Poisson-Boltzmann equation solvers
Li, Chuan; Li, Lin; Petukh, Marharyta; Alexov, Emil
2013-01-01
This review outlines the recent progress made in developing more accurate and efficient solutions to model electrostatics in systems comprised of bio-macromolecules and nano-objects, the latter referring to objects that do not have biological function themselves but nowadays are frequently used in biophysical and medical approaches in conjunction with bio-macromolecules. The problem of modeling macromolecular electrostatics is reviewed from two different angles: as a mathematical task, provided the specific definition of the system to be modeled, and as a physical problem aiming to better capture the phenomena occurring in real experiments. In addition, specific attention is paid to methods to extend the capabilities of the existing solvers to model large systems, toward applications of calculations of the electrostatic potential and energies in molecular motors, mitochondria complex, photosynthetic machinery and systems involving large nano-objects. PMID:24199185
On the problem of stress singularities in bonded orthotropic materials
NASA Technical Reports Server (NTRS)
Erdogan, F.; Delale, F.
1976-01-01
The problem of stress singularities at the leading edge of a crack lying in the neighborhood of a bimaterial interface in bonded orthotropic materials is considered. The main objective is to study the effect of material orthotropy on the singular behavior of the stress state when the crack touches or intersects the interface. The results indicate that, due to the large number of material constants involved in orthotropic materials, the power of the stress singularity as well as the stress intensity factor can be considerably different from those found in isotropic materials with the same stiffness ratio perpendicular to the crack.
Acute primary actinomycosis involving the hard palate of a diabetic patient.
de Andrade, Ana Luiza Dias Leite; Novaes, Márcio Menezes; Germano, Adriano Rocha; Luz, Kleber Giovanni; de Almeida Freitas, Roseana; Galvão, Hébel Cavalcanti
2014-03-01
Actinomycosis is a relatively rare infection caused by saprophytic bacteria of the oral cavity and gastrointestinal tract that can become pathogenic. The chronic hyperglycemia of diabetes mellitus induces events that promote structural changes in various tissues and are associated with problems in wound healing. This infection remains largely unknown to most clinicians because of its different presentations, and palatal involvement is extremely rare. This report describes the case of a 46-year-old woman who was diagnosed with actinomycosis involving the hard palate. The main clinical, histopathologic, and therapeutic characteristics and differential diagnosis of actinomycosis are reviewed. To date, 3 cases of actinomycosis involving the hard palate have been reported. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
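The dictionary-based regularizer itself is specific to the paper, but the core idea can be sketched with a generic sparsity-regularized inversion. The following is a minimal sketch, assuming a synthetic observation operator and two synthetic point sources; it solves a Tikhonov-type problem with an L1 sparsity term and a nonnegativity bound via projected iterative soft thresholding (ISTA), not the authors' exact algorithm:

```python
import numpy as np

# Illustrative sketch only (not the paper's dictionary-based method): solve
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1   subject to x >= 0
# by projected iterative soft thresholding (ISTA), a simple way to recover
# sparse, localized sources from an ill-posed linear model.

def ista_nonneg(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)               # gradient of the data term
        x = np.maximum(x - grad / L - lam / L, 0.0)   # prox step + x >= 0
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 200))                 # coarse observation operator
x_true = np.zeros(200)
x_true[[20, 90]] = [3.0, 5.0]                  # two "point sources"
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista_nonneg(A, b, lam=0.5)
print(np.flatnonzero(x_hat > 0.1))             # expected: [20 90]
```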
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is treated. Leading eigenvalues (i.e., those having maximal real part) of large matrices that arise from discretization are to be calculated. An efficient multigrid method for solving these problems is presented. The method begins by obtaining an initial approximation for the dominant subspace on a coarse level using a damped Jacobi relaxation. This proceeds until enough accuracy for the dominant subspace has been obtained. The resulting grid functions are then used as an initial approximation for appropriate eigenvalue problems. These problems are solved first on coarse levels, followed by refinement until a desired accuracy for the eigenvalues has been achieved. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a non-standard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved for on the coarse grid. This in particular leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem demonstrate the effectiveness of the proposed method. Using an FMG algorithm, a solution to the level of discretization errors is obtained in just a few work units (less than 10), where a work unit is the work involved in one Jacobi relaxation on the finest level.
Schora, Donna M.
2016-01-01
Methicillin-resistant Staphylococcus aureus (MRSA) infection is a global health care problem. Large studies (e.g., >25,000 patients) show that active surveillance testing (AST) followed by contact precautions for positive patients is an effective approach for MRSA disease control. With this approach, the clinical laboratory will be asked to select what AST method(s) to use and to provide data monitoring outcomes of the infection prevention interventions. This minireview summarizes evidence for MRSA disease control, reviews the involvement of the laboratory, and provides examples of how to undertake a program cost analysis. Health care organizations with total MRSA clinical infections of >0.3/1,000 patient days or bloodstream infections of >0.03/1,000 patient days should implement a MRSA control plan. PMID:27307459
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
The construction of scientific software models is an integral part of doing science, both within NASA and within the scientific community at large. Typically, model-building is a time-intensive and painstaking process, involving the design of very large, complex computer programs. Despite the considerable expenditure of resources involved, completed scientific models cannot easily be distributed and shared with the larger scientific community due to the low-level, idiosyncratic nature of the implemented code. To address this problem, we have initiated a research project aimed at constructing a software tool called the Scientific Modeling Assistant. This tool provides automated assistance to the scientist in developing, using, and sharing software models. We describe the Scientific Modeling Assistant, and also touch on some human-machine interaction issues relevant to building a successful tool of this type.
Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques
2002-11-01
An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements as supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software and the way it may be used to describe common experimental situations and to deal with usual problems such as tracer reactivity or nonspecific binding are demonstrated by a few examples. The software is available free of charge upon request.
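The Excel add-in itself is not reproducible here, but the underlying fitting idea can be sketched. A minimal sketch in Python, assuming a simple 1:1 equilibrium R + L ⇌ RL and hypothetical synthetic data; the paper's partition-function approach generalizes this to systems of up to nine species:

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch of the fitting idea (not the Excel add-in itself): for a
# 1:1 equilibrium R + L <-> RL the bound concentration is the positive
# root of the mass-balance quadratic, and Kd is estimated by weighted
# nonlinear least squares against measured binding data.

def bound(Ltot, Rtot, Kd):
    # root of [RL]^2 - (Ltot + Rtot + Kd)[RL] + Ltot*Rtot = 0
    s = Ltot + Rtot + Kd
    return (s - np.sqrt(s * s - 4 * Ltot * Rtot)) / 2

Rtot = 1.0
Ltot = np.logspace(-2, 2, 12)                    # titration series
rng = np.random.default_rng(2)
data = bound(Ltot, Rtot, Kd=3.0) + 0.02 * rng.normal(size=Ltot.size)
w = np.ones_like(data)                           # user-defined weights

res = least_squares(lambda p: w * (bound(Ltot, Rtot, p[0]) - data),
                    x0=[1.0], bounds=(1e-9, np.inf))
print("estimated Kd:", res.x[0])
```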
NASA Technical Reports Server (NTRS)
Heyward, Ann O.; Reilly, Charles H.; Walton, Eric K.; Mata, Fernando; Olen, Carl
1990-01-01
Creation of an Allotment Plan for the Fixed Satellite Service at the 1988 Space World Administrative Radio Conference (WARC) represented a complex satellite plan synthesis problem, involving a large number of planned and existing systems. Solutions to this problem at WARC-88 required the use of both automated and manual procedures to develop an acceptable set of system positions. Development of an Allotment Plan may also be attempted through solution of an optimization problem, known as the Satellite Location Problem (SLP). Three automated heuristic procedures, developed specifically to solve SLP, are presented. The heuristics are then applied to two specific WARC-88 scenarios. Solutions resulting from the fully automated heuristics are then compared with solutions obtained at WARC-88 through a combination of both automated and manual planning efforts.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility to extend the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities like cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions that are defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
NASA Astrophysics Data System (ADS)
Aquilanti, Vincenzo; Bitencourt, Ana Carla P.; Ferreira, Cristiane da S.; Marzuoli, Annalisa; Ragni, Mirco
2008-11-01
The mathematical apparatus of quantum-mechanical angular momentum (re)coupling, developed originally to describe spectroscopic phenomena in atomic, molecular, optical and nuclear physics, is embedded in modern algebraic settings which emphasize the underlying combinatorial aspects. SU(2) recoupling theory, involving Wigner's 3nj symbols, as well as the related problems of their calculations, general properties, asymptotic limits for large entries, nowadays plays a prominent role also in quantum gravity and quantum computing applications. We refer to the ingredients of this theory—and of its extension to other Lie and quantum groups—by using the collective term of 'spin networks'. Recent progress is recorded about the already established connections with the mathematical theory of discrete orthogonal polynomials (the so-called Askey scheme), providing powerful tools based on asymptotic expansions, which correspond on the physical side to various levels of semi-classical limits. These results are useful not only in theoretical molecular physics but also in motivating algorithms for the computationally demanding problems of molecular dynamics and chemical reaction theory, where large angular momenta are typically involved. As for quantum chemistry, applications of these techniques include selection and classification of complete orthogonal basis sets in atomic and molecular problems, either in configuration space (Sturmian orbitals) or in momentum space. In this paper, we list and discuss some aspects of these developments—such as for instance the hyperquantization algorithm—as well as a few applications to quantum gravity and topology, thus providing evidence of a unifying background structure.
Young Black Men and the Criminal Justice System: A Growing National Problem.
ERIC Educational Resources Information Center
Mauer, Marc
The impact of the criminal justice system on Black male adults in the 20-to-29 year age group was examined. End results of the large-scale involvement of young Black men in the criminal justice system are considered, and the implications for crime control are discussed. Using data from Bureau of Justice Statistics and the Bureau of the Census…
ERIC Educational Resources Information Center
Schroeder, Julie; Lemieux, Catherine; Pogue, Rene
2008-01-01
A large body of descriptive literature demonstrates the problem of substance abuse in child welfare. The 1997 Adoption and Safe Families Act (ASFA) established time frames that make children's need for permanency the overriding priority in families involved with the child welfare system. Child welfare workers often lack proper knowledge and skill…
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research; Discuss the developments and applications of computational phantoms; Know the promises and limitations of computational phantoms in solving complex problems.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
Algorithms for solving large sparse systems of simultaneous linear equations on vector processors
NASA Technical Reports Server (NTRS)
David, R. E.
1984-01-01
Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
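The CYBER 205 implementation is historical, but the reorder-then-factor pipeline the paper describes survives in modern sparse solvers. A minimal sketch using SciPy's SuperLU interface (a present-day stand-in, not the paper's code); the `permc_spec` option selects the column reordering used to limit fill-in before the LU factorization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Sketch of the reorder-then-factor pipeline described above, using SciPy's
# SuperLU wrapper rather than a vectorized CYBER 205 implementation.
n = 1000
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")  # sparse test matrix

lu = splu(A, permc_spec="COLAMD")   # column reordering to limit fill-in
b = np.ones(n)
x = lu.solve(b)                     # forward/back substitution with L and U
print(np.allclose(A @ x, b))
```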
Large Eddy Simulations and Turbulence Modeling for Film Cooling
NASA Technical Reports Server (NTRS)
Acharya, Sumanta
1999-01-01
The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their application to analyze the turbulence properties at locations near coolant injection holes.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Seismic classification through sparse filter dictionaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hickmann, Kyle Scott; Srinivasan, Gowri
We tackle a multi-label classification problem involving the relation between acoustic-profile features and the measured seismogram. To isolate components of the seismograms unique to each class of acoustic profile, we build dictionaries of convolutional filters. The convolutional-filter dictionaries for the individual classes are then combined into a large dictionary for the entire seismogram set. A given seismogram is classified by computing its representation in the large dictionary and then comparing reconstruction accuracy with this representation using each of the sub-dictionaries. The sub-dictionary with the minimal reconstruction error identifies the seismogram class.
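A toy sketch of the classification rule described above, with random matrices standing in for the learned convolutional-filter dictionaries; the assigned label is the class whose sub-dictionary reconstructs the signal with the smallest residual:

```python
import numpy as np

# Illustrative sketch of classification by reconstruction error: each class
# has its own dictionary (random here; learned convolutional filters in the
# paper), and a signal is assigned to the class whose sub-dictionary gives
# the smallest least-squares residual.
rng = np.random.default_rng(3)
n, k = 256, 20
dicts = {c: rng.normal(size=(n, k)) for c in ("classA", "classB")}

def residual(D, y):
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)   # representation in D
    return np.linalg.norm(D @ coef - y)

y = dicts["classB"] @ rng.normal(size=k)           # signal from class B
label = min(dicts, key=lambda c: residual(dicts[c], y))
print(label)                                       # -> classB
```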
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
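The effect is easy to reproduce in miniature. A hedged sketch using scikit-learn's LassoCV (the paper's SCAD and Adaptive Lasso are not in scikit-learn, but the authors note the Lasso behaves similarly): with weak sparse signals, repeated 10-fold cross-validation runs select visibly different numbers of variables:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hedged sketch of the reported instability: weak, sparse, SNP-like
# signals make the number of variables selected by 10-fold CV vary
# noticeably from run to run.
rng = np.random.default_rng(4)
n, p = 200, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.3                                # sparse but weak true signals
y = X @ beta + rng.normal(size=n)

for run in range(5):
    perm = rng.permutation(n)                 # reshuffle the fold assignment
    model = LassoCV(cv=10).fit(X[perm], y[perm])
    print("run", run, "selected:", int(np.sum(model.coef_ != 0)))
```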
An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion
NASA Astrophysics Data System (ADS)
Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo
2017-11-01
Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space with different wing kinematics and deformation. The adjoint-based approach largely reduces the computational cost to a feasible level by solving an inverse problem. Facing the complication from moving boundaries, non-cylindrical calculus provides an easy extension of the traditional adjoint-based approach to handle optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied to a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.
A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu
The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis
1988-01-01
The applicability of H infinity control theory to the problems of large space structures (LSS) control was investigated. A complete evaluation of any technique as a candidate for large space structure control involves analytical evaluation, algorithmic evaluation, evaluation via simulation studies, and experimental evaluation. The results of analytical and algorithmic evaluations are documented. The analytical evaluation involves the determination of the appropriateness of the underlying assumptions inherent in the H infinity theory, the determination of the capability of the H infinity theory to achieve the design goals likely to be imposed on an LSS control design, and the identification of any LSS-specific simplifications or complications of the theory. The results of the analytical evaluation are presented in the form of a tutorial on the subject of H infinity control theory with the LSS control designer in mind. The algorithmic evaluation of H infinity for LSS control pertains to the identification of general, high-level algorithms for effecting the application of H infinity to LSS control problems, the identification of specific, numerically reliable algorithms necessary for a computer implementation of the general algorithms, the recommendation of a flexible software system for implementing the H infinity design steps, and ultimately the actual development of the necessary computer codes. Finally, the state of the art in H infinity applications is summarized with a brief outline of the most promising areas of current research.
Spectral theory of extreme statistics in birth-death systems
NASA Astrophysics Data System (ADS)
Meerson, Baruch
2008-03-01
Statistics of rare events, or large deviations, in chemical reactions and systems of birth-death type have attracted a great deal of interest in many areas of science including cell biochemistry, astrochemistry, epidemiology, population biology, etc. Large deviations become of vital importance when the discrete (non-continuum) nature of a population of "particles" (molecules, bacteria, cells, animals or even humans) and the stochastic character of interactions can drive the population to extinction. I will briefly review the novel spectral method [1-3] for calculating the extreme statistics of a broad class of birth-death processes and reactions involving a single species. The spectral method combines the probability generating function formalism with the Sturm-Liouville theory of linear differential operators. It involves a controlled perturbative treatment based on a natural large parameter of the problem: the average number of particles/individuals in a stationary or metastable state. For extinction (first passage) problems the method yields accurate results for the extinction statistics and for the quasi-stationary probability distribution, including the tails, of metastable states. I will demonstrate the power of the method on the example of a branching and annihilation reaction, A → 2A and 2A → ∅, representative of a rather general class of processes. [1] M. Assaf and B. Meerson, Phys. Rev. Lett. 97, 200602 (2006). [2] M. Assaf and B. Meerson, Phys. Rev. E 74, 041115 (2006). [3] M. Assaf and B. Meerson, Phys. Rev. E 75, 031122 (2007).
Large-Scale Pattern Discovery in Music
NASA Astrophysics Data System (ADS)
Bertin-Mahieux, Thierry
This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
A new approach of watermarking technique by means multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on is a legal problem for which no solution has yet been found. It is therefore crucial to create and develop methods and numerical algorithms to solve copyright problems. In this paper we focus attention on a new watermarking technique applied to digital color images. Our aim is to describe the implemented watermarking algorithm, based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report a large set of experiments and some important numerical results showing the robustness of the proposed algorithm to geometrical attacks.
The design of multiplayer online video game systems
NASA Astrophysics Data System (ADS)
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
2003-11-01
The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture and then evaluate it. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, i.e., interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
Ritchie, R.H.; Sakakura, A.Y.
1956-01-01
The formal solutions of problems involving transient heat conduction in infinite internally bounded cylindrical solids may be obtained by the Laplace transform method. Asymptotic series representing the solutions for large values of time are given in terms of functions related to the derivatives of the reciprocal gamma function. The results are applied to the case of the internally bounded infinite cylindrical medium with (a) the boundary held at constant temperature; (b) constant heat flow over the boundary; and (c) the "radiation" boundary condition. A problem in the flow of gas through a porous medium is considered in detail.
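For orientation, the textbook form of case (a) reads as follows; this is a standard reconstruction consistent with the abstract, not the paper's own notation. The Laplace transform converts the radial diffusion equation into a modified Bessel equation, and the large-time asymptotics follow from the small-s expansion of K0:

```latex
% Standard textbook setting consistent with the abstract (not the paper's
% notation): conduction in the infinite region r >= a outside a cylindrical
% boundary, with diffusivity alpha and zero initial temperature.
\frac{\partial T}{\partial t}
  = \alpha \left( \frac{\partial^2 T}{\partial r^2}
  + \frac{1}{r} \frac{\partial T}{\partial r} \right), \qquad r \ge a .
% The Laplace transform in t turns this into a modified Bessel equation;
% for case (a), boundary held at constant temperature T_0, the bounded
% transform-domain solution is
\bar{T}(r,s) = \frac{T_0}{s}\,
  \frac{K_0\left( r \sqrt{s/\alpha} \right)}{K_0\left( a \sqrt{s/\alpha} \right)} .
% Large time corresponds to the s -> 0 expansion, where
% K_0(z) \sim -\ln(z/2) - \gamma, which produces the slowly converging
% logarithmic asymptotic series the abstract refers to.
```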
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
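A much-simplified sketch of the edge-sharpness-mismatch check, assuming OpenCV and hypothetical input files; production use would need the paper's handling of semitransparent edges, complex backgrounds, and large occlusions:

```python
import cv2
import numpy as np

# Simplified sketch (hypothetical parameters and file names): estimate
# disparity, then compare local gradient magnitude at corresponding edge
# pixels in the two views; a large mean difference suggests mismatch.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0

edges = cv2.Canny(left, 100, 200)                  # candidate boundaries
gl = np.abs(cv2.Sobel(left, cv2.CV_32F, 1, 0))     # horizontal sharpness
gr = np.abs(cv2.Sobel(right, cv2.CV_32F, 1, 0))

mismatch = []
for y, x in zip(*np.nonzero(edges)):
    xr = int(round(x - disp[y, x]))                # matched pixel in right view
    if disp[y, x] > 0 and 0 <= xr < right.shape[1]:
        mismatch.append(abs(gl[y, x] - gr[y, xr]))
score = float(np.mean(mismatch)) if mismatch else 0.0
print("edge-sharpness mismatch score:", score)
```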
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well as or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
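A toy version of the database-driven strategy, under assumed forms for the kernel and spectra: because the forward Fredholm map is cheap and stable, many spectrum-to-data pairs can be generated and a regression model fit to approximate the inverse (ridge regression here stands in for the paper's machine learning methods and projection step):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy sketch of the database-driven inversion: the forward (Fredholm) map
# is easy to apply, so we generate many spectrum -> data pairs and fit a
# regression model mapping noisy data back to spectra. Kernel and spectrum
# forms are assumptions for illustration only.
rng = np.random.default_rng(5)
w = np.linspace(0, 5, 100)                       # frequency grid
tau = np.linspace(0, 10, 40)                     # measurement points
K = np.exp(-np.outer(tau, w))                    # ill-conditioned kernel

def random_spectrum():
    c, s = rng.uniform(1, 4), rng.uniform(0.2, 0.8)
    A = np.exp(-((w - c) / s) ** 2)              # smooth positive peak
    return A / np.trapz(A, w)

spectra = np.array([random_spectrum() for _ in range(5000)])
data = spectra @ K.T + 1e-4 * rng.normal(size=(5000, tau.size))

model = Ridge(alpha=1e-3).fit(data, spectra)     # learn the inverse map
A_test = random_spectrum()
noisy = K @ A_test + 1e-4 * rng.normal(size=tau.size)
A_pred = model.predict(noisy[None])[0]
print("max abs error:", np.abs(A_pred - A_test).max())
```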
The SERENDIP 2 SETI project: Current status
NASA Technical Reports Server (NTRS)
Bowyer, C. S.; Werthimer, D.; Donnelly, C.; Herrick, W.; Lampton, M.
1991-01-01
Over the past 30 years, interest in extraterrestrial intelligence has progressed from philosophical discussion to rigorous scientific endeavors attempting to make contact. Since it is impossible to assess the probability of success and the amount of telescope time needed for detection, Search for Extraterrestrial Intelligence (SETI) Projects are plagued with the problem of attaining the large amounts of time needed on the world's precious few large radio telescopes. To circumvent this problem, the Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) instrument operates autonomously in a piggyback mode utilizing whatever observing plan is chosen by the primary observer. In this way, large quantities of high-quality data can be collected in a cost-effective and unobtrusive manner. During normal operations, SERENDIP logs statistically significant events for further offline analysis. Due to the large number of terrestrial and near-space transmitters on earth, a major element of the SERENDIP project involves identifying and rejecting spurious signals from these sources. Another major element of the SERENDIP Project (as well as most other SETI efforts) is detecting extraterrestrial intelligence (ETI) signals. Events selected as candidate ETI signals are studied further in a targeted search program which utilizes between 24 to 48 hours of dedicated telescope time each year.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: relative errors in the gradient are permitted, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
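One standard way to formalize such a relative gradient-error condition is sketched below; the paper's exact form and constants may differ. The orthogonality remark follows because an orthogonal error leaves the inner product with the true gradient equal to the squared norm of the approximation:

```latex
% Hedged sketch of a typical relative gradient-error condition (the
% paper's exact form and constants may differ): the approximate gradient
% g_k used by the trust region method satisfies
\frac{\lVert g_k - \nabla f(x_k) \rVert}{\lVert g_k \rVert} \le \xi ,
\qquad 0 \le \xi < 1 .
% When the error g_k - \nabla f(x_k) is orthogonal to g_k, we get
% \langle g_k, \nabla f(x_k) \rangle = \lVert g_k \rVert^2 > 0, so -g_k
% remains a descent direction for f despite the inexactness.
```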
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
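In schematic form (notation assumed here, not the authors'), the reformulation for two subproblems reads:

```latex
% Schematic two-subproblem instance of the reformulation (notation assumed
% here, not the authors'): the monolithic variational problem is replaced by
\min_{u_1,\, u_2} \; \tfrac{1}{2} \lVert u_1 - u_2 \rVert^2
\quad \text{subject to} \quad
a_1(u_1, v) = f_1(v), \qquad a_2(u_2, v) = f_2(v) \quad \forall v ,
% so that the optimality system can be solved by an iteration that only
% requires solutions of the two constraint subproblems.
```

Solving the optimality system then requires only repeated subproblem solves, which is what makes the decomposition attractive when robust solvers exist for each piece.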
Sorting permutations by prefix and suffix rearrangements.
Lintzmayer, Carla Negri; Fertin, Guillaume; Dias, Zanoni
2017-02-01
Some interesting combinatorial problems have been motivated by genome rearrangements, which are mutations that affect large portions of a genome. When we represent genomes as permutations, the goal is to transform a given permutation into the identity permutation with the minimum number of rearrangements. When they affect segments from the beginning (respectively end) of the permutation, they are called prefix (respectively suffix) rearrangements. This paper presents results for rearrangement problems that involve prefix and suffix versions of reversals and transpositions considering unsigned and signed permutations. We give 2-approximation and ([Formula: see text])-approximation algorithms for these problems, where [Formula: see text] is a constant divided by the number of breakpoints (pairs of consecutive elements that should not be consecutive in the identity permutation) in the input permutation. We also give bounds for the diameters concerning these problems and provide ways of improving the practical results of our algorithms.
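For concreteness, the classic pancake-sort procedure below sorts an unsigned permutation using only prefix reversals, and a helper counts breakpoints; this illustrates the operations and the lower-bound ingredient, not the paper's 2-approximation algorithms:

```python
# Illustration of sorting by prefix reversals (classic "pancake sort"),
# not the paper's approximation algorithms: repeatedly flip the largest
# unsorted element to the front, then flip it into place.

def prefix_reversal_sort(perm):
    perm = list(perm)
    flips = []
    for size in range(len(perm), 1, -1):
        i = perm.index(size)                 # position of largest unsorted
        if i != size - 1:
            if i != 0:
                perm[:i + 1] = reversed(perm[:i + 1])   # bring it to front
                flips.append(i + 1)
            perm[:size] = reversed(perm[:size])         # flip into place
            flips.append(size)
    return perm, flips

def breakpoints(perm):
    """Adjacent pairs that are not consecutive in the identity (extended)."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(abs(a - b) != 1 for a, b in zip(ext, ext[1:]))

perm = [3, 1, 5, 2, 4]
print(breakpoints(perm))                     # lower-bound ingredient
print(prefix_reversal_sort(perm))            # identity plus the flip sizes
```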
NASA Astrophysics Data System (ADS)
Monmeyran, Corentin; Crowe, Iain F.; Gwilliam, Russell M.; Heidelberger, Christopher; Napolitani, Enrico; Pastor, David; Gandhi, Hemi H.; Mazur, Eric; Michel, Jürgen; Agarwal, Anuradha M.; Kimerling, Lionel C.
2018-04-01
Co-doping with fluorine is a potentially promising method for defect passivation to increase the donor electrical activation in highly doped n-type germanium. However, regular high dose donor-fluorine co-implants, followed by conventional thermal treatment of the germanium, typically result in a dramatic loss of the fluorine, as a result of the extremely large diffusivity at elevated temperatures, partly mediated by the solid phase epitaxial regrowth. To circumvent this problem, we propose and experimentally demonstrate two non-amorphizing co-implantation methods; one involving consecutive, low dose fluorine implants, intertwined with rapid thermal annealing and the second, involving heating of the target wafer during implantation. Our study confirms that the fluorine solubility in germanium is defect-mediated and we reveal the extent to which both of these strategies can be effective in retaining large fractions of both the implanted fluorine and, critically, phosphorus donors.
How number line estimation skills relate to neural activations in single digit subtraction problems
Berteletti, I.; Man, G.; Booth, J.R.
2014-01-01
The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing and that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-olds' behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of activation in key areas for arithmetical processing. Better NL estimators showed a larger problem-size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated with differences in accuracy for small subtraction problems. This study is the first to show that performance on the NL task, i.e., estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problems in regions thought to be involved in numerical magnitude and spatial processes. PMID:25497398
CSM solutions of rotating blade dynamics using integrating matrices
NASA Technical Reports Server (NTRS)
Lakin, William D.
1992-01-01
The dynamic behavior of flexible rotating beams continues to receive considerable research attention, as it constitutes a fundamental problem in applied mechanics. Further, beams comprise parts of many rotating structures of engineering significance. A topic of particular interest at the present time involves the development of techniques for obtaining the behavior in both space and time of a rotor acted upon by a simple airload. Most current work on problems of this type uses solution techniques based on normal modes. It is certainly true that normal modes cannot be disregarded, as knowledge of natural blade frequencies is always important. However, the present work has considered a computational structural mechanics (CSM) approach to rotor blade dynamics problems in which the physical properties of the rotor blade provide input for a direct numerical solution of the relevant boundary-and-initial-value problem. Analysis of the dynamics of a given rotor system may require solution of the governing equations over a long time interval corresponding to many revolutions of the loaded flexible blade. For this reason, most of the common techniques in computational mechanics, which treat the space-time behavior concurrently, cannot be applied to the rotor dynamics problem without a large expenditure of computational resources. By contrast, the integrating matrix technique of computational mechanics has the ability to consistently incorporate boundary conditions and 'remove' dependence on a space variable. For problems involving both space and time, this feature of the integrating matrix approach can thus generate a 'splitting' which forms the basis of an efficient CSM method for numerical solution of rotor dynamics problems.
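A minimal sketch of the integrating-matrix idea, assuming a uniform grid and the trapezoidal rule (practical implementations typically use higher-order quadrature weights): spatial integration becomes multiplication by a constant matrix, which is what removes the space variable and yields the splitting described above:

```python
import numpy as np

# Minimal sketch of an integrating matrix (trapezoidal-rule version): for
# function values f(x_i) on a uniform grid, Q @ f approximates the running
# integral from x_0 to x_i, so spatial integration in the blade equations
# becomes a matrix multiply that can absorb boundary conditions.
def integrating_matrix(n, h):
    Q = np.zeros((n, n))
    for i in range(1, n):
        Q[i, 0] = Q[i, i] = h / 2          # trapezoid end weights
        Q[i, 1:i] = h                      # interior weights
    return Q

n = 101
x = np.linspace(0.0, 1.0, n)
Q = integrating_matrix(n, x[1] - x[0])
err = np.max(np.abs(Q @ np.cos(x) - np.sin(x)))   # check against exact integral
print(err)                                        # O(h^2), roughly 1e-5 here
```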
Mesoscale modeling: solving complex flows in biology and biotechnology.
Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander
2013-07-01
Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
Implementation of proteomic biomarkers: making it work
Mischak, Harald; Ioannidis, John PA; Argiles, Angel; Attwood, Teresa K; Bongcam-Rudloff, Erik; Broenstrup, Mark; Charonis, Aristidis; Chrousos, George P; Delles, Christian; Dominiczak, Anna; Dylag, Tomasz; Ehrich, Jochen; Egido, Jesus; Findeisen, Peter; Jankowski, Joachim; Johnson, Robert W; Julien, Bruce A; Lankisch, Tim; Leung, Hing Y; Maahs, David; Magni, Fulvio; Manns, Michael P; Manolis, Efthymios; Mayer, Gert; Navis, Gerjan; Novak, Jan; Ortiz, Alberto; Persson, Frederik; Peter, Karlheinz; Riese, Hans H; Rossing, Peter; Sattar, Naveed; Spasovski, Goce; Thongboonkerd, Visith; Vanholder, Raymond; Schanstra, Joost P; Vlahou, Antonia
2012-01-01
While large numbers of proteomic biomarkers have been described, they are generally not implemented in medical practice. We have investigated the reasons for this shortcoming, focusing on hurdles downstream of biomarker verification, and describe major obstacles and possible solutions to ease valid biomarker implementation. Some of the problems lie in suboptimal biomarker discovery and validation, especially lack of validated platforms with well-described performance characteristics to support biomarker qualification. These issues have been acknowledged and are being addressed, raising the hope that valid biomarkers may start accumulating in the foreseeable future. However, successful biomarker discovery and qualification alone does not suffice for successful implementation. Additional challenges include, among others, limited access to appropriate specimens and insufficient funding, the need to validate new biomarker utility in interventional trials, and large communication gaps between the parties involved in implementation. To address this problem, we propose an implementation roadmap. The implementation effort needs to involve a wide variety of stakeholders (clinicians, statisticians, health economists, and representatives of patient groups, health insurance, pharmaceutical companies, biobanks, and regulatory agencies). Knowledgeable panels with adequate representation of all these stakeholders may facilitate biomarker evaluation and guide implementation for the specific context of use. This approach may avoid unwarranted delays or failure to implement potentially useful biomarkers, and may expedite meaningful contributions of the biomarker community to healthcare. PMID:22519700
Exploring quantum computing application to satellite data assimilation
NASA Astrophysics Data System (ADS)
Cheung, S.; Zhang, S. Q.
2015-12-01
This is an exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem to map it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. The Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graphs of the quantum computer.
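As a rough illustration of the kind of mapping involved (a toy sketch, not the authors' assimilation formulation; the matrix sizes and the brute-force stand-in for the annealer are assumptions), a binary least-squares problem can be folded into the QUBO form x^T Q x that annealing hardware accepts:

```python
import itertools
import numpy as np

# Toy mapping: minimize ||A x - b||^2 over binary x as a QUBO.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
x_true = np.array([1, 0, 1, 1])
b = A @ x_true

# ||Ax - b||^2 = x^T (A^T A) x - 2 b^T A x + const; since x_i is binary,
# x_i^2 = x_i, so the linear terms fold onto the diagonal of Q.
Q = A.T @ A
Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ b)

# Exhaustive search stands in for the annealer on this tiny instance.
best = min(itertools.product([0, 1], repeat=4),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
print(best)  # recovers (1, 0, 1, 1)
```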
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even on modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. Convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
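The kind of experiment the authors describe can be sketched in a few lines (a generic SPD model problem, not the elastoplastic system; the tridiagonal matrix and the Jacobi preconditioner are illustrative assumptions): the same Krylov method is iteration-counted under different preconditioners.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# SPD tridiagonal system with a widely varying diagonal, mimicking the
# coefficient contrasts that spread the operator spectrum.
n = 2000
diag = 2.0 + 100.0 * np.random.default_rng(1).random(n)
A = sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

def cg_iterations(M=None):
    count = [0]
    spla.cg(A, b, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

M_jacobi = sp.diags(1.0 / A.diagonal())   # simplest spectrum-rescaling preconditioner
print("plain CG: ", cg_iterations())
print("Jacobi CG:", cg_iterations(M_jacobi))
```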
Forissier, R
1997-01-01
General Forissier, MO, deals with a very particular problem which occurred within a short time period: the involvement of the medical support units in the large operations which took place during one semester of 1958, in which the parachutists opposed units that were violently attempting to force their way across a barrage that was still under construction. The article focuses on the respective losses inflicted on the forces involved and on the manner in which the evacuation of the wounded was undertaken.
Jouriles, Ernest N.; Rosenfield, David; McDonald, Renee; Mueller, Victoria
2014-01-01
This study examined whether child involvement in interparental conflict predicts child externalizing and internalizing problems in violent families. Participants were 119 families (mothers and children) recruited from domestic violence shelters. One child between the ages of 7 and 10 years in each family (50 female, 69 male) completed measures of involvement in their parents’ conflicts, externalizing problems, and internalizing problems. Mothers completed measures of child externalizing and internalizing problems, and physical intimate partner violence. Measures were completed at three assessments, spaced 6 months apart. Results indicated that children’s involvement in their parents’ conflicts was positively associated with child adjustment problems. These associations emerged in between-subjects and within-subjects analyses, and for child externalizing as well as internalizing problems, even after controlling for the influence of physical intimate partner violence. In addition, child involvement in parental conflicts predicted later child reports of externalizing problems, but child reports of externalizing problems did not predict later involvement in parental conflicts. These findings highlight the importance of considering children’s involvement in their parents’ conflicts in theory and clinical work pertaining to high-conflict families. PMID:24249486
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
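A drastically simplified analogue of the estimation problem (three-dimensional vectors and random orthogonal projections instead of molecules and tomographic images; all sizes and the noise level are assumptions) shows how the covariance enters a linear system through E[y yᵀ] = P Σ Pᵀ + σ²I:

```python
import numpy as np

# Samples x ~ N(0, Sigma) in R^3 observed only through random 2x3
# projections y = P x + noise; recover Sigma from a least-squares
# system in its vectorized entries via vec(P Sigma P^T) = (P kron P) vec(Sigma).
rng = np.random.default_rng(0)
d, m, sigma = 3, 2, 0.05
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 0.5]])
L = np.linalg.cholesky(Sigma)

rows, rhs = [], []
for _ in range(20000):
    P = np.linalg.qr(rng.normal(size=(d, d)))[0][:m]   # random projection
    y = P @ (L @ rng.normal(size=d)) + sigma * rng.normal(size=m)
    rows.append(np.kron(P, P))                         # one block of equations
    rhs.append((np.outer(y, y) - sigma**2 * np.eye(m)).ravel())

vec_est, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
print(np.round(vec_est.reshape(d, d), 2))  # noisy estimate of Sigma
```

The cryo-EM case replaces these random projections with the projection covariance transform and requires a basis in which the inversion stays tractable, but the structure of the linear system is the same.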
Yen, Cheng-Fang; Yang, Pinchen; Wang, Peng-Wei; Lin, Huang-Chi; Liu, Tai-Ling; Wu, Yu-Yu; Tang, Tze-Chun
2014-04-01
Few studies have compared the risks of mental health problems among the adolescents with different levels and different types of bullying involvement experiences. Bullying involvement in 6,406 adolescents was determined through use of the Chinese version of the School Bullying Experience Questionnaire. Data were collected regarding the mental health problems, including depression, suicidality, insomnia, general anxiety, social phobia, alcohol abuse, inattention, and hyperactivity/impulsivity. The association between experiences of bullying involvement and mental health problems was examined. The risk of mental health problems was compared among those with different levels/types of bullying involvement. The results found that being a victim of any type of bullying and being a perpetrator of passive bullying were significantly associated with all kinds of mental health problems, and being a perpetrator of active bullying was significantly associated with all kinds of mental health problems except for general anxiety. Victims or perpetrators of both passive and active bullying had a greater risk of some dimensions of mental health problems than those involved in only passive or active bullying. Differences in the risk of mental health problems were also found among adolescents involved in different types of bullying. This difference in comorbid mental health problems should be taken into consideration when assessing adolescents involved in different levels/types of bullying. © 2014.
Flouri, Eirini; Midouhas, Emily; Narayanan, Martina K
2016-07-01
This study investigated the cross-lagged relationship between father involvement and child problem behaviour across early-to-middle childhood, and tested whether temperament modulated any cross-lagged child behaviour effects on father involvement. It used data from the first four waves of the UK's Millennium Cohort Study, when children (50.3 % male) were aged 9 months, and 3, 5 and 7 years. The sample was 8302 families where both biological parents were co-resident across the four waves. Father involvement (participation in play and physical and educational activities with the child) was measured at ages 3, 5 and 7, as was child problem behaviour (assessed with the Strengths and Difficulties Questionnaire). Key child and family covariates related to father involvement and child problem behaviour were controlled. Little evidence was found that more father involvement predicted less child problem behaviour two years later, with the exception of father involvement at child's age 5 having a significant, but small, effect on peer problems at age 7. There were two child effects. More hyperactive children at age 3 had more involved fathers at age 5, and children with more conduct problems at age 3 had more involved fathers at age 5. Child temperament did not moderate any child behaviour effects on father involvement. Thus, in young, intact UK families, child adjustment appears to predict, rather than be predicted by, father involvement in early childhood. When children showed more problematic behaviours, fathers did not become less involved. In fact, early hyperactivity and conduct problems in children seemed to elicit more involvement from fathers. At school age, father involvement appeared to affect children's social adjustment rather than vice versa.
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
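The column-by-column construction can be sketched directly (a small tridiagonal test matrix and the simplest choice of sparsity pattern; both are assumptions, not the paper's setup): each column of the approximate inverse M minimizes ||A m_j - e_j|| over a fixed pattern, giving n independent least squares problems.

```python
import numpy as np
import scipy.sparse as sp

# Frobenius-norm sparse approximate inverse: column j of M solves a
# small least-squares problem restricted to a chosen sparsity pattern
# (here, the pattern of column j of A itself).
n = 50
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
Ad = A.toarray()

cols = []
for j in range(n):                         # independent -> naturally parallel
    pattern = A[:, j].nonzero()[0]
    e = np.zeros(n); e[j] = 1.0
    m, *_ = np.linalg.lstsq(Ad[:, pattern], e, rcond=None)
    col = np.zeros(n); col[pattern] = m
    cols.append(col)

M = np.column_stack(cols)
print("||I - A M||_F =", round(np.linalg.norm(np.eye(n) - Ad @ M), 3))
```

Applying M is then a single (possibly dense) matrix-vector product per iteration, which is what makes the approach attractive on massively parallel hardware.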
NASA Astrophysics Data System (ADS)
Rasthofer, U.; Wall, W. A.; Gravemeier, V.
2018-04-01
A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach in conjunction with a sharp representation of the discontinuities at the interface between two bulk fluids promises high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width in this particular application.
New developments in Indian space policies and programmes—The next five years
NASA Astrophysics Data System (ADS)
Sridhara Murthi, K. R.; Bhaskaranarayana, A.; Madhusudana, H. N.
2010-02-01
Over the past four decades the Indian space programme has systematically acquired capabilities in space technologies and implemented its programmes with a high level of focus on societal applications. It has developed into a multi-dimensional programme whose strategy is directed towards diverse stakeholders and actors such as government, users and beneficiaries including the general public, industrial suppliers as well as customers, academia, and other space agencies and international organisations. Over the next five years, the Indian space programme has charted an ambitious set of policies and programmes that aim to enhance its impact on society. The major task is to enlarge and diversify the services delivered to a large section of the population affected by income, connectivity, and digital divides. While the efficacy of space-based systems has been proven in several fields, such as tele-education, water resources management, improving land productivity, and extending quality health services, the crux of the problem is to evolve sustainable and scalable delivery mechanisms operating on a very large scale and extending over large geographical areas. Essentially the problem shifts from being predominantly a technology problem to a composite of economic, cultural, and social problems. Tackling such problems would require renewal of policies relating to commercial as well as public service systems. Major programmatic initiatives are planned in the next five years involving new and upgraded technologies to expand services from space, to fill the gaps, and to improve economic efficiency. Thrust is also given to science and exploration missions beyond Chandrayaan-1 and to some initial steps towards participation in human space flight. This paper discusses the policy and strategy perspectives of the programmes planned by the Indian Space Research Organisation over the next five years.
Practical problems in aggregating expert opinions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booker, J.M.; Picard, R.R.; Meyer, M.A.
1993-11-01
Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.
A variational Bayes spatiotemporal model for electromagnetic brain mapping.
Nathoo, F S; Babul, A; Moiseev, A; Virji-Babul, N; Beg, M F
2014-03-01
In this article, we present a new variational Bayes approach for solving the neuroelectromagnetic inverse problem arising in studies involving electroencephalography (EEG) and magnetoencephalography (MEG). This high-dimensional spatiotemporal estimation problem involves the recovery of time-varying neural activity at a large number of locations within the brain, from electromagnetic signals recorded at a relatively small number of external locations on or near the scalp. Framing this problem within the context of spatial variable selection for an underdetermined functional linear model, we propose a spatial mixture formulation where the profile of electrical activity within the brain is represented through location-specific spike-and-slab priors based on a spatial logistic specification. The prior specification accommodates spatial clustering in brain activation, while also allowing for the inclusion of auxiliary information derived from alternative imaging modalities, such as functional magnetic resonance imaging (fMRI). We develop a variational Bayes approach for computing estimates of neural source activity, and incorporate a nonparametric bootstrap for interval estimation. The proposed methodology is compared with several alternative approaches through simulation studies, and is applied to the analysis of a multimodal neuroimaging study examining the neural response to face perception using EEG, MEG, and fMRI. © 2013, The International Biometric Society.
Equations of motion for coupled n-body systems
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1980-01-01
Computer program, developed to analyze spacecraft attitude dynamics, can be applied to large class of problems involving objects that can be simplified into component parts. Systems of coupled rigid bodies, point masses, symmetric wheels, and elastically flexible bodies can be analyzed. Program derives complete set of non-linear equations of motion in vectordyadic format. Numerical solutions may be printed out. Program is in FORTRAN IV for batch execution and has been implemented on IBM 360.
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to ... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e. ... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanger, P.; Adam, E.; Grabinsky, G.
A conductor using flowing supercritical helium as a coolant has been adopted for the superconducting magnet being built by the Airco-Westinghouse team for the LCP at Oak Ridge National Laboratory. This conductor utilizes the "rope in a pipe" concept in which a large number of superconductor Nb3Sn strands are formed into a cable and wrapped in a stainless steel jacket. The jacket material and conductor processing are given; the sequence of forming stages involved in producing the jacket is illustrated. It is found that the adoption of the iron-based superalloy JBK-75 as the jacket material revealed problems significantly different from those of the 304L and 21-6-9 stainless steel jackets. These problems included poor abrasion behavior, different reactions to cold reduction, and the presence of aluminum and titanium oxide floaters on the welds. The research underscores the fact that many material properties involved in proper selection are not well understood a priori and can only be determined by trial and error.
On the optimization of discrete structures with aeroelastic constraints
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Ashley, H.
1978-01-01
The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.
Sacks, J J; Lockwood, R; Hornreich, J; Sattin, R W
1996-06-01
OBJECTIVE: To update data on fatal dog bites and see if past trends have continued. METHODS: To merge data from vital records, the Humane Society of the United States, and searches of electronic news files. SETTING: United States. PARTICIPANTS: U.S. residents dying in the U.S. from 1989 through 1994 from dog bites. RESULTS: We identified 109 dog bite-related fatalities, of which 57% were less than 10 years of age. The death rate for neonates was two orders of magnitude higher than for adults, and the rate for children one order of magnitude higher. Of classifiable deaths, 22% involved an unrestrained dog off the owner's property, 18% involved a restrained dog on the owner's property, and 59% involved an unrestrained dog on the owner's property. Eleven attacks involved a sleeping infant; 19 dogs involved in fatal attacks had a prior history of aggression; and 19 of 20 classifiable deaths involved an unneutered dog. Pit bulls, the most commonly reported breed, were involved in 24 deaths; the next most commonly reported breeds were rottweilers (16) and German shepherds (10). CONCLUSIONS: The dog bite problem should be reconceptualized as a largely preventable epidemic. Breed-specific approaches to the control of dog bites do not address the issue that many breeds are involved in the problem and that most of the factors contributing to dog bites are related to the level of responsibility exercised by dog owners. To prevent dog bite-related deaths and injuries, we recommend public education about responsible dog ownership and dog bite prevention, stronger animal control laws, better resources for enforcement of these laws, and better reporting of bites. Anticipatory guidance by pediatric health care providers should address dog bite prevention.
de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.
2014-01-01
Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recent proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation. PMID:25493625
Paternal ADHD symptoms and child conduct problems: is father involvement always beneficial?
Romirowsky, A M; Chronis-Tuscano, A
2014-09-01
Maternal psychopathology robustly predicts poor developmental and treatment outcomes for children with attention-deficit/hyperactivity disorder (ADHD). Despite the high heritability of ADHD, few studies have examined associations between paternal ADHD symptoms and child adjustment, and none have also considered degree of paternal involvement in childrearing. Identification of modifiable risk factors for child conduct problems is particularly important in this population given the serious adverse outcomes resulting from this comorbidity. This cross-sectional study examined the extent to which paternal involvement in childrearing moderated the association between paternal ADHD symptoms and child conduct problems among 37 children with ADHD and their biological fathers. Neither paternal ADHD symptoms nor involvement was independently associated with child conduct problems. However, the interaction between paternal ADHD symptoms and involvement was significant, such that paternal ADHD symptoms were positively associated with child conduct problems only when fathers were highly involved in childrearing. The presence of adult ADHD symptoms may determine whether father involvement in childrearing has a positive or detrimental influence on comorbid child conduct problems.
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
Reliable transfer of data from ground to space
NASA Technical Reports Server (NTRS)
Brosi, Fred
1993-01-01
This paper describes the problems involved in uplink of data from control centers on the ground to spacecraft, and explores the solutions to those problems, past, present, and future. The evolution of this process, from simple commanding to the transfer of large volumes of data and commands, is traced. The need for reliable end-to-end protocols for commanding and file transfer is demonstrated, and the shortcomings of both existing telecommand protocols and commercial products to meet this need are discussed. Recent developments in commercial protocols that may be adaptable to the mentioned operations environment are surveyed, and current efforts to develop a suite of protocols for reliable transfer in this environment are presented.
Philosophical Aspects of Space Science
NASA Astrophysics Data System (ADS)
Poghosyan, Gevorg
2015-07-01
Modern astronomy and physics are closely related to philosophy. If in the past philosophy was largely confined to interpreting the results obtained by the natural sciences, at present it becomes a full member of the scientific research process. Philosophy is currently involved not only in the methodological problems of the natural sciences but also in the process of formulating their general conclusions. In most cases, philosophical considerations make it possible to choose between different physical hypotheses and assumptions. A unified approach to solving the problems of philosophy and the natural sciences becomes all the more important as the physical and philosophical aspects are often intertwined, together shaping today's leading edge of knowledge.
Dixon, Ramsay W; Youssef, George J; Hasking, Penelope; Yücel, Murat; Jackson, Alun C; Dowling, Nicki A
2016-07-01
Several factors are associated with an increased risk of adolescent problem gambling, including positive gambling attitudes, higher levels of gambling involvement, ineffective coping strategies and unhelpful parenting practices. It is less clear, however, how these factors interact or influence each other in the development of problem gambling behavior during adolescence. The aim of the current study was to simultaneously explore these predictors, with a particular focus on the extent to which coping skills and parenting styles may moderate the expected association between gambling involvement and gambling problems. Participants were 612 high school students. The data were analyzed using a zero-inflated Poisson (ZIP) regression model, controlling for gender. Although several variables predicted the number of symptoms associated with problem gambling, none of them predicted the probability of displaying any problem gambling. Gambling involvement fully mediated the relationship between positive gambling attitudes and gambling problem severity. There was a significant relationship between gambling involvement and problems at any level of problem-focused coping, reference to others, and inconsistent discipline. However, adaptive coping styles employed by adolescents and consistent disciplinary practices by parents were buffers of gambling problems at low levels of adolescent gambling involvement, but failed to protect adolescents when their gambling involvement was high. These findings indicate that research exploring the development of gambling problems is required and imply that coping and parenting interventions may have particular utility for adolescents who are at risk of developing gambling problems but who are not gambling frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
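For readers unfamiliar with the model class, a zero-inflated Poisson fit looks roughly like the following (simulated data and variable names are assumptions, not the study's dataset; the statsmodels package is assumed available):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# ZIP separates (a) the probability of reporting any problem-gambling
# symptoms (the zero-inflation part) from (b) the symptom count among
# those affected (the Poisson part).
rng = np.random.default_rng(0)
n = 600
involvement = rng.normal(size=n)          # hypothetical predictor
affected = rng.random(n) < 0.4            # most adolescents report zero symptoms
counts = np.where(affected,
                  rng.poisson(np.exp(0.2 + 0.6 * involvement)), 0)

X = sm.add_constant(involvement)
result = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation="logit").fit(disp=0)
print(result.summary())
```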
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem addressed by hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware, or firmware, in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high resolution images and a large number of channels. Here, a pipelined solution of the firmware, realized using FPGAs, is drawn out and simulated using C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual implementation in hardware. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
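A software analogue of the de-mixing step looks like the following sketch (synthetic signals stand in for material spectra, and scikit-learn's FastICA stands in for the FPGA pipeline; both substitutions are assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Pixels modeled as linear mixtures of a few independent "material"
# signatures; ICA recovers the sources without knowing the mixing.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
sources = np.vstack([np.sin(40 * t),
                     np.sign(np.sin(13 * t)),
                     rng.laplace(size=t.size)])
mixing = rng.normal(size=(5, 3))          # 5 observed spectral channels
observed = (mixing @ sources).T           # shape: samples x channels

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)   # shape: samples x sources
print(recovered.shape)                    # (2000, 3)
```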
The evaluation of shear deformation for contact analysis with large displacement
NASA Astrophysics Data System (ADS)
Nizam, Z. M.; Obiya, H.; Ijima, K.; Azhar, A. T. S.; Hazreek, Z. A. M.; Shaylinda, M. Z. N.
2018-04-01
A common problem encountered in the study of contact problems is the failure to obtain a stable and accurate convergence result when the contact node is close to the element edge, an area referred to as the “critical area”. In previous studies, the element force equation was modified to apply it to a node-element contact problem using the Euler-Bernoulli beam theory [1]. A simple single element consisting of two edges and a contact point was used to simulate the contact phenomenon in a plane frame. The modification was proven effective by the convergence of the unbalanced force at the tip of the element edge, which enabled the contact node to “pass through”, resulting in precise results. However, in another recent study, we discovered that, if shear deformation based on Timoshenko beam theory is taken into consideration, a basic simply supported beam coordinate affords a much simpler and more efficient technique for avoiding the divergence of the unbalanced force in the “critical area”. Using our unique and robust Tangent Stiffness Method, the improved equation can be applied to any geometrically nonlinear analysis, including those involving extremely large displacements.
Watson, Jean-Paul; Murray, Regan; Hart, William E.
2009-11-13
We report that the sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. Finally, the ability to trade off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design.
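The expected-case versus robust distinction can be seen on a toy instance (random impact data and exhaustive search are assumptions; real networks require the specialized solvers the authors describe):

```python
import itertools
import numpy as np

# impact[e, s] = damage from contamination event e when sensor s is the
# first to detect it; a placement's impact on an event is the minimum
# over its sensors.
rng = np.random.default_rng(3)
n_events, n_candidates, p = 40, 10, 3
impact = rng.gamma(2.0, 50.0, size=(n_events, n_candidates))

def per_event(placement):
    return impact[:, list(placement)].min(axis=1)

placements = list(itertools.combinations(range(n_candidates), p))
expected_best = min(placements, key=lambda pl: per_event(pl).mean())
robust_best = min(placements, key=lambda pl: per_event(pl).max())

for name, pl in [("expected-case", expected_best), ("robust", robust_best)]:
    e = per_event(pl)
    print(f"{name}: sensors {pl}, mean impact {e.mean():.1f}, worst {e.max():.1f}")
```

Typically the robust placement accepts a somewhat worse mean in order to cut down the worst case, which is exactly the trade-off the paper quantifies on real networks.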
Neural substrates of similarity and rule-based strategies in judgment
von Helversen, Bettina; Karlsson, Linnea; Rasch, Björn; Rieskamp, Jörg
2014-01-01
Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved. PMID:25360099
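The two archetypal strategies are easy to state computationally (a toy two-cue judgment task; the similarity kernel and rule weights are assumptions, not the paper's fitted models):

```python
import numpy as np

# Exemplar strategy: judge a probe as the similarity-weighted average of
# stored training items.  Rule strategy: apply a weighted-additive rule
# directly to the cue values.
rng = np.random.default_rng(0)
train_x = rng.random((30, 2))                 # stored exemplars (2 cues)
train_y = 10 * train_x[:, 0] + 5 * train_x[:, 1] + rng.normal(0, 0.5, 30)

def exemplar_judgment(probe, sensitivity=5.0):
    sim = np.exp(-sensitivity * np.linalg.norm(train_x - probe, axis=1))
    return float(np.sum(sim * train_y) / np.sum(sim))   # memory retrieval

def rule_judgment(probe, weights=(10.0, 5.0)):
    return float(np.dot(weights, probe))                # rule application

probe = np.array([0.6, 0.2])
print(exemplar_judgment(probe), rule_judgment(probe))
```

Fitting both models to each participant's responses, as the cognitive-modeling step of the paper does, is what allows participants to be classified as exemplar users or rule users before the fMRI contrast is computed.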
Gambling and problem gambling among recently sentenced male prisoners in four New Zealand prisons.
Abbott, Max W; McKenna, Brian G; Giles, Lynne C
2005-01-01
Recently sentenced inmates in four New Zealand male prisons (N = 357) were interviewed to assess their gambling involvement, problem gambling and criminal offending. Frequent participation in and high expenditure on continuous forms of gambling prior to imprisonment were reported. Nineteen percent said they had been in prison for a gambling-related offence and most of this offending was property-related and non-violent. On the basis of their SOGS-R scores, 21% were lifetime probable pathological gamblers and 16% were probable pathological gamblers during the six months prior to imprisonment. Of the "current" problem gamblers, 51% reported gambling-related offending and 35% had been imprisoned for a crime of this type. Gambling-related offending increased with problem gambling severity. However, only five percent of problem gamblers said their early offending was gambling-related. The large majority reported other types of offending at this time. Few men had sought or received help for gambling problems prior to imprisonment or during their present incarceration. This highlights the potential for assessment and treatment programs in prison to reduce recidivism and adverse effects of problem gambling and gambling-related offending.
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance (secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz type filter. The numerical examples provided show that for low load levels, the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Anticipation of the landing shock phenomenon in flight simulation
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1987-01-01
An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with a significant probability these initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces will accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground strike problem was solved by a technique described here, called anticipation equations. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.
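The anticipation idea can be sketched for a one-dimensional touchdown (a simplification assumed here, not the transport simulation): before committing to a full cycle, predict whether the wheel will cross the runway during that cycle, and if so advance only to the predicted contact time so the stiff oleo equations start from realistic initial conditions.

```python
# Simplified 1-D descent with a fixed cycle time dt.
h, v, t = 2.0, -3.0, 0.0      # height (m), sink rate (m/s), time (s)
dt, g = 0.05, -9.81

while h > 0.0:
    h_pred = h + v * dt                 # anticipation: position one cycle ahead
    if h_pred <= 0.0:
        dt_contact = h / -v             # partial cycle until ground strike
        t += dt_contact
        v += g * dt_contact
        h = 0.0                         # hand off to the oleo (gear) model here
        break
    h += v * dt
    v += g * dt
    t += dt

print(f"touchdown at t = {t:.3f} s, sink rate {v:.2f} m/s")
```

Without the anticipation test, the first in-ground cycle could begin with the oleo already compressed by up to |v|·dt, producing the spurious restorative-force spike the abstract describes.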
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context
Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan
2012-01-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
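The phenomenon is easy to reproduce empirically (sketched here with scikit-learn's Lasso rather than SCAD or the Adaptive Lasso; the sparse-but-weak design is an assumption in the spirit of the SNP setting):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 0.3        # sparse truth, weak signals
y = X @ beta + rng.normal(size=n)

# Rerun 10-fold CV with different random fold assignments and record
# how many variables each run selects.
counts = []
for seed in range(20):
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    model = LassoCV(cv=cv).fit(X, y)
    counts.append(int(np.sum(model.coef_ != 0)))
print(counts)   # the selection count varies noticeably across runs
```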
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
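The overall finite difference idea, applied to a reduced model, amounts to rerunning the transient analysis at perturbed designs (a single-DOF oscillator stands in for the mode-reduced structure; the load, damping, and step sizes are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def peak_displacement(k, m=1.0, c=0.2, f=1.0):
    """Transient response of m x'' + c x' + k x = f, starting from rest."""
    rhs = lambda t, z: [z[1], (f - c * z[1] - k * z[0]) / m]
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
    return np.max(np.abs(sol.y[0]))

# Central-difference sensitivity of the peak response w.r.t. stiffness.
k0, dk = 4.0, 1e-4
dudk = (peak_displacement(k0 + dk) - peak_displacement(k0 - dk)) / (2 * dk)
print("d(peak)/dk ~", dudk)
```

The semianalytical alternative differentiates the equations of motion directly and finite-differences only the coefficient matrices, which avoids repeating the full time integration for every design variable.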
Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome
2013-06-01
A combination of the multilevel fast multipole method (MLFMM) and the boundary element method (BEM) can solve large scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a gold-silver nanoparticle heterodimer calculated with MLFMM is compared with unmodified BEM calculations.
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem, which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations, which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with 'heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
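The third approach listed above, preconditioning the full saddle point operator with an approximate block factorization, can be sketched on a toy system (random blocks and an identity pressure mass matrix are assumptions; real Stokes discretizations supply A, B, and M_p):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Saddle point system K = [[A, B^T], [B, 0]] with a symmetric positive
# definite block-diagonal preconditioner diag(A, M_p), where the
# pressure mass matrix M_p approximates the Schur complement.
rng = np.random.default_rng(0)
nu, npr = 60, 20
A = sp.diags(2.0 + rng.random(nu))                           # SPD velocity block
B = sp.csr_matrix(rng.normal(size=(npr, nu)) / np.sqrt(nu))  # divergence block
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = np.ones(nu + npr)

solve_A = spla.factorized(sp.csc_matrix(A))
mp_inv = np.ones(npr)                                        # identity mass matrix
prec = spla.LinearOperator(
    (nu + npr, nu + npr),
    matvec=lambda r: np.concatenate([solve_A(r[:nu]), mp_inv * r[nu:]]))

iters = [0]
x, info = spla.minres(K, rhs, M=prec,
                      callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
print("converged:", info == 0, "after", iters[0], "iterations")
```

In practice the inner solve with A is itself replaced by a multigrid cycle or an inexact 'heavy' smoother, which is where the trade-offs discussed in the abstract arise.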
A Study of Novice Systems Analysis Problem Solving Behaviors Using Protocol Analysis
1992-09-01
conducted. Each subject was given the same task to perform. The task involved a case study (Appendix B) of a utility company's customer order processing system ... behavior (Ramesh, 1989). The task was to design a customer order processing system that utilized a centralized telephone answering service center ... of the utility company's customer order processing system that was developed based on information obtained by a large systems consulting firm during
ERIC Educational Resources Information Center
Levin, Ben
2013-01-01
This brief discusses the problem of scaling innovations in education in the United States so that they can serve very large numbers of students. It begins with a general discussion of the issues involved, develops a set of five criteria for assessing challenges of scaling, and then uses three programs widely discussed in the U.S. as examples of…
NASA Astrophysics Data System (ADS)
Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.
2018-05-01
Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. By exploiting surface measurements of the echoes of incident ultrasonic waves after their propagation through the medium, it makes it possible to detect potential defects (cracks and inhomogeneities) and to characterize the medium. The understanding and interpretation of these experimental measurements is performed with the help of numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances, but nevertheless do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to the simulation of diffraction phenomena in acoustic or elastodynamic media. We provide its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments discussed.
NASA Astrophysics Data System (ADS)
Stranieri, Andrew; Yearwood, John; Pham, Binh
1999-07-01
The development of data warehouses for the storage and analysis of very large corpora of medical image data represents a significant trend in health care and research. Amongst other benefits, the trend toward warehousing enables the use of techniques for automatically discovering knowledge from large and distributed databases. In this paper, we present an application design in which knowledge discovery from databases (KDD) techniques enhance the performance of the problem solving strategy known as case-based reasoning (CBR) for the diagnosis of radiological images. The problem of diagnosing abnormality of the cervical spine is used to illustrate the method. The design of a case-based medical image diagnostic support system has three essential characteristics. The first is a case representation that comprises textual descriptions of the image, visual features that are known to be useful for indexing images, and additional visual features to be discovered by data mining many existing images. The second characteristic of the approach presented here involves the development of a case base that comprises an optimal number and distribution of cases. The third characteristic involves the automatic discovery, using KDD techniques, of adaptation knowledge to enhance the performance of the case-based reasoner. Together, the three characteristics of our approach can overcome the real-time efficiency obstacles that otherwise militate against the application of CBR to the domain of medical image analysis.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, several problems have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, in particular certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
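As a concrete illustration of the group testing setting, the sketch below implements non-adaptive testing with random pools and the standard naive decoder (an item is cleared if it appears in any negative test); the population size, number of tests, and pool density are illustrative choices, not parameters from the thesis.

```python
# A minimal sketch of non-adaptive combinatorial group testing with random
# pooling and the naive decoder. All parameters are illustrative assumptions.
import random

def run_group_testing(n_items=200, n_tests=150, defectives={3, 42, 117}, p=0.05, seed=0):
    rng = random.Random(seed)
    # Random pooling design: each test includes each item independently with prob p.
    pools = [{i for i in range(n_items) if rng.random() < p} for _ in range(n_tests)]
    # Noiseless query outcomes: a test is positive iff it contains a defective.
    outcomes = [bool(pool & defectives) for pool in pools]
    # Naive decoding: clear every item that appears in a negative test.
    candidates = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool
    return candidates

# Returns a superset of the true defectives; with enough tests it is exact w.h.p.
print(run_group_testing())
```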
Paternal ADHD Symptoms and Child Conduct Problems: Is Father Involvement Always Beneficial?
Romirowsky, Abigail Mintz; Chronis-Tuscano, Andrea
2013-01-01
Background Maternal psychopathology robustly predicts poor developmental and treatment outcomes for children with attention-deficit/hyperactivity disorder (ADHD). Despite the high heritability of ADHD, few studies have examined associations between paternal ADHD symptoms and child adjustment, and none have also considered degree of paternal involvement in childrearing. Identification of modifiable risk factors for child conduct problems is particularly important in this population given the serious adverse outcomes resulting from this comorbidity. Methods This cross-sectional study examined the extent to which paternal involvement in childrearing moderated the association between paternal ADHD symptoms and child conduct problems among 37 children with ADHD and their biological fathers. Results Neither paternal ADHD symptoms nor involvement was independently associated with child conduct problems. However, the interaction between paternal ADHD symptoms and involvement was significant, such that paternal ADHD symptoms were positively associated with child conduct problems only when fathers were highly involved in childrearing. Conclusions The presence of adult ADHD symptoms may determine whether father involvement in childrearing has a positive or detrimental influence on comorbid child conduct problems. PMID:25250402
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Donovan, J E; Jessor, R
1983-01-01
Analyses of data from two nationwide surveys of high school students, one carried out in 1974 and the other in 1978, suggest that problem drinking may be seen as yet another step along an underlying dimension of involvement with both licit and illicit drugs. The dimension of involvement with drugs consists of the following levels: nonuse of alcohol or illicit drugs; nonproblem use of alcohol; marijuana use; problem drinking; use of pills (amphetamines, barbiturates, hallucinogenic drugs); and the use of "hard drugs" such as cocaine or heroin. The dimension possesses excellent Guttman-scale properties in both national samples as well as in subsamples differing in gender and ethnic background. The ordering of the levels of involvement was confirmed by the ordering of the alcohol-drug involvement groups based on their mean scores on measures of psychosocial proneness for involvement in problem behavior. The excessive use of a licit drug, i.e., problem drinking, appears to indicate greater involvement in drug use than does the use of an illicit drug, marijuana. This finding points to the importance of distinguishing between use and problem use of drugs in efforts to understand adolescent drug involvement. PMID:6837819
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
Flexible flow shop (or hybrid flow shop) scheduling is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having 'g' operations is performed on 'g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine, providing an alternate processing facility, the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time-consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because they are recent meta-heuristics that do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
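For readers unfamiliar with JAYA, the sketch below shows its parameter-free update rule on a toy continuous objective; for the FFSP, each continuous vector would first be decoded into a schedule (e.g., via random keys) before evaluating makespan, a step omitted here. All settings are illustrative, not taken from the paper.

```python
# A minimal sketch of the JAYA update: move toward the best solution and away
# from the worst, with greedy acceptance. Objective and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pop, dim, iters = 20, 5, 200
X = rng.uniform(-5, 5, (pop, dim))
fitness = lambda X: (X ** 2).sum(axis=1)    # stand-in objective (minimize)

f = fitness(X)
for _ in range(iters):
    best, worst = X[f.argmin()], X[f.argmax()]
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    # Core JAYA move: drift toward the best solution, away from the worst.
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    f_new = fitness(X_new)
    improved = f_new < f                    # keep a new solution only if better
    X[improved], f[improved] = X_new[improved], f_new[improved]

print(f.min())  # near 0 after convergence
```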
Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.
1999-10-14
Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for fixed problem size, a class of problems of immediate practical importance.
NASA Astrophysics Data System (ADS)
O'Donoghue, Darragh E.; O'Connor, James; Crause, Lisa A.; Strumpfer, Francois; Strydom, Ockert J.; Brink, Janus D.; Sass, Craig; Wiid, Eben; Atad-Ettedgui, Eli
2010-07-01
The construction of the Southern African Large Telescope (SALT) was largely completed by the end of 2005. At the beginning of 2006, it was realized that the telescope's image quality suffered from optical aberrations, chiefly a focus gradient across the focal plane, accompanied by astigmatism and higher-order aberrations. In the previous conference in this series, a paper was presented describing the optical system engineering investigation conducted to diagnose the problem. This investigation exonerated the primary mirror and the science instruments as the cause; the problem was isolated to the interface between the telescope and a major optical sub-system, the spherical aberration corrector (SAC). The SAC is a complex sub-system of four aspheric mirrors which corrects the spherical aberration of the 11-m primary mirror. In the last two years, a solution to this problem was developed which involved removing the SAC from the telescope, installing a modification of the SAC/telescope interface, re-aligning and testing the four SAC mirrors, and re-installing the SAC on the telescope. This paper describes the plan, discusses the details, and shows progress to date and the current status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ficko-Blean, E.; Gregg, K; Adams, J
2009-01-01
Common features of the extracellular carbohydrate-active virulence factors involved in host-pathogen interactions are their large sizes and modular complexities. This has made them recalcitrant to structural analysis, and therefore our understanding of the significance of modularity in these important proteins is lagging. Clostridium perfringens is a prevalent human pathogen that harbors a wide array of large, extracellular carbohydrate-active enzymes and is an excellent and relevant model system to approach this problem. Here we describe the complete structure of C. perfringens GH84C (NagJ), a 1001-amino acid multimodular homolog of the C. perfringens µ-toxin, which was determined using a combination of small angle x-ray scattering and x-ray crystallography. The resulting structure reveals unprecedented insight into how catalysis, carbohydrate-specific adherence, and the formation of molecular complexes with other enzymes via an ultra-tight protein-protein interaction are spatially coordinated in an enzyme involved in a host-pathogen interaction.
Attitude maneuvers of a solar-powered electric orbital transfer vehicle
NASA Astrophysics Data System (ADS)
Jenkin, Alan B.
1992-08-01
Attitude maneuver requirements of a solar-powered electric orbital transfer vehicle have been studied in detail. This involved evaluation of the yaw, pitch, and roll profiles and associated angular accelerations needed to simultaneously steer the vehicle thrust vector and maintain the solar array pointed toward the sun. Maintaining the solar array pointed exactly at the sun leads to snap roll maneuvers which have very high (theoretically unbounded) accelerations, thereby imposing large torque requirements. The problem is exacerbated by the large solar arrays which are needed to generate the high levels of power needed by electric propulsion devices. A method of eliminating the snap roll maneuvers is presented. The method involves the determination of relaxed roll profiles which approximate a forced transition between alternate exact roll profiles and incur only small errors in solar array pointing. The method makes it feasible to perform the required maneuvers using currently available attitude control technology such as reaction wheels, hot gas jets, or gimballed main engines.
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then we introduce a tractable relaxation of our rank function, which yields a convex optimization problem involving much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
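The expensive step that TNCP avoids is the trace-norm proximal operator, i.e., singular value soft-thresholding of large matrices; a minimal sketch of that operator, on a small illustrative matrix, is given below to make the computational bottleneck concrete.

```python
# A minimal sketch of singular value soft-thresholding, the proximal step for
# the matrix trace (nuclear) norm inside ADMM-type solvers. The matrix size
# and threshold are illustrative only.
import numpy as np

def svt(M, tau):
    """Prox of tau*||.||_* : shrink each singular value by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.random.default_rng(1).standard_normal((40, 30))
print(np.linalg.matrix_rank(svt(A, tau=5.0)))  # reduced rank after shrinkage
```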
On anthropic solutions of the cosmological constant problem
NASA Astrophysics Data System (ADS)
Banks, Tom; Dine, Michael; Motl, Lubos
2001-01-01
Motivated by recent work of Bousso and Polchinski (BP), we study theories which explain the small value of the cosmological constant using the anthropic principle. We argue that simultaneous solution of the gauge hierarchy problem is a strong constraint on any such theory. We exhibit three classes of models which satisfy these constraints. The first is a version of the BP model with precisely two large dimensions. The second involves 6-branes and antibranes wrapped on supersymmetric 3-cycles of Calabi-Yau manifolds, and the third is a version of the irrational axion model. All of them have possible problems in explaining the size of microwave background fluctuations. We also find that most models of this type predict that all constants in the low energy lagrangian, as well as the gauge groups and representation content, are chosen from an ensemble and cannot be uniquely determined from the fundamental theory. In our opinion, this significantly reduces the appeal of this kind of solution of the cosmological constant problem. On the other hand, we argue that the vacuum selection problem of string theory might plausibly have an anthropic, cosmological solution.
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
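A minimal sketch of the underlying idea follows: because the forward problem is well-posed, one can generate many (input, output) pairs and fit a regression from noisy outputs back to inputs. The smoothing kernel, grid, noise level, and use of ridge regression are illustrative stand-ins for the paper's setup.

```python
# A minimal sketch: learn an approximate inverse of a Fredholm operator from a
# database built by solving the (easy) forward problem many times.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_samples = 50, 5000
x = np.linspace(0, 1, n_grid)
K = np.exp(-10 * (x[:, None] - x[None, :]) ** 2) / n_grid     # smoothing kernel

# Random inputs with decaying amplitude (stand-in for a physical ensemble).
F = rng.standard_normal((n_samples, n_grid)) @ np.diag(1 / (1 + np.arange(n_grid)))
G = F @ K.T + 0.01 * rng.standard_normal((n_samples, n_grid))  # noisy forward data

# Ridge regression as the "regression function of controlled complexity".
lam = 1e-2
W = np.linalg.solve(G.T @ G + lam * np.eye(n_grid), G.T @ F)

g_new = F[:1] @ K.T                 # a new (noiseless, for simplicity) observation
f_est = g_new @ W                   # approximate inverse applied to it
print(np.linalg.norm(f_est - F[:1]) / np.linalg.norm(F[:1]))  # relative error
```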
Pharmacology of Ischemia-Reperfusion. Translational Research Considerations.
Prieto-Moure, Beatriz; Lloris-Carsí, José M; Barrios-Pitarque, Carlos; Toledo-Pereyra, Luis-H; Lajara-Romance, José María; Berda-Antolí, M; Lloris-Cejalvo, J M; Cejalvo-Lapeña, Dolores
2016-08-01
Ischemia-reperfusion injury (IRI) is a complex physiopathological mechanism involving a large number of metabolic processes that can eventually lead to cell apoptosis and ultimately tissue necrosis. Treatment approaches intended to reduce or palliate the effects of IRI are varied and are aimed basically at: inhibiting cell apoptosis and the complement system in the inflammatory process deriving from IRI, modulating calcium levels, maintaining mitochondrial membrane integrity, reducing the oxidative effects of IRI and the levels of inflammatory cytokines, or minimizing the action of macrophages, neutrophils, and other cell types. This study involved an extensive, up-to-date review of the bibliography on the active products currently most widely used in the treatment and prevention of IRI, and their mechanisms of action, with the aim of obtaining an overview of current and potential future treatments for this pathological process. The importance of IRI is clearly reflected by the large number of studies published year after year and by the variety of pathophysiological processes involved in this major vascular problem. A quick study of the evolution of IRI-related publications in PubMed shows that in a single month in 2014, 263 articles were published, compared with 806 articles in all of 1990.
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for the solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
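To make the scaling argument concrete, the sketch below evaluates the direct O(N²) kernel sum that a kernel-independent FMM replaces with an O(N) tree-based evaluation; the points, charges, and Coulomb-like kernel are illustrative.

```python
# A minimal sketch of the direct O(N^2) kernel sum that an FMM accelerates.
import numpy as np

def direct_sum(points, charges):
    """Coulomb-like potential phi_i = sum_{j != i} q_j / |r_i - r_j|."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-interaction
    return (charges[None, :] / d).sum(axis=1)

rng = np.random.default_rng(2)
pts, q = rng.random((1000, 3)), rng.standard_normal(1000)
phi = direct_sum(pts, q)               # O(N^2): the cost the FMM removes
print(phi[:3])
```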
NASA Astrophysics Data System (ADS)
Steinberg, Marc
2011-06-01
This paper presents a selective survey of theoretical and experimental progress in the development of biologically inspired approaches for complex surveillance and reconnaissance problems with multiple, heterogeneous autonomous systems. The focus is on approaches that may address ISR problems that can quickly become mathematically intractable, or otherwise impractical to implement using traditional optimization techniques, as the size and complexity of the problem increase. These problems require dealing with complex spatiotemporal objectives and constraints at a variety of levels, from motion planning to task allocation. There is also a need to ensure solutions are reliable and robust to uncertainty and communications limitations. First, the paper provides a short introduction to the current state of relevant biological research as it relates to collective animal behavior. Second, the paper describes research on largely decentralized, reactive, or swarm approaches that have been inspired by biological phenomena such as schools of fish, flocks of birds, ant colonies, and insect swarms. Next, the paper discusses approaches toward more complex organizational and cooperative mechanisms in team and coalition behaviors in order to provide mission coverage of large, complex areas. Relevant team behavior may be derived from recent advances in the understanding of the social and cooperative behaviors used for collaboration by tens of animals with higher-level cognitive abilities, such as mammals and birds. Finally, the paper briefly discusses challenges involved in user interaction with these types of systems.
A review of feral cat control.
Robertson, Sheilah A
2008-08-01
Animal overpopulation including feral cats is an important global problem. There are many stakeholders involved in the feral cat debate over 'what to do about the problem', including those who consider them a nuisance, the public at risk from zoonotic disease, people who are concerned about the welfare of feral cats, those concerned with wildlife impacts, and the cats themselves. How best to control this population is controversial and has ranged from culling, relocation, and more recently 'trap neuter return' (TNR) methods. Data support the success of TNR in reducing cat populations, but to have a large impact it will have to be adopted on a far greater scale than it is currently practised. Non-surgical contraception is a realistic future goal. Because the feral cat problem was created by humans, concerted educational efforts on responsible pet ownership and the intrinsic value of animals is an integral part of a solution.
The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers
NASA Astrophysics Data System (ADS)
Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.
1992-01-01
Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program, with an updated nonlinear programming code (NZSOL), was used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered effective for solving large problems with sparse matrices. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities, and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high-energy orbit to a low-energy orbit.
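As a minimal illustration of the SQP pattern (not of OTIS or NZSOL themselves), the sketch below solves a toy equality-constrained problem with SciPy's SLSQP; the large-scale sparse machinery discussed in the paper is not reproduced.

```python
# A minimal sketch of SQP on a toy constrained problem via SciPy's SLSQP.
# Objective and constraint are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

obj = lambda x: x[0] ** 2 + 2 * x[1] ** 2                    # quadratic objective
cons = ({"type": "eq", "fun": lambda x: x[0] + x[1] - 1},)   # x0 + x1 = 1
res = minimize(obj, x0=np.array([0.5, 0.5]), method="SLSQP", constraints=cons)
print(res.x)  # -> approximately [2/3, 1/3]
```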
Continual coordination through shared activities
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Barrett, Anthony C.
2003-01-01
Interacting agents that interleave planning and execution must reach consensus on their commitments to each other. In domains where agents have varying degrees of interaction and different constraints on communication and computation, agents will require different coordination protocols in order to efficiently reach consensus in real time. We briefly describe a largely unexplored class of realtime, distributed planning problems (inspired by interacting spacecraft missions), new challenges they pose, and a general approach to solving the problems. These problems involve self-interested agents that have infrequent communication but collaborate on joint activities. We describe a Shared Activity Coordination (SHAC) framework that provides a decentralized algorithm for negotiating the scheduling of shared activities over the lifetimes of separate missions, a soft, real-time approach to reaching consensus during execution with limited communication, and a foundation for customizing protocols for negotiating planner interactions. We apply SHAC to a realistic simulation of interacting Mars missions and illustrate the simplicity of protocol development.
Beyond Λ CDM: Problems, solutions, and the road ahead
NASA Astrophysics Data System (ADS)
Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.
2016-06-01
Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, Λ CDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of Λ CDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.
Parallelization of the preconditioned IDR solver for modern multicore computer systems
NASA Astrophysics Data System (ADS)
Bessonov, O. A.; Fedoseyev, A. I.
2012-10-01
This paper presents the analysis, parallelization, and optimization approach for the large sparse matrix solver CNSPACK on modern multicore microprocessors. CNSPACK is an advanced solver successfully used for the coupled solution of stiff problems arising in multiphysics applications such as CFD, semiconductor transport, and kinetic and quantum problems. It employs an iterative IDR algorithm with ILU preconditioning (user-chosen ILU preconditioning order). CNSPACK has been successfully used during the last decade for solving problems in several application areas, including fluid dynamics and semiconductor device simulation. However, recent years have brought dramatic changes in processor architectures and computer system organization. Because of this, performance criteria and methods have been revisited, and the solver and preconditioner have been parallelized using the OpenMP environment. Results of the successful implementation of efficient parallelization are presented for the most advanced computer systems (Intel Core i7-9xx or two-processor Xeon 55xx/56xx).
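The solver pattern described, a preconditioned Krylov iteration on a sparse matrix, can be sketched with SciPy as below; SciPy provides GMRES rather than IDR, so GMRES stands in for the IDR iteration, and the 2D Poisson test matrix is illustrative rather than a CNSPACK benchmark.

```python
# A minimal sketch of an ILU-preconditioned Krylov solve on a sparse system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], (n, n))
A = sp.kronsum(T, T).tocsc()                       # 2D Poisson operator
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10) # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)        # preconditioner M ~ A^-1

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))             # info == 0 => converged
```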
Alcohol use and cultural change in an indigenous population: a case study from Venezuela.
Seale, J Paul; Shellenberger, Sylvia; Rodriguez, Carlos; Seale, Josiah D; Alvarado, Manuel
2002-01-01
To explore the historical and cultural context of problem drinking in a Latin American indigenous population and identify possible areas for intervention. Focus group discussions. Participants reported that prior to 1945, binge drinking and fighting were part of cultural festivals held several times each year. Alcohol was brewed in limited quantities by specially qualified individuals. Limited family violence and injuries resulted. Increasing contact with Western civilization resulted in year-round access to large supplies of commercial alcohol and exposure to alcohol-misusing role models. Increased heavy drinking and decreases in subsistence farming resulted in escalation of problems, including hunger, serious injury, family violence, divorce and legal problems. Communities are beginning to regain control by prohibiting sale of alcohol in villages, sponsoring alcohol-free celebrations, and increasing involvement in religious activities. Though alcohol may cause devastating consequences in cultures in transition, studies of community responses may identify useful strategies for reducing alcohol-related harm.
The Center for Multiscale Plasma Dynamics, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gombosi, Tamas I.
The University of Michigan participated in the joint UCLA/Maryland fusion science center focused on plasma physics problems for which the traditional separation of the dynamics into microscale and macroscale processes breaks down. These processes involve large scale flows and magnetic fields tightly coupled to the small scale, kinetic dynamics of turbulence, particle acceleration and energy cascade. The interaction between these vastly disparate scales controls the evolution of the system. The enormous range of temporal and spatial scales associated with these problems renders direct simulation intractable even in computations that use the largest existing parallel computers. Our efforts focused on two main problems: the development of Hall MHD solvers on solution adaptive grids and the development of solution adaptive grids using generalized coordinates so that the proper geometry of inertial confinement can be taken into account and efficient refinement strategies can be obtained.
Origin of the moon - The collision hypothesis
NASA Technical Reports Server (NTRS)
Stevenson, D. J.
1987-01-01
Theoretical models of lunar origin involving one or more collisions between the earth and other large sun-orbiting bodies are examined in a critical review. Ten basic propositions of the collision hypothesis (CH) are listed; observational data on mass and angular momentum, bulk chemistry, volatile depletion, trace elements, primordial high temperatures, and orbital evolution are summarized; and the basic tenets of alternative models (fission, capture, and coformation) are reviewed. Consideration is given to the thermodynamics of large impacts, rheological and dynamical problems, numerical simulations based on the CH, disk evolution models, and the chemical implications of the CH. It is concluded that the sound arguments and evidence supporting the CH are not (yet) sufficient to rule out other hypotheses.
Design of a high capacity long range cargo aircraft
NASA Technical Reports Server (NTRS)
Weisshaar, Terrence A.
1994-01-01
This report examines the design of a long range cargo transport to attempt to reduce ton-mile shipping costs and to stimulate the air cargo market. This design effort involves the usual issues but must also include consideration of: airport terminal facilities; cargo loading and unloading; and defeating the 'square-cube' law to design large structures. This report reviews the long range transport design problem and several solutions developed by senior student design teams at Purdue University. The results show that it will be difficult to build large transports unless the infrastructure is changed and unless the basic form of the airplane changes so that aerodynamic and structural efficiencies are employed.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
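One ingredient of the multi-GPU scheme, splitting a sparse COO-format DDC matrix by beam angle into per-GPU CSR submatrices, can be sketched as follows; the sizes and the contiguous angle partition are illustrative, and the actual GPU transfers and peer-to-peer access are not shown.

```python
# A minimal sketch of partitioning a sparse COO matrix into per-GPU CSR blocks.
import numpy as np
import scipy.sparse as sp

n_beamlets, n_voxels, n_gpus, nnz = 360, 10_000, 4, 50_000
rng = np.random.default_rng(0)
rows = rng.integers(0, n_beamlets, nnz)     # beamlet index (ordered by angle)
cols = rng.integers(0, n_voxels, nnz)
vals = rng.random(nnz)
ddc_coo = sp.coo_matrix((vals, (rows, cols)), shape=(n_beamlets, n_voxels))

# Partition rows (beam angles) into contiguous blocks, one per GPU.
bounds = np.linspace(0, n_beamlets, n_gpus + 1, dtype=int)
ddc_csr = ddc_coo.tocsr()
sub = [ddc_csr[bounds[g]:bounds[g + 1]] for g in range(n_gpus)]
print([m.shape for m in sub])
```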
Measuring Family Problem Solving: The Family Problem Solving Diary.
ERIC Educational Resources Information Center
Kieren, Dianne K.
The development and use of the family problem-solving diary are described. The diary is one of several indicators and measures of family problem-solving behavior. It provides a record of each person's perception of day-to-day family problems (what the problem concerns, what happened, who got involved, what those involved did, how the problem…
Thamuku, Masego; Daniel, Marguerite
2013-01-01
In the context of AIDS, the Botswana Government has adopted a group therapy program to help large numbers of orphaned children cope with bereavement. This study explores the effectiveness of the therapy and examines how it interacts with cultural attitudes and practices concerning death. Ten orphaned children were involved in five rounds of data collection during a therapeutic retreat; eight social workers completed questionnaires concerning the effectiveness of the therapy. Most children were able to come to terms with their loss, face problems in their home and school environments, and envision ways of solving problems. All the children described benefits of group formation and the support it would provide when they returned to their home situations.
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve the computational properties of a complex internal agent model while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision making model instantiated for an emergency scenario. For this internal decision model an abstracted behavioral agent model is obtained, which ensures a substantial increase in computational efficiency at the cost of approximately 1% behavioural error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example, those involving mutual affective-cognitive interactions.
NASA Technical Reports Server (NTRS)
Ragan, R.
1982-01-01
General problems faced by hydrologists when using historical records, real time data, statistical analysis, and system simulation in providing quantitative information on the temporal and spatial distribution of water are related to the limitations of these data. Major problem areas requiring multispectral imaging-based research to improve hydrology models involve: evapotranspiration rates and soil moisture dynamics for large areas; the three dimensional characteristics of bodies of water; flooding in wetlands; snow water equivalents; runoff and sediment yield from ungaged watersheds; storm rainfall; fluorescence and polarization of water and its contained substances; discriminating between sediment and chlorophyll in water; role of barrier island dynamics in coastal zone processes; the relationship between remotely measured surface roughness and hydraulic roughness of land surfaces and stream networks; and modeling the runoff process.
Computational procedures for mixed equations with shock waves
NASA Technical Reports Server (NTRS)
Yu, N. J.; Seebass, R.
1974-01-01
This paper discusses the procedures we have developed to treat a canonical problem involving a mixed nonlinear equation with boundary data that imply a discontinuous solution. This equation arises in various physical contexts and is basic to the description of the nonlinear acoustic behavior of a shock wave near a caustic. The numerical scheme developed is of second order, treats discontinuities as such by applying the appropriate jump conditions across them, and eliminates the numerical dissipation and dispersion associated with large gradients. Our results are compared with the results of a first-order scheme and with those of a second-order scheme we have developed. The algorithm used here can easily be generalized to more complicated problems, including transonic flows with imbedded shocks.
On a production system using default reasoning for pattern classification
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Lowe, Carlyle M.
1990-01-01
This paper addresses an unconventional application of a production system to a problem involving belief specialization. The production system reduces a large quantity of low-level descriptions into just a few higher-level descriptions that encompass the problem space in a more tractable fashion. This classification process utilizes a set of descriptions generated by combining the component hierarchy of a physical system with the semantics of the terminology employed in its operation. The paper describes an application of this process in a program, constructed in C and CLIPS, that classifies signatures of electromechanical system configurations. The program compares two independent classifications, describing the actual and expected system configurations, in order to generate a set of contradictions between the two.
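The reduction of many low-level facts to a few high-level descriptions can be sketched with a tiny forward-chaining loop; the original program used CLIPS and C, so the Python below, with invented facts and rules, is only an analogy to the classification process described.

```python
# A minimal sketch of forward-chaining production rules that fold low-level
# facts into higher-level descriptions. Facts and rules are illustrative.
facts = {("valve", "A", "open"), ("valve", "B", "open"), ("pump", "P1", "on")}

rules = [
    # (premises required, fact to conclude)
    ({("valve", "A", "open"), ("valve", "B", "open")}, ("line", "L1", "flowing")),
    ({("line", "L1", "flowing"), ("pump", "P1", "on")}, ("system", "S", "nominal")),
]

changed = True
while changed:                       # fire rules until no new facts appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("system", "S", "nominal") in facts)  # True: high-level classification
```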
Kronbak, Lone Grønbæk; Vestergaard, Niels
2013-12-15
In most decision-making involving natural resources, the achievements of a given policy (e.g., improved ecosystem or biodiversity) are rather difficult to measure in monetary units. To address this problem, the current paper develops an environmental cost-effectiveness analysis (ECEA) to include intangible benefits in intertemporal natural resource problems. This approach can assist managers in prioritizing management actions as least cost solutions to achieve quantitative policy targets. The ECEA framework is applied to a selective gear policy case in Danish mixed trawl fisheries in Kattegat and Skagerrak. The empirical analysis demonstrates how a policy with large negative net benefits might be justified if the intangible benefits are included. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sources of Interactional Problems in a Survey of Racial/Ethnic Discrimination
Johnson, Timothy P.; Shariff-Marco, Salma; Willis, Gordon; Cho, Young Ik; Breen, Nancy; Gee, Gilbert C.; Krieger, Nancy; Grant, David; Alegria, Margarita; Mays, Vickie M.; Williams, David R.; Landrine, Hope; Liu, Benmei; Reeve, Bryce B.; Takeuchi, David; Ponce, Ninez A.
2014-01-01
Cross-cultural variability in respondent processing of survey questions may bias results from multiethnic samples. We analyzed behavior codes, which identify difficulties in the interactions of respondents and interviewers, from a discrimination module contained within a field test of the 2007 California Health Interview Survey. In all, 553 (English) telephone interviews yielded 13,999 interactions involving 22 items. Multilevel logistic regression modeling revealed that respondent age and several item characteristics (response format, customized questions, length, and first item with new response format), but not race/ethnicity, were associated with interactional problems. These findings suggest that item function within a multi-cultural, albeit English language, survey may be largely influenced by question features, as opposed to respondent characteristics such as race/ethnicity. PMID:26166949
Picture Archiving And Communication Systems (PACS): Introductory Systems Analysis Considerations
NASA Astrophysics Data System (ADS)
Hughes, Simon H. C.
1983-05-01
Two fundamental problems face any hospital or radiology department that is thinking about installing a Picture Archiving and Communications System (PACS). First, though the need for PACS already exists, much of the relevant technology is just beginning to be developed. Second, the requirements of each hospital are different, so that any attempts to market a single PACS design for use in large numbers of hospitals are likely to meet with the same problems as were experienced with general-purpose Hospital Information Systems. This paper outlines some of the decision processes involved in arriving at specifications for each module of a PACS and indicates design principles which should be followed in order to meet individual hospital requirements, while avoiding the danger of short-term systems obsolescence.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves optimizing the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
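The goal-softening idea can be sketched numerically: rank many designs with a cheap noisy evaluation and keep a small selected set, which with high probability overlaps the true good-enough set; the noise level and set sizes below are illustrative.

```python
# A minimal sketch of Ordinal Optimization's goal softening and ordinal
# comparison under evaluation noise. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_designs, noise = 1000, 2.0
true_cost = rng.random(n_designs) * 10
noisy_cost = true_cost + noise * rng.standard_normal(n_designs)  # crude model

g = set(np.argsort(true_cost)[:50])    # good-enough set G (top 5%, unknown in practice)
s = set(np.argsort(noisy_cost)[:50])   # selected set S from noisy ordinal comparison
print(len(g & s))                      # the overlap |G ∩ S| is large despite the noise
```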
Dynamic simulations of geologic materials using combined FEM/DEM/SPH analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, J P; Johnson, S M
2008-03-26
An overview of the Livermore Distinct Element Code (LDEC) is presented, along with results from a study investigating the effect of explosive and impact loading on geologic materials using LDEC. LDEC was initially developed to simulate tunnels and other structures in jointed rock masses using large numbers of polyhedral blocks. Many geophysical applications, such as projectile penetration into rock, concrete targets, and boulder fields, require a combination of continuum and discrete methods in order to predict the formation and interaction of the fragments produced. In an effort to model this class of problems, LDEC now includes implementations of Cosserat point theory and cohesive elements. This approach directly simulates the transition from continuum to discontinuum behavior, thereby allowing for dynamic fracture within a combined finite element/discrete element framework. In addition, there are many applications involving geologic materials where fluid-structure interaction is important. To facilitate the solution of this class of problems, a Smoothed Particle Hydrodynamics (SPH) capability has been incorporated into LDEC to simulate fully coupled systems involving geologic materials and a saturating fluid. We present results from a study of a broad range of geomechanical problems that exercise the various components of LDEC in isolation and in tandem.
Adenoviral Gene Therapy Vectors Targeted to Prostate Cancer
2004-06-01
results from pre-clinical models into clinical trials. This problem has also been highlighted in Ad5 capsid mutation studies. Mutation of CAR and integrin…
Leahy, P.P.
1982-01-01
The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)
Exact solution of large asymmetric traveling salesman problems.
Miller, D L; Pekny, J F
1991-02-15
The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
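For very small instances, the asymmetric problem can be solved exactly by Held-Karp dynamic programming, sketched below on an illustrative random cost matrix; the branch-and-bound algorithm in the paper scales far beyond this O(2^n n^2) approach.

```python
# A minimal sketch of exact asymmetric TSP via Held-Karp dynamic programming,
# feasible only for small n. The cost matrix is an illustrative assumption.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10
C = rng.integers(1, 100, (n, n))
np.fill_diagonal(C, 0)

# dp[(S, j)] = cheapest cost of a path from city 0 through set S, ending at j.
dp = {(frozenset([j]), j): C[0][j] for j in range(1, n)}
for size in range(2, n):
    for S in map(frozenset, itertools.combinations(range(1, n), size)):
        for j in S:
            dp[(S, j)] = min(dp[(S - {j}, k)] + C[k][j] for k in S - {j})

full = frozenset(range(1, n))
print(min(dp[(full, j)] + C[j][0] for j in range(1, n)))  # optimal tour cost
```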
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Buono, Serafino
2013-07-01
These three single-case studies assessed the use of walker devices and microswitch technology for promoting ambulation behavior among persons with multiple disabilities. The walker devices were equipped with support and weight lifting features. The microswitch technology ensured that brief stimulation followed the participants' ambulation responses. The participants were two children (i.e., Study I and Study II) and one man (i.e., Study III) with poor ambulation performance. The ambulation efforts of the child in Study I involved regular steps, while those of the child in Study II involved pushing responses (i.e., he pushed himself forward with both feet while sitting on the walker's saddle). The man involved in Study III combined his poor ambulation performance with problem behavior, such as shouting or slapping his face. The results were positive for all three participants. The first two participants had a large increase in the number of steps/pushes performed during the ambulation events provided and in the percentages of those events that they completed independently. The third participant improved his ambulation performance as well as his general behavior (i.e., had a decline in problem behavior and an increase in indices of happiness). The wide-ranging implications of the results are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.
2016-01-01
Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N×M points is approximately proportional to n², i.e., O(n²), becoming prohibitive for large images or when the data processing cadence is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are too slow for large images that require a processing cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma-ray measurements contaminated by an overwhelming number of cosmic-ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma-ray survey employ large detector readout planes registering multitudes of cosmic-ray interference events alongside sparse projections of science gamma-ray event traces. The AdEPT science of interest is in the gamma-ray events, and the problem is to detect and reject the much more voluminous cosmic-ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic-ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing, which is burdened by the CR-detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck, the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel. This phenomenon is also the basis for a straightforward CR simulator, allowing performance testing of the CR-Algorithm. Parallel processing of the readout image's (2(N+M) - 4) peripheral voxels detects all CRs, resulting in O(n) computational complexity. The algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
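A minimal sketch of the border-scan idea, in Python, using a 2-D image as a stand-in for the instrument's voxel readout (the array size, threshold, and synthetic track are invented for illustration): because a cosmic ray pierces the entire vessel, its track must cross the image boundary, so only the 2(N+M) - 4 peripheral pixels need to be examined.

```python
import numpy as np

def border_pixels(img):
    """Return the 2*(N+M) - 4 peripheral pixels of an N x M image."""
    n, m = img.shape
    top, bottom = img[0, :], img[-1, :]
    left, right = img[1:-1, 0], img[1:-1, -1]
    return np.concatenate([top, bottom, left, right])

def has_cosmic_ray(img, threshold=0.0):
    """A track that pierces the whole volume must cross the boundary twice,
    so checking only border pixels is O(N+M) rather than O(N*M)."""
    return np.count_nonzero(border_pixels(img) > threshold) >= 2

# Hypothetical 1024 x 1024 readout plane with one synthetic straight track.
img = np.zeros((1024, 1024))
rr = np.arange(1024)
img[rr, rr] = 1.0            # a diagonal track entering and leaving the image
print(has_cosmic_ray(img))   # True: both endpoints lie on the border
```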
Turbulence management: Application aspects
NASA Astrophysics Data System (ADS)
Hirschel, E. H.; Thiede, P.; Monnoyer, F.
1989-04-01
Turbulence management for the reduction of turbulent friction drag is an important topic. Numerous research programs in this field have demonstrated that valuable net drag reduction is obtainable by techniques which do not involve substantial, expensive modifications or redesign of existing aircraft. Hence, large projects aiming at short-term introduction of turbulence management technology into airline service are presently under development. The various points that have to be investigated for this purpose are presented. Both design and operational aspects are considered, the former dealing with optimization of turbulence management techniques at operating conditions, and the latter defining the technical problems involved in applying turbulence management to in-service aircraft. The cooperative activities of Airbus Industrie and its partners are cited as an example.
Genetic causes of male infertility.
Stouffs, Katrien; Seneca, Sara; Lissens, Willy
2014-05-01
Male infertility, affecting around half of the couples with a problem getting pregnant, is a very heterogeneous condition. Some patients have a defect in spermatogenesis for which the underlying causes (including genetic ones) remain largely unknown. The only genetic tests routinely used in the diagnosis of male infertility are the analyses for the presence of Yq microdeletions and/or chromosomal abnormalities. Various other single-gene or polygenic defects have been proposed to be involved in male infertility. Yet, their causative effect often remains to be proven. The recent evolution in the development of whole-genome-based techniques may help in clarifying the role of genes and other genetic factors involved in spermatogenesis and spermatogenesis defects. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Validating a large geophysical data set: Experiences with satellite-derived cloud parameters
NASA Technical Reports Server (NTRS)
Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie
1992-01-01
We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.
Father Involvement and Behavior Problems among Preadolescents at Risk of Maltreatment
Yoon, Susan; Bellamy, Jennifer L.; Kim, Wonhee; Yoon, Dalhee
2018-01-01
Although there is a well-established connection between father involvement and children’s positive behavioral development in general, this relation has been understudied in more vulnerable and high-risk populations. The aims of this study were to examine how the quantity (i.e., the amount of shared activities) and quality (i.e., perceived quality of the father-child relationship) of father involvement are differently related to internalizing and externalizing behavior problems among preadolescents at risk of maltreatment and test if these associations are moderated by father type and child maltreatment. A secondary data analysis was conducted using data from the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN). Generalized estimating equations analysis was performed on a sample of 499 preadolescents aged 12 years. The results indicated that higher quality of father involvement was associated with lower levels of internalizing and externalizing behavior problems whereas greater quantity of father involvement was associated with higher levels of internalizing and externalizing behavior problems. The positive association between the quantity of father involvement and behavior problems was stronger in adolescents who were physically abused by their father. The association between father involvement and behavior problems did not differ by the type of father co-residing in the home. The findings suggest that policies and interventions aimed at improving the quality of fathers’ relationships and involvement with their children may be helpful in reducing behavior problems in adolescents at risk of maltreatment. PMID:29491703
Some Progress in Large-Eddy Simulation using the 3-D Vortex Particle Method
NASA Technical Reports Server (NTRS)
Winckelmans, G. S.
1995-01-01
This two-month visit at CTR was devoted to investigating possibilities in LES modeling in the context of the 3-D vortex particle method (also known as the vortex element method, VEM) for unbounded flows. A dedicated code was developed for that purpose. Although O(N²) and thus slow, it offers the advantage that it can easily be modified to try out many ideas on problems involving up to N ≈ 10⁴ particles. Energy spectra (which require O(N²) operations per wavenumber) are also computed. Progress was realized in the following areas: particle redistribution schemes, relaxation schemes to maintain the solenoidal condition on the particle vorticity field, simple LES models and their VEM extension, and possible new avenues in LES. Model problems that involve strong interaction between vortex tubes were computed, together with diagnostics: total vorticity, linear and angular impulse, energy and energy spectrum, and enstrophy. More work is needed, however, especially regarding relaxation schemes and further validation and development of LES models for VEM. Finally, what works well will eventually have to be incorporated into the fast parallel tree code.
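The O(N²) direct summation referred to above can be sketched as a regularized Biot-Savart sum over vortex particles. This is a generic stand-in, not the CTR code: the smoothing and sign convention below are simple illustrative choices.

```python
import numpy as np

def induced_velocity(x, alpha, sigma=0.05):
    """Direct O(N^2) Biot-Savart summation over vortex particles.
    x: (N,3) particle positions; alpha: (N,3) vorticity-weighted volumes.
    A simple cutoff sigma regularizes the singular kernel; sign conventions
    differ between references."""
    u = np.zeros_like(x)
    for i in range(len(x)):
        r = x[i] - x                                   # separation vectors
        d = np.sqrt((r * r).sum(axis=1) + sigma**2)    # regularized distance
        k = r / d[:, None]**3                          # kernel, zero at r = 0
        u[i] = -np.cross(k, alpha).sum(axis=0) / (4.0 * np.pi)
    return u

x = np.random.rand(1000, 3)            # 10^3 particles: fast; 10^4: noticeably slow
alpha = np.random.randn(1000, 3) * 1e-3
u = induced_velocity(x, alpha)
print(u.shape)
```

The quadratic cost of this loop is exactly what motivates the fast parallel tree code mentioned at the end of the abstract.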
Challenges for a Sustainable Financial Foundation for Antimicrobial Stewardship.
Dik, Jan-Willem H; Sinha, Bhanu
2017-03-30
Antimicrobial resistance is a worldwide threat and a problem with large clinical and economic impact. Antimicrobial Stewardship Programs are a solution to curb resistance development. One problem with resistance is the separation of actions and consequences, both financial and clinical. Such a separation makes it difficult to create support among stakeholders, leading to a lack of a sense of responsibility. To counteract resistance development it is important to perform diagnostics and know how to interpret the results. One should see diagnostics, therapy, and resistance as one single process. Within this process all involved stakeholders need to work together on a more institutional level. We therefore suggest a solution: combining diagnostics and therapy into one single financial product. Such a product should act as an incentive to perform correct diagnostics. It also makes it easier to cover the costs of an antimicrobial stewardship program, which are often overlooked. Finally, such a product involves all stakeholders in the process and does not place the costs with one stakeholder and the benefits somewhere else, solving the imbalance that is present nowadays.
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high-order CESE schemes that avoid the common shortcomings of traditional high-order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh point, a system of linear/nonlinear equations involving all the mesh variables associated with that point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques that are needed to overcome stability problems but often cause undesirable side effects. In fact, it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-, ... order versions, which have the same stencil and the same stability conditions as the 2nd-order scheme and also retain all its other advantages. A sketch of multidimensional extensions is also provided.
Grabowski, Dan; Andersen, Tue Helms; Varming, Annemarie; Ommundsen, Christine; Willaing, Ingrid
2017-01-01
Objectives: Family involvement plays a key role in diabetes management. Problems and challenges related to type 2 diabetes often affect the whole family, and relatives are at increased risk of developing diabetes themselves. We highlight these issues in our objectives: (1) to uncover specific family problems associated with mutual involvement in life with type 2 diabetes and (2) to analytically examine ways of approaching these problems in healthcare settings. Methods: Qualitative data were gathered in participatory problem assessment workshops. The data were analysed in three rounds using radical hermeneutics. Results: Problems were categorized in six domains: knowledge, communication, support, everyday life, roles and worries. The final cross-analysis focusing on the link between family identity and healthcare authenticity provided information on how the six domains can be approached in healthcare settings. Conclusion: The study generated important knowledge about problems associated with family involvement in life with type 2 diabetes and about how family involvement can be supported in healthcare practice. PMID:28839943
Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper
2016-11-01
The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the lengths of stay of patients as well as the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem. This procedure utilizes a Simulated Annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions of comparable quality for small and medium-sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000 and 4000 patients) the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case a factor of 12 longer on average than the time limit granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
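A generic ALNS skeleton with a simulated-annealing acceptance criterion, of the kind the abstract describes, can be sketched as follows. This is an illustrative sketch only: the operator-weight update rule, cooling schedule, and toy cost function are invented, not the authors' DPAS implementation.

```python
import math
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=10000, t0=1.0, alpha=0.9995):
    """Adaptive large neighborhood search skeleton: repeatedly destroy part
    of the solution, repair it, accept via simulated annealing, and adapt
    operator weights toward operators that find improvements."""
    current = best = initial
    weights = {op: 1.0 for op in destroy_ops + repair_ops}
    t = t0
    for _ in range(iters):
        destroy = random.choices(destroy_ops, [weights[o] for o in destroy_ops])[0]
        repair = random.choices(repair_ops, [weights[o] for o in repair_ops])[0]
        candidate = repair(destroy(current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate                    # accept improving or lucky moves
        if cost(current) < cost(best):
            best = current
            reward = 2.0                           # reinforce operators behind a new best
        else:
            reward = 1.01 if delta < 0 else 1.0
        for op in (destroy, repair):
            weights[op] = 0.8 * weights[op] + 0.2 * reward
        t *= alpha                                 # cool the acceptance temperature
    return best

# Toy usage: search over integer vectors minimizing a quadratic cost.
target = [3, 1, 4, 1, 5]
cost = lambda s: sum((a - b) ** 2 for a, b in zip(s, target))
destroy = lambda s: [v if random.random() > 0.3 else None for v in s]
repair = lambda s: [v if v is not None else random.randint(0, 9) for v in s]
print(alns([0] * 5, [destroy], [repair], cost, iters=2000))
```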
Wilson, P B
2013-10-01
Psoriasis is associated with serious comorbidities such as cardiovascular disease, type 2 diabetes, and metabolic syndrome. These comorbidities are related to low physical activity in the general population. Limited research has evaluated physical activity in psoriasis; thus, the purpose of this investigation was to compare physical activity between individuals with and without psoriasis as well as to explore the associations between measures of psoriasis severity and physical activity. Cross-sectional study using data from the 2003-2006 National Health and Nutrition Examination Survey. Self-reported psoriasis diagnosis and psoriasis severity were regressed on moderate/vigorous physical activity, as measured objectively by accelerometers. Measures of psoriasis severity included rating of psoriasis as a problem in life and body surface area involvement. A total of 4316 individuals had data on psoriasis, moderate/vigorous physical activity, and relevant covariates, with 3.6% (population weighted) of participants (N=117) reporting a diagnosis of psoriasis. A psoriasis diagnosis was not associated with moderate/vigorous physical activity, and furthermore, body surface area involvement was not associated with moderate/vigorous physical activity among participants with psoriasis. However, every tertile increase in psoriasis as a problem in life was associated with 28% less moderate/vigorous physical activity, which remained significant after adjusting for covariates and removing outliers. While a diagnosis of psoriasis and body surface area involvement do not appear to be associated with less moderate/vigorous physical activity, individuals who rate their psoriasis as a large problem engage in less moderate/vigorous physical activity.
NASA Astrophysics Data System (ADS)
Maries, Alexandru; Singh, Chandralekha
2018-01-01
An appropriate diagram is a required element of a solution building process in physics problem solving and it can transform a given problem into a representation that is easier to exploit for solving the problem. A major focus while helping introductory physics students learn problem solving is to help them appreciate that drawing diagrams facilitates problem solving. We conducted an investigation in which two different interventions were implemented during recitation quizzes throughout the semester in a large enrolment, algebra-based introductory physics course. Students were either (1) asked to solve problems in which the diagrams were drawn for them or (2) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed a rubric to score the problem solving performance of students in different intervention groups. We investigated two problems involving electric field and electric force and found that students who drew productive diagrams were more successful problem solvers and that a higher level of relevant detail in a student’s diagram corresponded to a better score. We also conducted think-aloud interviews with nine students who were at the time taking an equivalent introductory algebra-based physics course in order to gain insight into how drawing diagrams affects the problem solving process. These interviews supported some of the interpretations of the quantitative results. We end by discussing instructional implications of the findings.
Principal Component Geostatistical Approach for large-dimensional inverse problems
Kitanidis, P K; Lee, J
2014-01-01
The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknowns. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
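The matrix-free idea can be illustrated with a finite-difference Jacobian-vector product, which lets Gauss-Newton-type iterations use J only through its action on vectors, never forming the n-by-m matrix. This is a generic sketch with an invented forward model, not the authors' implementation.

```python
import numpy as np

def jvp(forward, m_vec, v, eps=1e-6):
    """Matrix-free Jacobian-vector product J v via a forward difference of
    the observation function; cost is one extra forward run per product."""
    return (forward(m_vec + eps * v) - forward(m_vec)) / eps

# Hypothetical mildly nonlinear forward model: m = 10000 unknowns, n = 50 data.
np.random.seed(0)
A = np.random.randn(50, 10000) / 100.0
forward = lambda s: A @ s + 0.01 * np.sin(s[:50])

s0 = np.zeros(10000)
v = np.random.randn(10000)
print(np.linalg.norm(jvp(forward, s0, v)))   # J v without ever storing J
```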
Zhang, H H; Gao, S; Chen, W; Shi, L; D'Souza, W D; Meyer, R R
2013-03-21
An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
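The surrogate idea, scoring a candidate beam set from pre-computed single-beam data instead of solving the full dose optimization for each candidate, can be sketched as follows. The per-angle scores and the simple additive score are hypothetical; the paper's actual procedure and its nested partitions search are more elaborate.

```python
import itertools
import random

def surrogate_score(beam_set, single_beam_quality):
    """Cheap proxy for plan quality: combine per-beam scores extracted from
    equally spaced reference plans, avoiding a full dose optimization."""
    return sum(single_beam_quality[b] for b in beam_set)

# Hypothetical per-angle scores for 36 candidate coplanar angles.
random.seed(3)
quality = {angle: random.random() for angle in range(0, 360, 10)}

# Exhaustively score all 5-beam sets (~377k candidates) in seconds,
# something infeasible if each candidate required a dose optimization.
candidates = itertools.combinations(sorted(quality), 5)
best = max(candidates, key=lambda s: surrogate_score(s, quality))
print(best)
```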
The Cosmological Lithium Problem and the Measurement of the 7Be(n, α) Reaction at n_TOF-CERN
NASA Astrophysics Data System (ADS)
Musumarra, Agatino; Barbagallo, Massimo
A possible explanation of the so-called "Cosmological Lithium Problem", an important unsolved problem in Nuclear Astrophysics, involves large systematic uncertainties in the cross sections of reactions leading to the destruction of 7Be during Big-Bang Nucleosynthesis (BBN). Among these reactions, 7Be(n, α) is the most uncertain. So far, only a single measurement with thermal neutrons had been performed; therefore, BBN calculations had to rely on rather uncertain theoretical extrapolations. The short half-life of 7Be (53.29 d) and the low cross section have, up to now, prevented obtaining experimental data at the keV neutron energies typical of BBN studies. We have measured the 7Be(n, α) reaction for the first time at n_TOF over a wide neutron energy range, from thermal up to 10 keV. The measurement was performed at the new beam line (EAR2) of the neutron time-of-flight facility n_TOF at CERN. The two α-particles, emitted back-to-back in the reaction, were detected by means of sandwiches of silicon detectors, and by exploiting the coincidence technique we were able to suppress the large γ- and n-induced background. The 7Be isotope production and purification were performed by PSI, Zurich, Switzerland.
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool, wherein RWA signifies Reaction Wheel Assembly] is a computer program designed for computing angular-momentum biases for reaction wheels used to point a spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is a hard constraint and low speeds are penalized by a cost functional that increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases, based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
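A minimal sketch of this formulation, assuming scipy is available: the speed limits, the one-dimensional wheel model, and the momentum profile below are invented stand-ins for RBOT's detailed dynamics, but the structure (hard high-speed penalty, soft low-speed cost, Nelder-Mead search over the bias vector) follows the abstract.

```python
import numpy as np
from scipy.optimize import minimize

W_MAX = 2000.0   # hard wheel-speed limit, rpm (illustrative value)
W_LOW = 300.0    # low-speed threshold where lubrication degrades (illustrative)

def wheel_rates(bias, momentum_history):
    """Wheel speeds follow from conservation of angular momentum:
    rate(t) = bias + per-wheel momentum profile (simplified 1-D stand-in)."""
    return bias[:, None] + momentum_history

def cost(bias, momentum_history):
    rates = np.abs(wheel_rates(bias, momentum_history))
    penalty_high = 1e6 * np.maximum(rates - W_MAX, 0).sum()  # hard constraint
    penalty_low = np.maximum(W_LOW - rates, 0).sum()         # soft low-speed cost
    return penalty_high + penalty_low

# Invented momentum history for three wheels over 500 samples.
history = 800.0 * np.sin(np.linspace(0, 20, 500)) * np.ones((3, 1))
res = minimize(cost, x0=np.zeros(3), args=(history,), method="Nelder-Mead")
print(res.x)   # optimized initial biases
```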
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553, a 23% increase over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.
Examining Middle School Pre-Service Teachers' Knowledge of Fraction Division Interpretations
ERIC Educational Resources Information Center
Alenazi, Ali
2016-01-01
This study investigated 11 pre-service middle school teachers' solution strategies for exploring their knowledge of fraction division interpretations. Each participant solved six fraction division problems. The problems were organized into two sets: symbolic problems (involving numbers only) and contextual problems (involving measurement…
Maguire, Elizabeth M; Bokhour, Barbara G; Wagner, Todd H; Asch, Steven M; Gifford, Allen L; Gallagher, Thomas H; Durfee, Janet M; Martinello, Richard A; Elwy, A Rani
2016-11-11
Many healthcare organizations, including the Veterans Health Administration (VA), have developed disclosure policies for large-scale adverse events. This study evaluated VA's national large-scale disclosure policy and identified gaps and successes in its implementation. Semi-structured qualitative interviews were conducted with leaders, hospital employees, and patients at nine sites to elicit their perceptions of recent large-scale adverse event notifications and the national disclosure policy. Data were coded using the constructs of the Consolidated Framework for Implementation Research (CFIR). We conducted 97 interviews. Insights included how to handle the communication of large-scale disclosures through multiple levels of a large healthcare organization and how to manage ongoing communications about the event with employees. Of the 5 CFIR constructs and 26 sub-constructs assessed, seven were prominent in interviews. Leaders and employees specifically mentioned key problem areas involving 1) networks and communications during disclosure, 2) organizational culture, 3) engagement of external change agents during disclosure, and 4) a need for reflecting on and evaluating the policy implementation and the disclosure itself. Patients shared 5) preferences for personal outreach by phone in place of the current use of certified letters. All interviewees discussed 6) issues with execution and 7) costs of the disclosure. The CFIR analysis reveals key problem areas that need to be addressed during disclosure, including: timely communication patterns throughout the organization; establishing a supportive culture prior to implementation; using patient-approved, effective communication strategies during disclosures; providing follow-up support for employees and patients; and sharing lessons learned.
Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro
2011-04-14
Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials on cardiovascular diseases that evaluated true endpoints and involved 300 or more participants, using PubMed, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November 2004, 25 February 2007, and 25 July 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs). Of the 152 trials, 9.2% examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.
Wiener, J M; Ehbauer, N N; Mallot, H A
2009-09-01
For large numbers of targets, path planning is a complex and computationally expensive task. Humans, however, usually solve such tasks quickly and efficiently. We present experiments studying human path planning performance and the cognitive processes and heuristics involved. Twenty-five places were arranged on a regular grid in a large room. Participants were repeatedly asked to solve traveling salesman problems (TSP), i.e., to find the shortest closed loop connecting a start location with multiple target locations. In Experiment 1, we tested whether humans employed the nearest neighbor (NN) strategy when solving the TSP. Results showed that subjects outperform the NN strategy, suggesting that it is not sufficient to explain human route planning behavior. As a second possible strategy, we tested a hierarchical planning heuristic in Experiment 2, demonstrating that participants first plan a coarse route at the region level that is refined during navigation. To test the relevance of spatial working memory (SWM) and spatial long-term memory (LTM) for planning performance and the planning heuristics applied, we varied the memory demands between conditions in Experiment 2. In one condition the target locations were directly marked, such that no memory was required; a second condition required participants to memorize the target locations during path planning (SWM); in a third condition, the locations of targets additionally had to be retrieved from LTM (SWM and LTM). Results showed that navigation performance decreased with increasing memory demands while the dependence on the hierarchical planning heuristic increased.
Dynamic Flow Management Problems in Air Transportation
NASA Technical Reports Server (NTRS)
Patterson, Sarah Stock
1997-01-01
In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large-scale linear programming problems. Thus, the computation times are reasonably small for large-scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States, in which the solutions are within 1% of the corresponding lower bounds.
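The flavor of the TFMP integer program can be conveyed on a toy single-resource ground-holding instance, assuming the PuLP modeling library is available. All data below are invented, and the thesis models capacitated sectors, airports, and rerouting at far larger scale; the sketch only shows the time-indexed binary variables and capacity constraints whose LP relaxations tend to be integral.

```python
import pulp

# Toy ground-holding instance: pick departure slots so that per-period
# airspace capacity holds and total ground delay is minimized.
flights = {"F1": 0, "F2": 0, "F3": 1}       # scheduled departure period
periods = range(4)
capacity = {0: 1, 1: 1, 2: 2, 3: 2}         # flights admitted per period

prob = pulp.LpProblem("ground_holding", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (flights, periods), cat="Binary")

prob += pulp.lpSum((t - sched) * x[f][t]                  # total delay
                   for f, sched in flights.items() for t in periods)
for f, sched in flights.items():
    prob += pulp.lpSum(x[f][t] for t in periods) == 1     # depart exactly once
    for t in range(sched):
        prob += x[f][t] == 0                              # never before schedule
for t in periods:
    prob += pulp.lpSum(x[f][t] for f in flights) <= capacity[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: next(t for t in periods if x[f][t].value() == 1) for f in flights})
```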
NASA Technical Reports Server (NTRS)
Djorgovski, George
1993-01-01
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.
ReOpt[trademark] V2.0 user guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M K; Bryant, J L
1992-10-01
Cleaning up the large number of contaminated waste sites at Department of Energy (DOE) facilities in the US presents a large and complex problem. Each waste site poses a singular set of circumstances (different contaminants, environmental concerns, and regulations) that affect selection of an appropriate response. Pacific Northwest Laboratory (PNL) developed ReOpt to provide information about the remedial action technologies that are currently available. It is an easy-to-use personal computer program and database that contains data about these remedial technologies and auxiliary data about contaminants and regulations. ReOpt will enable engineers and planners involved in environmental restoration efforts to quickly identify potentially applicable environmental restoration technologies and access corresponding information required to select cleanup activities for DOE sites.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Dowling, Nicki A; Shandley, Kerrie A; Oldenhof, Erin; Affleck, Julia M; Youssef, George J; Frydenberg, Erica; Thomas, Shane A; Jackson, Alun C
2017-10-01
Although parenting practices are articulated as underlying mechanisms or protective factors in several theoretical models, their role in the intergenerational transmission of gambling problems has received limited research attention. This study therefore examined the degree to which parenting practices (positive parenting, parental involvement, and inconsistent discipline) moderated the intergenerational transmission of paternal and maternal problem gambling. Students aged 12-18 years (N = 612) recruited from 17 Australian secondary schools completed a survey measuring parental problem gambling, problem gambling severity, and parenting practices. Participants endorsing paternal problem gambling (23.3%) were 4.3 times more likely to be classified as at-risk/problem gamblers than their peers (5.4%). Participants endorsing maternal problem gambling (6.9%) were no more likely than their peers (4.0%) to be classified as at-risk/problem gamblers. Paternal problem gambling was a significant predictor of offspring at-risk/problem gambling after controlling for maternal problem gambling and participant demographic characteristics. The relationship between maternal problem gambling and offspring at-risk/problem gambling was buffered by parental involvement. Paternal problem gambling may be important in the development of adolescent at-risk/problem gambling behaviours and higher levels of parental involvement buffers the influence of maternal problem gambling in the development of offspring gambling problems. Further research is therefore required to identify factors that attenuate the seemingly greater risk of transmission associated with paternal gambling problems. Parental involvement is a potential candidate for prevention and intervention efforts designed to reduce the intergenerational transmission of gambling problems. (Am J Addict 2017;26:707-712). © 2017 American Academy of Addiction Psychiatry.
Lymphomas or leukemia presenting as ovarian tumors. An analysis of 42 cases.
Osborne, B M; Robboy, S J
1983-11-15
Forty cases of ovarian lymphoma and two of extramedullary leukemia were examined with emphasis on histologic types correlated with age, modes of presentation, operative findings, including frequency of bilaterality and omental spread, clinical course following therapy, and problems in differential diagnosis. Although most cases were referred with diagnoses other than lymphoma (granulosa cell tumor or dysgerminoma, occasionally anaplastic tumor, Krukenberg tumor, or metastatic breast carcinoma), utilization of sections cut at 4 μm and stained with hematoxylin and eosin, or sections stained by the methyl green pyronine (MGP), naphthol-ASD esterase (NASD) or periodic acid-Schiff (PAS) methods helped bring out the lymphoid or hematopoietic nature of the cells. Sixteen patients were under 20 years of age. They had small noncleaved cell lymphoma (undifferentiated Burkitt's and non-Burkitt's, 10 cases), diffuse immunoblastic large cell lymphoma (4 cases), or acute granulocytic leukemia (2 cases). Twenty-six patients were 29 to 74 years of age and had diffuse large cell lymphoma (10 cases), diffuse immunoblastic large cell lymphoma (9 cases), follicular (nodular) lymphoma (6 cases) or small noncleaved cell lymphoma (1 case). Pain with an abdominal or pelvic mass was the most common presentation. Nine tumors were discovered during investigation of other gynecologic complaints. At laparotomy, the tumors in 55% of cases involved both ovaries, and in 64% also involved extragonadal sites (usually omentum, fallopian tubes, or lymph nodes). Seventeen patients had tumor affecting one ovary, seven of these without any evidence of extragonadal spread. Forty-two percent (15) of 37 patients with follow-up were alive after 2 years. Only nine patients survived more than 5 years; two subsequently died of lymphoma. Favorable prognostic features included: (1) FIGO stage IA; (2) unilateral ovarian involvement; (3) focal involvement of one ovary; and (4) follicular (nodular) lymphoma.
Food production -- problems and prospects.
Anifowoshe, T O
1990-03-01
Improvements are needed in balancing the problems associated with population growth and food production. A review of the problems of rapid population growth and declining food production, and suggestions for their resolution, are given. World population has increased over the past 10 years by 760 million, which is equal to adding the combined population of Africa and South America. Future increases are expected to bring total population to 6.1 billion by the year 2000 and 8.2 billion in 2025 (exponential increases). Food production per capita has declined since 1971 in the world and in Nigeria, particularly in the recent past. The food production problem is technical, environmental, social, political, and economic. Various scientific and technological methods for increasing food production are identified: mechanization, irrigation, use of fertilizers, control of weeds and insects, new varieties of farm animals or high-yielding strains of grain, land reclamation, soil conservation, river basin development, adequate storage facilities, infrastructure development, and birth control. Economic and social approaches involve short-term and long-term strategies in social readjustment and institutional change. For instance, large-scale farmers should become contract growers for certain firms. Bureaucratic red tape should be eliminated in institutions which provide agricultural services. Environmental problems need urgent attention. Some of these problems are soil erosion from mechanization, water salinization from irrigation, accumulation of DDT in food, water, and animal life from pesticide use, and water pollution from chemical fertilizers. Food production can be increased with more ecologically sound practices. Information about weather and weather forecasting allows for more suitable land management. The influence of rainfall (its amount and distribution) in Nigeria is greater than that of any other climatic factor. Solar radiation is a significant feature in the production of dry matter and yield. Shifting cultivation and land tenure systems should involve conservation farming techniques. Organic manures and appropriate use of chemical fertilizers can raise soil fertility. Other problems are identified as the spread of bilharzia and the settlement of nomadic tribes.
Tracking problem solving by multivariate pattern analysis and Hidden Markov Model algorithms.
Anderson, John R
2012-03-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application involves using fMRI activity to track what students are doing as they solve a sequence of algebra problems. The methodology achieves considerable accuracy at determining both what problem-solving step the students are taking and whether they are performing that step correctly. The second "model discovery" application involves using statistical model evaluation to determine how many substates are involved in performing a step of algebraic problem solving. This research indicates that different steps involve different numbers of substates and these substates are associated with different fluency in algebra problem solving. Copyright © 2011 Elsevier Ltd. All rights reserved.
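The pairing of multivariate features with HMM decoding can be sketched with the hmmlearn package (assumed available; it is not named in the abstract). Synthetic feature vectors stand in for fMRI data, and three hidden states stand in for problem-solving stages; this illustrates the general technique, not the authors' pipeline.

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical setup: each row is one activity feature vector per scan;
# hidden states stand in for successive problem-solving stages.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(40, 5)) for m in (0.0, 2.0, 4.0)])

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=200, random_state=0)
model.fit(X)                 # unsupervised fit of state means and transitions
states = model.predict(X)    # Viterbi decoding: most likely state per scan
print(states[:10], states[-10:])
```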
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Blackmore, Lars; Wolf, Michael; Fathpour, Nanaz; Newman, Claire; Elfes, Alberto
2009-01-01
Hot air (Montgolfiere) balloons represent a promising vehicle system for possible future exploration of planets and moons with thick atmospheres, such as Venus and Titan. To reach a desired location, this vehicle primarily rides the horizontal wind, which varies with altitude, with a small amount of help from its own actuation. A main challenge is how to plan such a trajectory in a highly nonlinear and time-varying wind field. This paper poses this trajectory planning as a graph search on a space-time grid and addresses its computational aspects. When capturing the various time scales involved in the wind field over the duration of a long exploration mission, the size of the graph becomes excessively large. We show that the adjacency matrix of the graph is block-triangular and, by exploiting this structure, we decompose the large planning problem into several smaller subproblems whose memory requirement stays almost constant as the problem size grows. The approach is demonstrated on a global reachability analysis of a possible Titan mission scenario.
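Because every edge in a space-time graph points forward in time, shortest paths can be computed with a single sweep over time layers, one subproblem per layer, with no priority queue. A minimal sketch (the cost array and grid sizes are invented; the mission planner works on far larger grids and can keep only two layers in memory at a time):

```python
import numpy as np

def plan(wind_cost, start):
    """Shortest paths on a space-time grid. Edges only go forward in time,
    so the adjacency structure is block-triangular and one pass per time
    layer suffices. wind_cost[t][i][j]: cost of moving from cell i to
    cell j during step t."""
    T, n, _ = wind_cost.shape
    dist = np.full((T + 1, n), np.inf)
    dist[0, start] = 0.0
    for t in range(T):                     # one Bellman pass per layer
        for i in range(n):
            if np.isfinite(dist[t, i]):
                step = dist[t, i] + wind_cost[t, i]
                dist[t + 1] = np.minimum(dist[t + 1], step)
    return dist

rng = np.random.default_rng(1)
costs = rng.random((24, 50, 50))   # 24 time steps, 50 altitude/position cells
print(plan(costs, start=0)[-1].min())
```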
Advantages of Parallel Processing and the Effects of Communications Time
NASA Technical Reports Server (NTRS)
Eddy, Wesley M.; Allman, Mark
2000-01-01
Many computing tasks involve heavy mathematical calculations, or analyzing large amounts of data. These operations can take a long time to complete using only one computer. Networks such as the Internet provide many computers with the ability to communicate with each other. Parallel or distributed computing takes advantage of these networked computers by arranging them to work together on a problem, thereby reducing the time needed to obtain the solution. The drawback to using a network of computers to solve a problem is the time wasted in communicating between the various hosts. The application of distributed computing techniques to a space environment or to use over a satellite network would therefore be limited by the amount of time needed to send data across the network, which would typically take much longer than on a terrestrial network. This experiment shows how much faster a large job can be performed by adding more computers to the task, what role communications time plays in the total execution time, and the impact a long-delay network has on a distributed computing system.
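The compute/communication trade-off the experiment measures can be summarized with a one-line cost model (illustrative numbers only, not the experiment's data): total time falls as hosts are added until the communication term dominates, and on a long-delay satellite-like link the crossover comes much earlier.

```python
def run_time(work, hosts, comm_per_host):
    """Simple model: perfectly divisible compute plus a communication
    cost that grows with the number of participating hosts."""
    return work / hosts + comm_per_host * hosts

work = 3600.0                      # one hour of computation on a single host
for delay in (0.5, 5.0, 50.0):     # LAN-, WAN-, and satellite-like overheads (s)
    best = min(range(1, 129), key=lambda n: run_time(work, n, delay))
    print(f"comm={delay:5.1f}s  best hosts={best:3d}  "
          f"time={run_time(work, best, delay):7.1f}s")
```

In this model the optimum host count is roughly the square root of work divided by the per-host communication cost, which captures why high-latency links limit the useful degree of parallelism.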
Trends in modern system theory
NASA Technical Reports Server (NTRS)
Athans, M.
1976-01-01
The topics considered are related to linear control system design, adaptive control, failure detection, control under failure, system reliability, and large-scale systems and decentralized control. It is pointed out that the design of a linear feedback control system which regulates a process about a desirable set point or steady-state condition in the presence of disturbances is a very important problem. The linearized dynamics of the process are used for design purposes. The typical linear-quadratic design involving the solution of the optimal control problem of a linear time-invariant system with respect to a quadratic performance criterion is considered along with gain reduction theorems and the multivariable phase margin theorem. The stumbling block in many adaptive design methodologies is associated with the amount of real time computation which is necessary. Attention is also given to the desperate need to develop good theories for large-scale systems, the beginning of a microprocessor revolution, the translation of the Wiener-Hopf theory into the time domain, and advances made in dynamic team theory, dynamic stochastic games, and finite memory stochastic control.
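The linear-quadratic design mentioned above reduces to solving an algebraic Riccati equation. A minimal sketch for a double integrator, assuming scipy is available (a generic illustration of LQR, not of any specific design discussed in the survey):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR for a double integrator: minimize J = integral of (x'Qx + u'Ru) dt
# subject to dx/dt = Ax + Bu.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal state feedback u = -Kx
print(K)
```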
The changing pattern of ground-water development on Long Island, New York
Heath, Ralph C.; Foxworthy, B.L.; Cohen, Philip M.
1966-01-01
Ground-water development on Long Island has followed a pattern that has reflected changing population trends, attendant changes in the use and disposal of water, and the response of the hydrologic system to these changes. The historic pattern of development has ranged from individually owned shallow wells tapping glacial deposits to large-capacity public-supply wells tapping deep artesian aquifers. Sewage disposal has ranged from privately owned cesspools to modern large-capacity sewage-treatment plants discharging more than 70 mgd of water to the sea. At present (1965), different parts of Long Island are characterized by different stages of ground-water development. In parts of Suffolk County in eastern Long Island, development is similar to the earliest historical stages. Westward toward New York City, ground-water development becomes more intensive and complex, and the attendant problems become more acute. The alleviation of present problems and those that arise in the future will require management decisions based on the soundest possible knowledge of the hydrologic system, including an understanding of the factors involved in the changing pattern of ground-water development on the island.
Sciammarella, C A; Gilbert, J A
1976-09-01
Utilizing the light scattering property of transparent media, holographic interferometry is applied to the measurement of displacement at the interior planes of three-dimensional bodies. The use of double-beam illumination and the introduction of a fictitious displacement make it feasible to obtain information corresponding to components of displacement projected on the scattering plane. When the proposed techniques are invoked, it is possible to eliminate the use of a matching index of refraction fluid in many problems involving symmetrically loaded prismatic bodies. Scattered light holographic interferometry is limited in its use to small changes in the index of refraction and to low values of relative retardation. In spite of these restrictions, a large number of technical problems in both statics and dynamics can be solved.
Efficient searching in meshfree methods
NASA Astrophysics Data System (ADS)
Olliff, James; Alford, Brad; Simkins, Daniel C.
2018-04-01
Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods, discuss available techniques for computing the various adjacency graphs, propose a new search algorithm and data structure, and finally compare the memory and run-time performance of the methods.
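A common baseline for the adjacency computation studied here is a spatial tree query: for each node, find every node whose kernel support covers it. The sketch below uses SciPy's kd-tree, a standard technique rather than the paper's proposed algorithm; the uniform support radius is an assumption.

```python
# Baseline adjacency search for meshfree nodes: for each node, find all
# nodes within a (here uniform, assumed) support radius, i.e. the nodes
# whose shape functions are nonzero there. This kd-tree query is a standard
# technique, not the paper's new algorithm or data structure.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
nodes = rng.random((10_000, 3))     # node coordinates in the unit cube
support = 0.05                      # uniform support radius (assumption)

tree = cKDTree(nodes)
adjacency = tree.query_ball_point(nodes, r=support)   # list of index lists

print("average neighbours per node:",
      sum(len(a) for a in adjacency) / len(adjacency))
```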
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hainsworth, S.V.; Page, T.F.; Sjoestroem, H.
1997-05-01
Carbon nitride (CNx) thin films (0.18 < x < 0.43), deposited by magnetron sputtering of C in a N2 discharge, have been observed to be extremely resistant to plastic deformation during surface contact (i.e., exhibit a purely elastic response over large strains). Elastic recoveries as high as 90% have been measured by nanoindentation. This paper addresses the problems of estimating Young's modulus (E) and hardness (H) in such cases and shows how different strategies involving analysis of both loading and unloading curves and measuring the work of indentation each present their own problems. The results of some cyclic contact experiments are also presented and possible deformation mechanisms in the fullerene-like CNx structures discussed.
NASA Technical Reports Server (NTRS)
Wang, Lui; Valenzuela-Rendon, Manuel
1993-01-01
The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
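The basic loop of such a genetic algorithm (selection, crossover, mutation against a fitness function) can be sketched on a toy version of the encoding: gene i holds the flight assigned to item i. The encoding, capacities, and fitness below are illustrative assumptions, not the paper's simulator.

```python
# Bare-bones genetic-algorithm loop on a toy resupply schedule: gene i is
# the flight chosen to carry item i. The encoding and fitness function are
# illustrative assumptions, not the paper's simulator.
import random

N_ITEMS, N_FLIGHTS, CAPACITY = 20, 8, 4

def fitness(schedule):
    # Penalize flights loaded beyond capacity; prefer earlier deliveries.
    loads = [schedule.count(f) for f in range(N_FLIGHTS)]
    overload = sum(max(0, load - CAPACITY) for load in loads)
    lateness = sum(schedule)            # lower flight index = earlier flight
    return -(10 * overload + lateness)

def crossover(a, b):
    cut = random.randrange(1, N_ITEMS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return [random.randrange(N_FLIGHTS) if random.random() < rate else g
            for g in s]

pop = [[random.randrange(N_FLIGHTS) for _ in range(N_ITEMS)] for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                  # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(80)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "schedule:", best)
```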
NASA Astrophysics Data System (ADS)
Akcay, Hakan; Yager, Robert
2010-10-01
The purpose of this study was to investigate the advantages of an approach to instruction using current problems and issues as curriculum organizers and illustrating how teaching must change to accomplish real learning. The study sample consisted of 41 preservice science teachers (13 males and 28 females) in a model science teacher education program. Both qualitative and quantitative research methods were used to determine success with science discipline-specific “Societal and Educational Applications” courses as one part of a total science teacher education program at a large Midwestern university. Students were involved with idea generation, consideration of multiple points of views, collaborative inquiries, and problem solving. All of these factors promoted grounded instruction using constructivist perspectives that situated science with actual experiences in the lives of students.
Smith, Erin N.; Grau, Josefina M.; Duran, Petra A.; Castellanos, Patricia
2013-01-01
We examined the relations between maternal depressive symptoms and child internalizing and externalizing problems in a sample of 125 adolescent Latina mothers (primarily Puerto Rican) and their toddlers. We also tested the influence of mother-reported partner child care involvement on child behavior problems and explored mother-reported partner characteristics that related to this involvement. Results suggested that maternal depressive symptoms related to child internalizing and externalizing problems when accounting for contextual risk factors. Importantly, these symptoms mediated the link between life stress and child behavior problems. Mother-reported partner child care interacted with maternal depressive symptoms for internalizing, not externalizing, problems. Specifically, depressive symptoms related less strongly to internalizing problems at higher levels of partner child care than at lower levels. Participants with younger partners, co-residing partners, and in longer romantic relationships reported higher partner child care involvement. Results are discussed considering implications for future research and interventions for mothers, their children, and their partners. PMID:24339474
Incentive Control Strategies for Decision Problems with Parametric Uncertainties
NASA Astrophysics Data System (ADS)
Cansever, Derya H.
The central theme of this thesis is the design of incentive control policies in large scale systems with hierarchical decision structures, under the stipulation that the objective functionals of the agents at the lower level of the hierarchy are uncertain to the top-level controller (the leader). These uncertainties are modeled as a finite-dimensional parameter vector whose exact value constitutes private information to the relevant agent at the lower level. The approach we have adopted is to design incentive policies for the leader such that the dependence of the decision of the agents on the uncertain parameter is minimized. We have identified several classes of problems for which this approach is feasible. In particular, we have constructed policies whose performance is arbitrarily close to the solution of a version of the same problem that does not involve uncertainties. We have also shown that for a certain class of problems wherein the leader observes a linear combination of the agents' decisions, the leader can achieve the performance he would obtain if he had observed each decision separately.
Swallowing Disorders in Schizophrenia.
Kulkarni, Deepika P; Kamath, Vandan D; Stewart, Jonathan T
2017-08-01
Disorders of swallowing are poorly characterized but quite common in schizophrenia. They are a source of considerable morbidity and mortality in this population, generally as a result of either acute asphyxia from airway obstruction or more insidious aspiration and pneumonia. The death rate from acute asphyxia may be as high as one hundred times that of the general population. Most swallowing disorders in schizophrenia seem to fall into one of two categories, changes in eating and swallowing due to the illness itself and changes related to psychotropic medications. Behavioral changes related to the illness are poorly understood and often involve eating too quickly or taking inappropriately large boluses of food. Iatrogenic problems are mostly related to drug-induced extrapyramidal side effects, including drug-induced parkinsonism, dystonia, and tardive dyskinesia, but may also include xerostomia, sialorrhea, and changes related to sedation. This paper will provide an overview of common swallowing problems encountered in patients with schizophrenia, their pathophysiology, and management. While there is a scarcity of quality evidence in the literature, a thorough history and examination will generally elucidate the predominant problem or problems, often leading to effective management strategies.
Residence arrangements and well-being: a study of Norwegian adolescents.
Naevdal, Folkvard; Thuen, Frode
2004-11-01
The purpose of this study was to assess any differences in psychosocial problems among adolescents living with both parents, or with their mother or their father. Any benefits of living with a same-sex parent compared to a parent of the opposite sex were also analysed. A total of 1,686 adolescents aged 14-15 years participated from 29 schools in Hordaland county, including schools in downtown Bergen and more rural areas. The findings revealed significantly more psychosocial problems among the adolescents living with one parent compared to both parents. Significant differences were also observed between adolescents living in mother custody compared to father custody, indicating more problems among the latter group. Furthermore, girls living with their father had significantly higher levels of psychological symptoms, compared to boys in father custody. Similarly, boys living with their father were involved in more stealing behavior than girls in father custody. However, residence arrangement accounted for only a limited proportion of the variance in the adolescents' psychosocial problems, indicating large within-group variance and overlap between the different custody groups.
Yousef, Said; Eapen, Valsamma; Zoubeidi, Taoufik; Mabrouk, Abdelazim
2014-08-01
Television viewing and videogame use (TV/VG) appear to be associated with some childhood behavioral problems. There are no studies addressing this problem in the United Arab Emirates. One hundred ninety-seven school children (mean age, 8.7 ± 2.1 years) were assessed. Child Behavior Checklist (CBCL) subscale scores and socio-demographic characteristics were compared between children who were involved with TV/VG more than 2 hours/day and those involved less than 2 hours/day (the upper limit recommended by The American Academy of Pediatrics). Thirty-seven percent of children were involved with TV/VG for more than 2 hours/day; these children scored significantly higher on CBCL syndrome scales of withdrawn, social problems, attention problems, delinquent behavior, aggressive behavior, internalizing problems, externalizing problems and the CBCL total scores compared with their counterparts. Moreover, these children were younger in birth order and had fewer siblings. After controlling for these confounders using logistic regression, we found that TV/VG time of more than 2 hours/day was positively associated with withdrawn (p = 0.008), attention problems (p = 0.037), externalizing problems (p = 0.007), and CBCL total (p = 0.014) scores. Involvement with TV/VG for more than 2 hours/day is associated with more childhood behavioral problems. Counteracting the negative effects of over-involvement with TV/VG in children requires increased parental awareness.
Impact induced depolarization of ferroelectric materials
NASA Astrophysics Data System (ADS)
Agrawal, Vinamra; Bhattacharya, Kaushik
2018-06-01
We study the large deformation dynamic behavior and the associated nonlinear electro-thermo-mechanical coupling exhibited by ferroelectric materials in adiabatic environments. This is motivated by a ferroelectric generator which involves pulsed power generation by loading the ferroelectric material with a shock, either by impact or a blast. Upon impact, a shock wave travels through the material inducing a ferroelectric to nonpolar phase transition giving rise to a large voltage difference in an open circuit situation or a large current in a closed circuit situation. In the first part of this paper, we provide a general continuum mechanical treatment of the situation assuming a sharp phase boundary that is possibly charged. We derive the governing laws, as well as the driving force acting on the phase boundary. In the second part, we use the derived equations and a particular constitutive relation that describes the ferroelectric to nonpolar phase transition to study a uniaxial plate impact problem. We develop a numerical method where the phase boundary is tracked but other discontinuities are captured using a finite volume method. We compare our results with experimental observations to find good agreement. Specifically, our model reproduces the observed exponential rise of charge as well as the resistance dependent Hugoniot. We conclude with a parameter study that provides detailed insight into various aspects of the problem.
ERIC Educational Resources Information Center
Logan-Greene, Patricia; Tennyson, Robert L.; Nurius, Paula S.; Borja, Sharon
2017-01-01
Background: Mental health problems are gaining attention among court-involved youth with emphasis on the role of childhood adversity, but assessment lags. Objective: The present study uses a commonly delivered assessment tool to examine mental health problems (current mental health problem, mental health interfered with probation goals, and…
Phenomenological theory of collective decision-making
NASA Astrophysics Data System (ADS)
Zafeiris, Anna; Koman, Zsombor; Mones, Enys; Vicsek, Tamás
2017-08-01
An essential task of groups is to provide efficient solutions for the complex problems they face. Indeed, considerable efforts have been devoted to the question of collective decision-making related to problems involving a single dominant feature. Here we introduce a quantitative formalism for finding the optimal distribution of the group members' competences in the more typical case when the underlying problem is complex, i.e., multidimensional. Thus, we consider teams that are aiming at obtaining the best possible answer to a problem having a number of independent sub-problems. Our approach is based on a generic scheme for the process of evaluating the proposed solutions (i.e., negotiation). We demonstrate that the best performing groups have at least one specialist for each sub-problem - but a far less intuitive result is that finding the optimal solution by the interacting group members requires that the specialists also have some insight into the sub-problems beyond their unique field(s). We present empirical results obtained by using a large-scale database of citations being in good agreement with the above theory. The framework we have developed can easily be adapted to a variety of realistic situations since taking into account the weights of the sub-problems, the opinions or the relations of the group is straightforward. Consequently, our method can be used in several contexts, especially when the optimal composition of a group of decision-makers is designed.
Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.
Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
2016-01-01
Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
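The second stage described above, predicting a suitable heuristic from instance features, can be sketched as a nearest-neighbour vote in feature space. The feature vectors and heuristic labels below are placeholders, not the paper's actual instance characterization.

```python
# Sketch of the second stage described above: recommend a variable-ordering
# heuristic for a new CSP instance from its nearest neighbours in feature
# space. Feature vectors and heuristic labels are placeholder assumptions.
import numpy as np

# (instance features, best-performing heuristic observed in stage one)
train_features = np.array([[0.2, 0.9], [0.3, 0.8], [0.7, 0.2], [0.8, 0.3]])
train_best = ["min-domain", "min-domain", "max-degree", "max-degree"]

def recommend(features, k=3):
    d = np.linalg.norm(train_features - features, axis=1)
    votes = [train_best[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)        # majority vote

print(recommend(np.array([0.25, 0.85])))           # -> "min-domain"
```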
Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells
2014-01-01
A variational principle for microtubules subject to a buckling load is derived by the semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of the filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem are obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of the nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solution, and furthermore variational principles can provide physical insight into the problem. PMID:25214886
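In its generic form (the paper's functional for the nonlocal orthotropic shell is considerably more involved), a buckling Rayleigh quotient balances stored strain energy against the work done through the load:

```latex
% Generic buckling Rayleigh quotient (an illustrative sketch only, not the
% paper's specific functional):
\[
  P_{\mathrm{cr}} \;=\; \min_{w \neq 0} \, \frac{U(w)}{V(w)},
\]
% where U(w) is the elastic strain energy of the shell plus the surrounding
% filament network, V(w) is the geometric (work) functional of the
% compressive load, and the minimum is taken over kinematically admissible
% buckling modes w. Any trial mode gives an upper bound on P_cr, which is
% what makes the formulation useful for approximate and numerical methods.
```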
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajamani, S.
The leather industry is an important export-oriented industry in India, with more than 3,000 tanneries located in different clusters. Sodium sulfide, a toxic chemical, is used in large quantities to remove hair and excess flesh from hides and skins. Most of the sodium sulfide used in the process is discharged as waste in the effluent, which causes serious environmental problems. Reduction of sulfide in the effluent is generally achieved by means of chemicals in the pretreatment system, which involves aerobic mixing using large amounts of chemicals and high energy, and generating large volumes of sludge. A simple biotechnological system that uses the residual biosludge from the secondary settling tank was developed, and the commercial-scale application established that more than 90% of the sulfide could be reduced in the primary treatment system. In addition to the reduction of sulfide, foul smells, BOD and COD are reduced to a considerable level. 3 refs., 2 figs., 1 tab.
Artificial fluid properties for large-eddy simulation of compressible turbulent mixing
NASA Astrophysics Data System (ADS)
Cook, Andrew W.
2007-05-01
An alternative methodology is described for large-eddy simulation (LES) of flows involving shocks, turbulence, and mixing. In lieu of filtering the governing equations, it is postulated that the large-scale behavior of a LES fluid, i.e., a fluid with artificial properties, will be similar to that of a real fluid, provided the artificial properties obey certain constraints. The artificial properties consist of modifications to the shear viscosity, bulk viscosity, thermal conductivity, and species diffusivity of a fluid. The modified transport coefficients are designed to damp out high wavenumber modes, close to the resolution limit, without corrupting lower modes. Requisite behavior of the artificial properties is discussed and results are shown for a variety of test problems, each designed to exercise different aspects of the models. When combined with a tenth-order compact scheme, the overall method exhibits excellent resolution characteristics for turbulent mixing, while capturing shocks and material interfaces in a crisp fashion.
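The key mechanism, transport coefficients built from a high-order derivative of the solution so that damping is confined to the highest resolved wavenumbers, can be shown in one dimension. The constant, derivative order, and test fields below are illustrative assumptions, not the paper's calibrated model.

```python
# 1D sketch of the artificial-property mechanism: derive a viscosity from a
# high-order derivative of the field so it is negligible where the solution
# is smooth and large only near the resolution limit. C, r, and the test
# fields are illustrative assumptions, not the paper's calibrated model.
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n

def deriv(f, order):
    """Spectral derivative of a periodic field (stands in for the
    high-order finite-difference operator of an actual scheme)."""
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

r, C = 4, 0.01                       # even derivative order, tuning constant
for label, f in [("smooth field            ", np.sin(x)),
                 ("plus marginal wavenumber", np.sin(x) + 0.05 * np.sin(30 * x))]:
    mu_art = C * dx ** (r + 1) * np.abs(deriv(f, r))
    print(label, " max artificial viscosity =", mu_art.max())
```

Because the coefficient scales with a high power of the grid spacing, it vanishes under refinement for smooth fields, leaving the low wavenumbers uncorrupted, which is the stated design constraint.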
ASDF: A New Adaptable Data Format for Seismology Suitable for Large-Scale Workflows
NASA Astrophysics Data System (ADS)
Krischer, L.; Smith, J. A.; Spinuso, A.; Tromp, J.
2014-12-01
Increases in the amounts of available data as well as computational power open the possibility to tackle ever larger and more complex problems. This comes with a slew of new problems, two of which are the need for a more efficient use of available resources and a sensible organization and storage of the data. Both need to be satisfied in order to properly scale a problem and both are frequent bottlenecks in large seismic inversions using ambient noise or more traditional techniques. We present recent developments and ideas regarding a new data format, named ASDF (Adaptable Seismic Data Format), for all branches of seismology aiding with the aforementioned problems. The key idea is to store all information necessary to fully understand a set of data in a single file. This enables the construction of self-explaining and exchangeable data sets facilitating collaboration on large-scale problems. We incorporate the existing metadata standards FDSN StationXML and QuakeML together with waveform and auxiliary data into a common container based on the HDF5 standard. A further critical component of the format is the storage of provenance information as an extension of W3C PROV, meaning information about the history of the data, assisting with the general problem of reproducibility. Applications of the proposed new format are numerous. In the context of seismic tomography it enables the full description and storage of synthetic waveforms including information about the used model, the solver, the parameters, and other variables that influenced the final waveforms. Furthermore, intermediate products like adjoint sources, cross correlations, and receiver functions can be described and most importantly exchanged with others. Usability and tool support is crucial for any new format to gain acceptance and we additionally present a fully functional implementation of this format based on Python and ObsPy. It offers a convenient way to discover and analyze data sets as well as making it trivial to execute processing functionality on modern high performance machines utilizing parallel I/O even for users not familiar with the details. An open-source development and design model as well as a wiki aim to involve the community.
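The container idea, waveforms, station metadata, and provenance all inside one self-describing HDF5 file, can be sketched with h5py. The group layout below is a simplification invented for illustration; it is not the actual ASDF specification, whose reference implementation is the ObsPy-based tooling mentioned above.

```python
# Toy sketch of a self-describing seismic container: waveform samples,
# station metadata, and provenance in one HDF5 file. The layout is a
# simplification for illustration, not the actual ASDF specification.
import numpy as np
import h5py

samples = np.random.default_rng(4).standard_normal(3600 * 20).astype("float32")

with h5py.File("example.h5", "w") as f:
    grp = f.create_group("Waveforms/IU.ANMO")          # one group per station
    ds = grp.create_dataset("BHZ", data=samples, compression="gzip")
    ds.attrs["sampling_rate"] = 20.0                   # metadata rides with data
    ds.attrs["starttime"] = "2014-01-01T00:00:00"
    # Station metadata and provenance live in the same file as raw documents.
    f.create_dataset("StationXML/IU.ANMO",
                     data=np.frombuffer(b"<FDSNStationXML.../>", dtype="uint8"))
    f.attrs["provenance"] = "toy record; a W3C PROV document would go here"

with h5py.File("example.h5", "r") as f:
    ds = f["Waveforms/IU.ANMO/BHZ"]
    print(ds.shape, float(ds.attrs["sampling_rate"]))
```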
Laird, Robert D.; Jordan, Kristi Y.; Dodge, Kenneth A.; Pettit, Gregory S.; Bates, John E.
2009-01-01
A longitudinal, prospective design was used to examine the roles of peer rejection in middle childhood and antisocial peer involvement in early adolescence in the development of adolescent externalizing behavior problems. Both early starter and late starter pathways were considered. Classroom sociometric interviews from ages 6 through 9 years, adolescent reports of peers' behavior at age 13 years, and parent, teacher, and adolescent self-reports of externalizing behavior problems from age 5 through 14 years were available for 400 adolescents. Results indicate that experiencing peer rejection in elementary school and greater involvement with antisocial peers in early adolescence are correlated but that these peer relationship experiences may represent two different pathways to adolescent externalizing behavior problems. Peer rejection experiences, but not involvement with antisocial peers, predict later externalizing behavior problems when controlling for stability in externalizing behavior. Externalizing problems were most common when rejection was experienced repeatedly. Early externalizing problems did not appear to moderate the relation between peer rejection and later problem behavior. Discussion highlights multiple pathways connecting externalizing behavior problems from early childhood through adolescence with peer relationship experiences in middle childhood and early adolescence. PMID:11393650
De Visscher, Alice; Vogel, Stephan E; Reishofer, Gernot; Hassler, Eva; Koschutnig, Karl; De Smedt, Bert; Grabner, Roland H
2018-05-15
In the development of math ability, a large variability of performance in solving simple arithmetic problems is observed and has not found a compelling explanation yet. One robust effect in simple multiplication facts is the problem size effect, indicating better performance for small problems compared to large ones. Recently, behavioral studies brought to light another effect in multiplication facts, the interference effect. That is, high interfering problems (receiving more proactive interference from previously learned problems) are more difficult to retrieve than low interfering problems (in terms of physical feature overlap, namely the digits, De Visscher and Noël, 2014). At the behavioral level, the sensitivity to the interference effect is shown to explain individual differences in the performance of solving multiplications in children as well as in adults. The aim of the present study was to investigate the individual differences in multiplication ability in relation to the neural interference effect and the neural problem size effect. To that end, we used a paradigm developed by De Visscher, Berens, et al. (2015) that contrasts the interference effect and the problem size effect in a multiplication verification task, during functional magnetic resonance imaging (fMRI) acquisition. Forty-two healthy adults, who showed high variability in an arithmetic fluency test, participated in our fMRI study. In order to control for the general reasoning level, the IQ was taken into account in the individual differences analyses. Our findings revealed a neural interference effect linked to individual differences in multiplication in the left inferior frontal gyrus, while controlling for the IQ. This interference effect in the left inferior frontal gyrus showed a negative relation with individual differences in arithmetic fluency, indicating a higher interference effect for low performers compared to high performers. This region is suggested in the literature to be involved in resolution of proactive interference. Besides, no correlation between the neural problem size effect and multiplication performance was found. This study supports the idea that the interference due to similarities/overlap of physical traits (the digits) is crucial in memorizing arithmetic facts and in determining individual differences in arithmetic. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Kirkhaug, Bente; Drugli, May Britt; Klockner, Christian A.; Morch, Willy-Tore
2013-01-01
The present study examined the factor structure of the Teacher Involvement Questionnaire (Involve-T) by means of exploratory factor analysis and examined the association between children's socio-emotional and behavioural problems and teacher-reported parental involvement in school, using structural equation modelling. The study was conducted with…
Navier-Stokes relaxation to sinh-Poisson states at finite Reynolds numbers
NASA Technical Reports Server (NTRS)
Montgomery, David; Shan, Xiaowen; Matthaeus, William H.
1993-01-01
A mathematical framework is proposed in which it seems possible to justify the computationally-observed relaxation of a two-dimensional Navier-Stokes fluid to a 'most probable', or maximum entropy, state. The relaxation occurs at large but finite Reynolds numbers, and involves substantial decay of higher-order ideal invariants such as enstrophy. A two-fluid formulation, involving interpenetrating positive and negative vorticity fluxes (continuous and square integrable) is developed, and is shown to be intimately related to the passive scalar decay problem. Increasing interpenetration of the two fluids corresponds to the decay of vorticity flux due to viscosity. It is demonstrated numerically that, in two dimensions, passive scalars decay rapidly, relative to mean-square vorticity (enstrophy). This observation provides a basis for assigning initial data to the two-fluid field variables.
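Written schematically (sign and normalization conventions vary across the literature), the 'most probable' relaxed state referred to here is one in which the vorticity is a sinh function of the stream function alone:

```latex
% Schematic form of the sinh-Poisson ('most probable') state; signs and
% normalizations vary by convention across the literature.
\[
  \omega \;=\; \nabla^{2}\psi \;=\; -\lambda^{2}\,\sinh(\beta\psi),
\]
% i.e. the relaxed vorticity field omega is a nonlinear function of the
% stream function psi alone, the hallmark of a maximum-entropy equilibrium.
```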
Meyers, Kathleen; Kaynak, Övgü; Clements, Irene; Bresani, Elena; White, Tammy
2014-01-01
Adolescents involved with foster care are five times more likely to receive a drug dependence diagnosis when compared to adolescents in the general population. Prior research has shown that substance use is often hidden from providers, negating any chance for treatment and almost guaranteeing poor post-foster care outcomes. There are virtually no studies that examine the willingness (and its determinants) to foster youth with substance abuse problems. The current study conducted a nationally-distributed survey of 752 currently licensed foster care parents that assessed willingness to foster youth overall and by type of drug used, and possible correlates of this decision (e.g., home factors, system factors, and individual foster parent factors such as ratings of perceived difficulty in fostering this population). Overall, willingness to foster a youth involved with alcohol and other drugs (AOD) was contingent upon the types of drugs used. The odds that a parent would foster an AOD-involved youth were significantly increased by being licensed as a treatment foster home, having fostered an AOD-involved youth in the past, having AOD-specific training and past agency support when needed, and self-efficacy with respect to positive impact. Surprisingly, when religion played a large part in the decision to foster any child, the odds of willingness to foster an AOD-involved youth dropped significantly. These results suggest that a large proportion of AOD-involved youth who find themselves in the foster care system will not have foster families willing to parent them, thereby forcing placement into a variety of congregate care facilities (e.g., residential treatment facilities, group homes). Specific ways in which the system can address these issues to improve placement and permanency efforts are provided. PMID:25878368
Studies in support of an SNM cutoff agreement: The PUREX exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanbro, W.D.; Libby, R.; Segal, J.
1995-07-01
On September 23, 1993, President Clinton, in a speech before the United Nations General Assembly, called for an international agreement banning the production of plutonium and highly enriched uranium for nuclear explosive purposes. A major element of any verification regime for such an agreement would probably involve inspections of reprocessing plants in Nuclear Nonproliferation Treaty weapons states. Many of these are large facilities built in the 1950s with no thought that they would be subject to international inspection. To learn about some of the problems that might be involved in the inspection of such large, old facilities, the Department of Energy, Office of Arms Control and Nonproliferation, sponsored a mock inspection exercise at the PUREX plant on the Hanford Site. This exercise examined a series of alternatives for inspections of the PUREX as a model for this type of facility at other locations. A series of conclusions were developed that can be used to guide the development of verification regimes for a cutoff agreement at reprocessing facilities.
Crash risk factors for interstate large trucks in North Carolina.
Teoh, Eric R; Carter, Daniel L; Smith, Sarah; McCartt, Anne T
2017-09-01
Provide an updated examination of risk factors for large truck involvements in crashes resulting in injury or death. A matched case-control study was conducted in North Carolina of large trucks operated by interstate carriers. Cases were defined as trucks involved in crashes resulting in fatal or non-fatal injury, and one control truck was matched on the basis of location, weekday, time of day, and truck type. The matched-pair odds ratio provided an estimate of the effect of various driver, vehicle, or carrier factors. Out-of-service (OOS) brake violations tripled the risk of crashing; any OOS vehicle defect increased crash risk by 362%. Higher historical crash rates (fatal, injury, or all crashes) of the carrier were associated with increased risk of crashing. Operating on a short-haul exemption increased crash risk by 383%. Antilock braking systems reduced crash risk by 65%. All of these results were statistically significant at the 95% confidence level. Other safety technologies also showed estimated benefits, although not statistically significant. With the exception of the finding that short-haul exemption is associated with increased crash risk, results largely bolster what is currently known about large truck crash risk and reinforce current enforcement practices. Results also suggest vehicle safety technologies can be important in lowering crash risk. This means that as safety technology continues to penetrate the fleet, whether from voluntary usage or government mandates, reductions in large truck crashes may be achieved. Practical application: Results imply that increased enforcement and use of crash avoidance technologies can improve the large truck crash problem. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
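For 1:1 matched case-control data of this kind, the matched-pair odds ratio reduces to the ratio of discordant pairs. A toy calculation with invented counts (not the study's data):

```python
# Matched-pair odds ratio for 1:1 matched case-control data: only pairs
# discordant on the risk factor are informative, and the OR is their ratio.
# The counts below are invented for illustration, not the study's data.
case_only = 30     # pairs where only the crash-involved truck had the factor
control_only = 10  # pairs where only the matched control truck had it
odds_ratio = case_only / control_only
print(f"matched-pair OR = {odds_ratio:.1f}")   # 3.0 ~ 'tripled the risk'
```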
Data management in pattern recognition and image processing systems
NASA Technical Reports Server (NTRS)
Zobrist, A. L.; Bryant, N. A.
1976-01-01
Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.
Disruptive innovation: a new diagnosis for health care's "financial flu".
Kenagy, John W; Christensen, Clayton M
2002-05-01
Financial difficulties are threatening many healthcare organizations. To survive and target new markets of growth, strategic decision makers need to adapt existing business frameworks using the principle of disruptive innovation, which involves framing financial problems in a manner that incorporates changes in the marketplace and redefines solutions. Rather than emphasizing technological advances to capture shrinking, highly competitive markets, healthcare providers should consider the possibility of reaching largely untapped sources of revenue through a service line that is more convenient and less costly to consumers with less intensive needs.
The Design of an Ultra High Capacity Long Range Transport Aircraft
NASA Technical Reports Server (NTRS)
Weisshaar, Terrence A.; Bucci, Gregory; Hare, Angela; Szolwinski, Matthew
1993-01-01
This paper examines the design of a 650 passenger aircraft with 8000 nautical mile range to reduce seat mile cost and to reduce airport and airway congestion. This design effort involves the usual issues that require trades between technologies, but must also include consideration of: airport terminal facilities; passenger loading and unloading; and, defeating the 'square-cube' law to design large structures. This paper will review the long range ultra high capacity or megatransport design problem and the variety of solutions developed by senior student design teams at Purdue University.
Fate of Trace Metals in Anaerobic Digestion.
Fermoso, F G; van Hullebusch, E D; Guibaud, G; Collins, G; Svensson, B H; Carliell-Marquet, C; Vink, J P M; Esposito, G; Frunzo, L
2015-01-01
A challenging, and largely uncharted, area of research in the field of anaerobic digestion science and technology is in understanding the roles of trace metals in enabling biogas production. This is a major knowledge gap and a multifaceted problem involving metal chemistry; physical interactions of metal and solids; microbiology; and technology optimization. Moreover, the fate of trace metals, and the chemical speciation and transport of trace metals in environments--often agricultural lands receiving discharge waters from anaerobic digestion processes--simultaneously represents challenges for environmental protection and opportunities to close process loops in anaerobic digestion.
Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction
NASA Technical Reports Server (NTRS)
Padovan, Joseph; Krishna, Lala; Gute, Douglas
1997-01-01
Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves the partitioning of thermal models into several substructured levels wherein an optimal balance among the associated bandwidths is achieved. The details are described in this report. The report is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
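The substructuring idea can be illustrated at its smallest scale: split a 1D steady conduction grid into two substructures, condense each onto the shared interface node independently (the parallelizable step), solve the small interface system, and back-substitute. The following is a two-level toy sketch, not the report's multilevel scheme.

```python
# Toy illustration of substructuring: split a 1D steady heat-conduction grid
# into two substructures, eliminate their interior unknowns independently
# (the parallelizable step), and solve a small interface system. This is a
# two-level sketch, not the report's multilevel scheme.
import numpy as np

n = 9                                   # interior unknowns; node 4 = interface
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # -d2T/dx2, Dirichlet ends
b = np.full(n, 0.01)                    # uniform heat source

I1, I2, G = list(range(0, 4)), list(range(5, 9)), [4]  # subdomains + interface

def schur_parts(I):
    """Condense one substructure onto the interface (independent per subdomain)."""
    Aii = A[np.ix_(I, I)]
    Aig = A[np.ix_(I, G)]
    x = np.linalg.solve(Aii, np.column_stack([Aig, b[I]]))
    return Aig.T @ x[:, :1], Aig.T @ x[:, 1]           # contributions to S, rhs

S, rhs = A[np.ix_(G, G)].astype(float), b[G].astype(float)
for I in (I1, I2):
    dS, drhs = schur_parts(I)
    S -= dS
    rhs -= drhs

Tg = np.linalg.solve(S, rhs)            # small interface system
T = np.empty(n)
T[G] = Tg
for I in (I1, I2):                      # back-substitute the interiors
    T[I] = np.linalg.solve(A[np.ix_(I, I)], b[I] - A[np.ix_(I, G)] @ Tg)

assert np.allclose(A @ T, b)            # matches the direct solve exactly
print(T)
```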
Gillis, Artha J; Bath, Eraka
2016-01-01
There is a large proportion of minority youth involved in the juvenile justice system. Disproportionate minority contact (DMC) occurs when the proportion of any ethnic group is higher at any given stage in the juvenile justice process than the proportion of this group in the general population. There are several theories explaining the presence and persistence of DMC. This article reviews the history of DMC and the theories and implications of this problem. It discusses several targets for interventions designed to reduce DMC and offer resources in this area. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA management of the Space Shuttle Program
NASA Technical Reports Server (NTRS)
Peters, F.
1975-01-01
The management system and management technology described have been developed to meet stringent cost and schedule constraints of the Space Shuttle Program. Management of resources available to this program requires control and motivation of a large number of efficient creative personnel trained in various technical specialties. This must be done while keeping track of numerous parallel, yet interdependent activities involving different functions, organizations, and products all moving together in accordance with intricate plans for budgets, schedules, performance, and interaction. Some techniques developed to identify problems at an early stage and seek immediate solutions are examined.
Design of joint source/channel coders
NASA Technical Reports Server (NTRS)
1991-01-01
The need to transmit large amounts of data over a band-limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems through a number of different approaches.
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.
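A linear autoencoder trained with squared error learns the same subspace as principal component analysis, so block-wise PCA makes a compact stand-in sketch for the transform-coder idea; it is the linear limiting case, not the NNCTC itself, and the image below is a random placeholder.

```python
# Stand-in sketch for a block transform coder: a linear autoencoder with
# squared-error loss learns the principal subspace of the data, so
# block-wise PCA illustrates the same compression idea. This is the linear
# limiting case, not the NNCTC itself; the image is a random placeholder.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))

B, K = 8, 8                   # 8x8 blocks, keep 8 of 64 coefficients (8:1)
blocks = (image.reshape(256 // B, B, 256 // B, B)
               .transpose(0, 2, 1, 3).reshape(-1, B * B))

mean = blocks.mean(axis=0)
U, s, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
basis = Vt[:K]                                   # learned "encoder" rows

codes = (blocks - mean) @ basis.T                # encode: 64 -> K numbers
recon = codes @ basis + mean                     # decode

err = np.sqrt(np.mean((blocks - recon) ** 2))
print(f"compression {B*B}->{K} coefficients per block, RMS error {err:.4f}")
```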
Application of ERTS-1 imagery in the Vermont-New York dispute over pollution of Lake Champlain
NASA Technical Reports Server (NTRS)
Lind, A. O. (Principal Investigator)
1973-01-01
The author has identified the following significant results. ERTS-1 imagery and a composite map derived from ERTS-1 imagery were presented as evidence in a U.S. Supreme Court case involving the pollution of an interstate water body (Lake Champlain). A pollution problem generated by a large paper mill forms the basis of the suit (Vermont vs. International Paper Co. and State of New York) and ERTS-1 imagery shows the effluent pattern on the lake surface as extending into Vermont during three different times.
Morgenstern, Jon; Hogue, Aaron; Dasaro, Christopher; Kuerbis, Alexis; Dauber, Sarah
2008-07-01
This study examined barriers to employability, motivation to abstain from substances and to work, and involvement in multiple service systems among male and female welfare applicants with alcohol- and drug-use problems. A representative sample (N = 1,431) of all persons applying for public assistance who screened positive for substance involvement over a 2-year period in a large urban county was recruited in welfare offices. Legal, education, general health, mental health, employment, housing, and child welfare barriers to employability were assessed, as were readiness to abstain from substance use and readiness to work. Only 1 in 20 participants reported no barrier other than substance use, whereas 70% reported at least two other barriers and 40% reported three or more. Moreover, 70% of participants experienced at least one additional barrier classified as "severe" and 30% experienced two or more. The number and type of barriers differed by gender. Latent class analysis revealed four main barriers-plus-readiness profiles among participants: (1) multiple barriers, (2) work experienced, (3) criminal justice, and (4) unstable housing. Findings suggest that comprehensive coordination among social service systems is needed to address the complex problems of low-income Americans with substance-use disorders. Classifying applicants based on barriers and readiness is a promising approach to developing innovative welfare programs to serve the diverse needs of men and women with substance-related problems.
Rank-k modification methods for recursive least squares problems
NASA Astrophysics Data System (ADS)
Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam
1994-09-01
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
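SciPy exposes the row updating and downdating primitives this setting needs (one row at a time here; the paper's contribution is the rank-k versions that process several rows at once). A sketch on random data:

```python
# Row update/downdate of a QR-based least squares solution using SciPy's
# qr_insert/qr_delete (rank-1 per call; the paper's rank-k methods handle
# several rows at once). Data are random, purely for illustration.
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
Q, R = qr(A)                        # full QR of the initial data matrix

def ls_solve(Q, R, b):
    """Least squares solution from a full QR factorization."""
    n = R.shape[1]
    return np.linalg.solve(R[:n], (Q.T @ b)[:n])

# Update: a new observation row arrives (inserted at the top).
row, obs = rng.standard_normal(5), rng.standard_normal()
Q1, R1 = qr_insert(Q, R, row, 0, which='row')
b1 = np.concatenate(([obs], b))

# Downdate: drop the oldest observation (now at index 1).
Q2, R2 = qr_delete(Q1, R1, 1, which='row')
b2 = np.delete(b1, 1)

print(ls_solve(Q1, R1, b1))
print(ls_solve(Q2, R2, b2))
assert np.allclose(ls_solve(Q2, R2, b2),
                   np.linalg.lstsq(np.vstack([row, A[1:]]), b2, rcond=None)[0])
```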
How doctors learn: the role of clinical problems across the medical school-to-practice continuum.
Slotnick, H B
1996-01-01
The author proposes a theory of how physicians learn that uses clinical problem solving as its central feature. His theory, which integrates insights from Maslow, Schön, Norman, and others, claims that physicians-in-training and practicing physicians learn largely by deriving insights from clinical experience. These insights allow the learner to solve future problems and thereby address the learner's basic human needs for security, affiliation, and self-esteem. Ensuring that students gain such insights means that the proper roles of the teacher are (1) to select problems for students to solve and offer guidance on how to solve them, and (2) to serve as a role model of how to reflect on the problem, its solution, and the solution's effectiveness. Three principles guide instruction within this framework for learning: (1) learners, whether physicians-in-training or practicing physicians, seek to solve problems they recognize they have; (2) learners want to be involved in their own learning; and (3) instruction must both be time-efficient and also demonstrate the range of ways in which students can apply what they learn. The author concludes by applying the theory to an aspect of undergraduate education and to the general process of continuing medical education.
ERIC Educational Resources Information Center
Wareham, Todd
2017-01-01
In human problem solving, there is a wide variation between individuals in problem solution time and success rate, regardless of whether or not this problem solving involves insight. In this paper, we apply computational and parameterized analysis to a plausible formalization of extended representation change theory (eRCT), an integration of…
NASA Astrophysics Data System (ADS)
Strack, O. D. L.
2018-02-01
We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.
On the use of evidence in humanitarian logistics research.
Pedraza-Martinez, Alfonso J; Stapleton, Orla; Van Wassenhove, Luk N
2013-07-01
This paper presents the reflections of the authors on the differences between the language and the approach of practitioners and academics to humanitarian logistics problems. Based on a long-term project on fleet management in the humanitarian sector, involving both large international humanitarian organisations and academics, it discusses how differences in language and approach to such problems may create a lacuna that impedes trust. In addition, the paper provides insights into how academic research evidence adapted to practitioner language can be used to bridge the gap. When it is communicated appropriately, evidence strengthens trust between practitioners and academics, which is critical for long-term projects. Once practitioners understand the main trade-offs included in academic research, they can supply valuable feedback to motivate new academic research. Novel research problems promote innovation in the use of traditional academic methods, which should result in a win-win situation: relevant solutions for practice and advances in academic knowledge. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
The effects of maintaining temperature in annealing heat treatment for an FSWed 6061-T6 Al alloy.
Lee, Seung-Jun; Han, Min-Su; Kim, Seong-Jong
2013-08-01
The technological development of all kinds of lightweight transportation devices, including vehicles, aircraft, and ships, has progressed markedly with the demand for energy saving and environmental protection. Aluminum alloy is in the spotlight as a suitable environmentally friendly material. However, deformation is a major problem during the welding process because aluminum alloy has a large thermal expansion coefficient. In addition, although its corrosion resistance is known to be excellent, in practice considerable corrosion is generated, and this is a major problem. To solve this problem, the friction stir welding (FSW) technology is applied extensively in various industrial fields as a new welding technique. This method involves a process in which materials are joined by frictional heat and physical force. Therefore, we evaluated improvements in mechanical properties and corrosion resistance through annealing heat treatment after FSW. The electrochemical experiment did not show a significant difference. However, the microstructure observation showed defect-free, fine crystal grains, indicating excellent properties at 200-225°C.
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
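The Krylov structure described here, GMRES wrapped around a preconditioner, can be sketched with SciPy's sparse tooling. In the sketch below a generic diagonally dominant random matrix stands in for the transport operator, and an incomplete-LU factorization stands in for the TSA preconditioner.

```python
# Generic sketch of preconditioned GMRES: an incomplete-LU factorization
# stands in for the TSA preconditioner, and a random diagonally dominant
# sparse matrix stands in for the transport operator.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.random(n, n, density=5 / n, format='csr', random_state=3)
A = (A + sp.diags(abs(A).sum(axis=1).A1 + 1.0)).tocsr()   # ensure dominance
b = np.random.default_rng(3).standard_normal(n)

class IterCounter:
    """Counts iterations via the GMRES callback hook."""
    def __init__(self):
        self.n = 0
    def __call__(self, _):
        self.n += 1

plain = IterCounter()
x, info = spla.gmres(A, b, callback=plain, callback_type='pr_norm')

ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)        # incomplete factorization
M = spla.LinearOperator((n, n), ilu.solve)        # apply as preconditioner
prec = IterCounter()
x, info = spla.gmres(A, b, M=M, callback=prec, callback_type='pr_norm')

print("GMRES iterations, plain vs ILU-preconditioned:", plain.n, "vs", prec.n)
```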
Improving patient safety: lessons from rock climbing.
Robertson, Nic
2012-02-01
How to improve patient safety remains an intractable problem, despite large investment and some successes. Academics have argued that the root of the problem is a lack of a comprehensive 'safety culture' in hospitals. Other safety-critical industries such as commercial aviation invest heavily in staff training to develop such a culture, but comparable programmes are almost entirely absent from the health care sector. In rock climbing and many other dangerous activities, the 'buddy system' is used to ensure that safety systems are adhered to despite adverse circumstances. This system involves two or more people using simple checks and clear communication to prevent problems causing harm. Using this system as an example could provide a simple, original and entertaining way of introducing medical students to the idea that human factors are central to ensuring patient safety. Teaching the buddy system may improve understanding and acceptance of other patient safety initiatives, and could also be used by junior doctors as a tool to improve the safety of their practice. © Blackwell Publishing Ltd 2012.
Lee, J D; Tofighi, B; McDonald, R; Campbell, A; Hu, M C; Nunes, E
2017-12-01
The acceptability, feasibility and effectiveness of web-based interventions among criminal justice involved populations are understudied. This study is a secondary analysis of baseline characteristics associated with criminal justice system (CJS) status as treatment outcome moderators among participants enrolling in a large randomized trial of a web-based psychosocial intervention (Therapeutic Education System [TES]) as part of outpatient addiction treatment. We compared demographic and clinical characteristics, TES participation rates, and the trial's two co-primary outcomes, end of treatment abstinence and treatment retention, by self-reported CJS status at baseline: 1) CJS-mandated to community treatment (CJS-mandated), 2) CJS-recommended to treatment (CJS-recommended), 3) no CJS treatment mandate (CJS-none). CJS-mandated (n = 107) and CJS-recommended (n = 69) participants differed from CJS-none (n = 331) at baseline: CJS-mandated were significantly more likely to be male, uninsured, report cannabis as the primary drug problem, report fewer days of drug use at baseline, screen negative for depression, and score lower for psychological distress and higher on physical health status; CJS-recommended were younger, more likely single, less likely to report no regular Internet use, and to report cannabis as the primary drug problem. Both CJS-involved (CJS-recommended and -mandated) groups were more likely to have been recently incarcerated. Among participants randomized to the TES arm, module completion was similar across the CJS subgroups. A three-way interaction of treatment, baseline abstinence and CJS status showed no associations with the study's primary abstinence outcome. Overall, CJS-involved participants in this study tended to be young, male, and in treatment for a primary cannabis problem. The feasibility and effectiveness of the web-based psychosocial intervention, TES, did not vary by CJS-mandated or CJS-recommended participants compared to CJS-none. Web-based counseling interventions may be effective interventions as US public safety policies begin to emphasize supervised community drug treatment over incarceration.
Problem solving therapy - use and effectiveness in general practice.
Pierce, David
2012-09-01
Problem solving therapy (PST) is one of the focused psychological strategies supported by Medicare for use by appropriately trained general practitioners. This article reviews the evidence base for PST and its use in the general practice setting. Problem solving therapy involves patients learning or reactivating problem solving skills. These skills can then be applied to specific life problems associated with psychological and somatic symptoms. Problem solving therapy is suitable for use in general practice for patients experiencing common mental health conditions and has been shown to be as effective in the treatment of depression as antidepressants. Problem solving therapy involves a series of sequential stages. The clinician assists the patient to develop new empowering skills, and then supports them to work through the stages of therapy to determine and implement the solution selected by the patient. Many experienced GPs will identify their own existing problem solving skills. Learning about PST may involve refining and focusing these skills.
Colorless top partners, a 125 GeV Higgs boson, and the limits on naturalness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdman, Gustavo; Chacko, Zackaria; Harnik, Roni
2015-03-09
Theories of physics beyond the Standard Model that address the hierarchy problem generally involve top partners, new particles that cancel the quadratic divergences associated with the Yukawa coupling of the Higgs to the top quark. With extensions of the Standard Model that involve new colored particles coming under strain from collider searches, scenarios in which the top partners carry no charge under the strong interactions have become increasingly compelling. Although elusive for direct searches, these theories predict modified couplings of the Higgs boson to the Standard Model particles. This results in corrections to the Higgs production and decay rates that can be detected at the Large Hadron Collider (LHC) provided the top partners are sufficiently light, and the theory correspondingly natural. In this paper we consider three theories that address the little hierarchy problem and involve colorless top partners, specifically the Mirror Twin Higgs, Folded Supersymmetry, and the Quirky Little Higgs. For each model we investigate the current and future bounds on the top partners, and the corresponding limits on naturalness, that can be obtained from the Higgs program at the LHC. Here, we conclude that the LHC will not be able to strongly disfavor naturalness, with mild tuning at the level of about one part in ten remaining allowed even with 3000 fb$^{-1}$ of data at 14 TeV.
Sturrock, Sarah; Hodes, Matthew
2016-12-01
In low- and middle-income countries, large numbers of children are involved in work. Whilst studies have shown that child labour may be harmful to children's physical health, little is known about its effects on mental health. It is important to understand the relationship between work and mental health problems during childhood, and to identify possible risk factors for poorer mental health. A systematic literature review was conducted. Published papers in any language that compared the mental health of children (<18 years) who had been exposed to work with those who had not were included. Twelve published observational studies on the association between child labour and general psychopathology, internalising and externalising problems were identified. Child labour was found to be strongly associated with poor mental health outcomes in seven studies. More significant associations were found between child labour and internalising problems than externalising problems. The burden of poor mental health as a result of child labour is significant given the numbers of children in work. Risk factors for poorer mental health were involvement in domestic labour, younger age, and greater intensity of work, which could be due to the potential of child labour to cause isolation, low self-esteem, and perception of an external locus of control. The risk factors suggested by this review have implications for policy makers. Additional research is needed in low-income countries, on risk factors, and on the potential psychological benefits of low levels of work.
Exclusions for resolving urban badger damage problems: outcomes and consequences.
Ward, Alastair I; Finney, Jason K; Beatham, Sarah E; Delahay, Richard J; Robertson, Peter A; Cowan, David P
2016-01-01
Increasing urbanisation and growth of many wild animal populations can result in a greater frequency of human-wildlife conflicts. However, traditional lethal methods of wildlife control are becoming less favoured than non-lethal approaches, particularly when problems involve charismatic species in urban areas. Eurasian badgers (Meles meles) excavate subterranean burrow systems (setts), which can become large and complex. Larger setts within which breeding takes place and that are in constant use are known as main setts. Smaller, less frequently occupied setts may also exist within the social group's range. When setts are excavated in urban environments they can undermine built structures and can limit or prevent safe use of the area by people. The most common approach to resolving these problems in the UK is to exclude badgers from the problem sett, but exclusions suffer a variable success rate. We studied 32 lawful cases of badger exclusions using one-way gates throughout England to evaluate conditions under which attempts to exclude badgers from their setts in urban environments were successful. We aimed to identify ways of modifying practices to improve the chances of success. Twenty of the 32 exclusion attempts were successful, but success was significantly less likely if a main sett was to be excluded in comparison with another type of sett and if vegetation was not completely removed from the sett surface prior to exclusion attempts. We recommend that during exclusions all vegetation is removed from the site, regardless of what type of sett is involved, and that successful exclusion of badgers from a main sett might require substantially more effort than other types of sett.
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that it can capture major features of the rupture process of large earthquakes and provide information for more detailed rupture history analysis.
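To make the sub-event parameterization concrete, here is a minimal sketch of that style of Bayesian sampling: a toy 1-D forward model of boxcar moment-rate pulses with random-walk Metropolis over (onset, duration, amplitude). The forward model, priors, and tuning values are invented for illustration and are not the authors' implementation.

```python
# Toy MHS-style inversion: fit two boxcar sub-events to a noisy series.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 300)

def forward(params):
    """Moment-rate series from sub-events given as (onset, duration, amplitude)."""
    s = np.zeros_like(t)
    for onset, dur, amp in params.reshape(-1, 3):
        s += amp * ((t >= onset) & (t < onset + dur))
    return s

true = np.array([2.0, 4.0, 1.0, 12.0, 6.0, 0.5])
data = forward(true) + 0.05 * rng.standard_normal(t.size)

def log_post(p):
    if np.any(p < 0) or np.any(p[0::3] > 30) or np.any(p[1::3] > 20):
        return -np.inf                      # flat prior with hard bounds
    resid = data - forward(p)
    return -0.5 * np.sum(resid**2) / 0.05**2

p = np.array([1.0, 3.0, 0.8, 10.0, 5.0, 0.4])   # starting model
lp = log_post(p)
samples = []
for _ in range(20000):
    q = p + 0.1 * rng.standard_normal(p.size)   # random-walk proposal
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:          # Metropolis accept/reject
        p, lp = q, lq
    samples.append(p.copy())
post = np.array(samples[5000:])                 # discard burn-in
print("posterior mean:", post.mean(axis=0))     # near the true parameters
```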
MEvoLib v1.0: the first molecular evolution library for Python.
Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo
2016-10-28
Molecular evolution studies involve many computationally hard problems that are solved, in most cases, with heuristic algorithms providing a nearly optimal solution. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with the different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with existing bioinformatics libraries, MEvoLib is focused on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. Gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g., GenBank's feature data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope with even large-scale scenarios. MEvoLib is the first Python library designed to facilitate molecular evolution research for both expert and novice users. Its unique interface for each common task comprises several tools with their most used parameterizations. It also includes a method that takes advantage of biological knowledge to improve the gene partitioning of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.
Solving the problem of Trans-Genomic Query with alignment tables.
Parker, Douglass Stott; Hsiao, Ruey-Lung; Xing, Yi; Resch, Alissa M; Lee, Christopher J
2008-01-01
The trans-genomic query (TGQ) problem--enabling the free query of biological information, even across genomes--is a central challenge facing bioinformatics. Solutions to this problem can alter the nature of the field, moving it beyond the jungle of data integration and expanding the number and scope of questions that can be answered. An alignment table is a binary relationship on locations (sequence segments). An important special case of alignment tables is the hit table: a table of pairs of highly similar segments produced by alignment tools like BLAST. However, alignment tables also include general binary relationships, and can represent any useful connection between sequence locations. They can be curated, and provide a high-quality queryable backbone of connections between biological information. Alignment tables thus can be a natural foundation for TGQ, as they permit a central part of the TGQ problem to be reduced to purely technical problems involving tables of locations. Key challenges in implementing alignment tables include efficient representation and indexing of sequence locations. We define a location datatype that can be incorporated naturally into common off-the-shelf database systems. We also describe an implementation of alignment tables in BLASTGRES, an extension of the open-source POSTGRESQL database system that provides the indexing and operators on locations required for querying alignment tables. This paper also reviews several successful large-scale applications of alignment tables for trans-genomic query. Tables with millions of alignments have been used in queries about alternative splicing, an area of genomic analysis concerning the way in which a single gene can yield multiple transcripts. Comparative genomics is a large potential application area for TGQ and alignment tables.
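As an illustration of the idea, here is a minimal sketch of an alignment table as a binary relation on sequence locations, using SQLite via Python's standard library. The schema, table name, and rows are invented for the example; BLASTGRES adds a native location datatype and location indexing that plain SQLite lacks.

```python
# Alignment table sketch: locations are (sequence, start, end) triples,
# and a "hit table" row relates a source location to a target location.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE alignment (
    src_seq TEXT, src_start INTEGER, src_end INTEGER,
    dst_seq TEXT, dst_start INTEGER, dst_end INTEGER
);
CREATE INDEX idx_src ON alignment (src_seq, src_start, src_end);
""")
# Hypothetical hit-table rows, e.g. as produced by BLAST.
con.executemany("INSERT INTO alignment VALUES (?,?,?,?,?,?)", [
    ("chr1_human", 100, 250, "chr3_mouse", 4100, 4250),
    ("chr1_human", 300, 420, "chr3_mouse", 4400, 4520),
])
# Trans-genomic query: which mouse locations align to a human interval?
rows = con.execute("""
SELECT dst_seq, dst_start, dst_end FROM alignment
WHERE src_seq = ? AND src_end > ? AND src_start < ?
""", ("chr1_human", 150, 350)).fetchall()
print(rows)  # all alignments overlapping chr1_human:[150, 350)
```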
Yen, Cheng-Fang; Lin, I-Mei; Liu, Tai-Ling; Hu, Huei-Fan; Cheng, Chung-Ping
2014-08-01
This study aimed to examine the mediating effects of depression and anxiety on the relationships of bullying victimization and perpetration with pain among adolescents in Taiwan. A total of 4976 students of junior and senior high schools completed the questionnaires. Bullying victimization and perpetration, pain problems, depression, and anxiety were assessed. The mediating effects of depression and anxiety on the relationship between bullying involvement and pain problems, and the moderating effects of sex on the mediating roles of depression and anxiety, were examined using structural equation modeling. Both depression and anxiety were significant mediators of the relationship between bullying victimization and pain problems among adolescents. Depression was also a significant mediator of the relationship between bullying perpetration and pain problems. Sex had no moderating effect on the mediating role of depression/anxiety in the association between bullying involvement and pain problems. Medical and educational professionals should survey and intervene in depression and anxiety when managing pain problems among adolescents involved in bullying.
Interactive Engagement in the Large Lecture Environment
NASA Astrophysics Data System (ADS)
Dubson, Michael
Watching a great physics lecture is like watching a great piano performance. It can be inspiring, and it can give you insights, but it doesn't teach you to play piano. Students don't learn physics by watching expert professors perform at the board; they can only learn by practicing it themselves. Learning physics involves high-level thinking like formulating problem-solving strategies or explaining concepts to other humans. Learning is always messy, involving struggle, trial-and-error, and paradigm shifts. That learning struggle cannot be overcome with a more eloquent lecture; it can only be surmounted with prolonged, determined, active engagement by the student. I will demonstrate some techniques of active engagement, including clicker questions and in-class activities, which are designed to activate students' higher-level thinking, get them actively involved in their learning, and start them on the path of productive struggle. These techniques are scalable; they work in classrooms with 30 or 300 students. This talk about audience participation will involve audience participation, so please put down your phone and be ready for a challenge.
District nurses' involvement in mental health: an exploratory survey.
Lee, Soo; Knight, Denise
2006-04-01
This article reports on a survey of district nurses' (DNs') involvement in mental health interventions in one county. Seventy-nine questionnaires were sent and 46 were returned. Descriptive analysis was carried out using statistical software. The DNs reported encountering a wide range of mental health issues and interventions in practice: dementia, anxiety and depression featured highly. Over half (55%) of the respondents reported involvement in bereavement counselling, while 28% and 23% reported involvement in anxiety management and in problem solving and alcohol advice, respectively. A large proportion, however, reported no involvement in mental health interventions. Among psychiatric professionals, district nurses tended to have the most frequent contact with social workers. GPs were the people to whom DNs most often made referrals, followed by community psychiatric nurses. Despite an apparent awareness of the value of psychosocial interventions, DNs were equally influenced by the medical model of treatment. In order to realize the potential contribution of district nurses to mental health interventions, primary care teams need to foster a closer working relationship with mental health specialist services.
ERIC Educational Resources Information Center
McKellar, Susan; Coggans, Niall
1997-01-01
Surveyed social agencies' awareness of possible developmental problems of alcohol and substance abusers' children, extent to which agency felt it could deal with the problem involving the family, and development of services for children of substance abusers. Found that many agency workers considered involvement in family problems to be part of…
Can compactifications solve the cosmological constant problem?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics,Massachusetts Institute of Technology,77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali
2016-06-30
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluations. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the implementation simplicity of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integration with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate that the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
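A minimal sketch of the general point-implicit idea on a stiff scalar model problem, assuming the linear stiff term is taken implicitly (a closed-form solve, no iteration) while the slow forcing is simply evaluated at the new time level. This illustrates the unconditional stability exploited above; it is not the authors' reactor code.

```python
# Point-implicit update for du/dt = -lam*(u - g(t)): solving the implicit
# relation for u_new costs one extra function evaluation, no iteration.
import numpy as np

lam = 1.0e4                        # stiff relaxation rate
g = lambda t: np.sin(t)            # slowly varying driver

def point_implicit(u, t, dt):
    # (u_new - u)/dt = -lam*(u_new - g(t+dt))  =>  closed-form u_new
    return (u + dt * lam * g(t + dt)) / (1.0 + dt * lam)

u, t, dt = 0.0, 0.0, 0.5           # dt >> 2/lam: explicit Euler would blow up
for _ in range(40):
    u = point_implicit(u, t, dt)
    t += dt
print(u, g(t))                     # u tracks the slow solution g(t)
```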
Determination of optimal self-drive tourism route using the orienteering problem method
NASA Astrophysics Data System (ADS)
Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah
2013-04-01
This study was conducted to determine optimal travel routes for self-drive tourism based on allocations of time and expense, by maximizing the total attraction score of the cities involved. Self-drive tourism is a type of tourism where tourists hire or travel in their own vehicle; it only requires tourist destinations that can be linked by a network of roads. The traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are normally used for minimization problems, such as determining the shortest travel time or distance. This paper takes an alternative, maximization approach, which maximizes the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route, and this approach is extended to the team orienteering problem; the two methods are compared. Both models were solved using LINGO12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
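To show the objective concretely, here is a minimal brute-force sketch of the orienteering objective on toy data: choose a route within a travel-time budget that maximizes the summed attraction scores. City names, scores, times, and the budget are invented; the paper solves the real Kedah instance with LINGO12.0.

```python
# Orienteering toy: maximize collected scores subject to a time budget.
from itertools import permutations

score = {"A": 8, "B": 5, "C": 9, "D": 4}
time = {frozenset(p): t for p, t in [
    (("A", "B"), 2.0), (("A", "C"), 4.0), (("A", "D"), 1.5),
    (("B", "C"), 3.0), (("B", "D"), 2.5), (("C", "D"), 2.0)]}
budget = 6.0

def route_time(route):
    return sum(time[frozenset(e)] for e in zip(route, route[1:]))

best = max((r for n in range(1, 5) for r in permutations(score, n)
            if route_time(r) <= budget),
           key=lambda r: sum(score[c] for c in r))
print(best, sum(score[c] for c in best))   # best feasible route and score
```

Exhaustive search is fine for four cities; the integer-programming formulations used in the paper scale to realistic instances.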
NASA Technical Reports Server (NTRS)
Mclain, A. G.; Rao, C. S. R.
1976-01-01
A hybrid chemical kinetic computer program was assembled which provides a rapid solution to problems involving flowing or static, chemically reacting, gas mixtures. The computer program uses existing subroutines for problem setup, initialization, and preliminary calculations and incorporates a stiff ordinary differential equation solution technique. A number of check cases were recomputed with the hybrid program and the results were almost identical to those previously obtained. The computational time saving was demonstrated with a propane-oxygen-argon shock tube combustion problem involving 31 chemical species and 64 reactions. Information is presented to enable potential users to prepare an input data deck for the calculation of a problem.
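For illustration, a minimal sketch of integrating a stiff reacting-mixture system with a stiff ODE method, assuming SciPy (the 1976 program predates it; this is not that code). The toy mechanism is Robertson's classic stiff kinetics problem.

```python
# Stiff chemical kinetics: Robertson problem integrated with BDF.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y):
    a, b, c = y
    return [-0.04*a + 1.0e4*b*c,
            0.04*a - 1.0e4*b*c - 3.0e7*b*b,
            3.0e7*b*b]

sol = solve_ivp(rates, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1], "mass check:", sol.y[:, -1].sum())  # sums to ~1.0
```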
Games and gambling involvement among casino patrons.
LaPlante, Debi A; Afifi, Tracie O; Shaffer, Howard J
2013-06-01
A growing literature is addressing the nature of the relationships among gambling activity, gambling involvement, and gambling-related problems. This research suggests that among the general population, compared to playing any specific game, gambling involvement is a better predictor of gambling-related problems. To date, researchers have not examined these relationships among casino patrons, a population that differs from the general population in a variety of important ways. A survey of 1160 casino patrons at two Las Vegas resort casinos allowed us to determine relationships between the games that patrons played during the 12 months before their casino visit, the games that patrons played during their casino visit, and patrons' self-perceived history of gambling-related problems. Results indicate that playing specific gambling games onsite predicted (i.e., statistically significant odds ratios ranging from .5 to 4.51) self-perceived gambling-related problems. However, after controlling for involvement, operationally defined as the number of games played during the current casino visit and self-reported gambling frequency during the past 12 months, the relationships between games and gambling-related problems disappeared or were attenuated (i.e., odds ratios no longer statistically significant). These results extend the burgeoning literature related to gambling involvement and its relationship to gambling-related problems.
Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors
Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...
2016-10-01
Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid–structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of the behavior of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.
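A minimal sketch of the middle step in that pipeline: reducing a fluctuating structural force record to statistics (RMS and a periodogram estimate of the power spectral density) that can drive a transient dynamics analysis. The synthetic 37 Hz force signal is invented for the example and stands in for LES output.

```python
# Reduce a force time series to the statistics a dynamics model consumes.
import numpy as np

rng = np.random.default_rng(4)
fs, T = 2000.0, 10.0                        # sample rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
force = 0.3 * np.sin(2 * np.pi * 37.0 * t) + 0.1 * rng.standard_normal(t.size)

fluct = force - force.mean()                # fluctuating part drives fretting
rms = np.sqrt(np.mean(fluct**2))
psd = np.abs(np.fft.rfft(fluct))**2 / (fs * fluct.size)   # periodogram
freq = np.fft.rfftfreq(fluct.size, 1.0 / fs)
print(f"RMS = {rms:.3f}, spectral peak at {freq[psd.argmax()]:.1f} Hz")
```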
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors, which has sub-linear runtime growth of order O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
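A minimal sketch of the O(log P) global summation pattern mentioned above, emulated serially: pairwise combination halves the number of active partial sums each step, so P partial sums need ceil(log2(P)) combine steps. The problem size is shrunk for brevity.

```python
# Tree reduction: P partial sums combine in ceil(log2(P)) parallel steps.
import numpy as np

P = 64                                     # emulate 64 processors
partial = list(np.random.default_rng(1).random(P))

steps = 0
while len(partial) > 1:
    nxt = [partial[i] + partial[i+1] for i in range(0, len(partial) - 1, 2)]
    if len(partial) % 2:                   # odd element carries over unchanged
        nxt.append(partial[-1])
    partial, steps = nxt, steps + 1

print(partial[0], "in", steps, "combine steps")  # steps == 6 == log2(64)
```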
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial treatment planning system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle large-scale VMAT optimization problems efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
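A minimal sketch of the DDC-matrix handling described in Methods: hold the sparse matrix in COO format on the host, split it into contiguous blocks (standing in for the beam-angle groups), and convert each block to CSR as would be shipped to each of four GPUs. The shapes and an even 4-way row split are assumptions for illustration; no GPU code is shown.

```python
# Split a host-side COO sparse matrix into per-device CSR blocks.
import numpy as np
from scipy.sparse import random as sprand

ddc = sprand(8000, 2000, density=0.01, format="coo", random_state=0)

n_gpus = 4
row_edges = np.linspace(0, ddc.shape[0], n_gpus + 1).astype(int)
csr = ddc.tocsr()
blocks = [csr[row_edges[g]:row_edges[g+1], :] for g in range(n_gpus)]

for g, b in enumerate(blocks):
    print(f"GPU {g}: rows {row_edges[g]}-{row_edges[g+1]-1}, nnz={b.nnz}")
```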
A process improvement model for software verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John; Sabolish, George
1994-01-01
We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.
Initial conditions and modeling for simulations of shock driven turbulent material mixing
Grinstein, Fernando F.
2016-11-17
Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond the complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, URANS approaches with reduced 1D/2D dimensionality and coarser grids tend to be preferred for faster turnaround in full-scale configurations.
NASA Technical Reports Server (NTRS)
Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.
1976-01-01
A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
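For illustration, a minimal sketch of stepwise (forward) regression for choosing terms of a polynomial equation of state: at each step, add the candidate term that most reduces the residual sum of squares on pressure-density-temperature data. The synthetic data and candidate terms are invented; the actual oxygen and nitrogen fits used far larger term pools.

```python
# Forward stepwise selection of equation-of-state terms by least squares.
import numpy as np

rng = np.random.default_rng(2)
rho, T = rng.uniform(0.1, 1.0, 200), rng.uniform(100, 300, 200)
P = 2.5*rho*T + 0.8*rho**2*T - 1.2*rho**3 + rng.normal(0, 0.1, 200)

candidates = {name: f(rho, T) for name, f in {
    "rho*T":   lambda r, t: r*t,
    "rho^2*T": lambda r, t: r**2*t,
    "rho^3":   lambda r, t: r**3,
    "rho^2":   lambda r, t: r**2,
    "rho*T^2": lambda r, t: r*t**2,
}.items()}

chosen, X = [], np.empty((P.size, 0))
for _ in range(3):                          # keep the 3 most useful terms
    best = min((k for k in candidates if k not in chosen),
               key=lambda k: np.linalg.lstsq(
                   np.column_stack([X, candidates[k]]), P, rcond=None)[1][0])
    chosen.append(best)
    X = np.column_stack([X, candidates[best]])
coef = np.linalg.lstsq(X, P, rcond=None)[0]
print(chosen, coef)                         # recovers the generating terms
```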
A Cognitive Model for Problem Solving in Computer Science
ERIC Educational Resources Information Center
Parham, Jennifer R.
2009-01-01
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…
Problem Solving through Paper Folding
ERIC Educational Resources Information Center
Wares, Arsalan
2014-01-01
The purpose of this article is to describe a couple of challenging mathematical problems that involve paper folding. These problem-solving tasks can be used to foster geometric and algebraic thinking among students. The context of paper folding makes some of the abstract mathematical ideas involved relatively concrete. When implemented…
The link between drinking and gambling among undergraduate university students.
Hodgins, David C; Racicot, Stephanie
2013-09-01
The purpose of this research was to explore different aspects of the link between alcohol use and gambling among undergraduate university students (N = 121). Potential aspects of the link examined included level of involvement in each behavior, consequences, motives for involvement, and impaired control over involvement. Results confirmed that drinking and gambling among university students are associated, consistent with the expectations of a problem syndrome model. The strongest link was between general dimensions of problematic involvement for both behaviors. Students who drink to cope and have other indicators of alcohol problems are more likely to gamble to cope, gamble to win money, and have higher gambling involvement and gambling-related problems. However, the salience of drinking and gambling to cope in this relationship is an interesting finding that needs further exploration and extension to other problem behaviors.
LaPlante, Debi A; Nelson, Sarah E; Gray, Heather M
2014-06-01
The "involvement effect" refers to the finding that controlling for gambling involvement often reduces or eliminates frequently observed game-specific associations with problem gambling. In other words, broader patterns of gambling behavior, particularly the number of types of games played over a defined period, contribute more to problem gambling than playing specific games (e.g., lottery, casino, Internet gambling). This study extends this burgeoning area of inquiry in three primary ways. First, it tests independently and simultaneously the predictive power of two gambling patterns: breadth involvement (i.e., the number of games an individual plays) and depth involvement (i.e., the number of days an individual plays). Second, it includes the first involvement analyses of actual betting activity records that are associated with clinical screening information. Third, it evaluates and compares the linearity of breadth and depth effects. We conducted analyses of the actual gambling activity of 1,440 subscribers to the bwin.party gambling service who completed an online gambling disorder screen. In all, 11 of the 16 games we examined had a significant univariate association with a positive screen for gambling disorder. However, after controlling for breadth involvement, only Live Action Internet sports betting retained a significant relationship with potential gambling-related problems. Depth involvement, though significantly related to potential problems, did not impact game-based gambling disorder associations as much as breadth involvement. Finally, breadth effects appeared steeply linear, with a slight quadratic component manifesting beyond four games played, but depth effects appeared to have a strong linear component and a slight cubic component.
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or, alternatively, to complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
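A minimal sketch of working in the Fourier domain of a graph, assuming a toy ring graph and a bandlimited signal (both invented, not the paper's model): the Laplacian eigenvectors serve as the Fourier basis, and a smooth signal is recovered from a few observed vertices by least squares on its low-frequency coefficients.

```python
# Graph-Fourier recovery of a bandlimited signal from partial observations.
import numpy as np

n, k = 40, 5                                 # vertices, assumed bandwidth
A = np.zeros((n, n))
for i in range(n):                           # ring graph adjacency
    A[i, (i+1) % n] = A[(i+1) % n, i] = 1
L = np.diag(A.sum(1)) - A                    # combinatorial Laplacian
w, U = np.linalg.eigh(L)                     # eigenvectors = Fourier basis

signal = U[:, :k] @ np.array([3.0, 1.0, -2.0, 0.5, 0.2])   # bandlimited
obs = np.random.default_rng(3).choice(n, 12, replace=False)

coef, *_ = np.linalg.lstsq(U[obs, :k], signal[obs], rcond=None)
recovered = U[:, :k] @ coef
print("max error:", np.abs(recovered - signal).max())       # ~0
```

In the sequential setting of the paper, the choice of which vertex to query next is itself optimized; this sketch only shows the recovery step for a fixed observation set.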
The Cloud-Based Integrated Data Viewer (IDV)
NASA Astrophysics Data System (ADS)
Fisher, Ward
2015-04-01
Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While there is a suite of tools and methodologies used in traditional software engineering environments to mitigate this issue, they are typically ignored by developers lacking a background in software engineering. The result is a large body of software which is simultaneously critical and difficult to maintain. Visualization software is particularly vulnerable to this problem, given its inherent dependency on particular graphics hardware and software APIs. The advent of cloud computing has provided a solution to this problem that was not previously practical on a large scale: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. Through application streaming we are able to bring the same visualization to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved. We will also discuss the differences between local software and software-as-a-service.
Visual Analytics for Power Grid Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Huang, Zhenyu; Chen, Yousu
2014-01-20
Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
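A minimal sketch of the approximation-concepts loop under stated assumptions: replace the expensive structural analysis by a first-order approximation around the current design, optimize that cheap model under move limits, reanalyze, and repeat. The two-variable "weight vs. displacement" toy below stands in for a finite element model; it is not the ACCESS 1 formulation.

```python
# Sequential approximate optimization: cheap linearized subproblems with
# move limits replace repeated calls to an expensive analysis.
import numpy as np
from scipy.optimize import minimize

L = np.array([1.0, 1.4])                   # member "lengths" (weight factors)
def analyze(A):                             # stand-in for FE analysis:
    return 2.0/A[0] + 3.0/A[1]              # tip displacement, toy model

d_max, x = 4.0, np.array([1.0, 1.0])        # displacement limit, start design
for _ in range(10):
    d0 = analyze(x)                         # one "expensive" analysis
    g0 = np.array([-2.0/x[0]**2, -3.0/x[1]**2])   # analytic sensitivities
    approx = lambda A: d0 + g0 @ (A - x)    # linearized constraint
    res = minimize(lambda A: L @ A, x,      # minimize structural weight
                   constraints=[{"type": "ineq",
                                 "fun": lambda A: d_max - approx(A)}],
                   bounds=[(0.7*v, 1.3*v) for v in x])   # move limits
    if np.allclose(res.x, x, rtol=1e-4):
        break
    x = res.x
print(x, analyze(x))                        # near-optimal feasible design
```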
The Forced Marriage of Minors: A Neglected Form of Child Abuse.
Kopelman, Loretta M
2016-03-01
The forced marriage of minors is child abuse; consequently, duties exist to stop it. Yet over 14 million forced marriages of minors occur annually in developing countries. The American Bar Association (ABA) concludes that the problem in the US is significant, widespread but largely ignored, and that few US laws protect minors from forced marriages. Although their best chance of rescue often involves visits to health care providers, US providers show little awareness of this growing problem. Strategies discussed to stop forced marriages include recommendations from the UN, the ABA, and the UK. The author anticipates and responds to two criticisms: first, that no duty to intervene exists without better laws and practice guidelines; and second, that such marriages are not child abuse in traditions where parental rights or familism allegedly justify them.
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems are increasingly pervasive. Mission- and safety-critical systems relying on embedded systems pose specific challenges. Embedded systems design is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they are able to provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no one solution that fits all applications of embedded systems. With the primary focus of these mission- and safety-critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressures to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery powered systems, and then explores more general problems and several real-world embedded systems.
Hybrid Quantum-Classical Approach to Quantum Optimal Control.
Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu
2017-04-14
A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
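A minimal sketch of the hybrid loop's structure: a classical optimizer updates pulse amplitudes while the fitness (state fidelity) and its gradient come from queries that, in the paper, a quantum simulator answers by evolution and measurement. Here a one-qubit numerical simulation stands in for the quantum hardware; the Hamiltonian, pulse discretization, and step size are invented for illustration.

```python
# Hybrid quantum-classical optimal control, emulated classically.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)           # |0>
target = np.array([1, 1], dtype=complex) / np.sqrt(2)
dt, n_slices = 0.2, 10

def fidelity(u):                                  # one "quantum query"
    psi = psi0
    for uk in u:                                  # piecewise-constant pulse
        psi = expm(-1j * dt * (sz + uk * sx)) @ psi
    return abs(target.conj() @ psi) ** 2

u = np.zeros(n_slices)
for _ in range(200):                              # classical gradient ascent
    grad = np.array([(fidelity(u + 0.01*e) - fidelity(u - 0.01*e)) / 0.02
                     for e in np.eye(n_slices)])  # finite-difference queries
    u += 2.0 * grad
print(fidelity(u))                                # typically approaches 1
```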
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
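A minimal sketch of the key step described above, under simplifying assumptions: an FDTD-style update in which the cell containing a nonlinear load requires a small Newton solve at every time step. A 1-D transmission line terminated by a diode stands in for the dipole problem; the line and diode parameters are invented.

```python
# 1-D FDTD-style line with a Newton solve at the nonlinear (diode) load cell.
import numpy as np

nz, nt = 200, 600
dt, dz, Lp, Cp = 1.0, 1.0, 1.0, 1.0       # normalized units, Courant = 1
Is, Vt = 1e-6, 0.05                        # diode parameters (assumed)
V, I = np.zeros(nz), np.zeros(nz - 1)
v_peak = 0.0

for n in range(nt):
    I -= dt / (Lp * dz) * (V[1:] - V[:-1])            # interior updates
    V[1:-1] -= dt / (Cp * dz) * (I[1:] - I[:-1])
    V[0] = np.exp(-((n * dt - 30.0) / 8.0) ** 2)      # pulsed source
    # Load cell: solve Cp*dz/dt*(v - V_old) + Is*(exp(v/Vt) - 1) = I_in
    v = V[-1]
    for _ in range(20):                                # Newton iteration
        f = Cp * dz / dt * (v - V[-1]) + Is * np.expm1(v / Vt) - I[-1]
        fp = Cp * dz / dt + Is / Vt * np.exp(v / Vt)
        v -= f / fp
    V[-1] = v
    v_peak = max(v_peak, v)
print(v_peak)   # diode clamps the load voltage well below the incident swing
```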
ERIC Educational Resources Information Center
Chan, David W.
2003-01-01
A study involving 639 Chinese gifted students (grades 4-12) found that intense emotional involvement, perfectionism, unchallenging schoolwork, multipotentiality, and parental expectations were relatively common adjustment problems. Poor interpersonal relationships were not. Results also indicated that conventional intelligence increased…
Rocks in a Box: A Three-Point Problem.
ERIC Educational Resources Information Center
Leyden, Michael B.
1981-01-01
Describes a simulation drilling core activity involving the use of a physical model from which students gather data and solve a three-point problem to determine the strike and dip of a buried stratum. Includes descriptions of model making, data plots, and additional problems involving strike and dip. (DS)
Proportional Reasoning in the Learning of Chemistry: Levels of Complexity
ERIC Educational Resources Information Center
Ramful, Ajay; Narod, Fawzia Bibi
2014-01-01
This interdisciplinary study sketches the ways in which proportional reasoning is involved in the solution of chemistry problems, more specifically, problems involving quantities in chemical reactions (commonly referred to as stoichiometry problems). By building on the expertise of both mathematics and chemistry education research, the present…
An Action-Research Program for Increasing Employee Involvement in Problem Solving.
ERIC Educational Resources Information Center
Pasmore, William; Friedlander, Frank
1982-01-01
Describes the use of participative action research to solve problems of work-related employee injuries in a rural midwestern electronics plant by increasing employee involvement. The researchers established an employee problem-solving group that interviewed and surveyed workers, analyzed the results, and suggested new work arrangements. (Author/RW)
Seismic analysis of a LNG storage tank isolated by a multiple friction pendulum system
NASA Astrophysics Data System (ADS)
Zhang, Ruifu; Weng, Dagen; Ren, Xiaosong
2011-06-01
The seismic response of a vertical, cylindrical, extra-large liquefied natural gas (LNG) tank isolated by a multiple friction pendulum system (MFPS) is analyzed. Most extra-large LNG tanks have a fundamental frequency that falls within the resonant range of most earthquake ground motions, so an isolation system is an effective way to decrease the response of extra-large LNG storage tanks under a strong earthquake. However, isolation is difficult to implement in practice with common isolation bearings due to issues such as low temperature, soft sites and other severe environmental factors. An extra-large LNG tank isolated by an MFPS is presented in this study to address these problems. An MFPS is appropriate for the large displacements induced by earthquakes with long predominant periods. A simplified finite element model by Malhotra and Dunkerley is used to determine the usefulness of the isolation system. Data reported and statistically sorted include pile shear, wave height, impulsive acceleration, convective acceleration and outer tank acceleration. The results show that the isolation system has excellent adaptability for different liquid levels and is very effective in controlling the seismic response of extra-large LNG tanks.
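For context, the standard friction-pendulum relations (textbook results, not derived in this paper) show why such bearings suit very heavy tanks: the isolated period depends only on the slider's radius of curvature, not on the supported mass, and the restoring stiffness scales with the carried weight.

```latex
% Friction pendulum bearing: period and force-displacement relation,
% with W = mg the supported weight, R the radius of curvature, D the
% bearing displacement, and \mu the sliding friction coefficient.
\[
  T \;=\; 2\pi\sqrt{\frac{R}{g}},
  \qquad
  F \;=\; \frac{W}{R}\,D \;+\; \mu\,W\,\operatorname{sgn}(\dot D).
\]
```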
Differential Effects of Counselor Self-Disclosure, Self-Involving Statements, and Interpretation.
ERIC Educational Resources Information Center
Dowd, E. Thomas; Boroto, Daniel R.
1982-01-01
College students (N=217) rated counselor characteristics after viewing a simulated counseling session ending with the counselor summarizing the session, disclosing a past personal problem, disclosing a present personal problem, engaging in self-involving statements, or offering dynamic interpretations. Self-disclosure and self-involving statements…
Analytical derivation: An epistemic game for solving mathematically based physics problems
NASA Astrophysics Data System (ADS)
Bajracharya, Rabindra R.; Thompson, John R.
2016-06-01
Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the analytical derivation game. This game involves deriving an equation through symbolic manipulations and routine mathematical operations, usually without any physical interpretation of the processes. This game often creates cognitive obstacles in students, preventing them from using alternative resources or better approaches during problem solving. We conducted hour-long, semi-structured, individual interviews with fourteen introductory physics students. Students were asked to solve four "pseudophysics" problems containing algebraic and graphical representations. The problems required the application of the fundamental theorem of calculus (FTC), which is one of the most frequently used mathematical concepts in physics problem solving. We show that the analytical derivation game is necessary, but not sufficient, to solve mathematically based physics problems, specifically those involving graphical representations.
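For reference, the mathematical content at the center of these problems, stated in the standard form (the "pseudophysics" tasks dress it in algebraic and graphical representations):

```latex
% Fundamental theorem of calculus: the change in a quantity equals the
% accumulation of its rate, so a graph of f'(x) gives f(b)-f(a) as a
% signed area, e.g. velocity change as the area under acceleration.
\[
  \int_a^b f'(x)\,dx \;=\; f(b) - f(a),
  \qquad
  v(t_2) - v(t_1) \;=\; \int_{t_1}^{t_2} a(t)\,dt .
\]
```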
An experimental study of the noise generating mechanisms in supersonic jets
NASA Technical Reports Server (NTRS)
Mclaughlin, D. K.
1979-01-01
Flow fluctuation measurements with normal and X-wire hot-wire probes and acoustic measurements with a traversing condenser microphone were carried out in small air jets in the Mach number range from M = 0.9 to 2.5. One of the most successful studies involved a moderate Reynolds number M = 2.1 jet, for which the large scale turbulence properties of the jet and the noise radiation were characterized. A parallel study involved similar measurements on a low Reynolds number M = 0.9 jet. These measurements show that there are important differences in the noise generation process of the M = 0.9 jet in comparison with low supersonic Mach number (M = 1.4) jets. Problems encountered while performing X-wire measurements in low Reynolds number jets at M = 2.1 and 2.5, and in installing a vacuum pump, are discussed.
[Pigeon sport and animal rights].
Warzecha, M
2007-03-01
To begin, a short overview of the organization and realization of the racing pigeon sport is given. Some physiological facts relevant to racing pigeons are touched on. The focus then turns to the flights, their completion, and the problems involved with the, in some cases, high numbers of lost pigeons. The German club of pigeon breeders has made improvements, but they are certainly not enough. The topic of "city pigeons" is addressed briefly. The final part deals with pertinent animal rights issues, causes of mishaps, and some of the corrective options available to the government veterinarian. Special emphasis is placed on the international uniformity of this issue. The lecture should prove that there is a need for every government veterinarian to become actively involved, because the problems described affect a very large number of animals.
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
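To make the role of the binary decision variables concrete, the following is a minimal knockout MIP sketch in Python/PuLP. The reactions r1, r2, r3, and r_product and all bounds are invented for illustration, and the single-level formulation below only mimics the flavor of the paper's bi-level approaches (the constraint forcing flux through the "growth" pathway stands in for the inner cellular objective); it is not SimOptStrain or BiMOMA.

    # Toy knockout MIP: binary deletion variables gate reaction fluxes.
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

    prob = LpProblem("toy_strain_design", LpMaximize)
    rxns = ["r1", "r2", "r3", "r_product"]                     # hypothetical reactions
    v = {r: LpVariable(f"v_{r}", 0, 10) for r in rxns}         # fluxes, 0..10
    y = {r: LpVariable(f"y_{r}", cat=LpBinary) for r in rxns}  # 1 = reaction kept

    for r in rxns:
        prob += v[r] <= 10 * y[r]          # knocked-out reactions carry no flux

    prob += v["r1"] == v["r2"] + v["r_product"]   # toy mass balance at one node
    prob += v["r2"] == v["r3"]
    prob += v["r2"] >= 8 * y["r2"]     # intact growth pathway claims substrate
    prob += lpSum(1 - y[r] for r in ["r2", "r3"]) <= 1   # deletion budget
    prob += v["r_product"]             # objective: maximize product secretion

    prob.solve(PULP_CBC_CMD(msg=0))
    print({r: (v[r].value(), y[r].value()) for r in rxns})

With a budget of one deletion, the solver removes r2, freeing all substrate for the product reaction; this is the basic mechanism that the paper's much larger genome-scale formulations exploit.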
Penders, Bart; Vos, Rein; Horstman, Klasien
2009-11-01
Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.
ERIC Educational Resources Information Center
Lin, Shih-Yin; Singh, Chandralekha
2015-01-01
It is well known that introductory physics students often have alternative conceptions that are inconsistent with established physical principles and concepts. Invoking alternative conceptions in the quantitative problem-solving process can derail the entire process. In order to help students solve quantitative problems involving strong…
Analytical Derivation: An Epistemic Game for Solving Mathematically Based Physics Problems
ERIC Educational Resources Information Center
Bajracharya, Rabindra R.; Thompson, John R.
2016-01-01
Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the "analytical derivation" game. This game involves deriving an…
Applied mathematical problems in modern electromagnetics
NASA Astrophysics Data System (ADS)
Kriegsman, Gregory
1994-05-01
We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissues are exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave assisted chemical vapor infiltration, and in other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers, while the latter can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of both asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
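To make the notion of query complexity concrete, here is a minimal classical strategy for the learning-parity-with-noise problem (an illustrative sketch under assumed parameters, not the experimental protocol of the paper): query each basis vector repeatedly and take a majority vote. The paper's point is that a quantum algorithm needs far fewer oracle calls as noise and problem size grow.

    import random

    n, p_noise, reps = 5, 0.1, 101   # string length, flip probability, odd repeats
    secret = [random.randint(0, 1) for _ in range(n)]

    def oracle(x):
        """Noisy parity oracle: s.x mod 2, flipped with probability p_noise."""
        clean = sum(s * xi for s, xi in zip(secret, x)) % 2
        return clean ^ (random.random() < p_noise)

    recovered = []
    for i in range(n):
        e = [1 if j == i else 0 for j in range(n)]    # i-th basis vector
        ones = sum(oracle(e) for _ in range(reps))
        recovered.append(1 if ones > reps // 2 else 0)

    print(secret, recovered)   # classical cost: n * reps oracle queries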
Optimal Chunking of Large Multidimensional Arrays for Data Warehousing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otoo, Ekow J; Otoo, Ekow J.; Rotem, Doron
2008-02-15
Very large multidimensional arrays are commonly used in data intensive scientific computations as well as on-line analytical processing applications, referred to as MOLAP. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" The problem of optimal chunking was first introduced by Sarawagi and Stonebraker, who gave an approximate solution. In this paper we develop exact mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic and real life workloads, show that our solutions are consistently within 2.0 percent of the true number of chunks retrieved for any number of dimensions. In contrast, the approximate solution of Sarawagi and Stonebraker can deviate considerably from the true result with increasing number of dimensions and may also lead to suboptimal chunk shapes.
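The underlying trade-off is easy to state: in each dimension, a query of extent q overlaps roughly (q + c - 1)/c chunks of side c, and the per-dimension counts multiply. A brute-force version of the question (an illustrative sketch with assumed workload numbers, not the paper's steepest-descent or geometric-programming solution):

    from math import prod

    capacity = 4096             # elements per chunk (assumed disk-block budget)
    query = (100, 40, 10)       # assumed typical query extents per dimension

    def expected_chunks(shape, q):
        return prod((qi + ci - 1) / ci for qi, ci in zip(q, shape))

    divisors = [d for d in range(1, capacity + 1) if capacity % d == 0]
    shapes = [(a, b, capacity // (a * b))
              for a in divisors for b in divisors if capacity % (a * b) == 0]
    best = min(shapes, key=lambda s: expected_chunks(s, query))
    print(best, round(expected_chunks(best, query), 2))

As expected, the best shape allocates longer chunk sides to the dimensions with longer typical query extents.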
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
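A minimal numpy sketch of the adjoint idea at the heart of such frameworks (a generic linear example, not the thesis's actual framework): for a state equation A(x)u = b and objective f = c·u, a single adjoint solve yields the gradient with respect to every design variable at once.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    b, c = rng.standard_normal(n), rng.standard_normal(n)
    x = rng.standard_normal(n)                     # design variables

    def A(x):                                      # assumed state Jacobian
        return np.diag(2.0 + x**2) + 0.1 * np.ones((n, n))

    u = np.linalg.solve(A(x), b)                   # one state solve
    psi = np.linalg.solve(A(x).T, c)               # one adjoint solve
    # dA/dx_j has a single nonzero diagonal entry 2*x_j, so df/dx_j is:
    grad = np.array([-psi[j] * 2 * x[j] * u[j] for j in range(n)])

    eps = 1e-6                                     # finite-difference check
    fd = np.array([(c @ np.linalg.solve(A(x + eps * np.eye(n)[j]), b)
                    - c @ u) / eps for j in range(n)])
    print(np.allclose(grad, fd, atol=1e-4))

The cost of the gradient is two linear solves regardless of the number of design variables, which is what makes optimizing tens of thousands of variables tractable.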
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on the topic of quantum computation provides us with some quantum algorithms offering higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms - especially Grover's search algorithm - in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis work where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
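For concreteness, a state-vector simulation of Grover search is short enough to show in full (an illustrative sketch; the marked index standing in for a matching library spectrum is an assumption, not a method from the paper):

    import numpy as np

    n_qubits = 6
    N = 2 ** n_qubits
    marked = 42                          # assumed index of the matching spectrum

    state = np.full(N, 1 / np.sqrt(N))   # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

    for _ in range(iterations):
        state[marked] *= -1              # oracle: phase-flip the marked item
        state = 2 * state.mean() - state # diffusion: inversion about the mean

    print(iterations, state[marked] ** 2)  # ~sqrt(N) steps, success prob near 1

The O(sqrt(N)) iteration count, versus O(N) classical scans of a spectral library, is the speedup the paper proposes to exploit.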
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
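A minimal sketch of the kind of cache exchange Newscast performs (heavily simplified under stated assumptions: symmetric merges, a shared global clock, and none of the adaptive self-tuning the paper adds):

    import random

    CACHE = 8   # entries kept per node

    def gossip(a, b, clock):
        # merge both caches plus fresh self-entries; keep the newest CACHE items
        merged = a["cache"] + b["cache"] + [(a["id"], clock), (b["id"], clock)]
        newest = {}
        for peer, ts in merged:          # deduplicate, keeping freshest timestamp
            if peer not in newest or ts > newest[peer]:
                newest[peer] = ts
        entries = sorted(newest.items(), key=lambda e: -e[1])[:CACHE]
        a["cache"], b["cache"] = list(entries), list(entries)

    nodes = [{"id": i, "cache": [(0, 0)]} for i in range(50)]
    for t in range(1, 200):
        a = random.choice(nodes)
        peer_id = random.choice(a["cache"])[0]   # pick a peer from the local view
        gossip(a, nodes[peer_id], t)
    print(nodes[1]["cache"])   # a small, continually refreshed random sample

The two problems identified in the abstract show up here directly: a node whose gossip messages are dropped stops being refreshed in other caches, and a node that requests samples much faster than it gossips sees stale, correlated entries.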
Neural Activity When People Solve Verbal Problems with Insight
Bowden, Edward M; Haberman, Jason; Frymiare, Jennifer L; Arambel-Liu, Stella; Greenblatt, Richard; Reber, Paul J
2004-01-01
People sometimes solve problems with a unique process called insight, accompanied by an “Aha!” experience. It has long been unclear whether different cognitive and neural processes lead to insight versus noninsight solutions, or if solutions differ only in subsequent subjective feeling. Recent behavioral studies indicate distinct patterns of performance and suggest differential hemispheric involvement for insight and noninsight solutions. Subjects solved verbal problems, and after each correct solution indicated whether they solved with or without insight. We observed two objective neural correlates of insight. Functional magnetic resonance imaging (Experiment 1) revealed increased activity in the right hemisphere anterior superior temporal gyrus for insight relative to noninsight solutions. The same region was active during initial solving efforts. Scalp electroencephalogram recordings (Experiment 2) revealed a sudden burst of high-frequency (gamma-band) neural activity in the same area beginning 0.3 s prior to insight solutions. This right anterior temporal area is associated with making connections across distantly related information during comprehension. Although all problem solving relies on a largely shared cortical network, the sudden flash of insight occurs when solvers engage distinct neural and cognitive processes that allow them to see connections that previously eluded them. PMID:15094802
Husky, Mathilde M; Michel, Grégory; Richard, Jean-Baptiste; Guignard, Romain; Beck, François
2015-06-01
The objectives of the present study are to describe gender differences in factors associated with moderate risk and problem gambling. Data were extracted from the 2010 Health Barometer, a large survey on a representative sample of the general population aged 15-85 years living in France (n=27,653), carried out by the National Institute for Health Promotion and Health Education. Data were collected between October 2009 and July 2010. A computer-assisted telephone interview system was used. The findings indicate that men are three times more likely to experience problems with gambling. Men and women have different patterns of gambling activities. Men were more involved with Rapido, internet gambling, sports and racetrack betting, poker, and casino tables, whereas women gambled more often on scratch games. Both men and women engaging in immediate reward games were significantly more likely to experience difficulties with gambling. This association, however, was stronger in women. Furthermore, suicidal ideation and behaviors were more likely to be associated with gambling problems in women as compared to men. The study underscores the importance of considering gender-related differences in the study of gambling behaviors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Magnetic Reconnection and Particle Acceleration in the Solar Corona
NASA Astrophysics Data System (ADS)
Neukirch, Thomas
Reconnection plays a major role for the magnetic activity of the solar atmosphere, for example solar flares. An interesting open problem is how magnetic reconnection acts to redistribute the stored magnetic energy released during an eruption into other energy forms, e.g. generating bulk flows, plasma heating and non-thermal energetic particles. In particular, finding a theoretical explanation for the observed acceleration of a large number of charged particles to high energies during solar flares is presently one of the most challenging problems in solar physics. One difficulty is the vast difference between the microscopic (kinetic) and the macroscopic (MHD) scales involved. Whereas the phenomena observed to occur on large scales are reasonably well explained by the so-called standard model, this does not seem to be the case for the small-scale (kinetic) aspects of flares. Over the past years, observations, in particular by RHESSI, have provided evidence that a naive interpretation of the data in terms of the standard solar flare/thick target model is problematic. As a consequence, the role played by magnetic reconnection in the particle acceleration process during solar flares may have to be reconsidered.
Deformation of two-phase aggregates using standard numerical methods
NASA Astrophysics Data System (ADS)
Duretz, Thibault; Yamato, Philippe; Schmalholz, Stefan M.
2013-04-01
Geodynamic problems often involve the large deformation of material encompassing material boundaries. In geophysical fluids, such boundaries often coincide with a discontinuity in the viscosity (or effective viscosity) field and consequently in the pressure field. Here, we employ popular implementations of the finite difference and finite element methods for solving viscous flow problems. On one hand, we implemented a finite difference method coupled with a Lagrangian marker-in-cell technique to represent the deforming fluid. Thanks to its Eulerian nature, this method has limited geometric flexibility but is characterized by a light and stable discretization. On the other hand, we employ the Lagrangian finite element method, which offers full geometric flexibility at the cost of a relatively heavier discretization. In order to test the accuracy of the finite difference scheme, we ran large strain simple shear deformation of aggregates containing either weak or strong circular inclusions (1e6 viscosity ratio). The results, obtained for different grid resolutions, are compared to Lagrangian finite element results, which are considered as the reference solution. The comparison is then used to establish up to which strain finite difference simulations can be run, given the nature of the inclusions (dimensions, viscosity) and the resolution of the Eulerian mesh.
NASA Astrophysics Data System (ADS)
Dong, S.
2018-05-01
We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
NASA Astrophysics Data System (ADS)
Murawski, Jens; Kleine, Eckhard
2017-04-01
Sea ice remains one of the frontiers of ocean modelling and is of vital importance for correct forecasts of the northern oceans. At large scale, it is commonly considered a continuous medium whose dynamics is modelled in terms of continuum mechanics. Its specifics are a matter of constitutive behaviour, which may be characterised as rigid-plastic. The newly developed sea ice dynamics module is based on general principles and follows a systematic approach to the problem. Both the drift field and the stress field are modelled by a variational property. Rigidity is treated by Lagrangian relaxation. Thus one is led to a sensible numerical method. Modelling fast ice remains a challenge. It is understood that ridging and the formation of grounded ice keels play a role in the process. The ice dynamics model includes a parameterisation of the stress associated with grounded ice keels. Shear against the grounded bottom contact might lead to plastic deformation and the loss of integrity. The numerical scheme involves a potentially large system of linear equations which is solved by pre-conditioned iteration. The entire algorithm consists of several components which result from decomposing the problem. The algorithm has been implemented and tested in practice.
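The phrase "pre-conditioned iteration" covers solvers of the following kind; here is a generic Jacobi-preconditioned conjugate-gradient sketch for a symmetric positive definite system (illustrative only, since the abstract does not specify the exact solver used):

    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=500):
        x = np.zeros_like(b)
        M_inv = 1.0 / np.diag(A)         # Jacobi preconditioner: invert diagonal
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    A = np.diag([4.0, 5.0, 6.0, 7.0]) + 0.5   # small SPD test matrix
    b = np.ones(4)
    print(np.allclose(A @ pcg(A, b), b))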
Asymptotic theory of two-dimensional trailing-edge flows
NASA Technical Reports Server (NTRS)
Melnik, R. E.; Chow, R.
1975-01-01
Problems of laminar and turbulent viscous interaction near trailing edges of streamlined bodies are considered. Asymptotic expansions of the Navier-Stokes equations in the limit of large Reynolds numbers are used to describe the local solution near the trailing edge of cusped or nearly cusped airfoils at small angles of attack in compressible flow. A complicated inverse iterative procedure, involving finite-difference solutions of the triple-deck equations coupled with asymptotic solutions of the boundary values, is used to accurately solve the viscous interaction problem. Results are given for the correction to the boundary-layer solution for drag of a finite flat plate at zero angle of attack and for the viscous correction to the lift of an airfoil at incidence. A rational asymptotic theory is developed for treating turbulent interactions near trailing edges and is shown to lead to a multilayer structure of turbulent boundary layers. The flow over most of the boundary layer is described by a Lighthill model of inviscid rotational flow. The main features of the model are discussed and a sample solution for the skin friction is obtained and compared with the data of Schubauer and Klebanoff for a turbulent flow in a moderately large adverse pressure gradient.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
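A minimal GA in the spirit of the hill-climbing model problem (a sketch; the two-hill test function and all parameters are placeholders, not the paper's model problem):

    import math, random

    GENES, POP, GENS, MUT = 10, 60, 120, 0.05

    def fitness(x):
        d1 = sum((xi - 0.25) ** 2 for xi in x)   # shorter hill
        d2 = sum((xi - 0.75) ** 2 for xi in x)   # taller hill (global optimum)
        return 0.6 * math.exp(-2 * d1) + 1.0 * math.exp(-2 * d2)

    pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]                # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENES)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, xi + random.gauss(0, 0.1)))
                     if random.random() < MUT else xi for xi in child]
            children.append(child)
        pop = parents + children
    print(round(fitness(max(pop, key=fitness)), 3))  # approaches 1.0 at the tall hill

The number of function evaluations (POP x GENS here) is the cost metric the paper tracks, and it grows with the number of genes and modes exactly as the abstract reports.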
Spin determination at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yavin, Itay
The quantum field theory describing the Electroweak sector demands some new physics at the TeV scale in order to unitarize the scattering of longitudinal W bosons. If this new physics takes the form of a scalar Higgs boson, then it is hard to understand the huge hierarchy of scales between the Electroweak scale ˜ TeV and the Planck scale ˜ 10^19 GeV. This is known as the Naturalness problem. Normally, in order to solve this problem, new particles, in addition to the Higgs boson, are required to be present in the spectrum below a few TeV. If such particles are indeed discovered at the Large Hadron Collider, it will become important to determine their spin. Several classes of models for physics beyond the Electroweak scale exist. Determining the spin of any such newly discovered particle could prove to be the only means of distinguishing between these different models. In the first part of this thesis, we present a thorough discussion regarding such a measurement. We survey the different potentially useful channels for spin determination, and a detailed analysis of the most promising channel is performed. The Littlest Higgs model offers a way to solve the Hierarchy problem by introducing heavy partners to Standard Model particles with the same spin and quantum numbers. However, this model is only good up to ˜ 10 TeV. In the second part of this thesis we present an extension of this model into a strongly coupled theory above ˜ 10 TeV. We use the celebrated AdS/CFT correspondence to calculate properties of the low-energy physics in terms of high-energy parameters. We comment on some of the tensions inherent to such a construction involving a large-N CFT (or equivalently, an AdS space).
Liu, Li-Zhi; Wu, Fang-Xiang; Zhang, Wen-Jun
2014-01-01
As an abstract mapping of the gene regulations in the cell, a gene regulatory network is important to both biological research and practical applications. The reverse engineering of gene regulatory networks from microarray gene expression data is a challenging research problem in systems biology. With the development of biological technologies, multiple time-course gene expression datasets might be collected for a specific gene network under different circumstances. The inference of a gene regulatory network can be improved by integrating these multiple datasets. It is also known that gene expression data may be contaminated with large errors or outliers, which may affect the inference results. A novel method, Huber group LASSO, is proposed to infer the same underlying network topology from multiple time-course gene expression datasets while taking robustness to large errors or outliers into account. To solve the optimization problem involved in the proposed method, an efficient algorithm which combines the ideas of auxiliary function minimization and block descent is developed. A stability selection method is adapted to our method to find a network topology consisting of edges with scores. The proposed method is applied to both simulation datasets and real experimental datasets. It shows that Huber group LASSO outperforms the group LASSO in terms of both areas under receiver operating characteristic curves and areas under the precision-recall curves. The convergence analysis of the algorithm theoretically shows that the sequence generated from the algorithm converges to the optimal solution of the problem. The simulation and real data examples demonstrate the effectiveness of the Huber group LASSO in integrating multiple time-course gene expression datasets and improving the resistance to large errors or outliers.
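The objective in question pairs a robust data-fit term with a group-sparsity penalty. A proximal-gradient sketch of that combination (illustrative; the paper's actual algorithm combines auxiliary-function minimization with block descent, which is not reproduced here):

    import numpy as np

    def huber_grad(r, delta=1.0):
        # gradient of the Huber loss applied elementwise to residuals r
        return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

    def prox_group(beta, groups, t, lam):
        # block soft-thresholding: proximal operator of the group-lasso penalty
        out = beta.copy()
        for g in groups:
            norm = np.linalg.norm(beta[g])
            out[g] = 0.0 if norm == 0 else max(0.0, 1 - t * lam / norm) * beta[g]
        return out

    rng = np.random.default_rng(1)
    X = rng.standard_normal((80, 12))
    true = np.zeros(12); true[:4] = 2.0       # only the first group is active
    y = X @ true + 0.2 * rng.standard_normal(80)
    y[::10] += 8.0                            # inject large outliers

    groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
    beta, step, lam = np.zeros(12), 1e-3, 2.0
    for _ in range(3000):
        grad = -X.T @ huber_grad(y - X @ beta)
        beta = prox_group(beta - step * grad, groups, step, lam)
    print(np.round(beta, 2))                  # inactive groups shrink toward zero

The Huber gradient caps the influence of the injected outliers, which is the robustness property the abstract emphasizes.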
SIMS: A Hybrid Method for Rapid Conformational Analysis
Gipson, Bryant; Moll, Mark; Kavraki, Lydia E.
2013-01-01
Proteins are at the root of many biological functions, often performing complex tasks as the result of large changes in their structure. Describing the exact details of these conformational changes, however, remains a central challenge for computational biology due to the enormous computational requirements of the problem. This has engendered the development of a rich variety of useful methods designed to answer specific questions at different levels of spatial, temporal, and energetic resolution. These methods fall largely into two classes: physically accurate, but computationally demanding methods and fast, approximate methods. We introduce here a new hybrid modeling tool, the Structured Intuitive Move Selector (SIMS), designed to bridge the divide between these two classes, while allowing the benefits of both to be seamlessly integrated into a single framework. This is achieved by applying a modern motion planning algorithm, borrowed from the field of robotics, in tandem with a well-established protein modeling library. SIMS can combine precise energy calculations with approximate or specialized conformational sampling routines to produce rapid, yet accurate, analysis of the large-scale conformational variability of protein systems. Several key advancements are shown, including the abstract use of generically defined moves (conformational sampling methods) and an expansive probabilistic conformational exploration. We present three example problems that SIMS is applied to and demonstrate a rapid solution for each. These include the automatic determination of “active” residues for the hinge-based system Cyanovirin-N, exploring conformational changes involving long-range coordinated motion between non-sequential residues in Ribose-Binding Protein, and the rapid discovery of a transient conformational state of Maltose-Binding Protein, previously only determined by Molecular Dynamics. For all cases we provide energetic validations using well-established energy fields, demonstrating this framework as a fast and accurate tool for the analysis of a wide range of protein flexibility problems. PMID:23935893
Voisin, Dexter R.; Kim, Dongha; Takahashi, Lois; Morotta, Phillip; Bocanegra, Kathryn
2017-01-01
While researchers have found that African American youth experience higher levels of juvenile justice involvement at every system level (arrest, sentencing, and incarceration) relative to their other ethnic counterparts, few studies have explored how juvenile justice involvement and number of contacts might be correlated with this broad range of problems. A convenience sample of 638 African American adolescents living in predominantly low-income, urban communities participated in a survey related to juvenile justice involvement. Major findings using logistic regression models indicated that adolescents who reported juvenile justice system involvement versus no involvement were 2.3 times as likely to report mental health problems, substance abuse, and delinquent or youth offending behaviors. Additional findings documented that the higher the number of juvenile justice system contacts, the higher the rates of delinquent behaviors, alcohol and marijuana use, sex while high on drugs, and commercial sex. These findings suggest that identifying and targeting youth who have multiple juvenile justice system contacts, especially those in low-resourced communities for early intervention services, may be beneficial. Future research should examine whether peer network norms might mediate the relationships between juvenile justice involvement and youth problem behaviors. PMID:28966415
Pressure-Volume Work Exercises Illustrating the First and Second Laws.
ERIC Educational Resources Information Center
Hoover, William G.; Moran, Bill
1979-01-01
Presented are two problem exercises involving rapid compression and expansion of ideal gases which illustrate the first and second laws of thermodynamics. The first problem involves the conversion of gravitational energy into heat through mechanical work. The second involves the mutual interaction of two gases through an adiabatic piston. (BT)
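For reference, the standard relations such exercises rest on are (textbook statements, not quoted from the article):

\[ W = \int_{V_1}^{V_2} p\,dV, \qquad \Delta U = Q - W, \qquad pV^{\gamma} = \text{const} \quad \text{(adiabatic, ideal gas)}. \]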
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering, which addresses the problems arising from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix, so Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
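A compact sketch of the tapered ordinary-kriging solve (illustrative assumptions throughout: an exponential covariance, a spherical taper, and synthetic points; SciPy's MINRES, a Krylov solver for symmetric indefinite systems closely related to SYMMLQ, stands in for the SYMMLQ variant the authors developed):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import minres

    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 10, (500, 2))
    query = np.array([5.0, 5.0])

    def cov(d, scale=3.0):                # assumed exponential covariance model
        return np.exp(-d / scale)

    def taper(d, theta=2.0):              # spherical taper: compact support
        t = np.clip(d / theta, 0, 1)
        return np.where(d < theta, 1 - 1.5 * t + 0.5 * t ** 3, 0.0)

    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    C = cov(d) * taper(d)                 # tapered covariance: mostly zeros
    n = len(pts)

    # ordinary-kriging system: symmetric, but indefinite due to the Lagrange row
    A = np.zeros((n + 1, n + 1))
    A[:n, :n], A[:n, n], A[n, :n] = C, 1.0, 1.0
    dq = np.linalg.norm(pts - query, axis=1)
    b = np.append(cov(dq) * taper(dq), 1.0)

    sol, info = minres(csr_matrix(A), b)
    weights = sol[:n]                     # kriging weights; the Lagrange row
    print(info, weights.sum())            # forces them to sum to 1

The indefiniteness introduced by the unbiasedness (Lagrange) row is precisely why the Cholesky-based approach of Furrer et al. does not apply and a symmetric-indefinite Krylov method is needed instead.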
Managing Problem-Based Learning in Large Lecture Sections
ERIC Educational Resources Information Center
Bledsoe, Karen E.
2011-01-01
Problem-based learning can enhance reasoning and concept development among undergraduate college students by presenting content within authentic contexts. However, large lecture sections present problems and barriers to implementing PBL. This article discusses approaches used by the author to infuse PBL into large biology lecture sections, and…
The design of a long range megatransport aircraft
NASA Technical Reports Server (NTRS)
Weisshaar, T. A.; Layton, J. B.; Allen, C. L.
1993-01-01
Megatransport objectives and constraints are briefly reviewed, and certain solutions developed by student design teams at Purdue University are summarized. Particular attention is given to the market needs and the economic risks involved in such a project, the different approaches taken to solve the problem, and the difficulties faced by the design teams. A long range megatransport aircraft is aimed at carrying more than 600 passengers at reduced cost and, at the same time, reducing airport and airway congestion. The design effort must take into account airport terminal facilities; passenger loading and unloading; and defeating the 'square-cube' law to design large structures.
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
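The core test inside any such multi-objective search is non-domination. A generic sketch (minimization assumed; this is not the GENIE tuning code itself):

    def pareto_front(points):
        # keep the points not dominated in every objective by another point
        return [p for p in points
                if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                           for q in points)]

    objs = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (5.0, 1.0), (2.5, 4.0)]
    print(pareto_front(objs))   # (4.0, 4.0) is dominated by (3.0, 3.0)

The expensive part in the paper's setting is producing each objective vector, which requires an Earth system simulation; hence the response surface models and the grid infrastructure.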
What causes the buoyancy reversal in compressible convection?
NASA Technical Reports Server (NTRS)
Chan, K. L.
1983-01-01
The problem posed by the existence of a negative buoyancy work region at the top of cellular type convection in a deeply stratified superadiabatic layer (Massaguer and Zahn, 1980) is addressed. It is approached by studying two-dimensional cellular compressible convection with different physical parameters. The results suggest that a large viscosity, together with density stratification, is responsible for the buoyancy reversal. The numerical results obtained are analyzed. It is pointed out, however, that in an astrophysical situation a fluid involved in convection will generally have very small viscosity. It is therefore thought unlikely that buoyancy reversal occurs in this way.
Study of aircraft electrical power systems
NASA Technical Reports Server (NTRS)
1972-01-01
The formulation of a philosophy for devising a reliable, efficient, lightweight, and cost effective electrical power system for advanced, large transport aircraft in the 1980 to 1985 time period is discussed. The determination and recommendation for improvements in subsystems and components are also considered. All aspects of the aircraft electrical power system including generation, conversion, distribution, and utilization equipment were considered. Significant research and technology problem areas associated with the development of future power systems are identified. The design categories involved are: (1) safety-reliability, (2) power type, voltage, frequency, quality, and efficiency, (3) power control, and (4) selection of utilization equipment.
Nonlinear behavior of shells of revolution under cyclic loading.
NASA Technical Reports Server (NTRS)
Levine, H. S.; Armen, H., Jr.; Winter, R.; Pifko, A.
1973-01-01
A large deflection elastic-plastic analysis is presented applicable to orthotropic axisymmetric plates and shells of revolution subjected to monotonic and cyclic loading conditions. The analysis is based on the finite-element method. It employs a new higher order, fully compatible, doubly curved orthotropic shell-of-revolution element using cubic Hermitian expansions for both meridional and normal displacements. Both perfectly plastic and strain hardening behavior are considered. Strain hardening is incorporated through use of the Prager-Ziegler kinematic hardening theory, which predicts an ideal Bauschinger effect. Numerous sample problems involving monotonic and cyclic loading conditions are analyzed.
Vehicle fault diagnostics and management system
NASA Astrophysics Data System (ADS)
Gopal, Jagadeesh; Gowthamsachin
2017-11-01
This project applies an advanced automatic identification technology that is more and more widely used in the fields of transportation and logistics. It covers the main functions, such as vehicle management and vehicle speed limiting and control. The system starts with an authentication process to keep itself secure. Sensors are connected to the STM32 board, which in turn is connected to the car through an Ethernet cable, as Ethernet is capable of sending large amounts of data at high speeds. The technology involved clearly shows how a careful combination of software and hardware can produce an extremely cost-effective solution to a problem.
The Rhode Island Medical Emergency Distribution System (MEDS).
Banner, Greg
2004-01-01
The State of Rhode Island conducted an exercise to obtain and dispense a large volume of emergency medical supplies in response to a mass casualty incident. The exercise was conducted in stages that included requesting supplies from the Strategic National Stockpile and distributing the supplies around the state. The lessons learned included how to better structure an exercise, what types of problems were encountered with requesting and distributing supplies, how to better work with members of the private medical community who are not involved in disaster planning, and how to become aware of the needs of special population groups.
The growth receptors and their role in wound healing.
Rolfe, Kerstin J; Grobbelaar, Adriaan O
2010-11-01
Abnormal wound healing is a major problem in healthcare today, with both scarring and chronic wounds affecting large numbers of individuals worldwide. Wound healing is a complex process involving several variables, including growth factors and their receptors. Chronic wounds fail to complete the wound healing process, while scarring is considered to be an overzealous wound healing process. Growth factor receptors and their ligands are being investigated to assess their potential in the development of therapeutic strategies to improve wound healing. This review discusses potential therapeutics for manipulating growth factors and their corresponding receptors for the treatment of abnormal wound healing.
Comparison of numerical simulation and experimental data for steam-in-place sterilization
NASA Technical Reports Server (NTRS)
Young, Jack H.; Lasher, William C.
1993-01-01
A complex problem involving convective flow of a binary mixture containing a condensable vapor and noncondensable gas in a partially enclosed chamber was modelled and results compared to transient experimental values. The finite element model successfully predicted transport processes in dead-ended tubes with inside diameters of 0.4 to 1.0 cm. When buoyancy driven convective flow was dominant, temperature and mixture compositions agreed with experimental data. Data from 0.4 cm tubes indicate diffusion to be the primary air removal method in small diameter tubes and the diffusivity value in the model to be too large.
Does drinking refusal self-efficacy mediate the impulsivity-problematic alcohol use relation?
Stevens, Angela K; Littlefield, Andrew K; Blanchard, Brittany E; Talley, Amelia E; Brown, Jennifer L
2016-02-01
There is consistent evidence that impulsivity-like traits relate to problematic alcohol involvement; however, identifying mechanisms that account for this relation remains an important area of research. Drinking refusal self-efficacy (or a person's ability to resist alcohol; DRSE) has been shown to predict alcohol use among college students and may be a relevant mediator of the impulsivity-alcohol relation. The current study examined the indirect effect of various constructs related to impulsivity (i.e., urgency, sensation seeking, and deficits in conscientiousness) via several facets of DRSE (i.e., social pressure, opportunistic, and emotional relief) on alcohol-related problems among a large sample of college students (N=891). Overall, results indicated that certain DRSE facets were significant mediators of the relation between impulsivity-related constructs and alcohol problems. More specifically, emotional-relief DRSE was a mediator for the respective relations between urgency and deficits in conscientiousness and alcohol problems, whereas social-DRSE was a significant mediator of the respective relations between urgency and sensation seeking with alcohol problems. Results from this study suggest particular types of DRSE are important mediators of the relations between specific impulsivity constructs and alcohol-related problems. These findings support prevention and intervention efforts that seek to enhance drinking refusal self-efficacy skills of college students, particularly those high in certain personality features, in order to reduce alcohol-related problems among this population. Copyright © 2015 Elsevier Ltd. All rights reserved.
Morgenstern, Jon; Hogue, Aaron; Dasaro, Christopher; Kuerbis, Alexis; Dauber, Sarah
2016-01-01
Objective This study examined barriers to employability, motivation to abstain from substances and to work, and involvement in multiple service systems among male and female welfare applicants with alcohol- and drug-use problems. Method A representative sample (N = 1,431) of all persons applying for public assistance who screened positive for substance involvement over a 2-year period in a large urban county were recruited in welfare offices. Legal, education, general health, mental health, employment, housing, and child welfare barriers to employability were assessed, as were readiness to abstain from substance use and readiness to work. Results Only 1 in 20 participants reported no barrier other than substance use, whereas 70% reported at least two other barriers and 40% reported three or more. Moreover, 70% of participants experienced at least one additional barrier classified as “severe” and 30% experienced two or more. The number and type of barriers differed by gender. Latent class analysis revealed four main barriers-plus-readiness profiles among participants: (1) multiple barriers, (2) work experienced, (3) criminal justice, and (4) unstable housing. Conclusions Findings suggest that comprehensive coordination among social service systems is needed to address the complex problems of low-income Americans with substance-use disorders. Classifying applicants based on barriers and readiness is a promising approach to developing innovative welfare programs to serve the diverse needs of men and women with substance-related problems. PMID:18612572
Crossover ensembles of random matrices and skew-orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Santosh, E-mail: skumar.physics@gmail.com; Pandey, Akhilesh, E-mail: ap0700@mail.jnu.ac.in
2011-08-15
Highlights: > We study crossover ensembles of Jacobi family of random matrices. > We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. > We use the method of skew-orthogonal polynomials and quaternion determinants. > We prove universality of spectral correlations in crossover ensembles. > We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E, 79, 2009, p. 026211) we considered Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
Forehand, Rex; Parent, Justin; Golub, Andrew; Reid, Megan; Lafko, Nicole
2018-01-01
Cohabitation is a family structure that is rapidly increasing in the United States. The current longitudinal study examined the interplay of involvement in a youth’s daily activities and firm control parenting by male cohabiting partners (MCPs) on change in adolescents’ internalizing and externalizing problems. In a sample of 111 inner-city African American families, adolescents reported on involvement and parenting by MCPs at wave 1 and biological mothers reported on adolescent problem behaviors at waves 1 and 2. A significant interaction indicated that low involvement and low firm control by MCPs at wave 1 were associated with the highest level of internalizing problems at wave 2. An interaction did not emerge when externalizing problems served as the outcome. The findings indicate that male partners play an important role in parenting adolescents in cohabiting families and should be considered as potential participants in prevention and intervention programs. PMID:26007695
A Comparative Study of Involvement and Motivation among Casino Gamblers.
Lee, Choong-Ki; Lee, Bongkoo; Bernhard, Bo Jason; Lee, Tae Kyung
2009-09-01
The purpose of this paper is to investigate three different types of gamblers (which we label "non-problem", "some problem", and "probable pathological gamblers") to determine differences in involvement and motivation, as well as differences in demographic and behavioral variables. The analysis takes advantage of a unique opportunity to sample on-site at a major casino in South Korea, and the resulting purposive sample yielded 180 completed questionnaires in each of the three groups, for a total number of 540. Factor analysis, analysis of variance (ANOVA) and Duncan tests, and Chi-square tests are employed to analyze the data collected from the survey. Findings from ANOVA tests indicate that involvement factors of importance/self-expression, pleasure/interest, and centrality derived from the factor analysis were significantly different among these three types of gamblers. The "probable pathological" and "some problem" gamblers were found to have similar degrees of involvement, and higher degrees of involvement than the non-problem gamblers. The tests also reveal that motivational factors of escape, socialization, winning, and exploring scenery were significantly different among these three types of gamblers. When looking at motivations to visit the casino, "probable pathological" gamblers were more likely to seek winning, the "some problem" group appeared to be more likely to seek escape, and the "non-problem" gamblers indicate that their motivations to visit centered around explorations of scenery and culture in the surrounding casino area. The tools for exploring motivations and involvements of gambling provide valuable and discerning information about the entire spectrum of gamblers.
Future aircraft networks and schedules
NASA Astrophysics Data System (ADS)
Shu, Yan
2011-07-01
Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results for these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.
Phase transitions in distributed control systems with multiplicative noise
NASA Astrophysics Data System (ADS)
Allegra, Nicolas; Bamieh, Bassam; Mitra, Partha; Sire, Clément
2018-01-01
Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subject to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition-like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper, we analyze the so-called network consensus problem, prototypical of the above considerations, that has previously been analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition-like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions, and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. Although our study involves a ‘toy’ model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
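A minimal simulation of consensus dynamics under multiplicative noise conveys the setup (illustrative only; the paper's analysis covers link-level noise and general graphs, here simplified to one common noise factor per step on a ring):

```python
import numpy as np

def simulate_consensus(L, steps, eta, sigma, seed=0):
    """Euler iteration x <- x - eta*(1 + sigma*xi)*L x, with one common
    multiplicative noise factor xi per step (a simplification of a
    link-level model). Returns the deviation-from-mean energy."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=L.shape[0])
    energy = []
    for _ in range(steps):
        x = x - eta * (1.0 + sigma * rng.normal()) * (L @ x)
        d = x - x.mean()               # the mean is conserved (row sums 0)
        energy.append(float(d @ d))
    return np.array(energy)

# ring-graph Laplacian (spectral dimension 1), a hypothetical test case
n = 100
I = np.eye(n)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
print(simulate_consensus(L, steps=2000, eta=0.05, sigma=0.8)[-1])
```

Sweeping sigma and the system size n is the numerical analogue of probing the noise threshold the paper characterizes analytically.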
The Music of Mathematics: Toward a New Problem Typology
NASA Astrophysics Data System (ADS)
Quarfoot, David
Halmos (1980) once described problems and their solutions as "the heart of mathematics". Following this line of thinking, one might naturally ask: "What, then, is the heart of problems?". In this work, I attempt to answer this question using techniques from statistics, information visualization, and machine learning. I begin the journey by cataloging the features of problems delineated by the mathematics and mathematics education communities. These dimensions are explored in a large data set of students working thousands of problems at the Art of Problem Solving, an online company that provides adaptive mathematical training for students around the world. This analysis is able to concretely show how the fabric of mathematical problems changes across different subjects, difficulty levels, and students. Furthermore, it locates problems that stand out in the crowd -- those that synergize cognitive engagement, learning, and difficulty. This quantitatively-heavy side of the dissertation is partnered with a qualitatively-inspired portion that involves human scoring of 105 problems and their solutions. In this setting, I am able to capture elusive features of mathematical problems and derive a fuller picture of the space of mathematical problems. Using correlation matrices, principal components analysis, and clustering techniques, I explore the relationships among those features frequently discussed in mathematics problems (e.g., difficulty, creativity, novelty, affective engagement, authenticity). Along the way, I define a new set of uncorrelated features in problems and use these as the basis for a New Mathematical Problem Typology (NMPT). Grounded in the terminology of classical music, the NMPT works to quickly convey the essence and value of a problem, just as terms like "etude" and "mazurka" do for musicians. Taken together, these quantitative and qualitative analyses seek to terraform the landscape of mathematical problems and, concomitantly, the current thinking about that world. Most importantly, this work highlights and names the panoply of problems that exist, expanding the myopic vision of contemporary mathematical problem solving.
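The uncorrelated-feature construction described above is, at its core, standardization followed by principal components and clustering. A schematic sketch, using a synthetic stand-in for the 105-problem score matrix (feature names and counts are illustrative, not the study's):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rows = problems, columns = human-scored features (difficulty,
# creativity, novelty, ...); synthetic stand-in for the 105 x k matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(105, 8))

Z = StandardScaler().fit_transform(X)          # put features on one scale
pcs = PCA(n_components=3).fit_transform(Z)     # uncorrelated components
labels = KMeans(n_clusters=4, n_init=10).fit_predict(pcs)
print(np.bincount(labels))                     # cluster sizes
```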
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors that consequently leads to a “Big Data” problem. The main objective of this study is to explore efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery with potential to apply the platform for a larger scale (e.g. country level) and multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (~28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address efficiency of the GEE platform to effectively execute complex workflows of satellite data processing required with large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed support vector machine (SVM), decision tree and random forest classifiers available in GEE.
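The classifier comparison itself can be prototyped offline. A minimal scikit-learn sketch of the same experimental design (SVM, decision tree, random forest, and a small neural network on stacked multi-temporal pixel features) follows; the data here are synthetic stand-ins, and this is not the GEE API the study actually used:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (pixels, bands x dates) multi-temporal features; y: crop labels.
# Synthetic stand-in; a real study would use the satellite time series.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 24))
y = rng.integers(0, 6, size=5000)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "svm": SVC(kernel="rbf"),
    "tree": DecisionTreeClassifier(max_depth=10),
    "rf": RandomForestClassifier(n_estimators=100),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=300),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    print(name, accuracy_score(yte, m.predict(Xte)))
```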
Narasingharao, Kumar; Pradhan, Balaram; Navaneetham, Janardhana
2017-03-01
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder which appears in early childhood, between 18 and 36 months of age. Apart from behaviour problems, ASD children also suffer from sleep and gastrointestinal (GI) problems. Major behaviour problems of ASD children are lack of social communication and interaction, short attention span, repetitive and restrictive behaviour, lack of eye-to-eye contact, aggressive and self-injurious behaviours, sensory integration problems, motor problems, deficiency in academic activities, and anxiety and depression. Our hypothesis is that a structured yoga intervention will bring significant changes in these problems. The aim of this study was to determine the efficacy of a structured yoga intervention for the sleep problems, gastrointestinal problems and behaviour problems of ASD children. It was an exploratory study with a pre-test and post-test control design. Three sets of questionnaires comprising 61 questions developed by the researchers were used to collect data before and after the yoga intervention. The questionnaires were based on the three problematic areas of ASD children mentioned above and were administered to parents by teachers under the supervision of the researcher and clinical psychologists. The experimental group was given the yoga intervention for a period of 90 days, while the control group continued with the school curriculum. Both children and parents participated in the intervention. Significant changes were seen post intervention in the three problem areas, with statistical analysis showing a significance value of 0.001. A structured yoga intervention can be conducted for a large group of ASD children with parents' involvement, and yoga can be used as an alternative therapy to reduce the severity of symptoms of ASD children.
Berteletti, Ilaria; Prado, Jérôme; Booth, James R
2014-08-01
Greater skill in solving single-digit multiplication problems requires a progressive shift from a reliance on numerical to verbal mechanisms over development. Children with mathematical learning disability (MD), however, are thought to suffer from a specific impairment in numerical mechanisms. Here we tested the hypothesis that this impairment might prevent MD children from transitioning toward verbal mechanisms when solving single-digit multiplication problems. Brain activations during multiplication problems were compared in MD and typically developing (TD) children (3rd to 7th graders) in numerical and verbal regions which were individuated by independent localizer tasks. We used small (e.g., 2 × 3) and large (e.g., 7 × 9) problems as these problems likely differ in their reliance on verbal versus numerical mechanisms. Results indicate that MD children have reduced activations in both the verbal (i.e., left inferior frontal gyrus and left middle temporal to superior temporal gyri) and the numerical (i.e., right superior parietal lobule including intra-parietal sulcus) regions suggesting that both mechanisms are impaired. Moreover, the only reliable activation observed for MD children was in the numerical region when solving small problems. This suggests that MD children could effectively engage numerical mechanisms only for the easier problems. Conversely, TD children showed a modulation of activation with problem size in the verbal regions. This suggests that TD children were effectively engaging verbal mechanisms for the easier problems. Moreover, TD children with better language skills were more effective at engaging verbal mechanisms. In conclusion, results suggest that the numerical- and language-related processes involved in solving multiplication problems are impaired in MD children. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Locke, Benjamin D.; Mahalik, James R.
2005-01-01
Male sexual aggression toward women is a serious social problem, particularly on college campuses. In this study, college men's sexually aggressive behavior and rape myth acceptance were examined using conformity to 11 masculine norms and 2 variables previously linked to sexual aggression: problem drinking and athletic involvement. Results…
Family Involvement in Preschool Education: Rationale, Problems and Solutions for the Participants
ERIC Educational Resources Information Center
Kocyigit, Sinan
2015-01-01
The aim of this study is to examine the views of teachers, administrators and parents about the problems that emerge during family involvement in preschool activities and solutions for these problems. The participants were 10 teachers, 10 parents and 10 administrators from 4 preschools and 6 kindergartens in the Palandöken and Yakutiye districts…
Non-Abelian fermionization and fractional quantum Hall transitions
NASA Astrophysics Data System (ADS)
Hui, Aaron; Mulligan, Michael; Kim, Eun-Ah
2018-02-01
There has been a recent surge of interest in dualities relating theories of Chern-Simons gauge fields coupled to either bosons or fermions within the condensed matter community, particularly in the context of topological insulators and the half-filled Landau level. Here, we study the application of one such duality to the long-standing problem of quantum Hall interplateaux transitions. The key motivating experimental observations are the anomalously large value of the correlation length exponent ν ≈2.3 and that ν is observed to be superuniversal, i.e., the same in the vicinity of distinct critical points [Sondhi et al., Rev. Mod. Phys. 69, 315 (1997), 10.1103/RevModPhys.69.315]. Duality motivates effective descriptions for a fractional quantum Hall plateau transition involving a Chern-Simons field with U (Nc) gauge group coupled to Nf=1 fermion. We study one class of theories in a controlled limit where Nf≫Nc and calculate ν to leading nontrivial order in the absence of disorder. Although these theories do not yield an anomalously large exponent ν within the large Nf≫Nc expansion, they do offer a new parameter space of theories that is apparently different from prior works involving Abelian Chern-Simons gauge fields [Wen and Wu, Phys. Rev. Lett. 70, 1501 (1993), 10.1103/PhysRevLett.70.1501; Chen et al., Phys. Rev. B 48, 13749 (1993), 10.1103/PhysRevB.48.13749].
Goulart Coelho, Lineker M; Lange, Liséte C; Coelho, Hosmanny Mg
2017-01-01
Solid waste management is a complex domain involving the interaction of several dimensions; thus, its analysis and control impose continuous challenges for decision makers. In this context, multi-criteria decision-making models have become important and convenient supporting tools for solid waste management because they can handle problems involving multiple dimensions and conflicting criteria. However, the selection of the multi-criteria decision-making method is a hard task since there are several multi-criteria decision-making approaches, each one with a large number of variants whose applicability depends on information availability and the aim of the study. Therefore, to support researchers and decision makers, the objectives of this article are to present a literature review of multi-criteria decision-making applications used in solid waste management, offer a critical assessment of the current practices, and provide suggestions for future works. A brief review of fundamental concepts on this topic is first provided, followed by the analysis of 260 articles related to the application of multi-criteria decision making in solid waste management. These studies were investigated in terms of the methodology, including specific steps such as normalisation, weighting, and sensitivity analysis. In addition, information related to waste type, the study objective, and aspects considered was recorded. From the articles analysed it is noted that studies using multi-criteria decision making in solid waste management predominantly address problems related to municipal solid waste, involving facility location or management strategy.
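As a concrete illustration of the normalisation and weighting steps the review examines, here is a minimal weighted-sum sketch, one of the simplest MCDM variants; the siting example and weights are hypothetical:

```python
import numpy as np

def weighted_sum_rank(scores, weights, benefit):
    """Minimal weighted-sum MCDM: min-max normalise each criterion,
    invert cost criteria so that higher is always better, then rank
    alternatives by the weighted aggregate score."""
    S = np.asarray(scores, float)
    b = np.asarray(benefit, bool)
    N = (S - S.min(0)) / (S.max(0) - S.min(0))   # min-max normalisation
    N[:, ~b] = 1.0 - N[:, ~b]                    # flip cost criteria
    agg = N @ np.asarray(weights, float)
    return np.argsort(-agg), agg

# hypothetical landfill-siting example: cost (less is better),
# distance to housing and capacity (more is better)
order, agg = weighted_sum_rank(
    [[4.0, 1.2, 30.0], [2.5, 0.8, 22.0], [3.1, 2.0, 40.0]],
    weights=[0.5, 0.2, 0.3],
    benefit=[False, True, True],
)
print(order, agg)
```

Real applications layer sensitivity analysis on top, rerunning the ranking under perturbed weights, which is one of the methodological steps the review tracks across the 260 articles.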
Controls, health assessment, and conditional monitoring for large, reusable, liquid rocket engines
NASA Technical Reports Server (NTRS)
Cikanek, H. A., III
1986-01-01
Past and future progress in the performance of control systems for large liquid rocket engines, typified by the current state of the art, the Space Shuttle Main Engine (SSME), is discussed. Details of the first decade of efforts, which culminated in the control systems of the F-1 and J-2 Saturn engines, are traced, noting problem modes and improvements which were implemented to realize the SSME. Future control system designs, to accommodate the requirements of operating engines for a heavy lift launch vehicle, an orbital transfer vehicle and the aerospace plane, are summarized. Generic design upgrades needed include an expanded range of fault detection, maintenance as-needed instead of as-scheduled, reduced human involvement in engine operations, and increased control of internal engine states. Current NASA technology development programs aimed at meeting the future control system requirements are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.
Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.
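The abstract does not specify the network or the matching rule, but the core pattern (embed each image with a pretrained CNN, then assign it to the nearest user-chosen exemplar) can be sketched as follows; the backbone choice and cosine matching are assumptions for illustration, requiring a recent torchvision:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone used as a fixed feature extractor
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled features
backbone.eval()

prep = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])

@torch.no_grad()
def embed(path):
    """CNN feature vector for one image file."""
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0)

def nearest_exemplar(img_vec, exemplar_vecs):
    """Assign an image to the closest user-chosen exemplar (cosine)."""
    sims = [torch.cosine_similarity(img_vec, e, dim=0) for e in exemplar_vecs]
    return int(torch.argmax(torch.stack(sims)))

# usage: group = nearest_exemplar(embed("img.png"), [embed(p) for p in picks])
```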
Small-size pedestrian detection in large scene based on fast R-CNN
NASA Astrophysics Data System (ADS)
Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu
2018-04-01
Pedestrian detection is a canonical sub-problem of object detection that has seen high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy, and training a Fast R-CNN style network to jointly optimize small-size pedestrian detection, with skip connections concatenating features from different layers to resolve the coarseness of the feature maps. In our experiments, this improves accuracy for small-size pedestrian detection in real large scenes.
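The skip-connection idea reduces to upsampling a deep, coarse feature map and concatenating it with a shallower, finer one before the detection head. A toy PyTorch sketch (layer sizes are arbitrary; the paper builds on Fast R-CNN's actual backbone):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipConcatFeatures(nn.Module):
    """Toy version of the idea: upsample a deep, coarse feature map and
    concatenate it with a shallower, finer one so small pedestrians
    keep enough spatial resolution in the detection head."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(3, 16, 3, stride=2, padding=1)   # fine level
        self.c2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)  # coarse level

    def forward(self, x):
        f1 = F.relu(self.c1(x))                  # (N, 16, H/2, W/2)
        f2 = F.relu(self.c2(f1))                 # (N, 32, H/4, W/4)
        f2u = F.interpolate(f2, size=f1.shape[-2:], mode="nearest")
        return torch.cat([f1, f2u], dim=1)       # (N, 48, H/2, W/2)

feats = SkipConcatFeatures()(torch.randn(1, 3, 128, 128))
print(feats.shape)
```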
NASA Astrophysics Data System (ADS)
Bayu Bati, Tesfaye; Gelderblom, Helene; van Biljon, Judy
2014-01-01
The challenge of teaching programming in higher education is complicated by problems associated with large class teaching, a prevalent situation in many developing countries. This paper reports on an investigation into the use of a blended learning approach to the teaching and learning of programming in a class of more than 200 students. A course and learning environment were designed by integrating the constructivist learning models of Constructive Alignment, the Conversational Framework and the Three-Stage Learning Model. Design science research is used for the course redesign and development of the learning environment, and action research is integrated to undertake participatory evaluation of the intervention. The action research involved the Students' Approach to Learning survey, a comparative analysis of students' performance, and qualitative analysis of data gathered from various sources. The paper makes a theoretical contribution in presenting the design of a blended learning solution for large-class teaching of programming, grounded in constructivist learning theory and the use of free and open source technologies.
Complete physico-chemical treatment for coke plant effluents.
Ghose, M K
2002-03-01
Naturally occurring coal is converted to coke, which is suitable for metallurgical industries. The large quantities of liquid effluent produced contain high levels of suspended solids, COD, BOD, phenols, ammonia and other toxic substances, which cause serious pollution problems in the receiving waters to which they are discharged. There are a large number of coke plants in the vicinity of the Jharia Coal Field (JCF). Characteristics of the effluents have been evaluated, and the present effluent treatment systems were found to be inadequate. Physico-chemical treatment has been considered a suitable option for the treatment of coke plant effluents. Ammonia removal by synthetic zeolite and activated carbon treatment for the removal of bacteria, viruses and refractory organics were utilized, and the results are discussed. A scheme has been proposed for complete physico-chemical treatment, which can be suitably adopted for the recycling, reuse and safe disposal of the treated effluent. The various unit processes and unit operations involved in the treatment system are discussed. The process may be useful on an industrial scale at various sites.
Kannry, Joseph; Mukani, Sonia; Myers, Kristin
2006-01-01
The experience of Mount Sinai Hospital is representative of the challenges and problems facing large academic medical centers in selecting an ambulatory EMR. The facility successfully revived a stalled process in a challenging financial climate, using a framework of science and rigorous investigation. The process incorporated several innovations: 1) There was a thorough review of medical informatics literature to develop a mission statement, determine practical objectives and guide the demonstration process; 2) The process involved rigorous investigation of vendor statements, industry statements and other institution's views of vendors; 3) The initiative focused on user-centric selection, and the survey instrument was scientifically and specifically designed to assess user feedback; 4) There was scientific analysis of validated findings and survey results at all steering meetings; 5) The process included an assessment of vendors' ability to support research by identifying funded and published research; 6) Selection involved meticulous total cost of ownership analysis to assess and compare real costs of implementing a vendor solution; and finally, 7) There were iterative meetings with stakeholders, executives and users to understand needs, address concerns and communicate the vision.
Parent Involvement and Children's Academic and Social Development in Elementary School
El Nokali, Nermeen E.; Bachman, Heather J.; Votruba-Drzal, Elizabeth
2010-01-01
Data from the NICHD Study of Early Childcare and Youth Development (N= 1364) were used to investigate children's trajectories of academic and social development across first, third and fifth grade. Hierarchical linear modeling was used to examine within- and between-child associations among maternal- and teacher-reports of parent involvement and children's standardized achievement scores, social skills, and problem behaviors. Findings suggest that within-child improvements in parent involvement predict declines in problem behaviors and improvements in social skills but do not predict changes in achievement. Between-child analyses demonstrated that children with highly involved parents had enhanced social functioning and fewer behavior problems. Similar patterns of findings emerged for teacher- and parent-reports of parent involvement. Implications for policy and practice are discussed. PMID:20573118
Outsourcing to increase service capacity in a New Zealand hospital.
Renner, C; Palmer, E
1999-01-01
Service firms manage variability using both demand-side tactics (levelling customer demand), and supply-side tactics (increasing available capacity). One popular way of increasing available capacity is the outsourcing of non-core services. This article uses a case study to examine the impact of an outsourced non-core service on a hospital's overall service system. Findings show that the outsourced service provides access to more sophisticated technology, increases in-house capacity and saves capital expenditure. However, the outsourcing also increases the scheduling problems that the hospital faces. These problems are largely due to communication delays from the involvement of more than one organisation. These delays decrease the response time available to match changes in demand for the outsourced service. Given the obvious benefits of such outsourcing, the article concludes that management should pay close attention to the communication pathways between organisations, in order to minimise the end effects identified in this study.
Some aerodynamic discoveries and related NACA/NASA research programs following World War 2
NASA Technical Reports Server (NTRS)
Spearman, M. L.
1984-01-01
The World War 2 time period ushered in a new era in aeronautical research and development. The air conflict during the war highlighted the need for aircraft with agility, high speed, long range and large payload capability, and in addition introduced a new concept in air warfare through the use of guided missiles. Following the war, the influx of foreign technology, primarily German, led to rapid advances in jet propulsion and speed, and a host of new problem areas associated with high-speed flight designs were revealed. The resolution of these problems led to a rash of new design concepts, and many of the lessons learned remain, in principle, effective today. In addition to the technical lessons learned related to aircraft development programs, it might also be noted that some lessons involving the political and philosophical nature of aircraft development programs are worth attention.
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
NASA Astrophysics Data System (ADS)
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is therefore not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
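For readers unfamiliar with the classical side of the comparison, a textbook diffusion Monte Carlo loop is only a few lines: walkers diffuse, branch with weight exp(-dt*(V - E_ref)), and E_ref is adjusted to hold the population steady. A toy 1D sketch (illustrative; the paper's counterexamples are many-qubit spin Hamiltonians, not continuum problems):

```python
import numpy as np

def dmc_ground_energy(V, n0=2000, steps=3000, dt=0.01, seed=0):
    """Toy diffusion Monte Carlo for H = -1/2 d^2/dx^2 + V(x):
    walkers diffuse, branch with weight exp(-dt*(V - E_ref)), and
    E_ref is nudged to keep the population near n0; E_ref then
    estimates the ground-state energy for stoquastic problems."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n0)
    e_ref = float(np.mean(V(x)))
    for _ in range(steps):
        x = x + np.sqrt(dt) * rng.normal(size=x.size)   # diffusion step
        w = np.exp(-dt * (V(x) - e_ref))                # branching weights
        copies = (w + rng.random(x.size)).astype(int)   # stochastic rounding
        x = np.repeat(x, copies)
        e_ref += 0.1 * np.log(n0 / max(x.size, 1))      # population control
    return e_ref

# 1D harmonic oscillator: exact ground-state energy is 0.5
print(dmc_ground_energy(lambda x: 0.5 * x**2))
```

The paper's point is that for certain local Hamiltonians this kind of classical sampler takes exponential time to track the adiabatic ground state, even though the Hamiltonian is stoquastic.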
Tuberculosis--triumph and tragedy.
Singh, M M
2003-03-01
Tuberculosis has been wreaking havoc worldwide, with 11.9 million cases projected by the year 2005. In India, about 2 million cases are infected every year. Regarding triumphs and tragedies in the control of tuberculosis, the following points are discussed: (1) tuberculosis control programmes, from the National Tuberculosis Programme (NTP) to the Revised National Tuberculosis Control Programme (RNTCP) and Directly Observed Treatment, Short course (DOTS); (2) the problem of multidrug-resistant (MDR) tuberculosis; and (3) HIV and tuberculosis. DOTS, being largely based on Indian research, is now being applied worldwide. MDR is strictly a man-made problem: poor prescriptions, poor case management, lack of coordinated education and haphazard treatment result in drug resistance. Treatment of MDR is difficult; drug acceptability, tolerance and toxicity have to be considered. HIV and tuberculosis form a deadly duo; they mean more cases, more costs and more national losses.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
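To see the two ingredients at work, consider the half-line Helmholtz problem u'' + k^2 u = 0, whose exact Dirichlet-to-Neumann value at the origin for an outgoing wave is ik. A rough sketch, assuming a uniform complex-rotated mesh (the paper instead derives optimal element lengths from Padé theory), with exact linear-element stiffness and midpoint-integrated mass:

```python
import numpy as np

def cfem_dtn(k, lengths):
    """1-D linear elements with complex lengths for u'' + k^2 u = 0 on a
    half-line: exact stiffness plus midpoint-integrated mass, then
    condensation of all interior/far nodes onto node 0 (a Schur
    complement), giving a rational approximation of the DtN value i*k."""
    n = len(lengths) + 1
    K = np.zeros((n, n), dtype=complex)
    for e, L in enumerate(lengths):
        ke = (1.0 / L) * np.array([[1, -1], [-1, 1]]) \
             - (k**2 * L / 4.0) * np.array([[1, 1], [1, 1]])
        K[e:e + 2, e:e + 2] += ke
    schur = K[0, 0] - K[0, 1:] @ np.linalg.solve(K[1:, 1:], K[1:, 0])
    return -schur   # sign follows from the weak-form boundary term

# crude uniform complex-rotated mesh; the paper picks lengths optimally
print(cfem_dtn(1.0, 0.5 * np.exp(1j * np.pi / 4) * np.ones(8)), 1j)
```

The complex rotation makes the outgoing wave decay along the mesh, so a handful of elements can absorb it, which is the mechanism behind the drastic element-count reduction the abstract reports.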
Deflection of a flexural cantilever beam
NASA Astrophysics Data System (ADS)
Sherbourne, A. N.; Lu, F.
The behavior of a flexural elastoplastic cantilever beam is investigated, with geometric nonlinearities taken into account. The result of an elastica analysis by Frisch-Fay (1962) is extended to include postyield behavior. Although a closed-form solution is not possible, as in the elastic case, simple algebraic equations are derived involving only one unknown variable, which can also be expressed in the standard form of elliptic integrals if so desired. The results, in comparison with those of small deflection analyses, indicate that large deflection analyses are necessary when the depth of the beam is very small relative to its length. The present exact solution can be used as a reference by those who resort to a finite element method for more complicated problems. It can also serve as a building block for other beam problems such as a simply supported beam or a beam with multiple loads.
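The elastic baseline that the paper extends can be reproduced numerically by shooting on the elastica equation theta'' = -alpha*cos(theta), with theta(0) = 0 at the clamp and zero moment, theta'(1) = 0, at the free tip, where alpha = P*L^2/(E*I). A minimal sketch in nondimensional arc length (the load value is arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def tip_moment(k0, alpha):
    """Residual for the shooting method: integrate the elastica ODE
    theta' = kappa, kappa' = -alpha*cos(theta) from the clamp with
    trial root curvature k0, and return the tip curvature (moment),
    which must vanish at the free end."""
    sol = solve_ivp(lambda s, y: [y[1], -alpha * np.cos(y[0])],
                    (0.0, 1.0), [0.0, k0], rtol=1e-9, atol=1e-12)
    return sol.y[1, -1]

alpha = 2.0   # hypothetical load parameter P*L^2/(E*I)
k0 = brentq(tip_moment, 0.0, 5.0, args=(alpha,))
print("root curvature:", k0)
```

Integrating cos(theta) and sin(theta) along the converged solution then gives the tip coordinates, the quantities the small-deflection comparison in the paper is about.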
Simulation of Acoustic Scattering from a Trailing Edge
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Brentner, Kenneth S.; Lockard, David P.; Lilley, Geoffrey M.
1999-01-01
Three model problems were examined to assess the difficulties involved in using a hybrid scheme coupling flow computation with the Ffowcs Williams and Hawkings equation to predict noise generated by vortices passing over a sharp edge. The results indicate that the Ffowcs Williams and Hawkings equation correctly propagates the acoustic signals when provided with accurate flow information on the integration surface. The most difficult of the model problems investigated involved inviscid flow over a two-dimensional thin NACA airfoil with a blunt-body vortex generator positioned at 98 percent chord. Vortices rolled up downstream of the blunt body. The shed vortices possessed similarities to large coherent eddies in boundary layers. They interacted and occasionally paired as they convected past the sharp trailing edge of the airfoil. The calculations showed acoustic waves emanating from the airfoil trailing edge. Acoustic directivity and Mach number scaling are shown.
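For reference, the solid-surface form of the Ffowcs Williams and Hawkings equation used in such hybrid schemes is, in standard notation (not quoted from the paper):

$$\left(\frac{1}{c_0^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)\left[p'\,H(f)\right]=\frac{\partial}{\partial t}\left[\rho_{0}v_{n}\,\delta(f)\right]-\frac{\partial}{\partial x_{i}}\left[\ell_{i}\,\delta(f)\right]+\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[T_{ij}\,H(f)\right]$$

where f = 0 defines the integration surface, H and delta are the Heaviside and Dirac functions, v_n is the surface normal velocity, l_i the local surface loading, and T_ij the Lighthill stress tensor. Accurate v_n and l_i on the surface are exactly the "accurate flow information" the study found essential.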
Luneburg lens and optical matrix algebra research
NASA Technical Reports Server (NTRS)
Wood, V. E.; Busch, J. R.; Verber, C. M.; Caulfield, H. J.
1984-01-01
Planar, as opposed to channelized, integrated optical circuits (IOCs) were stressed as the basis for computational devices. Both fully-parallel and systolic architectures are considered and the tradeoffs between the two device types are discussed. The Kalman filter approach is a most important computational method for many NASA problems. This approach to deriving a best-fit estimate for the state vector describing a large system leads to matrix sizes which are beyond the predicted capacities of planar IOCs. This problem is overcome by matrix partitioning, and several architectures for accomplishing this are described. The Luneburg lens work has involved development of lens design techniques, design of mask arrangements for producing lenses of desired shape, investigation of optical and chemical properties of arsenic trisulfide films, deposition of lenses both by thermal evaporation and by RF sputtering, optical testing of these lenses, modification of lens properties through ultraviolet irradiation, and comparison of measured lens properties with those expected from ray trace analyses.
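The partitioning trick is ordinary block matrix algebra: a matrix larger than the device aperture is multiplied block by block and the partial products accumulated. A small sketch, assuming a square aperture of size bs (the optical architectures differ in how blocks are fed, not in this arithmetic):

```python
import numpy as np

def block_matmul(A, B, bs):
    """Multiply A @ B by accumulating bs x bs sub-block products, the
    same partitioning that lets a small matrix engine (optical or
    systolic) process matrices larger than its native aperture."""
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(0, A.shape[0], bs):
        for j in range(0, B.shape[1], bs):
            for k in range(0, A.shape[1], bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

A, B = np.random.rand(6, 6), np.random.rand(6, 6)
assert np.allclose(block_matmul(A, B, 2), A @ B)
```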
NASA Astrophysics Data System (ADS)
Rozylo, Patryk; Teter, Andrzej; Debski, Hubert; Wysmulski, Pawel; Falkowicz, Katarzyna
2017-10-01
The objects of the research are short, thin-walled columns with an open top-hat cross section made of multilayer laminate. The walls of the investigated profiles are made of plate elements. The entire columns are subjected to uniform compression. A detailed analysis allowed us to determine critical forces and post-critical equilibrium paths. It is assumed that the columns are articulately supported on the edges forming their ends. The numerical investigation is performed by the finite element method. The study involves solving the eigenvalue problem and the non-linear stability problem of the structure. The numerical analysis is performed with the commercial simulation software ABAQUS®. The numerical results are then validated experimentally. In the discussed cases, it is assumed that the material operates within a linearly elastic range, and the non-linearity of the FEM model is due to large displacements.
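The eigenvalue step referred to above is the standard linearized buckling problem K*phi = -lambda*Kg*phi. A minimal sketch with toy stand-in matrices (a real K and Kg would come from the laminate shell assembly, here done in ABAQUS):

```python
import numpy as np
from scipy.linalg import eig

def critical_load_factor(K, Kg):
    """Linearized buckling: solve K*phi = -lambda*Kg*phi, with K the
    elastic stiffness and Kg the geometric stiffness at a unit
    reference load; the smallest positive lambda scales that
    reference load up to the critical (buckling) force."""
    vals = eig(K, -Kg, right=False)
    vals = np.real(vals[np.isfinite(vals)])
    return vals[vals > 0].min()

# toy 2-dof stand-in matrices, for illustration only
K = np.array([[4.0, -2.0], [-2.0, 4.0]])
Kg = -np.eye(2)
print(critical_load_factor(K, Kg))   # -> 2.0
```

The post-critical equilibrium paths then come from the separate geometrically non-linear (large-displacement) analysis the abstract mentions, not from this eigenproblem.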
Numerical Simulation of Black Holes
NASA Astrophysics Data System (ADS)
Teukolsky, Saul
2003-04-01
Einstein's equations of general relativity are prime candidates for numerical solution on supercomputers. There is some urgency in being able to carry out such simulations: Large-scale gravitational wave detectors are now coming on line, and the most important expected signals cannot be predicted except numerically. Problems involving black holes are perhaps the most interesting, yet also particularly challenging computationally. One difficulty is that inside a black hole there is a physical singularity that cannot be part of the computational domain. A second difficulty is the disparity in length scales between the size of the black hole and the wavelength of the gravitational radiation emitted. A third difficulty is that all existing methods of evolving black holes in three spatial dimensions are plagued by instabilities that prohibit long-term evolution. I will describe the ideas that are being introduced in numerical relativity to deal with these problems, and discuss the results of recent calculations of black hole collisions.
A boundedness result for the direct heuristic dynamic programming.
Liu, Feng; Sun, Jian; Si, Jennie; Guo, Wentao; Mei, Shengwei
2012-08-01
Approximate/adaptive dynamic programming (ADP) has been studied extensively in recent years for its potential scalability to solve large state and control space problems, including those involving continuous states and continuous controls. The applicability of ADP algorithms, especially of the adaptive critic designs, has been demonstrated in several case studies. Direct heuristic dynamic programming (direct HDP) is one of the ADP algorithms inspired by the adaptive critic designs. It has been shown applicable to industrial-scale, realistic and complex control problems. In this paper, we provide a uniform ultimate boundedness (UUB) result for the direct HDP learning controller under mild and intuitive conditions. By using a Lyapunov approach we show that the estimation errors of the learning parameters, or the weights in the action and critic networks, remain UUB. This result provides a useful controller convergence guarantee for the first time for the direct HDP design. Copyright © 2012 Elsevier Ltd. All rights reserved.
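To make the object of the UUB analysis concrete, here is a minimal actor-critic loop in the spirit of direct HDP on a scalar linear plant (the plant, cost and learning rates are hypothetical; the paper's networks and benchmarks differ). Whether weights like wc and wa stay bounded during such online updates is precisely the kind of question the UUB result settles:

```python
import numpy as np

# Plant: x' = 0.9*x + u. Stage cost: r = x^2 + u^2. Linear "networks":
# critic J(x) ~ wc*x^2, actor u = wa*x, trained by online gradient steps.
rng = np.random.default_rng(0)
wc, wa = 0.0, -0.3
gamma, lr = 0.95, 0.002
x = 1.0
for t in range(20000):
    u = wa * x + 0.05 * rng.normal()             # action plus exploration
    r = x * x + u * u
    xn = 0.9 * x + u
    td = r + gamma * wc * xn * xn - wc * x * x   # temporal-difference error
    wc += lr * td * x * x                        # critic update
    grad = 2 * u * x + 2 * gamma * wc * xn * x   # d(cost-to-go)/d(wa)
    wa -= lr * grad                              # actor update
    x = xn if abs(xn) < 5 else rng.normal()      # reset if drifting
print(wc, wa)   # wa should settle near a stabilizing gain
```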
Edenberg, Howard J; Foroud, Tatiana
2013-08-01
Alcohol is widely consumed; however, excessive use creates serious physical, psychological and social problems and contributes to the pathogenesis of many diseases. Alcohol use disorders (that is, alcohol dependence and alcohol abuse) are maladaptive patterns of excessive drinking that lead to serious problems. Abundant evidence indicates that alcohol dependence (alcoholism) is a complex genetic disease, with variations in a large number of genes affecting a person's risk of alcoholism. Some of these genes have been identified, including two genes involved in the metabolism of alcohol (ADH1B and ALDH2) that have the strongest known effects on the risk of alcoholism. Studies continue to reveal other genes in which variants affect the risk of alcoholism or related traits, including GABRA2, CHRM2, KCNJ6 and AUTS2. As more variants are analysed and studies are combined for meta-analysis to achieve increased sample sizes, an improved picture of the many genes and pathways that affect the risk of alcoholism will be possible.
Problem Solving Process Research of Everyone Involved in Innovation Based on CAI Technology
NASA Astrophysics Data System (ADS)
Chen, Tao; Shao, Yunfei; Tang, Xiaowo
It is very important that non-technical department personnel, and especially bottom-line employees, serve as innovators under the requirement that everyone be involved in innovation. In the view of this paper, it is feasible and necessary to build an everyone-involved-in-innovation problem solving process under Total Innovation Management (TIM) based on the Theory of Inventive Problem Solving (TRIZ). The tools of CAI technology, namely the 'How To' model and the science effects database, can be very useful for all employees, especially those in non-technical departments and on the bottom line, in innovating. The problem solving process put forward in this paper focuses on non-technical department personnel, especially bottom-line employees, for innovation.