Sample records for standard problem number

  1. The impact of two multiple-choice question formats on the problem-solving strategies used by novices and experts.

    PubMed

    Coderre, Sylvain P; Harasym, Peter; Mandin, Henry; Fick, Gordon

    2004-11-05

    Pencil-and-paper examination formats, and specifically the standard, five-option multiple-choice question, have often been questioned as a means for assessing higher-order clinical reasoning or problem solving. This study firstly investigated whether two paper formats with differing numbers of alternatives (standard five-option and extended-matching questions) can test problem-solving abilities. Secondly, the impact of the number of alternatives on psychometrics and problem-solving strategies was examined. Think-aloud protocols were collected to determine the problem-solving strategy used by experts and non-experts in answering Gastroenterology questions, across the two pencil-and-paper formats. The two formats demonstrated equal ability in testing problem-solving abilities, while the number of alternatives did not significantly impact psychometrics or the problem-solving strategies utilized. These results support the notion that well-constructed multiple-choice questions can in fact test higher-order clinical reasoning. Furthermore, it can be concluded that in testing clinical reasoning, the question stem, or content, remains more important than the number of alternatives.

  2. Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Iatridou, Maria

    2010-01-01

    This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given to how the students' past experience of problem solving (expressed through interplay…

  3. Lepton number violation in theories with a large number of standard model copies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-03-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  4. A process for reaching standardization of word processing software for Sandia National Laboratories (Albuquerque) secretaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, S.R.

    1989-04-01

    In the summer of 1986, a number of problems being experienced by Sandia secretaries due to multiple word processing packages being used were brought to the attention of Sandia's upper management. This report discusses how these problems evolved, how management chose to correct the problem, and how standardization of word processing for Sandia secretaries was achieved. 11 refs.

  5. The Performance of Chinese Primary School Students on Realistic Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Xin, Ziqiang; Lin, Chongde; Zhang, Li; Yan, Rong

    2007-01-01

    Compared with standard arithmetic word problems demanding only the direct use of number operations and computations, realistic problems are harder to solve because children need to incorporate "real-world" knowledge into their solutions. Using the realistic word problem testing materials developed by Verschaffel, De Corte, and Lasure…

  6. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
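
    A minimal Python sketch of the damped Gauss-Newton (Levenberg-Marquardt) step the abstract refers to, applied to a hypothetical exponential-decay fit; the model, data, and damping schedule are illustrative assumptions rather than the authors' code.

    ```python
    import numpy as np

    def residuals(p, t, y):
        a, b = p
        return y - a * np.exp(-b * t)

    def model_jacobian(p, t):
        a, b = p
        # Partial derivatives of the model f = a*exp(-b*t) with respect to (a, b).
        return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

    def levenberg_marquardt(p, t, y, lam=1e-3, iters=50):
        for _ in range(iters):
            r = residuals(p, t, y)
            J = model_jacobian(p, t)
            # Damped normal equations: (J^T J + lam*I) delta = J^T r.
            delta = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
            p_new = p + delta
            if np.sum(residuals(p_new, t, y) ** 2) < np.sum(r ** 2):
                p, lam = p_new, lam * 0.5   # accept the step, relax the damping
            else:
                lam *= 2.0                  # reject the step, increase the damping
        return p

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 4.0, 40)
    y = 2.5 * np.exp(-1.3 * t) + 0.02 * rng.standard_normal(t.size)
    print(levenberg_marquardt(np.array([1.0, 1.0]), t, y))  # approx. [2.5, 1.3]
    ```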

  7. Using Pinochle to Motivate the Restricted Combinations with Repetitions Problem

    ERIC Educational Resources Information Center

    Gorman, Patrick S.; Kunkel, Jeffrey D.; Vasko, Francis J.

    2011-01-01

    A standard example used in introductory combinatoric courses is to count the number of five-card poker hands possible from a straight deck of 52 distinct cards. A more interesting problem is to count the number of distinct hands possible from a Pinochle deck in which there are multiple, but obviously limited, copies of each type of card (two…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J E; Vassilevski, P S; Woodward, C S

    This paper provides extensions of an element agglomeration AMG method to nonlinear elliptic problems discretized by the finite element method on general unstructured meshes. The method constructs coarse discretization spaces and corresponding coarse nonlinear operators as well as their Jacobians. We introduce both standard (fairly quasi-uniformly coarsened) and non-standard (coarsened away) coarse meshes and respective finite element spaces. We use both kinds of spaces in FAS-type coarse subspace correction (or Schwarz) algorithms. Their performance is illustrated on a number of model problems. The coarsened-away spaces seem to perform better than the standard spaces for problems with nonlinearities in the principal part of the elliptic operator.

  9. Theory of wide-angle photometry from standard stars

    NASA Technical Reports Server (NTRS)

    Usher, Peter D.

    1989-01-01

    Wide angle celestial structures, such as bright comet tails and nearby galaxies and clusters of galaxies, rely on photographic methods for quantified morphology and photometry, primarily because electronic devices with comparable resolution and sky coverage are beyond current technological capability. The problem of the photometry of extended structures and of how this problem may be overcome through calibration by photometric standard stars is examined. The perfect properties of the ideal field of view are stated in the guise of a radiometric paraxial approximation, in the hope that fields of view of actual telescopes will conform. Fundamental radiometric concepts are worked through before the issue of atmospheric attenuation is addressed. The independence of observed atmospheric extinction and surface brightness leads off the quest for formal solutions to the problem of surface photometry. Methods and problems of solution are discussed. The spectre is confronted in the spirit of standard stars and shown to be chimerical in that light, provided certain rituals are adopted. After a brief discussion of Baker-Sampson polynomials and the vexing issue of saturation, a pursuit is made of actual numbers to be expected in real cases. While the numbers crunched are gathered ex nihilo, they demonstrate the feasibility of Newton's method in the solution of this overdetermined, nonlinear, least square, multiparametric, photometric problem.

  10. Research Problems Associated with Limiting the Applied Force in Vibration Tests and Conducting Base-Drive Modal Vibration Tests

    NASA Technical Reports Server (NTRS)

    Scharton, Terry D.

    1995-01-01

    The intent of this paper is to make a case for developing and conducting vibration tests which are both realistic and practical (a question of tailoring versus standards). Tests are essential for finding things overlooked in the analyses. The best test is often the most realistic test which can be conducted within the cost and budget constraints. Some standards are essential, but the author believes more in the individual's ingenuity to solve a specific problem than in the application of standards which reduce problems (and technology) to their lowest common denominator. Force limited vibration tests and base-drive modal tests are two examples of realistic, but practical testing approaches. Since both of these approaches are relatively new, a number of interesting research problems exist, and these are emphasized herein.

  11. Mastery Multiplied

    ERIC Educational Resources Information Center

    Shumway, Jessica F.; Kyriopoulos, Joan

    2014-01-01

    Being able to find the correct answer to a math problem does not always indicate solid mathematics mastery. A student who knows how to apply the basic algorithms can correctly solve problems without understanding the relationships between numbers or why the algorithms work. The Common Core standards require that students actually understand…

  12. Selective Optimization

    DTIC Science & Technology

    2015-07-06

    Authors: Shabbir Ahmed and Santanu S. Dey (AFOSR grant FA9550-12-1-0154). …standard mixed-integer programming (MIP) formulations of selective optimization problems. While such formulations can be attacked by commercial…

  13. Materials and Process Specifications and Standards

    DTIC Science & Technology

    1977-11-01

    (Contents excerpts: Integrity Requirements; Fracture Control; Some Special Problems in Electronic Materials Specifications; Thermal Stresses.) …fatigue and fracture and by defining human engineering concepts. Conform to OSHA regulations such as toxicity, noise levels, etc. … Standardization Society of the Valves and Fittings Industry. … There are a number of standards-making organizations that cannot…

  14. Teaching Science Problem Solving: An Overview of Experimental Work.

    ERIC Educational Resources Information Center

    Taconis, R.; Ferguson-Hessler, M. G. M.; Broekkamp, H.

    2001-01-01

    Performs analysis on a number of articles published between 1985 and 1995 describing experimental research into the effectiveness of a wide variety of teaching strategies for science problem solving. Identifies 22 articles describing 40 experiments that met standards for meta-analysis. Indicates that few of the independent variables were found to…

  15. The population problem: conceptions and misconceptions.

    PubMed

    Berelson, B

    1971-01-01

    Only 1 in about 110 sex acts results in a conception and 1 in 270 in a live birth. Of all conceptions, 40% result in live births, 5% in stillbirths, and 55% never develop. 1/3 of all known conceptions ends in abortion, spontaneous or induced. It appears that the population problem depends on a small fraction of the potential. Misconceptions of the problem are corrected, and it is emphasized that while no social problem facing the U.S. would be easier with a larger population, demographic factors do not cause all of the other problems. Increasing numbers are not as important as the rate of increase (2% annually worldwide). Today's population problem has been caused by a decreased death rate, not an increased birthrate. There are 2 kinds of countries in the world today: those with a high standard of living and low fertility and those with a low standard of living and high fertility. Most of the uninformed women of the world would not choose to have large numbers of children if they had a choice. Population density is not a problem in itself. Experts disagree, but it is improbable that large numbers of people will die of starvation in the next few decades. Environmental deterioration is more the result of modern economic and technological practices than of demographic factors. Efforts at fertility control are not aimed at minorities in this country and elsewhere. The poor are discriminated against in access to family planning services and abortion. Moslems of developing countries have higher fertility rates than Roman Catholics in developed countries. There would be many social costs if the U.S. were to achieve zero population growth in the near future. The population problem has implications for the future quality of life.

  16. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces provably optimal solutions for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.

  17. International Comparisons of Foundation Phase Number Domain Mathematics Knowledge and Practice Standards

    ERIC Educational Resources Information Center

    Human, Anja; van der Walt, Marthie; Posthuma, Barbara

    2015-01-01

    Poor mathematics performance in schools is both a national and an international concern. Teachers ought to be equipped with relevant subject matter knowledge and pedagogical content knowledge as one way to address this problem. However, no mathematics knowledge and practice standards have as yet been defined for the preparation of Foundation Phase…

  18. Triplet supertree heuristics for the tree of life

    PubMed Central

    Lin, Harris T; Burleigh, J Gordon; Eulenstein, Oliver

    2009-01-01

    Background There is much interest in developing fast and accurate supertree methods to infer the tree of life. Supertree methods combine smaller input trees with overlapping sets of taxa to make a comprehensive phylogenetic tree that contains all of the taxa in the input trees. The intrinsically hard triplet supertree problem takes a collection of input species trees and seeks a species tree (supertree) that maximizes the number of triplet subtrees that it shares with the input trees. However, the utility of this supertree problem has been limited by a lack of efficient and effective heuristics. Results We introduce fast hill-climbing heuristics for the triplet supertree problem that perform a step-wise search of the tree space, where each step is guided by an exact solution to an instance of a local search problem. To realize time-efficient heuristics we designed the first nontrivial algorithms for two standard search problems, which greatly improve on the time complexity of the best known (naïve) solutions by factors of n and n² (the number of taxa in the supertree). These algorithms enable large-scale supertree analyses based on the triplet supertree problem that were previously not possible. We implemented hill-climbing heuristics that are based on our new algorithms, and in analyses of two published supertree data sets, we demonstrate that our new heuristics outperform other standard supertree methods in maximizing the number of triplets shared with the input trees. Conclusion With our new heuristics, the triplet supertree problem is now computationally more tractable for large-scale supertree analyses, and it provides a potentially more accurate alternative to existing supertree methods. PMID:19208181

  19. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
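
    A minimal sketch of the standard Krylov (Arnoldi) projection for the matrix-function-vector products discussed above, here approximating exp(A)v; the test matrix, vector, and subspace size m are illustrative assumptions, not the paper's KSS method.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def arnoldi_expm_action(A, v, m=30):
        # Build an orthonormal Krylov basis V and Hessenberg matrix H via Arnoldi.
        n = v.size
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(v)
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:      # happy breakdown: exact subspace found
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Projected approximation: exp(A) v ~= beta * V_m * exp(H_m) * e_1.
        return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

    rng = np.random.default_rng(1)
    A = -np.diag(np.arange(1.0, 201.0)) + 0.01 * rng.standard_normal((200, 200))
    v = rng.standard_normal(200)
    print(np.linalg.norm(arnoldi_expm_action(A, v) - expm(A) @ v))  # small error
    ```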

  20. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.

  1. [Evaluation of the standard application of Delphi in the diagnosis of chronic obstructive pulmonary disease caused by occupational irritant chemicals].

    PubMed

    Zhao, L; Yan, Y J

    2017-11-20

    Objective: To investigate the problems encountered in applying the standard (hereinafter referred to as the standard) for the diagnosis of chronic obstructive pulmonary disease (COPD) caused by occupational irritant chemicals, to provide a reference for the revision of the new standard, to reduce the number of missed diagnoses of occupational COPD, and to remove workers who suffer from chronic respiratory diseases due to long-term exposure to poisons from the harmful working environment, thereby slowing the progression of the disease. Methods: Using the Delphi expert survey method, after review by senior experts, the problems encountered in the systematic evaluation of GBZ 237-2011 "Diagnosis of chronic obstructive pulmonary disease caused by occupational irritant chemicals" were identified, expert advice was sought, and the problems encountered during the clinical implementation of the standard promulgated in 2011 are presented. Results: The Delphi expert survey found that experts agree on the content evaluation and implementation evaluation of the standard, but the operational evaluation of the standard is disputed. Based on clinical experience, the experts believe that the range of occupational irritant gases should be expanded, and that the handling of smoking, length of service, and occupational exposure history during diagnosis is problematic. Conclusions: Since the promulgation in 2011 of the criteria for the diagnosis of chronic obstructive pulmonary disease caused by occupational irritant chemicals, there have been problems in the implementation process, which have left many workers occupationally exposed to irritating gases suffering from "occupational chronic respiratory diseases" without a definitive diagnosis.

  2. Stencils and problem partitionings: Their influence on the performance of multiple processor systems

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Adams, L. M.; Patrick, M. L.

    1986-01-01

    Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.

  3. Streaming PCA with many missing entries.

    DOT National Transportation Integrated Search

    2015-12-01

    This paper considers the problem of matrix completion when some number of the columns are : completely and arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard : algorithms for matrix completion can return arbit...

  4. Dependability of technical items: Problems of standardization

    NASA Astrophysics Data System (ADS)

    Fedotova, G. A.; Voropai, N. I.; Kovalev, G. F.

    2016-12-01

    This paper is concerned with problems that arose in the development of a new version of the Interstate Standard GOST 27.002 "Industrial product dependability. Terms and definitions". This Standard covers a wide range of technical items and is used in numerous regulations, specifications, and standard and technical documentation. The currently available State Standard GOST 27.002-89 was introduced in 1990. Its development involved the participation of scientists and experts from different technical areas; its draft was debated by different audiences and constantly refined, so it was a high-quality document. However, after 25 years of its application it has become necessary to develop a new version of the Standard that would reflect the current understanding of industrial dependability, accounting for the changes taking place in Russia in the production, management and development of various technical systems and facilities. The development of a new version of the Standard makes it possible to generalize, on a terminological level, the knowledge and experience in the area of reliability of technical items accumulated over a quarter of a century in different industries and reliability research schools, and to account for domestic and foreign experience of standardization. Working on the new version of the Standard, we have faced a number of issues and problems concerning harmonization with the International Standard IEC 60500-192, caused first of all by different approaches to the use of terms and by differences in the mentalities of experts from different countries. The paper focuses on the problems related to the chapter "Maintenance, restoration and repair", which caused the developers difficulties in harmonizing term definitions both among experts and with the International Standard, mainly because of differences between the Russian and foreign concepts and practices of maintenance and repair.

  5. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that considers an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.

  6. Gravitational Field as a Pressure Force from Logarithmic Lagrangians and Non-Standard Hamiltonians: The Case of Stellar Halo of Milky Way

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-03-01

    Recently, the notion of non-standard Lagrangians was discussed widely in literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians were introduced in literature and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such types of Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by Logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence concerns the emergence of an extra pressure term, which is related to the gravitational field suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.

  7. Design and Implementation of USAF Avionics Integration Support Facilities

    DTIC Science & Technology

    1981-12-01

    …problems, and the integration and testing of the ECS. The purpose of this investigation is to establish a standard software development system… Corrections to equipment problems. Compensation for equipment degradation. New developments. This approach is intended to centralize essential…

  8. Visual Reasoning Tools in Action: Double Number Lines, Area Models, and Other Diagrams Power Up Students' Ability to Solve and Make Sense of Various Problems

    ERIC Educational Resources Information Center

    Watanabe, Tad

    2015-01-01

    The Common Core State Standards for Mathematics (CCSSM) (CCSSI 2010) identifies the strategic use of appropriate tools as one of the mathematical practices and emphasizes the use of pictures and diagrams as reasoning tools. Starting with the early elementary grades, CCSSM discusses students' solving of problems "by drawing." In later…

  9. Dynamic simulation solves process control problem in Oman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-11-16

    A dynamic simulation study solved the process control problems for a Saih Rawl, Oman, gas compressor station operated by Petroleum Development of Oman (PDO). PDO encountered persistent compressor failure that caused frequent facility shutdowns, oil production deferment, and gas flaring. It commissioned MSE (Consultants) Ltd., U.K., to find a solution for the problem. Saih Rawl, about 40 km from Qarn Alam, produces oil and associated gas from a large number of low and high-pressure wells. Oil and gas are separated in three separators. The oil is pumped to Qarn Alam for treatment and export. Associated gas is compressed in two parallel trains. Train K-1115 is a 350,000 standard cu m/day, four-stage reciprocating compressor driven by a fixed-speed electric motor. Train K-1120 is a 1 million standard cu m/day, four-stage centrifugal compressor driven by a variable-speed motor. The paper describes tripping and surging problems with the gas compressor and the control simplifications that solved the problem.

  10. Using Pinochle to motivate the restricted combinations with repetitions problem

    NASA Astrophysics Data System (ADS)

    Gorman, Patrick S.; Kunkel, Jeffrey D.; Vasko, Francis J.

    2011-07-01

    A standard example used in introductory combinatorics courses is to count the number of five-card poker hands possible from a straight deck of 52 distinct cards. A more interesting problem is to count the number of distinct hands possible from a Pinochle deck in which there are multiple, but obviously limited, copies of each type of card (two copies for single deck, four for double deck). This problem is more interesting because our only concern is to count the number of distinguishable hands that can be dealt. In this note, under various scenarios, we will discuss two combinatoric techniques for counting these hands; namely, the inclusion-exclusion principle and generating functions. We will then show that these Pinochle examples motivate a general counting formula for what are called 'regular' combinations by Riordan. Finally, we prove the correctness of this formula using generating functions.
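
    A minimal sketch of the generating-function count described above: the number of distinguishable k-card hands from a single Pinochle deck (24 distinct card types, two copies each) is the coefficient of x^k in (1 + x + x^2)^24. The hand size k = 12 is an illustrative choice, not a value taken from the paper.

    ```python
    def poly_mult(p, q):
        # Multiply two polynomials given as coefficient lists (exact integers).
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return out

    def hand_counts(copies=2, types=24):
        # Coefficients of (1 + x + ... + x^copies)^types; entry k counts
        # the distinguishable k-card hands.
        poly = [1]
        factor = [1] * (copies + 1)
        for _ in range(types):
            poly = poly_mult(poly, factor)
        return poly

    counts = hand_counts()
    print(counts[12])   # distinguishable 12-card single-deck Pinochle hands
    ```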

  11. Frequency assignments for HFDF receivers in a search and rescue network

    NASA Astrophysics Data System (ADS)

    Johnson, Krista E.

    1990-03-01

    This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single-objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
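
    A minimal sketch of the weighted-sum step described above: two objective vectors for a tiny, hypothetical receiver-to-frequency assignment LP are combined with varying weights, and each scalarized problem is solved as an ordinary linear program (scipy.optimize.linprog standing in for SAS NETFLOW). The problem data are illustrative assumptions; because the assignment constraints are totally unimodular, the LP optima are integral, mirroring the structure noted in the thesis.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    n_recv, n_freq = 3, 3
    c1 = rng.random((n_recv, n_freq)).ravel()   # objective 1 scores (hypothetical)
    c2 = rng.random((n_recv, n_freq)).ravel()   # objective 2 scores (hypothetical)

    # Each receiver is assigned exactly one frequency.
    A_eq = np.zeros((n_recv, n_recv * n_freq))
    for r in range(n_recv):
        A_eq[r, r * n_freq:(r + 1) * n_freq] = 1.0
    b_eq = np.ones(n_recv)

    for w in (0.0, 0.25, 0.5, 0.75, 1.0):
        c = -(w * c1 + (1.0 - w) * c2)          # maximize -> minimize the negative
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
        chosen = res.x.reshape(n_recv, n_freq).argmax(axis=1)
        print(f"w={w:.2f}  assignment={chosen}  weighted value={-res.fun:.3f}")
    ```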

  12. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective function parameters of the optimization problem can be calculated is investigated. The paper shows that the working tool path speed is not constant; it depends on several parameters described in the paper. Relations are presented for the working tool path speed as a function of the number of NC program frames, the length of straight cuts, and the part configuration. Based on the results obtained, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and exit tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard cutting techniques, and examines the effectiveness of their application. Future research directions are indicated at the end of the paper.

  13. Data: The Common Thread & Tie That Binds Exposure Science

    EPA Science Inventory

    While a number of ongoing efforts exist aimed at empirically measuring or modeling exposure data, problems persist regarding availability and access to this data. Innovations in managing proprietary data, establishing data quality, standardization of data sets, and sharing of exi...

  14. Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Stephen Andrew

    1997-11-01

    Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally hyperviscosities are applied to simulations with a fixed exponent that must be arbitrarily chosen. Expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or than using a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.

  15. Changes in psychiatric symptoms among persons with methamphetamine dependence predicts changes in severity of drug problems but not frequency of use.

    PubMed

    Polcin, Douglas L; Korcha, Rachael; Bond, Jason; Galloway, Gantt; Nayak, Madhabika

    2016-01-01

    Few studies have examined how changes in psychiatric symptoms over time are associated with changes in drug use and severity of drug problems. No studies have examined these relationships among methamphetamine (MA)-dependent persons receiving motivational interviewing within the context of standard outpatient treatment. Two hundred seventeen individuals with MA dependence were randomly assigned to a standard single session of motivational interviewing (MI) or an intensive 9-session model of MI. Both groups received standard outpatient group treatment. The Addiction Severity Index (ASI) and timeline follow-back (TLFB) for MA use were administered at treatment entry and 2-, 4-, and 6-month follow-ups. Changes in ASI psychiatric severity between baseline and 2 months predicted changes in ASI drug severity during the same time period, but not changes on measures of MA use. Item analysis of the ASI drug scale showed that psychiatric severity predicted how troubled or bothered participants were by their drug use, how important they felt it was for them to get treatment, and the number of days they experienced drug problems. However, it did not predict the number of days they used drugs in the past 30 days. These associations did not differ between study conditions, and they persisted when psychiatric severity and outcomes were compared across 4- and 6-month time periods. Results are among the first to track how changes in psychiatric severity over time are associated with changes in MA use and severity of drug problems. Treatment efforts targeting reduction of psychiatric symptoms among MA-dependent persons might be helpful in reducing the level of distress and problems associated with MA use but not how often it is used. There is a need for additional research describing the circumstances under which the experiences and perceptions of drug-related problems diverge from frequency of consumption.

  16. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
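
    A minimal sketch of the incremental (delta, or correction) form referenced above: rather than solving A x = b in standard form, one repeatedly solves an approximate system for the correction, M dx = b - A x, and updates x. The small test matrix and the choice of M (the diagonal of A) are illustrative assumptions, not the paper's flow solver.

    ```python
    import numpy as np

    def incremental_solve(A, b, n_iters=200, tol=1e-10):
        M = np.diag(np.diag(A))            # cheap, well-conditioned approximation to A
        x = np.zeros_like(b)
        for _ in range(n_iters):
            residual = b - A @ x           # defect of the current iterate
            if np.linalg.norm(residual) < tol * np.linalg.norm(b):
                break
            dx = np.linalg.solve(M, residual)   # correction (delta) equation
            x += dx
        return x

    rng = np.random.default_rng(6)
    A = np.diag(np.arange(2.0, 12.0)) + 0.1 * rng.standard_normal((10, 10))
    b = rng.standard_normal(10)
    print(np.linalg.norm(A @ incremental_solve(A, b) - b))   # small residual
    ```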

  17. Cluster Stability Estimation Based on a Minimal Spanning Trees Approach

    NASA Astrophysics Data System (ADS)

    Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora

    2009-08-01

    Among the areas of data and text mining which are employed today in science, economics and technology, clustering theory serves as a preprocessing step in data analysis. However, there are many open questions still waiting for a theoretical and practical treatment, e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples. Actually, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis, of well-mingled samples within the clusters, leads to an asymptotic normal distribution of the considered statistic. Resting upon this fact, the standard score of this edge count is computed, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments, presented in the paper, demonstrate the ability of the approach to detect the true number of clusters.
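
    A minimal sketch of the cross-sample edge count underlying the Friedman-Rafsky statistic used above: pool two samples assigned to one cluster, build a minimal spanning tree of the pooled points, and count MST edges joining points from different samples. The Gaussian toy data are an illustrative assumption.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import cdist

    def cross_sample_edges(sample_a, sample_b):
        pooled = np.vstack([sample_a, sample_b])
        labels = np.array([0] * len(sample_a) + [1] * len(sample_b))
        mst = minimum_spanning_tree(cdist(pooled, pooled)).tocoo()
        # Count MST edges whose endpoints come from different samples.
        return int(np.sum(labels[mst.row] != labels[mst.col]))

    rng = np.random.default_rng(3)
    a = rng.normal(0.0, 1.0, size=(40, 2))
    b = rng.normal(0.0, 1.0, size=(40, 2))   # well mingled, so many cross edges
    print(cross_sample_edges(a, b))
    ```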

  18. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

    In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).

  19. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm. The original FA uses bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performs better at lower dimensions than at higher ones.
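
    A minimal sketch contrasting the comparison counts of bubble sort and quick sort when ranking items by "brightness", the swap examined in the paper; the random values and problem size are illustrative assumptions.

    ```python
    import random

    def bubble_sort(values):
        a, comparisons = list(values), 0
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a, comparisons

    def quick_sort(values):
        comparisons = 0
        def qs(a):
            nonlocal comparisons
            if len(a) <= 1:
                return a
            pivot, rest = a[0], a[1:]
            comparisons += len(rest)
            return qs([x for x in rest if x < pivot]) + [pivot] + \
                   qs([x for x in rest if x >= pivot])
        return qs(list(values)), comparisons

    random.seed(0)
    brightness = [random.random() for _ in range(200)]
    print(bubble_sort(brightness)[1])   # O(n^2) comparisons
    print(quick_sort(brightness)[1])    # O(n log n) comparisons on average
    ```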

  20. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of the constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. The high Weissenberg number problem (HWNP) usually produces a lack of convergence of the numerical algorithms. Even though the question whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT model formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the 2-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.

  1. Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations

    NASA Astrophysics Data System (ADS)

    Carter, Jeffrey R.; Simon, Wayne E.

    1990-08-01

    Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
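
    A minimal sketch of the 1-4I benchmark described above: two equiprobable d-dimensional Gaussian classes with covariances I and 4I, together with the likelihood-ratio (Bayes) rule against whose error a trained classifier would be judged. The dimension and sample size are illustrative assumptions.

    ```python
    import numpy as np

    def sample_1_4I(n, d, rng):
        labels = rng.integers(0, 2, size=n)          # equal class priors
        scales = np.where(labels == 0, 1.0, 2.0)     # std 1 vs std 2 (cov I vs 4I)
        points = rng.standard_normal((n, d)) * scales[:, None]
        return points, labels

    def bayes_rule(points, d):
        # The log-likelihood ratio of N(0, 4I) vs N(0, I) reduces to a radius test:
        # choose class 2 when ||x||^2 > (8/3) * d * ln 2.
        r2 = np.sum(points ** 2, axis=1)
        return (r2 > (8.0 / 3.0) * d * np.log(2.0)).astype(int)

    rng = np.random.default_rng(4)
    X, y = sample_1_4I(20000, 8, rng)
    print("empirical Bayes error:", np.mean(bayes_rule(X, 8) != y))
    ```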

  2. Why threefold-replication of families?

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, Gerald L.

    1998-04-01

    In spite of the many successes of the standard model of particle physics, the observed proliferation of matter-fields, in the form of "replicated" generations or families, is a major unsolved problem. In this paper, I explore some of the algebraic, geometric and physical consequences of a new organizing principle for fundamental fermions (quarks and leptons) (Gerald L. Fitzpatrick, The Family Problem--New Internal Algebraic and Geometric Regularities, Nova Scientific Press, Issaquah, Washington, 1997; ISBN 0-9655695-0-0; see http://www.tp.umu.se/TIPTOP and http://www.amazon.com). The essence of the new organizing principle is the idea that the standard-model concept of scalar fermion numbers f can be generalized. In particular, a "generalized fermion number," which consists of a 2×2 matrix F that "acts" on an internal 2-space, instead of spacetime, is taken to describe certain internal properties of fundamental fermions. This generalization automatically introduces internal degrees of freedom that "explain," among other things, family replication and the number (three) of families observed in nature.

  3. Raptor electrocution on power lines: Current issues and outlook

    USGS Publications Warehouse

    Lehman, Robert N.

    2001-01-01

    Electrocution on power lines is one of many human-caused mortality factors that affect raptors. Cost-effective and relatively simple raptor-safe standards for power line modification and construction have been available for over 25 years. During the 1970s and early 1980s, electric industry efforts to reduce raptor electrocutions were very coordinated and proactive, but predictions about resolving the problem were overly optimistic. Today, raptors continue to be electrocuted, possibly in large numbers. The electrocution problem has not been resolved, partly because of the sheer number of potentially lethal power poles in use and partly because electrocution risks may be more pervasive and sometimes less conspicuous than once believed. Also, responses to the problem by individual utilities have not been uniform, and deregulation of the electric industry during the 1990s may have deflected attention from electrocution issues. To control raptor electrocutions in the future, the industry must increase information sharing and technology transfer, increase efforts to retrofit lethal power poles, and above all ensure that every new and replacement line constructed incorporates raptor-safe standards at all phases of development. Finally, responsibility for the electrocution problem must be shared. Federal, state, and local governments, academic institutions, the conservation community, and the consumer all can play critical roles in an effort that will, by necessity, extend well into the new century.

  4. Early Mathematics Fluency with CCSSM

    ERIC Educational Resources Information Center

    Matney, Gabriel T.

    2014-01-01

    To develop second-grade students' confidence and ease, this author presents examples of learning tasks (Number of the Day, Word Problem Solving, and Modeling New Mathematical Ideas) that align with Common Core State Standards for Mathematics and that build mathematical fluency to promote students' creative expression of mathematical…

  5. Partial Least Square Analyses of Landscape and Surface Water Biota Associations in the Savannah River Basin

    EPA Science Inventory

    Ecologists are often faced with the problems of small sample sizes, correlated and large numbers of predictors, and high noise-to-signal relationships. This necessitates excluding important variables from the model when applying standard multiple or multivariate regression analyses. In ...

  6. Axions, Inflation and String Theory

    NASA Astrophysics Data System (ADS)

    Mack, Katherine J.; Steinhardt, P. J.

    2009-01-01

    The QCD axion is the leading contender to rid the standard model of the strong-CP problem. If the Peccei-Quinn symmetry breaking occurs before inflation, which is likely in string theory models, axions manifest themselves cosmologically as a form of cold dark matter with a density determined by the axion's initial conditions and by the energy scale of inflation. Constraints on the dark matter density and on the amplitude of CMB isocurvature perturbations currently demand an exponential degree of fine-tuning of both axion and inflationary parameters beyond what is required for particle physics. String theory models generally produce large numbers of axion-like fields; the prospect that any of these fields exist at scales close to that of the QCD axion makes the problem drastically worse. I will discuss the challenge of accommodating string-theoretic axions in standard inflationary cosmology and show that the fine-tuning problems cannot be fully addressed by anthropic principle arguments.

  7. Too easily lead? Health effects of gasoline additives.

    PubMed Central

    Menkes, D B; Fawcett, J P

    1997-01-01

    Octane-enhancing constituents of gasoline pose a number of health hazards. This paper considers the relative risks of metallic (lead, manganese), aromatic (e.g., benzene), and oxygenated additives in both industrialized and developing countries. Technological advances, particularly in industrialized countries, have allowed the progressive removal of lead from gasoline and the increased control of exhaust emissions. The developing world, by contrast, has relatively lax environmental standards and faces serious public health problems from vehicle exhaust and the rapid increase in automobile use. Financial obstacles to the modernization of refineries and vehicle fleets compound this problem and the developing world continues to import large quantities of lead additives and other hazardous materials. Progress in decreasing environmental health problems depends both on the adoption of international public health standards and on efforts to decrease dependence on the private automobile for urban transport. PMID:9171982

  8. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
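
    A minimal sketch of the Metropolis algorithm mentioned above, sampling a one-dimensional distribution p(x) proportional to exp(-x^4 + x^2); the target density, proposal width, and chain length are illustrative assumptions.

    ```python
    import numpy as np

    def metropolis(log_p, n_steps, step=1.0, x0=0.0, seed=5):
        rng = np.random.default_rng(seed)
        samples, x = np.empty(n_steps), x0
        for i in range(n_steps):
            proposal = x + step * rng.standard_normal()
            # Accept with probability min(1, p(proposal) / p(x)).
            if np.log(rng.random()) < log_p(proposal) - log_p(x):
                x = proposal
            samples[i] = x
        return samples

    chain = metropolis(lambda x: -x**4 + x**2, 50_000)
    print(chain.mean(), chain.var())   # mean near 0 by symmetry
    ```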

  9. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  10. Standard Model—axion—seesaw—Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    NASA Astrophysics Data System (ADS)

    Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas; Tamarit, Carlos

    2017-08-01

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos Ni, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value vσ ~ 1011 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is comprised by a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  11. Geometric representation methods for multi-type self-defining remote sensing data sets

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1980-01-01

    Efficient and convenient representation of remote sensing data is highly important for effective utilization. The task of merging different data types is currently handled by treating each case as an individual problem. A description is provided of work carried out to standardize the multidata merging process. The basic concept of the new approach is that of the self-defining data set (SDDS). The creation of a standard is proposed. This standard would be such that data of interest in a large number of earth resources remote sensing applications would be in a format which allows convenient and automatic merging. Attention is given to details regarding the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.

  12. The Root of the Problem

    ERIC Educational Resources Information Center

    Grosser-Clarkson, Dana L.

    2015-01-01

    The Common Core State Standards for Mathematics expect students to build on their knowledge of the number system, expressions and equations, and functions throughout school mathematics. For example, students learn that they can add something to both sides of an equation and that doing so will not affect the equivalency; however, squaring both…

  13. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  14. Tenure and Employment Contracts: Evolving Standards for Principals. A Legal Memorandum.

    ERIC Educational Resources Information Center

    Buckner, Kermit

    Decision makers in education frequently identify tenure laws as a barrier to improving student achievement. Many principals believe statutes and case law adequately protect employees, and join others calling for modifications in tenure policies. The inadequate number of qualified applicants for principal positions is a national problem that…

  15. The Problem of Grade Inflation.

    ERIC Educational Resources Information Center

    Cahn, Steven M.

    A number of factors have contributed to the inflation of grades in higher education, including: the belief that grades traumatize and dehumanize students; the conviction that academic standards are unfair in light of the equality of each individual; teachers' hesitation to fail high-risk or open enrollment students; the influence of popular…

  16. AGARD standard aeroelastic configurations for dynamic response. Candidate configuration I.-wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1987-01-01

    To promote the evaluation of existing and emerging unsteady aerodynamic codes and methods for applying them to aeroelastic problems, especially for the transonic range, a limited number of aerodynamic configurations and experimental dynamic response data sets are to be designated by the AGARD Structures and Materials Panel as standards for comparison. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. This report presents the information needed to perform flutter calculations for the first candidate standard configuration for dynamic response along with the related experimental flutter data.

  17. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
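
    As a rough illustration of the "sketching" step described above (not the RGA/PCGA implementation itself, which is coded in Julia within MADS), the NumPy sketch below multiplies a tall synthetic linear inverse problem by a short Gaussian random matrix and compares the reduced least-squares solution with the full one. All sizes and names here are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic linear inverse problem: d = G m + noise, with many more
      # observations (n_obs) than model parameters (n_par).
      n_obs, n_par = 20_000, 50
      G = rng.normal(size=(n_obs, n_par))
      m_true = rng.normal(size=n_par)
      d = G @ m_true + 0.01 * rng.normal(size=n_obs)

      # "Sketching" matrix S (k x n_obs, k << n_obs) compresses the data while
      # approximately preserving the information content of the least-squares problem.
      k = 500
      S = rng.normal(size=(k, n_obs)) / np.sqrt(k)

      m_full, *_ = np.linalg.lstsq(G, d, rcond=None)          # full-size solve
      m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None) # reduced solve

      print("relative error, full  :", np.linalg.norm(m_full - m_true) / np.linalg.norm(m_true))
      print("relative error, sketch:", np.linalg.norm(m_sketch - m_true) / np.linalg.norm(m_true))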

  18. Journal of the United States Artillery. Volume 56, Number 1, January 1922

    DTIC Science & Technology

    1922-01-01

    its development and recognition as a real component of the Army of the United States. At the same time, many of us, believing the National Guard ... exist in the Ninth Corps Area, it is believed that similar conditions and problems are being faced in other Corps Areas. DECENTRALIZATION SUGGESTION 1. It ... Guard organizations should be measured by the same standard. On the face of it, if the standard as to instant readiness and variety and complexity in

  19. The Shock and Vibration Digest. Volume 18, Number 4

    DTIC Science & Technology

    1986-04-01

    determined by this procedure decreases with the square root of the number of records, though this equation includes several standard problems, for ... earthquake design for nuclear power plants in the FRG are recorded in the German nuclear safety ... Vibrations of Nuclear Fuel Assemblies: A Simplified ... Publications and Printing Policy Committee. SVIC NOTES: MANY THANKS ...

  20. Direct and Indirect Effects of Behavioral Parent Training on Infant Language Production

    PubMed Central

    Bagner, Daniel M.; Garcia, Dainelys; Hill, Ryan

    2016-01-01

    Given the strong association between early behavior problems and language impairment, we examined the effect of a brief home-based adaptation of Parent–child Interaction Therapy on infant language production. Sixty infants (55% male; mean age 13.47 ± 1.31 months) were recruited at a large urban primary care clinic and were included if their scores exceeded the 75th percentile on a brief screener of early behavior problems. Families were randomly assigned to receive the home-based parenting intervention or standard pediatric primary care. The observed number of infant total (i.e., token) and different (i.e., type) utterances spoken during an observation of an infant-led play and a parent-report measure of infant externalizing behavior problems were examined at pre- and post-intervention and at 3- and 6-month follow-ups. Infants receiving the intervention demonstrated a significantly higher number of observed different and total utterances at the 6-month follow-up compared to infants in standard care. Furthermore, there was an indirect effect of the intervention on infant language production, such that the intervention led to decreases in infant externalizing behavior problems from pre- to post-intervention, which, in turn, led to increases in infant different utterances at the 3- and 6-month follow-ups and total utterances at the 6-month follow-up. Results provide initial evidence for the effect of this brief and home-based intervention on infant language production, including the indirect effect of the intervention on infant language through improvements in infant behavior, highlighting the importance of targeting behavior problems in early intervention. PMID:26956651

  1. A School Voucher Program for Baltimore City

    ERIC Educational Resources Information Center

    Lips, Dan

    2005-01-01

    Baltimore City's public school system is in crisis. Academically, the school system fails on any number of measures. The city's graduation rate is barely above 50 percent and students continually lag well behind state averages on standardized tests. Adding to these problems is the school system's current fiscal crisis, created by years of fiscal…

  2. The Price of a Good Education

    ERIC Educational Resources Information Center

    Schachter, Ron

    2010-01-01

    There are plenty of statistics available for measuring the performance, potential and problems of school districts, from standardized test scores to the number of students eligible for free or reduced-price lunch. Last June, another metric came into sharper focus when the U.S. Census Bureau released its latest state-by-state data on per-pupil…

  3. Designing a VOIP Based Language Test

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L.

    2015-01-01

    Assessing speaking is one of the most difficult tasks in computer based language testing. Many countries all over the world face the need to implement standardized language tests where speaking tasks are commonly included. However, a number of problems make them rather impractical such as the costs, the personnel involved, the length of time for…

  4. Teacher Technology Acceptance and Usage for the Middle School Classroom

    ERIC Educational Resources Information Center

    Stone, Wilton, Jr.

    2014-01-01

    According to the U.S. Department of Education National Center for Education Statistics, students in the United States routinely perform poorly on international assessments. This study was focused specifically on the problem of the decrease in the number of middle school students meeting the requirements for one state's standardized tests for…

  5. Ethical issues in health workforce development.

    PubMed Central

    Cash, Richard

    2005-01-01

    Increasing the numbers of health workers and improving their skills requires that countries confront a number of ethical dilemmas. The ethical considerations involved in answering five important questions on enabling health workers to deal appropriately with the circumstances in which they must work are described. These include the problems of the standards of training and practice required in countries with differing levels of socioeconomic development and different priority diseases; how a society can be assured that health practitioners are properly trained; how a health system can support its workers; the diversion of health workers and training institutions; and the teaching of ethical principles to student health workers. The ethics of setting standards for the skills and care provided by traditional health-care practitioners are also discussed. PMID:15868019

  6. Properties of wavelet discretization of Black-Scholes equation

    NASA Astrophysics Data System (ADS)

    Finěk, Václav

    2017-07-01

    This note is concerned with a wavelet-based numerical solution of the Black-Scholes equation for pricing European options. Using wavelet methods, the continuous problem is transformed into a well-conditioned discrete problem. When a non-symmetric problem is given, squaring yields a symmetric positive definite formulation; however, squaring usually makes the condition number of the discrete problem substantially worse. We show here that in wavelet coordinates the symmetric part of the discretized equation dominates over the unsymmetric part in the standard economic environment with low interest rates. This provides some justification for using a fractional-step method with implicit treatment of the symmetric part of the weak form of the Black-Scholes operator and with explicit treatment of its unsymmetric part. A well-conditioned discrete problem is then obtained.
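
    The remark that squaring worsens conditioning reflects the general fact that, in the 2-norm, cond(A^T A) = cond(A)^2. A small NumPy check of this fact is sketched below; the matrix is a generic random example, not a wavelet-discretized Black-Scholes operator.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(200, 200)) + 5.0 * np.eye(200)   # a generic non-symmetric matrix

      cond_A = np.linalg.cond(A)            # 2-norm condition number of A
      cond_AtA = np.linalg.cond(A.T @ A)    # condition number after "squaring"

      print(f"cond(A)     = {cond_A:.3e}")
      print(f"cond(A^T A) = {cond_AtA:.3e}")
      print(f"cond(A)**2  = {cond_A**2:.3e}")   # matches cond(A^T A) up to rounding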

  7. An improved random walk algorithm for the implicit Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
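
    A highly simplified sketch of the per-cell group-splitting idea is given below: frequency groups whose optical thickness exceeds a cutoff are collapsed into a single gray opacity for Random Walk, while the remaining groups would stay with standard IMC. The cutoff value and the weighted-average collapse used here are illustrative assumptions, not the exact formulation of the paper.

      import numpy as np

      def partially_gray_collapse(sigma_g, b_g, tau_g, tau_cutoff=5.0):
          """Split the frequency groups of one spatial cell into an optically thick part
          (eligible for Random Walk) and an optically thin part (standard IMC).

          sigma_g : group opacities [1/cm]
          b_g     : collapse weights (e.g. a normalized spectrum) -- assumed for illustration
          tau_g   : group optical thicknesses of the cell
          Returns the mask of thick groups and a single collapsed "gray" opacity.
          """
          thick = tau_g >= tau_cutoff                          # groups treated by Random Walk
          w = b_g[thick]
          sigma_rw = np.sum(w * sigma_g[thick]) / np.sum(w)    # weighted group collapse
          return thick, sigma_rw

      # toy data: 8 frequency groups in one cell of width 0.5 cm
      sigma_g = np.array([50.0, 40.0, 20.0, 8.0, 2.0, 0.9, 0.3, 0.1])
      b_g = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.07, 0.03])
      tau_g = sigma_g * 0.5

      thick, sigma_rw = partially_gray_collapse(sigma_g, b_g, tau_g)
      print("RW-eligible groups:", np.nonzero(thick)[0], " collapsed opacity:", sigma_rw)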

  8. Standard Model–axion–seesaw–Higgs portal inflation. Five problems of particle physics and cosmology solved in one stroke

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas

    We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos N_i, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value v_σ ∼ 10^11 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw-generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.

  9. Teaching cross-cultural communication skills online: a multi-method evaluation.

    PubMed

    Lee, Amy L; Mader, Emily M; Morley, Christopher P

    2015-04-01

    Cultural competency education is an important and required part of undergraduate medical education. The objective of this study was to evaluate whether an online cross-cultural communication module could increase student use of cross-cultural communication questions that assess the patient's definition of the problem, the way the problem affects their life, their concerns about the problem, and what the treatment should be (PACT). We used multi-method assessment of students assigned to family medicine clerkship blocks that were randomized to receive online cultural competency and PACT training added to their standard curriculum or to a control group receiving the standard curriculum only. Outcomes included comparison, via analysis of variance, of number of PACT questions used during an observed Standardized Patient Exercise, end-of-year OSCE scores, and qualitative analysis of student narratives. Students (n=119) who participated in the online module (n=60) demonstrated increased use of cross-cultural communication PACT questions compared to the control group (n=59) and generally had positive themes emerge from their reflective writing. The module had the biggest impact on students who later went on to match in high communication specialties. Online teaching of cross-cultural communication skills can be effective at changing medical student behavior.

  10. [Inspection by infection control team of the University Hospital, Faculty of Dentistry, Tokyo Medical and Dental University].

    PubMed

    Sunakawa, Mitsuhiro; Matsumoto, Hiroyuki; Harasawa, Hideki; Tsukikawa, Wakana; Takagi, Yuzo; Suda, Hideaki

    2006-06-01

    Factors affecting infection are the existence of infectious microorganisms, the sensitivity of hosts, the number of microorganisms, and infectious routes. Efforts to prevent infection focus on not allowing these factors to reach the threshold level. Inspection by an infection control team (ICT) of a hospital is one countermeasure for preventing nosocomial infection. We summarize here the problems for complete prevention of nosocomial infection based on the results of inspection by our ICT, so that staff working in the hospital can recognize the importance of preventing nosocomial infection. The following were commonly observed problems in our clinics found by the ICT: (1) incomplete practice of standard precautions and/or isolation precautions, (2) noncompliance with guidelines for the prevention of cross-infection, and (3) inappropriate management of medical rejectamenta. Infection control can be accomplished by strictly observing the standard precautions and isolation precautions. The ICT inspection round in the hospital could be an effective method to clarify and overcome the problems involved in infection control.

  11. Reliability of engineering methods of assessment the critical buckling load of steel beams

    NASA Astrophysics Data System (ADS)

    Rzeszut, Katarzyna; Folta, Wiktor; Garstecki, Andrzej

    2018-01-01

    In this paper the reliability assessment of the buckling resistance of a steel beam is presented. A number of parameters, such as the boundary conditions, the section height-to-width ratio, the thickness, and the span, are considered. The examples are solved using FEM procedures and formulas proposed in the literature and standards. In the case of the numerical models the following parameters are investigated: support conditions, mesh size, load conditions, and steel grade. The numerical results are compared with approximate solutions calculated according to the standard formulas. It was observed that for high-slenderness sections the deformation of the cross-section had to be described by the following modes: longitudinal and transverse displacement, warping, rotation, and distortion of the cross-section shape. In this case we face an interactive buckling problem. Unfortunately, neither the EN Standards nor the subject literature give closed-form formulas to solve these problems. For this reason the reliability of the critical bending moment calculations is discussed.

  12. System of HPC content archiving

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Ivashchenko, A.

    2017-12-01

    This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques, and approaches. The main challenges of storing a large number of text documents, defined at the problem-formulation stage, are addressed with functionality such as full-text search and clustering of documents by content. The main system features can be described as a distributed multilevel architecture with flexible, interchangeable components, achieved by encapsulating standard functionality in independent executable modules.

  13. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  14. Hot air drum evaporator. [Patent application

    DOEpatents

    Black, R.L.

    1980-11-12

    An evaporation system for aqueous radioactive waste uses standard 30 and 55 gallon drums. Waste solutions form cascading water sprays as they pass over a number of trays arranged in a vertical stack within a drum. Hot dry air is circulated radially of the drum through the water sprays thereby removing water vapor. The system is encased in concrete to prevent exposure to radioactivity. The use of standard 30 and 55 gallon drums permits an inexpensive compact modular design that is readily disposable, thus eliminating maintenance and radiation build-up problems encountered with conventional evaporation systems.

  15. Hot air drum evaporator

    DOEpatents

    Black, Roger L.

    1981-01-01

    An evaporation system for aqueous radioactive waste uses standard 30 and 55 gallon drums. Waste solutions form cascading water sprays as they pass over a number of trays arranged in a vertical stack within a drum. Hot dry air is circulated radially of the drum through the water sprays thereby removing water vapor. The system is encased in concrete to prevent exposure to radioactivity. The use of standard 30 and 55 gallon drums permits an inexpensive compact modular design that is readily disposable, thus eliminating maintenance and radiation build-up problems encountered with conventional evaporation systems.

  16. User Satisfaction and Service Transactions for a Reference Department in an Illinois Community College Learning Resources Center.

    ERIC Educational Resources Information Center

    Cornish, Nancy M.

    Users of the reference library at Blackhawk Community College (Illinois) were surveyed to determine user satisfaction and the total number of transactions. The survey's objective was to pinpoint problem areas, supply objective information, develop guidelines and standards, and support needed improvements or the continued maintenance of good…

  17. Developing and Planning a Texas Based Homeschool Curriculum

    ERIC Educational Resources Information Center

    Terry, Bobby K.

    2011-01-01

    Texas has some of the lowest SAT scores in the nation. They are ranked 36th nationwide in graduation rates and teacher salaries rank at number 33. The public school system in Texas has problems with overcrowding, violence, and poor performance on standardized testing. Currently 300,000 families have opted out of the public school system in order…

  18. Use of Standardized Test Scores to Predict Success in a Computer Applications Course

    ERIC Educational Resources Information Center

    Harris, Robert V.

    2014-01-01

    In this educational study, the research problem was that each semester a variable number of community college students are unable to complete an introductory computer applications course at a community college in the state of Mississippi with a successful course letter grade. Course failure, or non-success, at the collegiate level is a negative…

  19. A Study of Three Intrinsic Problems of the Classic Discrete Element Method Using Flat-Joint Model

    NASA Astrophysics Data System (ADS)

    Wu, Shunchuan; Xu, Xueliang

    2016-05-01

    Discrete element methods have been proven to offer a new avenue for capturing the mechanics of geo-materials. The standard bonded-particle model (BPM), a classic discrete element method, has been applied to a wide range of problems related to rock and soil. However, three intrinsic problems are associated with using the standard BPM: (1) an unrealistically low unconfined compressive strength to tensile strength (UCS/TS) ratio, (2) an excessively low internal friction angle, and (3) a linear strength envelope, i.e., a low Hoek-Brown (HB) strength parameter m_i. After summarizing the underlying reasons for these problems by analyzing previous researchers' work, the flat-joint model (FJM) is used to calibrate Jinping marble and is found to closely match its macro-properties. A parametric study is carried out to systematically evaluate the micro-parameters' effect on these three macro-properties. The results indicate that (1) the UCS/TS ratio increases with increasing average coordination number (CN) and bond cohesion to tensile strength ratio, but it first decreases and then increases with increasing crack density (CD); (2) the HB strength parameter m_i has positive relationships to the crack density (CD), bond cohesion to tensile strength ratio, and local friction angle, but a negative relationship to the average coordination number (CN); (3) the internal friction angle increases as the crack density (CD), bond cohesion to tensile strength ratio, and local friction angle increase; (4) the residual friction angle has little effect on these three macro-properties and mainly influences post-peak behavior. Finally, a new calibration procedure is developed, which not only addresses these three problems, but also considers the post-peak behavior.

  20. Direct and Indirect Effects of Behavioral Parent Training on Infant Language Production.

    PubMed

    Bagner, Daniel M; Garcia, Dainelys; Hill, Ryan

    2016-03-01

    Given the strong association between early behavior problems and language impairment, we examined the effect of a brief home-based adaptation of Parent-child Interaction Therapy on infant language production. Sixty infants (55% male; mean age 13.47±1.31 months) were recruited at a large urban primary care clinic and were included if their scores exceeded the 75th percentile on a brief screener of early behavior problems. Families were randomly assigned to receive the home-based parenting intervention or standard pediatric primary care. The observed number of infant total (i.e., token) and different (i.e., type) utterances spoken during an observation of an infant-led play and a parent-report measure of infant externalizing behavior problems were examined at pre- and post-intervention and at 3- and 6-month follow-ups. Infants receiving the intervention demonstrated a significantly higher number of observed different and total utterances at the 6-month follow-up compared to infants in standard care. Furthermore, there was an indirect effect of the intervention on infant language production, such that the intervention led to decreases in infant externalizing behavior problems from pre- to post-intervention, which, in turn, led to increases in infant different utterances at the 3- and 6-month follow-ups and total utterances at the 6-month follow-up. Results provide initial evidence for the effect of this brief and home-based intervention on infant language production, including the indirect effect of the intervention on infant language through improvements in infant behavior, highlighting the importance of targeting behavior problems in early intervention. Copyright © 2015. Published by Elsevier Ltd.

  1. User Interface Problems of a Nationwide Inpatient Information System: A Heuristic Evaluation.

    PubMed

    Atashi, Alireza; Khajouei, Reza; Azizi, Amirabbas; Dadashi, Ali

    2016-01-01

    While studies have shown that usability evaluation can uncover many design problems of health information systems, the usability of health information systems in developing countries that use their native language is poorly studied. The objective of this study was to evaluate the usability of a nationwide inpatient information system used in many academic hospitals in Iran. Three trained usability evaluators independently evaluated the system using Nielsen's 10 usability heuristics. The evaluators combined the identified problems into a single list and independently rated the severity of the problems. We statistically compared the number and severity of problems identified by HIS-experienced and non-experienced evaluators. A total of 158 usability problems were identified. After removing duplications, 99 unique problems were left. The highest mismatch with usability principles was related to the "Consistency and standards" heuristic (25%) and the lowest to "Flexibility and efficiency of use" (4%). The average severity of problems ranged from 2.4 (major problem) to 3.3 (catastrophe). The evaluator experienced with the HIS identified significantly more problems and gave higher severities to problems (p<0.02). Heuristic evaluation identified a high number of usability problems in a widely used inpatient information system in many academic hospitals. These problems, if they remain unsolved, may waste users' and patients' time, increase errors, and ultimately threaten patient safety. Many of them can be fixed with simple redesign solutions such as using clear labels and better layouts. This study suggests conducting further studies to confirm the findings concerning the effect of evaluator experience on the results of heuristic evaluation.

  2. High performance genetic algorithm for VLSI circuit partitioning

    NASA Astrophysics Data System (ADS)

    Dinu, Simona

    2016-12-01

    Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into almost equally sized k sub-graphs while minimizing the number of edges cut, i.e., minimizing the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Experimental studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another direction for future research.
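
    A much-reduced sketch of an island-model genetic algorithm for balanced two-way min-cut partitioning is shown below; it uses a fixed ring-migration scheme and a simple imbalance penalty in place of the fuzzy controller described above, and all parameter values are arbitrary choices for illustration.

      import random

      def cut_and_balance(assign, edges):
          """Penalized fitness: number of cut edges plus a partition-size imbalance penalty."""
          cut = sum(1 for u, v in edges if assign[u] != assign[v])
          imbalance = abs(sum(assign) - len(assign) / 2)
          return cut + 2.0 * imbalance            # penalty weight 2.0 is an arbitrary choice

      def evolve_island(pop, edges, generations, mut_rate=0.02):
          n = len(pop[0])
          for _ in range(generations):
              pop.sort(key=lambda a: cut_and_balance(a, edges))
              parents = pop[: len(pop) // 2]                    # truncation selection
              children = []
              while len(children) < len(pop) - len(parents):
                  p1, p2 = random.sample(parents, 2)
                  cx = random.randrange(1, n)                   # one-point crossover
                  child = p1[:cx] + p2[cx:]
                  child = [b ^ 1 if random.random() < mut_rate else b for b in child]
                  children.append(child)
              pop = parents + children
          return pop

      def island_ga(n_nodes, edges, n_islands=4, pop_size=30, epochs=10, gens=20):
          islands = [[[random.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
                     for _ in range(n_islands)]
          for _ in range(epochs):
              islands = [evolve_island(pop, edges, gens) for pop in islands]
              # ring migration: copy each island's best individual into the next island
              bests = [min(pop, key=lambda a: cut_and_balance(a, edges)) for pop in islands]
              for i, pop in enumerate(islands):
                  pop[-1] = list(bests[(i - 1) % n_islands])
          best = min((ind for pop in islands for ind in pop),
                     key=lambda a: cut_and_balance(a, edges))
          return best, cut_and_balance(best, edges)

      if __name__ == "__main__":
          random.seed(0)
          n = 40
          edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.1]
          best, score = island_ga(n, edges)
          print("best penalized cut value:", score, " block sizes:", sum(best), n - sum(best))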

  3. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
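
    A compact sketch of the min-conflicts heuristic on the classic n-queens formulation is given below (a standard illustration of the method, not the authors' scheduling code): pick a conflicted variable at random and reassign it the value that minimizes its number of constraint violations.

      import random

      def min_conflicts_queens(n, max_steps=100_000):
          """Solve n-queens by iterative repair with the min-conflicts heuristic."""
          cols = [random.randrange(n) for _ in range(n)]   # cols[r] = column of the queen in row r

          def conflicts(r, c):
              return sum(1 for r2 in range(n) if r2 != r and
                         (cols[r2] == c or abs(cols[r2] - c) == abs(r2 - r)))

          for _ in range(max_steps):
              conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
              if not conflicted:
                  return cols                              # no constraint violations left
              r = random.choice(conflicted)                # pick a violated variable
              counts = [conflicts(r, c) for c in range(n)]
              best = min(counts)
              # repair: move to a value with the fewest conflicts, breaking ties at random
              cols[r] = random.choice([c for c, k in enumerate(counts) if k == best])
          return None                                      # no solution within the step budget

      if __name__ == "__main__":
          random.seed(3)
          solution = min_conflicts_queens(50)
          print("solved:", solution is not None)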

  4. Comparison of eigensolvers for symmetric band matrices.

    PubMed

    Moldaschl, Michael; Gansterer, Wilfried N

    2014-09-15

    We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
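
    For orientation, the sketch below shows how a symmetric band eigenproblem is typically handed to a LAPACK-backed tridiagonalization-based routine, using SciPy's eig_banded with upper band storage; it is a generic usage example, not the BD&C or BTF codes compared in the paper.

      import numpy as np
      from scipy.linalg import eig_banded

      rng = np.random.default_rng(0)
      n, u = 8, 2                       # matrix size and half-bandwidth

      # Build a random symmetric band matrix (dense, for reference only).
      A = np.zeros((n, n))
      for i in range(n):
          for j in range(i, min(n, i + u + 1)):
              A[i, j] = A[j, i] = rng.normal()

      # LAPACK-style upper band storage: a_band[u + i - j, j] = A[i, j] for i <= j.
      a_band = np.zeros((u + 1, n))
      for j in range(n):
          for i in range(max(0, j - u), j + 1):
              a_band[u + i - j, j] = A[i, j]

      w_band, v = eig_banded(a_band, lower=False)   # eigenvalues and eigenvectors
      w_ref = np.linalg.eigvalsh(A)                 # dense reference

      print("max eigenvalue difference:", np.max(np.abs(np.sort(w_band) - w_ref)))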

  5. Multiple Linear Regression Analysis of Factors Affecting Real Property Price Index From Case Study Research In Istanbul/Turkey

    NASA Astrophysics Data System (ADS)

    Denli, H. H.; Koc, Z.

    2015-12-01

    Standards-based estimation of real property values is difficult to apply consistently across time and location. Regression analysis constructs mathematical models that describe or explain relationships that may exist between variables. The problem of identifying price differences of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. Applied to real estate valuation, where properties are presented in the market with their current characteristics and quantifiers, regression analysis helps to find the factors or variables that are effective in forming the value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with housing characteristics to find a price index, based on information obtained from a real estate web page. The variables used for the analysis are age, size in m2, number of floors in the building, the floor on which the property is located, and number of rooms. The price of the property is the dependent variable, whereas the rest are independent variables. Prices from 60 properties have been used for the analysis. Locations with the same price value have been identified and plotted on the map, and equivalence curves have been drawn marking the equally valued zones as lines.
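
    A minimal version of the regression step is sketched below: an ordinary least-squares fit of price on the five listed variables using NumPy. The data rows are synthetic placeholders, not the Zeytinburnu listings used in the study.

      import numpy as np

      # Columns: age [years], size [m2], floors in building, floor of unit, rooms.
      # The values below are synthetic placeholders, not the actual survey data.
      X = np.array([
          [ 5,  90, 6, 2, 3],
          [20,  75, 5, 4, 2],
          [ 1, 120, 8, 6, 4],
          [35,  65, 4, 1, 2],
          [10, 100, 7, 3, 3],
          [ 8, 110, 6, 5, 4],
          [15,  85, 5, 2, 3],
          [ 3,  95, 7, 4, 3],
      ], dtype=float)
      price = np.array([310, 230, 480, 180, 350, 420, 290, 360], dtype=float)

      # Ordinary least squares: price ~ b0 + b1*age + ... + b5*rooms
      X_design = np.column_stack([np.ones(len(price)), X])
      coef, residuals, rank, _ = np.linalg.lstsq(X_design, price, rcond=None)

      print("intercept and coefficients:", np.round(coef, 3))
      print("predicted price of first flat:", X_design[0] @ coef)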

  6. The effects of a shared, Intranet science learning environment on the academic behaviors of problem-solving and metacognitive reflection

    NASA Astrophysics Data System (ADS)

    Parker, Mary Jo

    This study investigated the effects of a shared, Intranet science environment on the academic behaviors of problem-solving and metacognitive reflection. Seventy-eight subjects included 9th and 10th grade male and female biology students. A quasi-experimental design with pre- and post-test data collection and randomization occurring through assignment of biology classes to traditional or shared, Intranet learning groups was employed. Pilot, web-based distance education software (CourseInfo) created the Intranet learning environment. A modified ecology curriculum provided contextualization and content for traditional and shared learning environments. The effect of this environment on problem-solving was measured using the standardized Watson-Glaser Critical Thinking Appraisal test. Metacognitive reflection was measured in three ways: (a) number of concepts used, (b) number of concept links noted, and (c) number of concept nodes noted. Visual learning software, Inspiration, generated concept maps. Secondary research questions evaluated the pilot CourseInfo software for (a) tracked user movement, (b) discussion forum findings, and (c) difficulties experienced using CourseInfo software. Analysis of problem-solving group means reached no levels of significance resulting from the shared, Intranet environment. Paired t-Test of individual differences in problem-solving reached levels of significance. Analysis of metacognitive reflection by number of concepts reached levels of significance. Metacognitive reflection by number of concept links noted also reached significance. No significance was found for metacognitive reflection by number of concept nodes. No gender differences in problem-solving ability and metacognitive reflection emerged. Lack of gender differences in the shared, Intranet environment strongly suggests an equalizing effect due to the cooperative, collaborative nature of Intranet environments. Such environments appeal to, and rank high with, the female gender. Tracking learner movements in web-based science environments has metacognitive and problem-solving learner implications. CourseInfo software offers one method of informing instruction within web-based learning environments focusing on academic behaviors. A shared, technology-supported learning environment may pose one model which science classrooms can use to create equitable scientific study across gender. The lack of significant differences resulting from this environment presents one model for improvement of individual problem-solving ability and metacognitive reflection across gender.

  7. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptive and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both simultaneously. Binary encoding encodes the number of hidden-layer neurons, and real encoding encodes the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, and finally obtain the new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
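
    The sketch below shows only the RBF portion of such a model: Gaussian hidden units with fixed random centers and output weights fitted by linear least squares, standing in for the LMS refinement step; the genetic algorithm that selects the number of neurons and initial weights is omitted, and all parameter values are illustrative.

      import numpy as np

      def rbf_design(X, centers, width):
          """Gaussian RBF hidden-layer outputs for inputs X."""
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * width ** 2))

      rng = np.random.default_rng(0)

      # Toy regression task: learn y = sin(3x) from noisy samples.
      X = rng.uniform(-1, 1, size=(200, 1))
      y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

      n_hidden = 12                                   # in the paper this count is GA-optimized
      centers = rng.uniform(-1, 1, size=(n_hidden, 1))
      H = rbf_design(X, centers, width=0.3)

      # Output weights by linear least squares (plays the role of the LMS refinement).
      w, *_ = np.linalg.lstsq(H, y, rcond=None)

      X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
      y_pred = rbf_design(X_test, centers, width=0.3) @ w
      print(np.column_stack([X_test[:, 0], y_pred, np.sin(3 * X_test[:, 0])]))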

  8. Incarceration and Adult Felon Probation in Texas: A Cost Comparison. Criminal Justice Monograph. Volume IV, Number 3.

    ERIC Educational Resources Information Center

    Fraizer, Robert Lee; And Others

    In attempting to determine the cost of probation and the cost of incarceration of adult felons in Texas, it was discovered that there were no comparative figures available. A search of the literature was conducted to determine the proper standards for probation caseload management and to identify problems associated with previous cost studies. In…

  9. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  10. A Multi-Objective, Hub-and-Spoke Supply Chain Design Model For Densified Biomass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Md S. Roni; Sandra Eksioglu; Kara G. Cafferty

    In this paper we propose a model to design the supply chain for densified biomass. Rail is typically used for long-haul, high-volume shipment of densified biomass. This is the reason why a hub-and-spoke network structure is used to model this supply chain. The model is formulated as a multi-objective, mixed-integer programming problem under economic, environmental, and social criteria. The goal is to identify the feasibility of meeting the Renewable Fuel Standard (RFS) by using biomass for production of cellulosic ethanol. The focus is not just on the costs associated with meeting these standards, but also on exploring the social and environmental benefits that biomass production and processing offers by creating new jobs and reducing greenhouse gas (GHG) emissions. We develop an augmented ε-constraint method to find the exact Pareto solution to this optimization problem. We develop a case study using data from the Mid-West. The model identifies the number, capacity, and location of biorefineries needed to make use of the biomass available in the region. The model estimates the delivery cost of cellulosic ethanol under different scenarios, the number of new jobs created, and the GHG emission reductions in the supply chain.

  11. A Multi-Objective, Hub-and-Spoke Supply Chain Design Model for Densified Biomass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob J. Jacobson; Md. S. Roni; Kara G. Cafferty

    In this paper we propose a model to design the supply chain for densified biomass. Rail is typically used for long-haul, high-volume shipment of densified biomass. This is the reason why a hub-and-spoke network structure is used to model this supply chain. The model is formulated as a multi-objective, mixed-integer programming problem under economic, environmental, and social criteria. The goal is to identify the feasibility of meeting the Renewable Fuel Standard (RFS) by using biomass for production of cellulosic ethanol. The focus is not just on the costs associated with meeting these standards, but also on exploring the social and environmental benefits that biomass production and processing offers by creating new jobs and reducing greenhouse gas (GHG) emissions. We develop an augmented ε-constraint method to find the exact Pareto solution to this optimization problem. We develop a case study using data from the Mid-West. The model identifies the number, capacity, and location of biorefineries needed to make use of the biomass available in the region. The model estimates the delivery cost of cellulosic ethanol under different scenarios, the number of new jobs created, and the GHG emission reductions in the supply chain.

  12. A Unified Framework for Association Analysis with Multiple Related Phenotypes

    PubMed Central

    Stephens, Matthew

    2013-01-01

    We consider the problem of assessing associations between multiple related outcome variables, and a single explanatory variable of interest. This problem arises in many settings, including genetic association studies, where the explanatory variable is genotype at a genetic variant. We outline a framework for conducting this type of analysis, based on Bayesian model comparison and model averaging for multivariate regressions. This framework unifies several common approaches to this problem, and includes both standard univariate and standard multivariate association tests as special cases. The framework also unifies the problems of testing for associations and explaining associations – that is, identifying which outcome variables are associated with genotype. This provides an alternative to the usual, but conceptually unsatisfying, approach of resorting to univariate tests when explaining and interpreting significant multivariate findings. The method is computationally tractable genome-wide for modest numbers of phenotypes (e.g. 5–10), and can be applied to summary data, without access to raw genotype and phenotype data. We illustrate the methods on both simulated examples, and to a genome-wide association study of blood lipid traits where we identify 18 potential novel genetic associations that were not identified by univariate analyses of the same data. PMID:23861737

  13. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
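
    As a generic illustration of the solver pattern described above (GMRES wrapped around a user-supplied preconditioner), the SciPy sketch below preconditions a random sparse non-symmetric system with an incomplete LU factorization exposed as a LinearOperator; this stands in for, and is not, the approximate-factorization time-spectral preconditioner of the paper.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      rng = np.random.default_rng(0)
      n = 2000

      # A generic sparse, non-symmetric, diagonally dominant system A x = b.
      A = sp.random(n, n, density=5e-3, random_state=0, format="csc")
      A = A + sp.diags(np.full(n, 10.0))
      b = rng.normal(size=n)

      # Preconditioner: incomplete LU factorization wrapped as a LinearOperator,
      # so GMRES applies the approximate inverse through the .solve call.
      ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator((n, n), matvec=ilu.solve)

      x_plain, info_plain = spla.gmres(A, b)          # un-preconditioned
      x_prec, info_prec = spla.gmres(A, b, M=M)       # preconditioned

      print("convergence flags (0 = converged):", info_plain, info_prec)
      print("residual norms:",
            np.linalg.norm(A @ x_plain - b), np.linalg.norm(A @ x_prec - b))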

  14. Children's mathematical performance: five cognitive tasks across five grades.

    PubMed

    Moore, Alex M; Ashcraft, Mark H

    2015-07-01

    Children in elementary school, along with college adults, were tested on a battery of basic mathematical tasks, including digit naming, number comparison, dot enumeration, and simple addition or subtraction. Beyond cataloguing performance on these standard tasks in Grades 1 to 5, we also examined relationships among the tasks, including previously reported results on a number line estimation task. Accuracy and latency improved across grades for all tasks, and classic interaction patterns were found, for example, a speed-up of subitizing and counting, increasingly shallow slopes in number comparison, and progressive speeding of responses especially to larger addition and subtraction problems. Surprisingly, digit naming was faster than subitizing at all ages, arguing against a pre-attentive processing explanation for subitizing. Estimation accuracy and speed were strong predictors of children's addition and subtraction performance. Children who gave exponential responses on the number line estimation task were slower at counting in the dot enumeration task and had longer latencies on addition and subtraction problems. The results provided further support for the importance of estimation as an indicator of children's current and future mathematical expertise. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. How to Say How Much: Amounts and Stoichiometry

    NASA Astrophysics Data System (ADS)

    Ault, Addison

    2001-10-01

    This paper presents a concise and consistent pictorial representation of the ways by which chemists describe an amount of material and of the conversion factors by which these statements of amount can be translated into one another. The expressions of amounts are mole, grams, milliliters of a pure liquid, liters of solution, liters of a gas at standard and nonstandard conditions, and number of particles. The paper then presents a visual representation or "map" for the solution of the typical stoichiometry problems discussed in general chemistry. You use the map for mole-to-mole and gram-to-gram calculations (or any combination of these), and for limiting reagent and percent yield problems. You can extend the method to reactions that involve solutions or gases and to titration problems. All stoichiometry problems are presented as variations on a central theme, and all problems are reduced to the same types of elementary steps.
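
    A small script following the gram-to-mole-to-mole-to-gram "map", with a limiting-reagent check, is sketched below for the combustion of methane; the molar masses are standard textbook values and the input masses are arbitrary.

      # Gram -> mole -> mole -> gram "map" for CH4 + 2 O2 -> CO2 + 2 H2O,
      # including a limiting-reagent check.  Molar masses in g/mol.
      M = {"CH4": 16.04, "O2": 32.00, "CO2": 44.01, "H2O": 18.02}

      grams_ch4, grams_o2 = 10.0, 50.0

      mol_ch4 = grams_ch4 / M["CH4"]            # grams -> moles
      mol_o2 = grams_o2 / M["O2"]

      # Limiting reagent: compare available moles against the 1 : 2 stoichiometric ratio.
      limiting = "CH4" if mol_ch4 < mol_o2 / 2 else "O2"
      mol_ch4_reacting = mol_ch4 if limiting == "CH4" else mol_o2 / 2

      mol_co2 = mol_ch4_reacting * 1            # mole -> mole via reaction coefficients
      grams_co2 = mol_co2 * M["CO2"]            # moles -> grams

      print("limiting reagent:", limiting)
      print(f"theoretical yield of CO2: {grams_co2:.2f} g")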

  16. On a modification method of Lefschetz thimbles

    NASA Astrophysics Data System (ADS)

    Tsutsui, Shoichiro; Doi, Takahiro M.

    2018-03-01

    QCD at finite density is not yet well understood, since standard Monte Carlo simulation suffers from the sign problem. In order to overcome the sign problem, the method of Lefschetz thimbles has been explored. Basically, the original sign problem can be made less severe in a complexified theory due to the constancy of the imaginary part of the action on each thimble. However, global phase factors assigned to each thimble still remain. Their interference is not negligible in a situation where a large number of thimbles contribute to the partition function, and this could also lead to a sign problem. In this study, we propose a method to resolve this problem by modifying the structure of the Lefschetz thimbles such that only a single thimble is relevant to the partition function. It can be shown that observables measured in the original and modified theories are connected by a simple identity. We demonstrate that our method works well in a toy model.

  17. Impact of Early Intervention on Psychopathology, Crime, and Well-Being at Age 25

    PubMed Central

    2015-01-01

    Objective This randomized controlled trial tested the efficacy of early intervention to prevent adult psychopathology and improve well-being in early-starting conduct-problem children. Method Kindergarteners (N=9,594) in three cohorts (1991–1993) at 55 schools in four communities were screened for conduct problems, yielding 979 early starters. A total of 891 (91%) consented (51% African American, 47% European American; 69% boys). Children were randomly assigned by school cluster to a 10-year intervention or control. The intervention goal was to develop social competencies in children that would carry them throughout life, through social skills training, parent behavior-management training with home visiting, peer coaching, reading tutoring, and classroom social-emotional curricula. Manualization and supervision ensured program fidelity. Ninety-eight percent participated during grade 1, and 80% continued through grade 10. At age 25, arrest records were reviewed (N=817,92%), and condition-blinded adults psychiatrically interviewed participants (N=702; 81% of living participants) and a peer (N=535) knowledgeable about the participant. Results Intent-to-treat logistic regression analyses indicated that 69% of participants in the control arm displayed at least one externalizing, internalizing, or substance abuse psychiatric problem (based on self- or peer interview) at age 25, in contrast with 59% of those assigned to intervention (odds ratio=0.59, CI=0.43–0.81; number needed to treat=8). This pattern also held for self-interviews, peer interviews, scores using an “and” rule for self- and peer reports, and separate tests for externalizing problems, internalizing problems, and substance abuse problems, as well as for each of three cohorts, four sites, male participants, female participants, African Americans, European Americans, moderate-risk, and high-risk subgroups. Intervention participants also received lower severity-weighted violent (standardized estimate=-0.37) and drug (standardized estimate=-0.43) crime conviction scores, lower risky sexual behavior scores (standardized estimate=-0.24), and higher well-being scores (standardized estimate=0.19). Conclusions This study provides evidence for the efficacy of early intervention in preventing adult psychopathology among high-risk early-starting conduct-problem children. PMID:25219348

  18. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data [plus Supplementary Electronic Annex 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolery, Thomas J.; Jove Colon, Carlos F.

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of “links” to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare “key” or “reference” datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). The use of two or more inconsistent links to the same elemental reference form in a thermodynamic database will result in an inconsistency in the database. Thus, in constructing a database, it is important to establish a set of reliable links (generally resulting in a set of primary reference data) and then correct all data adopted subsequently for consistency with that set. Recommended values of key data have not been constant through history. We review some of this history through the lens of major compilations and other influential reports, and note a number of problem areas. Finally, we illustrate the concepts developed in this paper by applying them to some key species of geochemical interest, including liquid water; quartz and aqueous silica; and gibbsite, corundum, and the aqueous aluminum ion.

  19. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data

    NASA Astrophysics Data System (ADS)

    Wolery, Thomas J.; Jové Colón, Carlos F.

    2017-09-01

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of "links" to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare "key" or "reference" datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). The use of two or more inconsistent links to the same elemental reference form in a thermodynamic database will result in an inconsistency in the database. Thus, in constructing a database, it is important to establish a set of reliable links (generally resulting in a set of primary reference data) and then correct all data adopted subsequently for consistency with that set. Recommended values of key data have not been constant through history. We review some of this history through the lens of major compilations and other influential reports, and note a number of problem areas. Finally, we illustrate the concepts developed in this paper by applying them to some key species of geochemical interest, including liquid water; quartz and aqueous silica; and gibbsite, corundum, and the aqueous aluminum ion.
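    The practical consequence of the link concept is easiest to see in a small correction exercise. The sketch below (with purely illustrative, made-up numbers, not recommended values) shows how a formation Gibbs energy whose link passes through a key datum for liquid water must be corrected when a database adopts a revised value for that datum; skipping the correction would leave two inconsistent links to the elements H and O in the same database.

```python
# Purely illustrative bookkeeping; every number below is a hypothetical placeholder,
# not a recommended value.
dGf_mineral_old = -2050.00   # kJ/mol, formation value as reported in a source compilation
dGf_water_old   = -237.18    # kJ/mol, key datum for H2O(l) used in that derivation (one link)
dGf_water_new   = -237.14    # kJ/mol, revised key datum adopted by the new database
n_water         = 2          # coefficient with which H2O enters the back-calculation here

# Correct the adopted value so that its link to the elements H and O passes through
# the revised water datum rather than the old one.
correction = n_water * (dGf_water_new - dGf_water_old)
dGf_mineral_corrected = dGf_mineral_old + correction

print(f"correction: {correction:+.2f} kJ/mol")
print(f"corrected formation Gibbs energy: {dGf_mineral_corrected:.2f} kJ/mol")
```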

  20. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data [plus Supplementary Electronic Annex 2]

    DOE PAGES

    Wolery, Thomas J.; Jove Colon, Carlos F.

    2016-09-26

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of “links” to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare “key” or “reference” datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). The use of two or more inconsistent links to the same elemental reference form in a thermodynamic database will result in an inconsistency in the database. Thus, in constructing a database, it is important to establish a set of reliable links (generally resulting in a set of primary reference data) and then correct all data adopted subsequently for consistency with that set. Recommended values of key data have not been constant through history. We review some of this history through the lens of major compilations and other influential reports, and note a number of problem areas. Finally, we illustrate the concepts developed in this paper by applying them to some key species of geochemical interest, including liquid water; quartz and aqueous silica; and gibbsite, corundum, and the aqueous aluminum ion.

  1. Harmonisation of microbial sampling and testing methods for distillate fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, G.C.; Hill, E.C.

    1995-05-01

    Increased incidence of microbial infection in distillate fuels has led to a demand for organisations such as the Institute of Petroleum to propose standards for microbiological quality, based on numbers of viable microbial colony forming units. Variations in quality requirements and in the spoilage significance of contaminating microbes, plus a tendency for temporal and spatial changes in the distribution of microbes, make such standards difficult to implement. The problem is compounded by a diversity in the procedures employed for sampling and testing for microbial contamination and in the interpretation of the data obtained. The following paper reviews these problems and describes the efforts of The Institute of Petroleum Microbiology Fuels Group to address these issues, and in particular to bring about harmonisation of sampling and testing methods. The benefits and drawbacks of available test methods, both laboratory based and on-site, are discussed.

  2. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from the differential evolution and artificial bee colony algorithms. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
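    As a rough illustration of the approach (not the authors' implementation), the sketch below applies a simplified bat-style search to the multilevel thresholding objective, here taken to be Otsu's between-class variance computed from a gray-level histogram; the population size, loudness, pulse rate, and frequency range are arbitrary illustrative choices.

```python
# Simplified bat-style metaheuristic for multilevel thresholding (illustrative sketch,
# not the paper's improved algorithm).  The objective is Otsu's between-class variance.
import numpy as np

rng = np.random.default_rng(0)

def between_class_variance(hist, thresholds):
    """Otsu objective for a gray-level histogram and a set of thresholds."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def bat_thresholds(hist, k=3, n_bats=20, iters=200, loudness=0.9, pulse=0.5):
    lo, hi = 1.0, float(len(hist) - 2)
    x = rng.uniform(lo, hi, (n_bats, k))          # bat positions = candidate threshold sets
    v = np.zeros_like(x)                          # bat velocities
    fit = np.array([between_class_variance(hist, xi) for xi in x])
    best, best_fit = x[fit.argmax()].copy(), fit.max()
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 2.0)          # random pulse frequency
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > pulse:              # occasional local walk around the best bat
                cand = np.clip(best + 2.0 * rng.normal(size=k), lo, hi)
            f = between_class_variance(hist, cand)
            if f > fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f
            if f > best_fit:
                best, best_fit = cand.copy(), f
    return sorted(int(t) for t in best)

# Synthetic three-mode histogram standing in for a benchmark image:
pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(130, 12, 4000),
                         rng.normal(200, 8, 2000)]).clip(0, 255).astype(int)
hist = np.bincount(pixels, minlength=256).astype(float)
print("thresholds:", bat_thresholds(hist, k=2))
```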

  3. Numerical method for solution of systems of non-stationary spatially one-dimensional nonlinear differential equations

    NASA Technical Reports Server (NTRS)

    Morozov, S. K.; Krasitskiy, O. P.

    1978-01-01

    A computational scheme and a standard program are proposed for solving systems of nonstationary, spatially one-dimensional, nonlinear differential equations using Newton's method. The proposed scheme is universal in its applicability and reduces the work of programming to a minimum. The program is written in FORTRAN and can be used without change on electronic computers of the YeS and BESM-6 types. The standard program described permits the identification of nonstationary (or stationary) solutions to systems of spatially one-dimensional nonlinear (or linear) partial differential equations. The proposed method may be used to solve a range of geophysical problems that take chemical reactions, diffusion, and heat conductivity into account, to evaluate nonstationary thermal fields in two-dimensional structures when one of the geometrical directions can be resolved by a small number of discrete levels, and to solve problems in nonstationary gas dynamics.
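    A minimal sketch of the kind of Newton iteration described, assuming a single stationary, spatially one-dimensional nonlinear problem u'' = u^3 - 1 on (0, 1) with u(0) = u(1) = 0, discretized by central differences; the grid size and tolerance are arbitrary. The original report targets coupled nonstationary systems in FORTRAN, so this is only the core idea in miniature.

```python
# Newton iteration for a 1D nonlinear boundary value problem: u'' = u**3 - 1, u(0)=u(1)=0.
import numpy as np

n, h = 101, 1.0 / 100
u = np.zeros(n)                       # initial guess

def residual(u):
    r = np.zeros(n)
    r[0], r[-1] = u[0], u[-1]         # Dirichlet boundary conditions
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - u[1:-1] ** 3 + 1.0
    return r

def jacobian(u):
    J = np.zeros((n, n))
    J[0, 0] = J[-1, -1] = 1.0
    for i in range(1, n - 1):
        J[i, i - 1] = J[i, i + 1] = 1.0 / h**2
        J[i, i] = -2.0 / h**2 - 3.0 * u[i] ** 2
    return J

for it in range(20):                  # Newton iteration
    r = residual(u)
    if np.linalg.norm(r, np.inf) < 1e-10:
        break
    u -= np.linalg.solve(jacobian(u), r)

print(f"stopped after {it} iterations, max|u| = {abs(u).max():.4f}")
```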

  4. Standards for contamination control

    NASA Astrophysics Data System (ADS)

    Borson, Eugene N.

    2004-10-01

    Standards are an important component of national and international trade. We depend upon standards to assure that manufactured parts will work together, wherever they are made, and that we speak the same technical language, no matter what language we speak. Understanding is important in order to know when to take exceptions to or tailor the standard to fit the job. Standards that are used in contamination control have increased in numbers over the years as more industries have had to improve their manufacturing processes to enhance reliability or yields of products. Some older standards have been revised to include new technologies, and many new standards have been developed. Some of the new standards were written for specific industries while others apply to many industries. Many government standards have been replaced with standards from nongovernmental standards organizations. This trend has been encouraged by U.S. law that requires the government to use commercial standards where possible. This paper reviews some of the more important standards for the aerospace industry, such as IEST-STD-CC1246 and ISO 14644-1, that have been published in recent years. Benefits, usage, and problems with some standards will be discussed. Some standards are referenced, and websites of some standards organizations are listed.

  5. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of the GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
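    The following one-dimensional sketch conveys the central idea as we read it (it is not the published implementation): each input is a Gaussian with its own center, variance, and weight, and the M-step adds the input variance back into the component variance so the fitted mixture cannot become narrower than the data it represents. The synthetic centers, voxel variance, and component count are illustrative.

```python
# Simplified 1D EM update for a Gaussian-input mixture model (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: e.g. voxel centers with per-voxel variances and density weights.
mu_in = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(3, 0.8, 50)])
s_in = np.full(mu_in.size, 0.1)          # per-input variance (finite size of each input)
w_in = np.ones(mu_in.size)               # per-input weight

K = 2
m = rng.choice(mu_in, K)                 # component means
v = np.ones(K)                           # component variances
pi = np.full(K, 1.0 / K)                 # mixing weights

for _ in range(100):
    # E-step: responsibilities, evaluating each component at the input center with the
    # input variance added, which accounts for the finite size of the input.
    var = v[None, :] + s_in[:, None]
    log_p = -0.5 * ((mu_in[:, None] - m[None, :]) ** 2 / var + np.log(2 * np.pi * var))
    log_p += np.log(pi[None, :])
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    rw = r * w_in[:, None]
    # M-step: the input variance s_i is folded back into the component variance, so the
    # mixture's overall spread stays consistent with the input.
    Nk = rw.sum(axis=0)
    m = (rw * mu_in[:, None]).sum(axis=0) / Nk
    v = (rw * ((mu_in[:, None] - m[None, :]) ** 2 + s_in[:, None])).sum(axis=0) / Nk
    pi = Nk / Nk.sum()

print("means:", np.round(m, 2), "variances:", np.round(v, 2), "weights:", np.round(pi, 2))
```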

  6. A low emission vehicle procurement approach for Washington state

    NASA Astrophysics Data System (ADS)

    McCoy, G. A.; Lyons, J. K.; Ware, G.

    1992-06-01

    The Clean Air Washington Act of 1991 directs the Department of Ecology to establish a clean-fuel vehicle standard. The Department of General Administration shall purchase vehicles based on this standard beginning in the Fall of 1992. The following summarizes the major issues affecting vehicle emissions and their regulation, and presents a methodology for procuring clean-fuel vehicles for the State of Washington. Washington State's air quality problems are much less severe than in other parts of the country such as California, the East Coast and parts of the Midwest. Ozone, which is arguably the dominant air quality problem in the US, is a recent and relatively minor issue in Washington. Carbon monoxide (CO) represents a more immediate problem in Washington, with most of the state's urban areas exceeding national CO air quality standards. Since the mid-1960s, vehicle tailpipe hydrocarbon and carbon monoxide emissions have been reduced by 96 percent relative to precontrol vehicles. Nitrogen oxide emissions have been reduced by 76 percent. Emissions from currently available vehicles are quite low with respect to in-place exhaust emission standards. Cold-start emissions constitute about 75 percent of the total emissions measured with the Federal Test Procedure used to certify motor vehicles. There is no currently available 'inherently clean burning fuel'. In 1991, 3052 vehicles were purchased under Washington State contract. Provided that the same number are acquired in 1993, the state will need to purchase 915 vehicles which meet the definition of a 'clean-fueled vehicle'.

  7. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy for our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large scale radiative transfer problem of the Kelvin-cell radiation.

  8. Simulated annealing two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Velis, Danilo R.; Ulrych, Tadeusz J.

    We present a new method for solving the two-point seismic ray tracing problem based on Fermat's principle. The algorithm overcomes some well-known difficulties that arise in standard ray shooting and bending methods. Problems related to (1) the selection of new take-off angles and (2) local minima in multipathing cases are overcome by using an efficient simulated annealing (SA) algorithm. At each iteration, the ray is propagated from the source by solving a standard initial value problem. The last portion of the raypath is then forced to pass through the receiver. Using SA, the total traveltime is then globally minimized by obtaining the initial conditions that produce the absolute minimum path. The procedure is suitable for tracing rays through 2D complex structures, although it can be extended to deal with 3D velocity media. Not only direct waves, but also reflected and head waves can be incorporated in the scheme. One important advantage is its simplicity, inasmuch as any available or user-preferred initial value solver system can be used. A number of clarifying examples of multipathing in 2D media are examined.
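    A schematic sketch of the simulated-annealing search over take-off angle follows. The traveltime function below is a stand-in multimodal toy; in the method described above it would come from shooting a ray from the source with an initial-value solver and forcing the last leg of the raypath through the receiver. The cooling schedule and step size are illustrative.

```python
# Simulated annealing over take-off angle with a toy multimodal "traveltime".
import math
import random

random.seed(0)

def traveltime(theta):
    """Toy multimodal traveltime (seconds) as a function of take-off angle (radians)."""
    return 2.0 + 0.3 * math.sin(5 * theta) + 0.2 * math.cos(9 * theta) + 0.05 * theta**2

def anneal(t0=1.0, cooling=0.995, steps=5000):
    theta = random.uniform(-math.pi / 2, math.pi / 2)
    best_theta, best_t = theta, traveltime(theta)
    temp = t0
    for _ in range(steps):
        cand = theta + random.gauss(0.0, 0.1)                  # perturb take-off angle
        d = traveltime(cand) - traveltime(theta)
        if d < 0 or random.random() < math.exp(-d / temp):     # Metropolis acceptance
            theta = cand
            if traveltime(theta) < best_t:
                best_theta, best_t = theta, traveltime(theta)
        temp *= cooling                                        # geometric cooling schedule
    return best_theta, best_t

theta, t = anneal()
print(f"best take-off angle {theta:.3f} rad, traveltime {t:.3f} s")
```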

  9. Study on road sign recognition in LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2016-02-01

    Road and traffic sign identification is a field of study that can be used to aid the development of in-car advisory systems. It uses computer vision and artificial intelligence to extract the road signs from outdoor images acquired by a camera in uncontrolled lighting conditions where they may be occluded by other objects, or may suffer from problems such as color fading, disorientation, variations in shape and size, etc. An automatic means of identifying traffic signs under these conditions can make a significant contribution to the development of an Intelligent Transport System (ITS) that continuously monitors the driver, the vehicle, and the road. Road and traffic signs are characterized by a number of features which make them recognizable from the environment. Road signs are located in standard positions and have standard shapes, standard colors, and known pictograms. These characteristics make them suitable for image identification. Traffic sign identification covers two problems: traffic sign detection and traffic sign recognition. Traffic sign detection is meant for the accurate localization of traffic signs in the image space, while traffic sign recognition handles the labeling of such detections into specific traffic sign types or subcategories [1].

  10. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and the computational complexity makes this research approach difficult with a standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and an approach using satisfiability modulo theories (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
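    For the base cases mentioned above, a plain brute-force search is often enough to cross-check the MATLAB, quantum-annealing, and SMT results on small graphs. The sketch below enumerates vertex subsets of a 6-cycle in increasing size and returns the first identifying code found; the graph is an illustrative choice.

```python
# Brute-force minimum identifying code on a small graph (here, a 6-cycle).
from itertools import combinations

n = 6
closed_nbhd = {v: {v, (v - 1) % n, (v + 1) % n} for v in range(n)}   # N[v] on the cycle

def is_identifying_code(code):
    code = set(code)
    signatures = [frozenset(closed_nbhd[v] & code) for v in range(n)]
    # Every vertex must be covered, and no two vertices may share a signature.
    return all(sig for sig in signatures) and len(set(signatures)) == n

def minimum_identifying_code():
    for size in range(1, n + 1):                 # try the smallest sizes first
        for code in combinations(range(n), size):
            if is_identifying_code(code):
                return code
    return None

print(minimum_identifying_code())   # a minimum identifying code of the 6-cycle
```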

  11. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, an innovative mathematical reformulation of the FTR problem is first presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases better than, that of the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
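    To make the structure of the underlying problem concrete, the sketch below poses a toy FTR auction as a linear program: maximize bid-weighted awards subject to line-flow limits in both directions. The bids, PTDF matrix, and limits are made-up illustrative numbers, and the generic LP stands in for both the commercial solvers and the NDS approach benchmarked in the paper.

```python
# Toy FTR auction as a linear program with security (line-flow) constraints.
import numpy as np
from scipy.optimize import linprog

bids = np.array([10.0, 7.0, 4.0])          # $/MW bid for each FTR request
requested = np.array([100.0, 80.0, 50.0])  # MW requested per FTR
ptdf = np.array([[0.5, -0.2, 0.1],         # flow on each monitored line per MW awarded
                 [0.3,  0.4, -0.5]])
limits = np.array([60.0, 45.0])            # MW line limits

# linprog minimizes, so negate the bids; enforce both flow directions: -lim <= P x <= lim.
A_ub = np.vstack([ptdf, -ptdf])
b_ub = np.concatenate([limits, limits])
res = linprog(-bids, A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip(np.zeros(3), requested)), method="highs")

print("awarded MW:", np.round(res.x, 1), " social welfare: $", round(-res.fun, 1))
```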

  12. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    ... an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. [From the introduction:] Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...

  13. The Limiting Effects of Astigmatism on Visual Performance through Periscopes

    DTIC Science & Technology

    1979-10-01

    Naval Submarine Medical Research Laboratory, Submarine Base, Groton, Conn.; Report Number 905: The Limiting Effects of Astigmatism on Visual Performance Through Periscopes, by S. M. Luria, J. A. S. Kinney, C. L. Schlichting. Approved for public release; distribution unlimited. PROBLEM: To determine whether the new periscopes make it possible to relax Navy standards for astigmatism. FINDINGS: ...

  14. Real-Time Ada Problem Solution Study

    DTIC Science & Technology

    1989-03-24

    ... been performed, there is a larger base of information concerning standards and guidelines for Ada usage, as well as "lessons learned". A number of ... the target machine and operate in conjunction with the application programs; they also require system resources (CPU, memory). The utilization of ... [benchmark table fragment: Transporter-Consumer 1694 / 154; Producer-Transporter-Buffer-Transporter-Consumer 2248 / 204; Relay 906 / 82; Conditional Entry, no rendezvous 170 / 15]

  15. Beyond Nomothetic Classification of Behavioural Difficulties: Using Valued Outcomes Analysis to Deal with the Behaviour Problems that Occur in the Classroom

    ERIC Educational Resources Information Center

    Bitsika, Vicki

    2005-01-01

    The number of students who are labeled as having some form of behavioural disorder which requires specialized assistance in the regular school setting is growing. Current approaches to working with these students are often based on the standardized application of treatments designed to modify general symptoms rather than specific behaviours. It is…

  16. Guidelines for the Design of Computers and Information Processing Systems to Increase Their Access by Persons with Disabilities. Version 2.0.

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg C.; Lee, Charles C.

    Many low-cost and no-cost modifications to computers would greatly increase the number of disabled individuals who could use standard computers without requiring custom modifications, and would increase the ability to attach special input and output systems. The purpose of the Guidelines is to provide an awareness of these access problems and a…

  17. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
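    A schematic of the two-stage idea, with a plain Gaussian random search standing in for CMAES and a toy quadratic misfit standing in for the PDE-constrained forward problem: first locate a low-misfit model, then randomly sample models whose misfit stays under a threshold to build an ensemble of equivalent solutions. All parameters are illustrative.

```python
# Two-stage equivalence-domain sketch: crude search, then threshold-based random sampling.
import numpy as np

rng = np.random.default_rng(42)
m_true = np.array([1.0, -2.0])

def misfit(m):
    # Toy data misfit with a flat valley along one direction (non-unique problem).
    d = m - m_true
    return d[0] ** 2 + 0.01 * d[1] ** 2

# Stage 1: crude stochastic search for a low-misfit model (stand-in for CMAES).
best = rng.normal(0, 3, 2)
for _ in range(2000):
    cand = best + rng.normal(0, 0.3, 2)
    if misfit(cand) < misfit(best):
        best = cand

# Stage 2: sample the equivalence domain by accepting random perturbations of the
# best model whose misfit stays below a chosen threshold.
threshold = misfit(best) + 0.05
ensemble = []
while len(ensemble) < 500:
    cand = best + rng.normal(0, 1.0, 2)
    if misfit(cand) <= threshold:
        ensemble.append(cand)
ensemble = np.array(ensemble)

print("best model:", np.round(best, 2))
print("spread of equivalent models (std):", np.round(ensemble.std(axis=0), 2))
```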

  18. Productivity improvement using industrial engineering tools

    NASA Astrophysics Data System (ADS)

    Salaam, H. A.; How, S. B.; Faisae, M. F.

    2012-09-01

    Minimizing the number of defects is important to any company since it influences outputs and profits. The aim of this paper is to study the implementation of industrial engineering tools in a company that manufactures recycled paper boxes. The study starts with reading the standard operating procedures and analyzing the process flow to understand how the paper boxes are manufactured. At the same time, observations at the production line were made to identify problems occurring in the production line. Using a check sheet, the defect data from each station were collected and analyzed using a Pareto chart. From the chart, it was found that the glue workstation shows the highest number of defects. Based on observation at the glue workstation, the existing method used to glue the box was inappropriate because the operator used a lot of glue. Then, using a cause-and-effect diagram, the root cause of the problem was identified and solutions to overcome the problem were proposed. Three suggestions were proposed to overcome this problem. The cost reduction for each solution was calculated, and the best solution is to use three hair driers to dry the sticky glue, which produces only 6.4 defects per hour at a cost of RM 0.0224.
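    The check-sheet-to-Pareto step is simple enough to show directly; the defect counts below are made-up stand-ins for the collected data.

```python
# Pareto analysis of check-sheet defect counts per workstation (illustrative numbers).
defects = {"glue": 120, "printing": 35, "cutting": 25, "folding": 15, "packing": 5}

total = sum(defects.values())
cumulative = 0.0
print(f"{'station':<10}{'count':>7}{'cum %':>8}")
for station, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{station:<10}{count:>7}{cumulative:>8.1f}")
# The stations accounting for the bulk of the cumulative count (here, glue) are the
# natural targets for root-cause analysis with a cause-and-effect diagram.
```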

  19. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
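    The fractional-programming step can be illustrated with Dinkelbach's algorithm on a small made-up instance: a linear numerator and a positive convex quadratic denominator maximized over the standard simplex. A coarse grid enumeration stands in for a proper subproblem solver, and the vectors and matrix are not from any finite element model.

```python
# Dinkelbach's algorithm for maximizing a linear/quadratic fraction over the standard simplex.
import numpy as np

c = np.array([3.0, 1.0, 2.0])                 # numerator:   N(x) = c @ x
Q = np.array([[2.0, 0.2, 0.0],                # denominator: D(x) = x @ Q @ x + 1  (convex, > 0)
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 3.0]])

def N(x): return c @ x
def D(x): return x @ Q @ x + 1.0

def simplex_grid(step=0.01):
    """Coarse enumeration of the standard 2-simplex in R^3 (stand-in for a proper solver)."""
    for a in np.arange(0.0, 1.0 + 1e-9, step):
        for b in np.arange(0.0, 1.0 - a + 1e-9, step):
            yield np.array([a, b, 1.0 - a - b])

lam = 0.0
for _ in range(20):                            # Dinkelbach iterations
    # Subproblem: maximize N(x) - lam * D(x) over the simplex.
    x_best = max(simplex_grid(), key=lambda x: N(x) - lam * D(x))
    if abs(N(x_best) - lam * D(x_best)) < 1e-6:
        break                                  # parametric value ~ 0  =>  lam is the maximum ratio
    lam = N(x_best) / D(x_best)

print("maximizer:", np.round(x_best, 2), " max N/D:", round(lam, 4))
```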

  20. The MV model of the color glass condensate for a finite number of sources including Coulomb interactions

    DOE PAGES

    McLerran, Larry; Skokov, Vladimir V.

    2016-09-19

    We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from well known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.

  1. Beyond the Standard Model: The pragmatic approach to the gauge hierarchy problem

    NASA Astrophysics Data System (ADS)

    Mahbubani, Rakhi

    The current favorite solution to the gauge hierarchy problem, the Minimal Supersymmetric Standard Model (MSSM), is looking increasingly fine-tuned as recent results from LEP-II have pushed it to regions of its parameter space where a light Higgs seems unnatural. Given this fact it seems sensible to explore other approaches to this problem; we study three alternatives here. The first is a Little Higgs theory, in which the Higgs particle is realized as the pseudo-Goldstone boson of an approximate global chiral symmetry and so is naturally light. We analyze precision electroweak observables in the Minimal Moose model, one example of such a theory, and look for regions in its parameter space that are consistent with current limits on these. It is also possible to find a solution within a supersymmetric framework by adding to the MSSM superpotential a λS H_u H_d term and UV completing with new strong dynamics under which S is a composite before λ becomes non-perturbative. This allows us to increase the MSSM tree-level Higgs mass bound to a value that alleviates the supersymmetric fine-tuning problem with elementary Higgs fields, maintaining gauge coupling unification in a natural way. Finally we try an entirely different tack, in which we do not attempt to solve the hierarchy problem, but rather assume that the tuning of the Higgs can be explained in some unnatural way, from environmental considerations for instance. With this philosophy in mind we study in detail the low-energy phenomenology of the minimal extension to the Standard Model with a dark matter candidate and gauge coupling unification, consisting of additional fermions with the quantum numbers of SUSY higgsinos, and a singlet.

  2. Associations of Maternal and Infant Testosterone and Cortisol Levels With Maternal Depressive Symptoms and Infant Socioemotional Problems

    PubMed Central

    Cho, June; Su, Xiaogang; Phillips, Vivien; Holditch-Davis, Diane

    2015-01-01

    This study examined the associations of testosterone and cortisol levels with maternal depressive symptoms and infant socioemotional (SE) problems that are influenced by infant gender. A total of 62 mothers and their very-low-birth-weight (VLBW) infants were recruited from a neonatal intensive care unit at a tertiary medical center in the southeast United States. Data were collected at three time points (before 40 weeks’ postmenstrual age [PMA] and at 3 months and 6 months of age corrected for prematurity). Measures included infant medical record review, maternal interview, biochemical assays of salivary hormone levels in mother-VLBW infant pairs, and standard questionnaires. Generalized estimating equations with separate analyses for boys and girls showed that maternal testosterone level was negatively associated with depressive symptoms in mothers of boys, whereas infant testosterone level was negatively associated with maternal report of infant SE problems in girls after controlling for characteristics of mothers and infants and the number of days post birth at saliva collection. Not surprisingly, the SE problems were positively associated with a number of medical complications. Mothers with more depressive symptoms reported that their infants had more SE problems. Mothers with higher testosterone levels reported that girls, but not boys, had fewer SE problems. In summary, high levels of testosterone could have a protective role for maternal depressive symptoms and infant SE problems. Future research needs to be directed toward the clinical application of these preliminary results. PMID:25954021

  3. How do gamblers end gambling: longitudinal analysis of Internet gambling behaviors prior to account closure due to gambling related problems.

    PubMed

    Xuan, Ziming; Shaffer, Howard

    2009-06-01

    To examine behavioral patterns of actual Internet gamblers who experienced gambling-related problems and voluntarily closed their accounts. A nested case-control design was used to compare gamblers who closed their accounts because of gambling problems to those who maintained open accounts. Actual play patterns of in vivo Internet gamblers who subscribed to an Internet gambling site. 226 gamblers who closed accounts due to gambling problems were selected from a cohort of 47,603 Internet gamblers who subscribed to an Internet gambling site during February 2005; 226 matched-case controls were selected from the group of gamblers who did not close their accounts. Daily aggregates of behavioral data were collected during an 18-month study period. Main outcomes of interest were daily aggregates of stake, odds, and net loss, which were standardized by the daily aggregate number of bets. We also examined the number of bets to measure trajectory of gambling frequency. Account closers due to gambling problems experienced increasing monetary loss as the time to closure approached; they also increased their stake per bet. Yet they did not chase longer odds; their choices of wagers were more probabilistically conservative (i.e., short odds) compared with the controls. The changes of monetary involvement and risk preference occurred concurrently during the last few days prior to voluntary closing. Our finding of an involvement-seeking yet risk-averse tendency among self-identified problem gamblers challenges the notion that problem gamblers seek "long odds" during "chasing."

  4. Associations of Maternal and Infant Testosterone and Cortisol Levels With Maternal Depressive Symptoms and Infant Socioemotional Problems.

    PubMed

    Cho, June; Su, Xiaogang; Phillips, Vivien; Holditch-Davis, Diane

    2016-01-01

    This study examined the associations of testosterone and cortisol levels with maternal depressive symptoms and infant socioemotional (SE) problems that are influenced by infant gender. A total of 62 mothers and their very-low-birth-weight (VLBW) infants were recruited from a neonatal intensive care unit at a tertiary medical center in the southeast United States. Data were collected at three time points (before 40 weeks' postmenstrual age [PMA] and at 3 months and 6 months of age corrected for prematurity). Measures included infant medical record review, maternal interview, biochemical assays of salivary hormone levels in mother-VLBW infant pairs, and standard questionnaires. Generalized estimating equations with separate analyses for boys and girls showed that maternal testosterone level was negatively associated with depressive symptoms in mothers of boys, whereas infant testosterone level was negatively associated with maternal report of infant SE problems in girls after controlling for characteristics of mothers and infants and the number of days post birth at saliva collection. Not surprisingly, the SE problems were positively associated with a number of medical complications. Mothers with more depressive symptoms reported that their infants had more SE problems. Mothers with higher testosterone levels reported that girls, but not boys, had fewer SE problems. In summary, high levels of testosterone could have a protective role for maternal depressive symptoms and infant SE problems. Future research needs to be directed toward the clinical application of these preliminary results. © The Author(s) 2015.

  5. A novel neural network for the synthesis of antennas and microwave devices.

    PubMed

    Delgado, Heriberto Jose; Thursby, Michael H; Ham, Fredric M

    2005-11-01

    A novel artificial neural network (SYNTHESIS-ANN) is presented, which has been designed for computationally intensive problems and applied to the optimization of antennas and microwave devices. The antenna example presented is optimized with respect to voltage standing-wave ratio, bandwidth, and frequency of operation. A simple microstrip transmission line problem is used to further describe the ANN effectiveness, in which microstrip line width is optimized with respect to line impedance. The ANNs exploit a unique number representation of input and output data in conjunction with a more standard neural network architecture. An ANN consisting of a heteroassociative memory provided a very efficient method of computing necessary geometrical values for the antenna when used in conjunction with a new randomization process. The number representation used provides significant insight into this new method of fault-tolerant computing. Further work is needed to evaluate the potential of this new paradigm.

  6. Spin Number Coherent States and the Problem of Two Coupled Oscillators

    NASA Astrophysics Data System (ADS)

    Ojeda-Guillén, D.; Mota, R. D.; Granados, V. D.

    2015-07-01

    From the definition of the standard Perelomov coherent states we introduce the Perelomov number coherent states for any su(2) Lie algebra. With the displacement operator we apply a similarity transformation to the su(2) generators and construct a new set of operators which also close the su(2) Lie algebra, with the Perelomov number coherent states serving as the new basis for its unitary irreducible representation. We apply our results to obtain the energy spectrum, the eigenstates and the partition function of two coupled oscillators. We show that the eigenstates of two coupled oscillators are the SU(2) Perelomov number coherent states of the two-dimensional harmonic oscillator with an appropriate choice of the coherent state parameters. Supported by SNI-México, COFAA-IPN, EDD-IPN, EDI-IPN, SIP-IPN Project No. 20150935

  7. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses and make the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
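    A minimal PSO loop of the kind described is sketched below, with a toy two-parameter chi-square-like misfit standing in for a real CMB likelihood; the swarm size, inertia, and acceleration coefficients are conventional but arbitrary choices.

```python
# Minimal particle swarm optimization minimizing a toy chi-square-like misfit.
import numpy as np

rng = np.random.default_rng(7)

def chi2(theta):
    # Toy stand-in for -2 log(likelihood); minimum at (0.3, 0.7).
    return (theta[..., 0] - 0.3) ** 2 / 0.01 + (theta[..., 1] - 0.7) ** 2 / 0.04

n_particles, n_iter, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

x = rng.uniform(0, 1, (n_particles, dim))      # particle positions
v = np.zeros_like(x)                           # particle velocities
pbest = x.copy()                               # personal bests
pbest_val = chi2(pbest)
gbest = pbest[pbest_val.argmin()].copy()       # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = chi2(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("PSO estimate:", np.round(gbest, 3))
```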

  8. Performance evaluation of OpenFOAM on many-core architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz

    In this article, the application of the Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries for solving engineering problems on many-core architectures is presented. The objective is to present the scalability of OpenFOAM on parallel platforms solving real engineering problems of fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speed-ups of various implementations of the linear solvers using GPU and MIC accelerators are presented. Numerical experiments on 3D lid-driven cavity flow for several cases with various numbers of cells are presented.
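    For readers unfamiliar with the solvers named above, the sketch below is a textbook Jacobi-preconditioned conjugate gradient written in NumPy; it illustrates the class of Krylov method (PCG) whose GPU and MIC implementations are being compared, and is not OpenFOAM code.

```python
# Jacobi-preconditioned conjugate gradient (PCG) for a symmetric positive definite system.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    M_inv = 1.0 / np.diag(A)                 # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test system (1D Laplacian):
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b)
print(f"converged in {iters} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
```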

  9. A nonperturbative light-front coupled-cluster method

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2012-10-01

    The nonperturbative Hamiltonian eigenvalue problem for bound states of a quantum field theory is formulated in terms of Dirac's light-front coordinates and then approximated by the exponential-operator technique of the many-body coupled-cluster method. This approximation eliminates any need for the usual approximation of Fock-space truncation. Instead, the exponentiated operator is truncated, and the terms retained are determined by a set of nonlinear integral equations. These equations are solved simultaneously with an effective eigenvalue problem in the valence sector, where the number of constituents is small. Matrix elements can be calculated, with extensions of techniques from standard coupled-cluster theory, to obtain form factors and other observables.

  10. Fire technology abstracts, volume 4. Cumulative indexes

    NASA Astrophysics Data System (ADS)

    1982-03-01

    Cumulative subject, author, publisher, and report number indexes referencing articles, books, reports, and patents are provided. The dynamics of fire, behavior and properties of materials, fire modeling and test burns, fire protection, fire safety, fire service organization, apparatus and equipment, fire prevention suppression, planning, human behavior, medical problems, codes and standards, hazard identification, safe handling of materials, and insurance economics of loss and prevention are among the subjects covered.

  11. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion, in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts, that would otherwise affect the physics in a standard PIC algorithm - including the zero-order numerical Cherenkov effect.

  12. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  13. A Standard-Based and Context-Aware Architecture for Personal Healthcare Smart Gateways.

    PubMed

    Santos, Danilo F S; Gorgônio, Kyller C; Perkusich, Angelo; Almeida, Hyggo O

    2016-10-01

    The rising availability of Personal Health Devices (PHDs) capable of Personal Area Network (PAN) communication and the desire to maintain a high quality of life are the ingredients of the Connected Health vision. In parallel, a growing number of personal and portable devices, like smartphones and tablet computers, are becoming capable of taking the role of health gateway, that is, a data collector for the sensor PHDs. However, as the number of PHDs increases, the number of other peripherals connected in the PAN also increases. Therefore, PHDs are now competing for medium access with other devices, decreasing the Quality of Service (QoS) of health applications in the PAN. In this article we present a reference architecture to prioritize PHD connections based on their state and requirements, creating a healthcare Smart Gateway. Healthcare context information is extracted by observing the traffic through the gateway. A standard-based approach was used to identify health traffic based on the ISO/IEEE 11073 family of standards. A reference implementation was developed showing the relevance of the problem and how the proposed architecture can assist in the prioritization. The reference Smart Gateway solution was integrated with a Connected Health System for the Internet of Things, validating its use in a real case scenario.

  14. Regularized Moment Equations and Shock Waves for Rarefied Granular Gas

    NASA Astrophysics Data System (ADS)

    Reddy, Lakshminarayana; Alam, Meheboob

    2016-11-01

    It is well-known that the shock structures predicted by extended hydrodynamic models are more accurate than the standard Navier-Stokes model in the rarefied regime, but they fail to predict continuous shock structures when the Mach number exceeds a critical value. Regularization or parabolization is one method to obtain smooth shock profiles at all Mach numbers. Following a Chapman-Enskog-like method, we have derived the "regularized" version of the 10-moment equations ("R10" moment equations) for inelastic hard spheres. In order to show the advantage of the R10 moment equations over the standard 10-moment equations, the R10 moment equations have been employed to solve the Riemann problem of plane shock waves for both molecular and granular gases. The numerical results are compared between the 10-moment and R10-moment models, and it is found that the 10-moment model fails to produce continuous shock structures beyond an upstream Mach number of 1.34, while the R10-moment model predicts smooth shock profiles beyond the upstream Mach number of 1.34. The density and granular temperature profiles are found to be asymmetric, with their maxima occurring within the shock layer.

  15. Learning from FITS: Limitations in use in modern astronomical research

    NASA Astrophysics Data System (ADS)

    Thomas, B.; Jenness, T.; Economou, F.; Greenfield, P.; Hirst, P.; Berry, D. S.; Bray, E.; Gray, N.; Muna, D.; Turner, J.; de Val-Borro, M.; Santander-Vela, J.; Shupe, D.; Good, J.; Berriman, G. B.; Kitaeff, S.; Fay, J.; Laurino, O.; Alexov, A.; Landry, W.; Masters, J.; Brazier, A.; Schaaf, R.; Edwards, K.; Redman, R. O.; Marsh, T. R.; Streicher, O.; Norris, P.; Pascual, S.; Davie, M.; Droettboom, M.; Robitaille, T.; Campana, R.; Hagen, A.; Hartogh, P.; Klaes, D.; Craig, M. W.; Homeier, D.

    2015-09-01

    The Flexible Image Transport System (FITS) standard has been a great boon to astronomy, allowing observatories, scientists and the public to exchange astronomical information easily. The FITS standard, however, is showing its age. Developed in the late 1970s, the FITS authors made a number of implementation choices that, while common at the time, are now seen to limit its utility with modern data. The authors of the FITS standard could not anticipate the challenges which we are facing today in astronomical computing. Difficulties we now face include, but are not limited to, addressing the need to handle an expanded range of specialized data product types (data models), being more conducive to the networked exchange and storage of data, handling very large datasets, and capturing significantly more complex metadata and data relationships. There are members of the community today who find some or all of these limitations unworkable, and have decided to move ahead with storing data in other formats. If this fragmentation continues, we risk abandoning the advantages of broad interoperability, and ready archivability, that the FITS format provides for astronomy. In this paper we detail some selected important problems which exist within the FITS standard today. These problems may provide insight into deeper underlying issues which reside in the format and we provide a discussion of some lessons learned. It is not our intention here to prescribe specific remedies to these issues; rather, it is to call attention of the FITS and greater astronomical computing communities to these problems in the hope that it will spur action to address them.

  16. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
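    As a hedged illustration of the kind of defence the paper argues for (not the authors' demonstration interface), the sketch below rejects malformed numeric entries outright and flags values that differ from a prescribed reference by roughly a factor of ten; the regular expression and ratio window are illustrative choices.

```python
# Strict number-entry parsing plus a heuristic out-by-10 check (illustrative thresholds).
import re

NUMBER_RE = re.compile(r"^\d+(\.\d+)?$")     # one optional decimal point, no stray keys

def parse_entry(keyed: str):
    """Reject malformed numeric entries instead of silently 'correcting' them."""
    if not NUMBER_RE.match(keyed):
        raise ValueError(f"re-enter value: {keyed!r} is not a well-formed number")
    return float(keyed)

def out_by_ten_warning(entered: float, prescribed: float) -> bool:
    """True if the entry looks like a misplaced decimal point or a dropped/added zero."""
    if prescribed <= 0 or entered <= 0:
        return True
    ratio = entered / prescribed
    return 8.0 < ratio < 12.0 or 1 / 12.0 < ratio < 1 / 8.0

print(parse_entry("0.5"))                        # accepted
print(out_by_ten_warning(5.0, prescribed=0.5))   # True: likely out-by-10 slip
# parse_entry("0..5") would raise, forcing re-entry rather than arbitrary processing.
```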

  17. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
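    One plausible reading of that recipe for logistic regression, sketched in code: form two groups whose log-odds differ by the slope times twice the covariate standard deviation, pin the overall event fraction, and then apply the usual two-sample proportion formula. The function below is illustrative only; consult the paper for the exact construction and its validation.

```python
# Approximate sample size for logistic regression via an equivalent two-sample problem.
from statistics import NormalDist
from math import exp, sqrt

def expit(x):
    return 1.0 / (1.0 + exp(-x))

def equivalent_two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.8):
    delta = beta * 2.0 * sd_x                     # log-odds difference between the two groups
    # Choose the common intercept so the overall expected event fraction is unchanged.
    lo, hi = -20.0, 20.0
    for _ in range(100):                          # bisection on the intercept
        a = 0.5 * (lo + hi)
        if 0.5 * (expit(a) + expit(a + delta)) < p_overall:
            lo = a
        else:
            hi = a
    p1, p2 = expit(a), expit(a + delta)
    # Standard two-sample proportion formula applied to the equivalent problem.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = 0.5 * (p1 + p2)
    n_per_group = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
                   z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return p1, p2, n_per_group

p1, p2, n = equivalent_two_sample_n(beta=0.4, sd_x=1.0, p_overall=0.3)
print(f"group probabilities {p1:.3f}, {p2:.3f}; about {n:.0f} subjects per group")
```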

  18. JPL Test Effectiveness Analysis

    NASA Technical Reports Server (NTRS)

    Shreck, Stephanie; Sharratt, Stephen; Smith, Joseph F.; Strong, Edward

    2008-01-01

    1) The pilot study provided meaningful conclusions that are generally consistent with the earlier Test Effectiveness work done between 1992 and 1994: a) Analysis of pre-launch problem/failure reports is consistent with earlier work. b) Analysis of post-launch early mission anomaly reports indicates that there are more software issues in newer missions, and the no-test category for identification of post-launch failures is more significant than in the earlier analysis. 2) Future work includes understanding how differences in missions affect these analyses: a) There are large variations in the number of problem reports and issues that are documented by the different Projects/Missions. b) Some missions do not have any reported environmental test anomalies, even though environmental tests were performed. 3) Each project/mission has different standards and conventions for filling out the PFR forms; the industry may wish to address this issue: a) Existing problem reporting forms are intended to document and track problems, failures, and issues for the projects, to ensure high quality. b) Existing problem reporting forms are not intended for data mining.

  19. Two-level optimization of composite wing structures based on panel genetic optimization

    NASA Astrophysics Data System (ADS)

    Liu, Boyang

    The design of complex composite structures used in aerospace or automotive vehicles presents a major challenge in terms of computational cost. Discrete choices for ply thicknesses and ply angles lead to a combinatorial optimization problem that is too expensive to solve with presently available computational resources. We developed the following methodology for handling this problem for wing structural design: we used a two-level optimization approach with response-surface approximations to optimize panel failure loads for the upper-level wing optimization. We tailored efficient permutation genetic algorithms to the panel stacking sequence design on the lower level. We also developed an approach for improving continuity of ply stacking sequences among adjacent panels. The decomposition approach led to a lower-level optimization of the stacking sequence with a given number of plies in each orientation. An efficient permutation genetic algorithm (GA) was developed for handling this problem. We demonstrated through examples that the permutation GAs are more efficient for stacking sequence optimization than a standard GA. Repair strategies for the standard GA and the permutation GAs for dealing with constraints were also developed. The repair strategies can significantly reduce computation costs for both the standard GA and the permutation GA. A two-level optimization procedure for composite wing design subject to strength and buckling constraints is presented. At the wing level, continuous optimization of ply thicknesses with orientations of 0°, 90°, and ±45° is performed to minimize weight. At the panel level, the number of plies of each orientation (rounded to integers) and in-plane loads are specified, and a permutation genetic algorithm is used to optimize the stacking sequence. The process begins with many panel genetic optimizations for a range of loads and numbers of plies of each orientation. Next, a cubic polynomial response surface is fitted to the optimum buckling load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in adjacent laminates terminate at the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.
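    A toy permutation GA in the spirit of the lower-level panel optimization: the number of plies of each orientation is fixed, and only their order is searched, using an order crossover and a swap mutation that both preserve the ply counts. The fitness below is a made-up proxy (outer ±45° plies rewarded, long runs of one angle penalized), not the buckling response surface used in the study.

```python
# Toy permutation GA for stacking-sequence design with fixed ply counts per orientation.
import random

random.seed(3)
PLIES = [0] * 4 + [90] * 4 + [45] * 4 + [-45] * 4      # fixed numbers of plies per angle

def fitness(seq):
    n = len(seq)
    # Bending-style weight: plies far from the mid-plane count more.
    score = sum((abs(i - (n - 1) / 2) ** 2) * (1.0 if abs(a) == 45 else 0.2)
                for i, a in enumerate(seq))
    # Penalize more than four contiguous plies of the same angle.
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        if run > 4:
            score -= 50.0
    return score

def order_crossover(p1, p2):
    """OX crossover that preserves the multiset of ply angles."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    remaining = list(p2)
    for g in child[a:b]:
        remaining.remove(g)
    fill = iter(remaining)
    return [g if g is not None else next(fill) for g in child]

def swap_mutation(seq, rate=0.2):
    seq = list(seq)
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

pop = [random.sample(PLIES, len(PLIES)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                                  # keep the best half
    children = [swap_mutation(order_crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

print("best stacking sequence:", max(pop, key=fitness))
```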

  20. Development of Finite Elements for Two-Dimensional Structural Analysis Using the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method has been developed in recent years for the analysis of structural mechanics problems. This method treats all independent internal forces as unknown variables that can be calculated by simultaneously imposing equations of equilibrium and compatibility conditions. In this paper a finite element library for analyzing two-dimensional problems by the Integrated Force Method is presented. Triangular- and quadrilateral-shaped elements capable of modeling arbitrary domain configurations are presented. The element equilibrium and flexibility matrices are derived by discretizing the expressions for potential and complementary energies, respectively. The displacement and stress fields within the finite elements are independently approximated. The displacement field is interpolated as it is in the standard displacement method, and the stress field is approximated by using complete polynomials of the correct order. A procedure that uses the definitions of stress components in terms of an Airy stress function is developed to derive the stress interpolation polynomials. Such derived stress fields identically satisfy the equations of equilibrium. Moreover, the resulting element matrices are insensitive to the orientation of local coordinate systems. A method is devised to calculate the number of rigid body modes, and the present elements are shown to be free of spurious zero-energy modes. A number of example problems are solved by using the present library, and the results are compared with corresponding analytical solutions and with results from the standard displacement finite element method. The Integrated Force Method not only gives results that agree well with analytical and displacement method results but also outperforms the displacement method in stress calculations.
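
    The core idea of treating forces as the primary unknowns can be illustrated on a toy statically indeterminate problem (not taken from the paper): two springs in series with both outer ends fixed and a load at the shared node. Stacking one nodal equilibrium equation with one compatibility condition gives the member forces directly.

    ```python
    import numpy as np

    # Integrated Force Method idea on a toy problem: two springs in series
    # (stiffnesses k1, k2), both outer ends fixed, load P at the shared node.
    # Unknowns are the member forces F1, F2 (tension positive).
    k1, k2, P = 2.0, 3.0, 10.0

    # Row 1: nodal equilibrium at the loaded node,   F1 - F2 = P
    # Row 2: compatibility of deformations,          F1/k1 + F2/k2 = 0
    S = np.array([[1.0, -1.0],
                  [1.0 / k1, 1.0 / k2]])
    rhs = np.array([P, 0.0])

    F1, F2 = np.linalg.solve(S, rhs)
    # F1 = P*k1/(k1+k2) (tension), F2 = -P*k2/(k1+k2) (compression)
    print(F1, F2)
    ```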

  1. Organizing Community-Based Data Standards: Lessons from Developing a Successful Open Standard in Systems Biology

    NASA Astrophysics Data System (ADS)

    Hucka, M.

    2015-09-01

    In common with many fields, including astronomy, a vast number of software tools for computational modeling and simulation are available today in systems biology. This wealth of resources is a boon to researchers, but it also presents interoperability problems. Despite working with different software tools, researchers want to disseminate their work widely as well as reuse and extend the models of other researchers. This situation led in the year 2000 to an effort to create a tool-independent, machine-readable file format for representing models: SBML, the Systems Biology Markup Language. SBML has since become the de facto standard for its purpose. Its success and general approach have inspired and influenced other community-oriented standardization efforts in systems biology. Open standards are essential for the progress of science in all fields, but it is often difficult for academic researchers to organize successful community-based standards. I draw on personal experiences from the development of SBML and summarize some of the lessons learned, in the hope that this may be useful to other groups seeking to develop open standards in a community-oriented fashion.

  2. Problems of standardizing and technical regulation in the electric power industry

    NASA Astrophysics Data System (ADS)

    Grabchak, E. P.

    2016-12-01

    A mandatory condition to ensure normal operation of a power system and efficiency in the sector is standardization and legal regulation of the technological activities of electric power engineering entities and consumers. In contrast to the Soviet era, present-day technical guidance documents are in most cases not mandatory to follow, being of an advisory nature because no new ones have been issued. During the last five years, the industry has shown a deterioration in reliability and engineering controllability as a result of the dominant impact of short-term market stimuli and differences in basic technological policies. In the absence of clear requirements regarding the engineering aspects of such activities, production operation does not contribute to preserving the technical integrity of the Russian power system, which leads to the loss of performance capability and controllability and causes disturbances in the power supply to consumers. The result of this problem is a high rate of accident incidence. The dynamics of accidents by type of equipment are given, indicating a persisting trend of growth in the number of accidents, which are of a systematic nature. Several problematic aspects of the engineering activities of electric power engineering entities that require standardization and legal regulation are pointed out: in the domestic power system, a large amount of electrotechnical and generating equipment operates, along with regulation systems, that does not comply with the principles and technical rules representing the framework within which the Energy System of Russia is built and functions.

  3. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
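
    For one of the listed applications, L1 estimation, the sketch below shows the standard linear-programming transformation (variable splitting of the residuals) that a semilinear simplex avoids by handling sign-dependent costs directly; the toy data and the use of scipy.optimize.linprog are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # L1 (least absolute deviations) regression posed as a standard LP -- the kind
    # of equivalent linear program that the semilinear simplex (SLP) avoids by
    # handling sign-dependent costs directly.  Toy data are assumed.
    rng = np.random.default_rng(0)
    n, p = 50, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.laplace(scale=0.5, size=n)

    # Variables: beta (p, free), u (n, >=0), v (n, >=0) with X beta + u - v = y,
    # minimizing sum(u + v) = sum |y - X beta|.
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)

    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    beta_l1 = res.x[:p]
    print(beta_l1)
    ```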

  4. Assessment of cleaning and disinfection in Salmonella-contaminated poultry layer houses using qualitative and semi-quantitative culture techniques.

    PubMed

    Wales, Andrew; Breslin, Mark; Davies, Robert

    2006-09-10

    Salmonella infection of laying flocks in the UK is predominantly a problem of the persistent contamination of layer houses and associated wildlife vectors by Salmonella Enteritidis. Methods for its control and elimination include effective cleaning and disinfection of layer houses between flocks, and it is important to be able to measure the success of such decontamination. A method for the environmental detection and semi-quantitative enumeration of salmonellae was used and compared with a standard qualitative method, in 12 Salmonella-contaminated caged layer houses before and after cleaning and disinfection. The quantitative technique proved to have comparable sensitivity to the standard method, and additionally provided insights into the numerical Salmonella challenge that replacement flocks would encounter. Elimination of S. Enteritidis was not achieved in any of the premises examined, although in some substantial reductions in the prevalence and numbers of salmonellae were demonstrated, whilst in others an increase in contamination was observed after cleaning and disinfection. Particular problems with feeders and wildlife vectors were highlighted. The use of a quantitative method assisted the identification of problem areas, such as those with a high initial bacterial load or those experiencing only a modest reduction in bacterial count following decontamination.

  5. The University and the Municipality: Summary of Proceedings of the First Session of the National Association of Municipal Universities. Bulletin, 1915, No. 38. Whole Number 665

    ERIC Educational Resources Information Center

    United States Bureau of Education, Department of the Interior, 1915

    1915-01-01

    The problems of industry, government, and life in the modern industrial and commercial city are numerous, large, and complex. For their solution a larger amount of scientific knowledge and higher standards of intelligence among citizens are needed. All the city's agencies for good and progress need to be united and vitalized for more effective…

  6. Helium synthesis, neutrino flavors, and cosmological implications

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.

    1979-01-01

    The problem of the production of helium in big bang cosmology is re-examined in the light of several recent astrophysical observations. These data, and theoretical particle physics considerations, lead to some important inconsistencies in the standard big bang model and suggest that a more complicated picture is needed. Thus, recent constraints on the number of neutrino flavors, as well as constraints on the mean density (openness) of the universe, need not be valid.

  7. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area patterns at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
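
    With the dipole positions held fixed, the field is linear in the dipole moment components, so the moments can be recovered by ordinary least squares. The sketch below illustrates that linear step on an assumed toy geometry (it is not the Magsat processing); the kernel is the standard point-dipole field formula.

    ```python
    import numpy as np

    MU0_4PI = 1e-7  # mu_0 / (4 pi) in SI units

    def dipole_design_matrix(obs, src):
        """Rows: Bx, By, Bz at each observation point; columns: the three moment
        components of each fixed-position dipole (the field is linear in them)."""
        A = np.zeros((3 * len(obs), 3 * len(src)))
        for i, ro in enumerate(obs):
            for j, rs in enumerate(src):
                r = ro - rs
                rn = np.linalg.norm(r)
                rhat = r / rn
                # B = mu0/4pi * (3 rhat rhat^T - I) m / rn^3
                G = MU0_4PI * (3.0 * np.outer(rhat, rhat) - np.eye(3)) / rn**3
                A[3*i:3*i+3, 3*j:3*j+3] = G
        return A

    # Illustrative geometry (not Magsat data): a few dipoles inside a unit sphere,
    # observations on a shell of radius 1.5.
    rng = np.random.default_rng(1)
    src = rng.normal(size=(5, 3)) * 0.3
    obs = rng.normal(size=(60, 3))
    obs = 1.5 * obs / np.linalg.norm(obs, axis=1, keepdims=True)

    true_m = rng.normal(size=(5, 3)).ravel()
    A = dipole_design_matrix(obs, src)
    data = A @ true_m                      # noiseless synthetic observations

    m_fit, *_ = np.linalg.lstsq(A, data, rcond=None)
    print(np.allclose(m_fit, true_m))
    ```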

  8. Building information modelling review with potential applications in tunnel engineering of China.

    PubMed

    Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin

    2017-08-01

    Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycle and increased security risks. To promote the development of tunnel engineering in China, this paper combines actual cases, including the Xingu mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently mainly applied during the design stage rather than during construction and operation stages. The application of BIM technology in tunnel engineering covers many problems, such as a lack of standards, incompatibility of different software, disorganized management, complex combination with GIS (Geographic Information System), low utilization rate and poor awareness. In this study, through summary of related research results and engineering cases, suggestions are introduced and an outlook for the BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation maintenance.

  9. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the "effectively thin" regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
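
    For orientation, the sketch below solves the classical single-zone two-level problem in the escape-probability approximation, i.e. the algebraic balance that CEP augments with zone-coupling terms. The rate coefficients, the optical-depth scaling and the beta(tau) form are illustrative assumptions, and stimulated emission is neglected.

    ```python
    import numpy as np

    # Single-zone escape-probability solution of the two-level problem: iterate the
    # population ratio with the escape probability beta(tau), where tau itself
    # depends on the lower-level population.  Rate coefficients, the optical-depth
    # scaling and the beta(tau) form are illustrative assumptions.
    A_ul = 1.0e-4      # spontaneous decay rate [1/s]
    C_ul = 1.0e-6      # collisional de-excitation rate [1/s]
    C_lu = 3.0e-7      # collisional excitation rate [1/s]
    tau0 = 50.0        # line-centre optical depth if all atoms sat in the lower level

    def beta(tau):
        # escape probability for a uniform zone (tends to 1 as tau -> 0)
        return (1.0 - np.exp(-tau)) / tau if tau > 1e-8 else 1.0

    x = 0.0                                   # x = n_u / n_l
    for _ in range(200):
        tau = tau0 / (1.0 + x)                # lower-level fraction sets the opacity
        x_new = C_lu / (beta(tau) * A_ul + C_ul)
        if abs(x_new - x) < 1e-12:
            break
        x = x_new

    print("n_u/n_l =", x, " tau =", tau0 / (1.0 + x))
    ```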

  10. Building information modelling review with potential applications in tunnel engineering of China

    PubMed Central

    Zhou, Weihong; Qin, Haiyang; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin

    2017-01-01

    Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycle and increased security risks. To promote the development of tunnel engineering in China, this paper combines actual cases, including the Xingu mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently mainly applied during the design stage rather than during construction and operation stages. The application of BIM technology in tunnel engineering covers many problems, such as a lack of standards, incompatibility of different software, disorganized management, complex combination with GIS (Geographic Information System), low utilization rate and poor awareness. In this study, through summary of related research results and engineering cases, suggestions are introduced and an outlook for the BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation maintenance. PMID:28878970

  11. Building information modelling review with potential applications in tunnel engineering of China

    NASA Astrophysics Data System (ADS)

    Zhou, Weihong; Qin, Haiyang; Qiu, Junling; Fan, Haobo; Lai, Jinxing; Wang, Ke; Wang, Lixin

    2017-08-01

    Building information modelling (BIM) can be applied to tunnel engineering to address a number of problems, including complex structure, extensive design, long construction cycle and increased security risks. To promote the development of tunnel engineering in China, this paper combines actual cases, including the Xingu mountain tunnel and the Shigu Mountain tunnel, to systematically analyse BIM applications in tunnel engineering in China. The results indicate that BIM technology in tunnel engineering is currently mainly applied during the design stage rather than during construction and operation stages. The application of BIM technology in tunnel engineering covers many problems, such as a lack of standards, incompatibility of different software, disorganized management, complex combination with GIS (Geographic Information System), low utilization rate and poor awareness. In this study, through summary of related research results and engineering cases, suggestions are introduced and an outlook for the BIM application in tunnel engineering in China is presented, which provides guidance for design optimization, construction standards and later operation maintenance.

  12. Truly random number generation: an example

    NASA Astrophysics Data System (ADS)

    Frauchiger, Daniela; Renner, Renato

    2013-10-01

    Randomness is crucial for a variety of applications, ranging from gambling to computer simulations, and from cryptography to statistics. However, many of the currently used methods for generating randomness do not meet the criteria that are necessary for these applications to work properly and safely. A common problem is that a sequence of numbers may look random but nevertheless not be truly random. In fact, the sequence may pass all standard statistical tests and yet be perfectly predictable. This renders it useless for many applications. For example, in cryptography, the predictability of a "randomly" chosen password is obviously undesirable. Here, we review a recently developed approach to generating true, and hence unpredictable, randomness.
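
    The point about predictability can be made in a few lines of code: a seeded pseudorandom sequence typically passes a crude frequency test yet is reproduced exactly by anyone who knows the seed. The seed and the chi-square test below are arbitrary illustrative choices.

    ```python
    import random
    from collections import Counter

    # A pseudorandom sequence can look random (here: score well on a crude
    # chi-square frequency test on digits) while being perfectly predictable
    # from its seed.
    SEED = 1234

    rng = random.Random(SEED)
    digits = [rng.randrange(10) for _ in range(10_000)]

    # Crude frequency test: chi-square statistic against a uniform digit distribution.
    counts = Counter(digits)
    expected = len(digits) / 10
    chi2 = sum((counts[d] - expected) ** 2 / expected for d in range(10))
    print("chi-square (9 dof; ~16.9 is the 5% critical value):", round(chi2, 2))

    # ...yet an "attacker" who knows the seed reproduces the sequence exactly.
    attacker = random.Random(SEED)
    assert digits == [attacker.randrange(10) for _ in range(10_000)]
    print("sequence fully predicted from the seed")
    ```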

  13. Measurements of the Absorption by Auditorium Seating—A Model Study

    NASA Astrophysics Data System (ADS)

    BARRON, M.; COLEMAN, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  14. Curvilinear grids for WENO methods in astrophysical simulations

    NASA Astrophysics Data System (ADS)

    Grimm-Strele, H.; Kupka, F.; Muthsam, H. J.

    2014-03-01

    We investigate the applicability of curvilinear grids in the context of astrophysical simulations and WENO schemes. With the non-smooth mapping functions from Calhoun et al. (2008), we can tackle many astrophysical problems which were out of scope with the standard grids in numerical astrophysics. We describe the difficulties occurring when implementing curvilinear coordinates into our WENO code, and how we overcome them. We illustrate the theoretical results with numerical data. The WENO finite difference scheme works only for high Mach number flows and smooth mapping functions, whereas the finite volume scheme gives accurate results even for low Mach number flows and on non-smooth grids.

  15. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of these layers. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In the star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve their reliability problem. However, a division technique is applied to the mesh network because its complexity is higher than that of the others. A mesh network using the division technique is classified into several non-reducible series systems and edge parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability of mesh networks increases when the number of edges in the parallel systems increases, while the reliability drops quickly for all three networks when the numbers of edges and nodes increase. Greater resource usage is another factor that decreases reliability; thus lower network reliability results from network complexity, greater resource usage and complex object relationships.
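
    The series/parallel reliability-block-diagram algebra used for the star, tree and mesh analysis reduces to two formulas: a series system works only if all blocks work, and a parallel (redundant) system fails only if all blocks fail. The sketch below applies them to made-up component reliabilities; the numbers are not from the paper.

    ```python
    from math import prod

    # Reliability-block-diagram algebra: a series system works only if every block
    # works; a parallel (redundant) system fails only if every block fails.
    # Component values below are illustrative, not from the paper.

    def series(reliabilities):
        return prod(reliabilities)

    def parallel(reliabilities):
        return 1.0 - prod(1.0 - r for r in reliabilities)

    # Example: layers stacked in series within a node, plus a mesh hop realised
    # by two redundant parallel edges.
    r_phy, r_mac, r_net, r_app = 0.999, 0.995, 0.99, 0.98
    r_node = series([r_phy, r_mac, r_net, r_app])
    r_redundant_hop = parallel([0.97, 0.97])

    print("single node:", round(r_node, 4))
    print("redundant hop:", round(r_redundant_hop, 4))
    print("end-to-end over 3 such hops:", round(series([r_redundant_hop] * 3), 4))
    ```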

  16. Consensus properties and their large-scale applications for the gene duplication problem.

    PubMed

    Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver

    2016-06-01

    Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.

  17. UTIS as one example of standardization of subsea intervention systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, F.G.

    1995-12-31

    The number of diverless subsea interventions has increased dramatically during the last few years. A number of types of tools and equipment have been designed and used. A typical procedure has been to develop new intervention tools under each new contract based on experience from the previous project. This is not at all optimal with regard to project cost and risk, and is no longer acceptable as the oil industry now calls for cost savings within all areas of field development. One answer to the problem will be to develop universal intervention systems with the capability to perform a range of related tasks, with only minor, planned modifications of the system. This philosophy will dramatically reduce planning, engineering, construction and interface work related to the intervention operation as the main work will be only to locate a standardized landing facility on the subsea structure. The operating procedures can be taken "off the shelf". To adapt to this philosophy within the tie-in area, KOS decided to standardize on a Universal Tie-In System (UTIS), which will be included in a Tool Pool for rental world-wide. This paper describes UTIS as a typical example of standardization of subsea intervention systems. 16 figs., 1 tab.

  18. Optimization of heterogeneous Bin packing using adaptive genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sridhar, R.; Chandrasekaran, M.; Sriramya, C.; Page, Tom

    2017-03-01

    This research concentrates on bin packing using a hybrid genetic approach. The optimal and feasible packing of goods for transportation and distribution to various locations, subject to practical constraints, is the key point of this work. The number of boxes to be packed cannot be predicted in advance, the boxes may not always be of the same category, and many practical constraints are involved, which is why optimal packing is so important to industry. This work presents a heuristic Genetic Algorithm (HGA) for solving the three-dimensional (3D), single-container, arbitrarily sized, rectangular prismatic bin packing optimization problem while considering most of the practical constraints faced by logistics industries. This goal was achieved by minimizing the empty volume inside the container using a genetic approach. A feasible packing pattern was achieved by satisfying various practical constraints such as box orientation, stack priority, container stability, weight, overlapping, and shipment placement. The 3D bin packing problem consists of n boxes to be packed into a container of standard dimensions in such a way as to maximize the volume utilization and, in turn, profit. Furthermore, the boxes to be packed may be of arbitrary sizes. The user input data are the number of boxes, their sizes, shapes, weights, and constraints, if any, along with the standard container dimensions. This user input is stored in a database and encoded into string (chromosome) format, which is the form normally accepted by a GA. GA operators then act on these encoded strings to find the best solution.
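
    A much-simplified sketch of the chromosome decode-and-score step follows: a permutation of box indices is decoded greedily and scored by utilized container volume. The decoder here checks only total volume, whereas the paper's decoder handles 3D placement, orientation, stability and weight constraints; the box and container sizes are made up, and permutation operators such as order crossover would drive the search when enumeration is infeasible.

    ```python
    from dataclasses import dataclass
    from itertools import permutations

    # Simplified decode-and-score step for the permutation chromosome: boxes are
    # taken in chromosome order and accepted while they still fit, and fitness is
    # the utilised container volume.  This volume-only relaxation ignores the real
    # 3D placement, orientation, stability and weight constraints; sizes are made up.

    @dataclass
    class Box:
        l: float
        w: float
        h: float
        @property
        def volume(self):
            return self.l * self.w * self.h

    CONTAINER_VOLUME = 10.0                        # assumed capacity [m^3]
    BOXES = [Box(1.2, 0.8, 1.0), Box(2.0, 1.0, 1.0), Box(0.5, 0.5, 0.5),
             Box(1.0, 1.0, 2.2), Box(1.5, 1.2, 1.1), Box(2.2, 1.8, 1.9)]

    def fitness(chromosome):
        used = 0.0
        for idx in chromosome:                     # greedy: accept while volume remains
            if used + BOXES[idx].volume <= CONTAINER_VOLUME:
                used += BOXES[idx].volume
        return used / CONTAINER_VOLUME             # volume utilisation to maximise

    # With only six boxes we can enumerate all orderings; a GA with order-preserving
    # crossover/mutation would search this permutation space when enumeration fails.
    best = max(permutations(range(len(BOXES))), key=fitness)
    print(best, round(fitness(best), 3))
    ```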

  19. Search optimization of named entities from twitter streams

    NASA Astrophysics Data System (ADS)

    Fazeel, K. Mohammed; Hassan Mottur, Simama; Norman, Jasmine; Mangayarkarasi, R.

    2017-11-01

    With the enormous number of tweets, people often find it difficult to obtain exact information about them. One approach to getting information about tweets is via Google, but no accurate tool has been developed for search optimization or for retrieving information about tweets. The proposed system therefore provides search optimization and functionality for getting information about tweets. Another problem is that tweets often contain grammatical errors, misspellings, non-standard abbreviations, and meaningless capitalization; these problems can also be handled by the tool. A lot of time can be saved, and with efficient search optimization the relevant information about particular tweets can be obtained.

  20. Advanced Solar Cell Testing and Characterization

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila; Curtis, Henry; Piszczor, Michael

    2005-01-01

    The topic for this workshop stems from an ongoing effort by the photovoltaic community and U.S. government to address issues and recent problems associated with solar cells and arrays experienced by a number of different space systems. In April 2003, a workshop session was held at the Aerospace Space Power Workshop to discuss an effort by the Air Force to update and standardize solar cell and array qualification test procedures in an effort to ameliorate some of these problems. The organizers of that workshop session thought it was important to continue these discussions and present this information to the entire photovoltaic community. Thus, it was decided to include this topic as a workshop at the following SPRAT conference.

  1. THE DISCOUNTED REPRODUCTIVE NUMBER FOR EPIDEMIOLOGY

    PubMed Central

    Reluga, Timothy C.; Medlock, Jan; Galvani, Alison

    2013-01-01

    The basic reproductive number, R0, and the effective reproductive number, Re, are commonly used in mathematical epidemiology as summary statistics for the size and controllability of epidemics. However, these commonly used reproductive numbers can be misleading when applied to predict pathogen evolution because they do not incorporate the impact of the timing of events in the life-history cycle of the pathogen. To study evolution problems where the host population size is changing, measures like the ultimate proliferation rate must be used. A third measure of reproductive success, which combines properties of both the basic reproductive number and the ultimate proliferation rate, is the discounted reproductive number Rd. The discounted reproductive number is a measure of reproductive success that is an individual's expected lifetime offspring production discounted by the background population growth rate. Here, we draw attention to the discounted reproductive number by providing an explicit definition and a systematic application framework. We describe how the discounted reproductive number overcomes the limitations of both the standard reproductive numbers and proliferation rates, and show that it is closely connected to Fisher's reproductive values for different life-history stages. PMID:19364158
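
    A schematic rendering of the verbal definition, in notation assumed here rather than taken from the paper, is:

    ```latex
    % Schematic form of the discounted reproductive number (notation assumed):
    % expected lifetime offspring production, discounted at the background
    % population growth rate r.
    \[
      \mathcal{R}_d \;=\; \mathbb{E}\!\left[\int_0^{\infty} e^{-r t}\, b(t)\, \ell(t)\, \mathrm{d}t\right],
    \]
    % where b(t) is the rate of offspring production at age (or infection age) t,
    % \ell(t) is the probability of still being active (alive/infectious) at t,
    % and r is the background population growth rate used for discounting.
    % With r = 0 this reduces to a basic reproductive number of the usual form.
    ```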

  2. Significant Problems in FITS Limit Its Use in Modern Astronomical Research

    NASA Astrophysics Data System (ADS)

    Thomas, B.; Jenness, T.; Economou, F.; Greenfield, P.; Hirst, P.; Berry, D. S.; Bray, E. M.; Gray, N.; Muna, D.; Turner, J.; de Val-Borro, M.; Santander-Vela, J.; Shupe, D.; Good, J.; Berriman, G. B.

    2014-05-01

    The Flexible Image Transport System (FITS) standard has been a great boon to astronomy, allowing observatories, scientists, and the public to exchange astronomical information easily. The FITS standard is, however, showing its age. Developed in the late 1970s, the format embodies a number of implementation choices that, while common at the time, are now seen to limit its utility with modern data. The authors of the FITS standard could not have anticipated the challenges we face today in astronomical computing. Difficulties we now face include, but are not limited to, having to address the need to handle an expanded range of specialized data product types (data models), being more conducive to the networked exchange and storage of data, handling very large datasets, and the need to capture significantly more complex metadata and data relationships. There are members of the community today who find some (or all) of these limitations unworkable, and have decided to move ahead with storing data in other formats. This reaction should be taken as a wakeup call to the FITS community to make changes in the FITS standard, or to see its usage fall. In this paper we detail some selected important problems which exist within the FITS standard today. It is not our intention to prescribe specific remedies to these issues; rather, we hope to call the attention of the FITS and greater astronomical computing communities to these issues in the hope that it will spur action to address them.

  3. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
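
    The sketch below implements the standard two-sided (nonsymmetric) Lanczos biorthogonalization without look-ahead, in its textbook formulation rather than the paper's implementation; it simply stops when the inner product of the two candidate vectors (nearly) vanishes, which is precisely the step a look-ahead version skips over.

    ```python
    import numpy as np

    def lanczos_biorth(A, v1, w1, m, tol=1e-12):
        """Standard two-sided (nonsymmetric) Lanczos biorthogonalization, without
        look-ahead.  Stops at a (near-)breakdown, i.e. when w_hat.v_hat ~ 0."""
        v = v1 / (w1 @ v1)                       # scale so that w.v = 1
        w = w1.copy()
        v_prev = np.zeros_like(v); w_prev = np.zeros_like(w)
        beta = delta = 0.0
        alphas, betas, deltas, Vs, Ws = [], [], [], [], []
        for _ in range(m):
            Vs.append(v); Ws.append(w)
            alpha = w @ (A @ v)
            v_hat = A @ v - alpha * v - beta * v_prev
            w_hat = A.T @ w - alpha * w - delta * w_prev
            alphas.append(alpha)
            s = w_hat @ v_hat
            if abs(s) < tol:                     # breakdown (or happy termination)
                break
            delta, beta = np.sqrt(abs(s)), s / np.sqrt(abs(s))
            deltas.append(delta); betas.append(beta)
            v_prev, w_prev, v, w = v, w, v_hat / delta, w_hat / beta
        k = len(alphas)
        T = np.diag(alphas) + np.diag(betas[:k-1], 1) + np.diag(deltas[:k-1], -1)
        return np.column_stack(Vs), np.column_stack(Ws), T

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 40))
    V, W, T = lanczos_biorth(A, rng.normal(size=40), rng.normal(size=40), 15)
    # Three-term recurrence A v_j = beta_j v_{j-1} + alpha_j v_j + delta_{j+1} v_{j+1}
    # holds for every retained column except the last:
    print(np.allclose(A @ V[:, :-1], (V @ T)[:, :-1]))
    ```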

  4. Relationship between masculinity and femininity in drinking and alcohol-related behavior in a general population sample.

    PubMed

    Lara-Cantú, M A; Medina-Mora, M E; Gutiérrez, C E

    1990-08-01

    The relationship between gender-related personality traits, on one hand, and drinking, permissiveness towards drinking, and social as well as personal problems associated with drinking, on the other, was studied in a general population sample from the City of Morelia, Mexico. Four gender-related trait scales were used for measuring assertive and aggressive masculinity and affective and submissive femininity, in addition to a standardized questionnaire for assessing drinking and other associated behavior. Some of the main results showed that people with high scores in affective femininity were less willing to allow drinking. Men who adopted a submissive feminine role and women with high masculine aggressive scores were more permissive as regards drinking. Among men, assertive masculine and affective feminine traits were more characteristic among those who drank than among abstainers. Drinking among women was related to liberal attitudes towards drinking and to aggressive masculinity. As regards the number of drinks consumed per month, assertive masculinity and liberal attitudes among men and affective femininity and liberal attitudes among women predicted the number of drinks. Affective femininity was negatively related to drinking. Regarding drinking-associated problems, frequency of drunkenness and submissive femininity among males predicted greater personal and social problems. Among women, drunkenness frequency and number of drinks were the most significant predictors. Contrary to what has been found in other countries, gender was a better drinking predictor than gender-related personality traits.

  5. Dental practice during a world cruise: characterisation of oral health at sea.

    PubMed

    Sobotta, Bernhard A J; John, Mike T; Nitschke, Ina

    2006-01-01

    To describe oral health of passengers and crew attending the dental service aboard during a two-month world cruise. In a retrospective, descriptive epidemiologic study design, the routine documentation of all dental treatment provided at sea was analysed after the voyage. Subjects were n = 57 passengers (3.5 % of 1619) with a mean age of 71 (+/- 9.8) years and n = 56 crew (5.6 % of 999) with a mean age of 37 (+/- 12.0) years. Age, gender, nationality, number of natural teeth and implants were extracted. The prosthetic status was described by recording the number of teeth replaced by fixed prosthesis and the number of teeth replaced by removable prosthesis. Oral health-related quality of life (OHRQoL) was measured using the 14-item Oral Health Impact Profile (OHIP-14) and characterised by the OHIP sum score. Women attended for treatment more often than men. Passengers had a mean number of 20 natural teeth plus substantial fixed and removable prosthodontics. Crew had a mean of 26 teeth. British crew and Australian passengers attended the dental service above average. Crew tended to have a higher average OHIP-14 sum score than passengers, indicating an increased rate of perceived problems. Emergency patients from both crew and passengers have a higher sum score than patients attending for routine treatment. In passengers the average number of teeth appears to be higher than that of an age-matched population of industrialized countries. However, the passengers' socioeconomic status was higher, which has an effect on this finding. Socioeconomic factors also serve to explain the high standard of prosthetic care in passengers. Crew in general present with less sophisticated prosthetic devices. This is in line with their different socioeconomic status and origin from developing countries. The level of dental fees aboard in comparison to treatment costs in home countries may explain some of the differences in attendance. Passengers have enjoyed high standards of prosthetic care in the past and will expect a similarly high standard from ship-based facilities. The ease of access to quality dental care may explain the relatively low level of perceived problems as characterised by oral health-related quality of life scores. The dental officer aboard has to be prepared to care for very varied diagnostic and treatment needs.

  6. Current status of Kampo medicine curricula in all Japanese medical schools

    PubMed Central

    2012-01-01

    Background There have been a few but not precise surveys of the current status of traditional Japanese Kampo education at medical schools in Japan. Our aim was to identify problems and suggest solutions for a standardized Kampo educational model for all medical schools throughout Japan. Methods We surveyed all 80 medical schools in Japan regarding eight items related to teaching or studying Kampo medicine: (1) the number of class meetings, target school year(s), and type of classes; (2) presence or absence of full-time instructors; (3) curricula contents; (4) textbooks in use; (5) desire for standardized textbooks; (6) faculty development programmes; (7) course contents; and (8) problems to be solved to promote Kampo education. We conducted descriptive analyses without statistics. Results Eighty questionnaires were collected (100%). (1) There were 0 to 25 Kampo class meetings during the 6 years of medical school. At least one Kampo class was conducted at 98% of the schools, ≥4 at 84%, ≥8 at 44%, and ≥16 at 5%. Distribution of classes was 19% and 57% for third- and fourth-year students, respectively. (2) Only 29% of schools employed full-time Kampo medicine instructors. (3) Medicine was taught on the basis of traditional Japanese Kampo medicine by 81% of the schools, Chinese medicine by 19%, and Western medicine by 20%. (4) Textbooks were used by 24%. (5) Seventy-four percent considered using standardized textbooks. (6) Thirty-three percent provided faculty development programmes. (7) Regarding course contents, “characteristics” was selected by 94%, “basic concepts” by 84%, and evidence-based medicine by 64%. (8) Among the problems to be solved promptly, curriculum standardization was selected by 63%, preparation of simple textbooks by 51%, and fostering instructors responsible for Kampo education by 65%. Conclusions Japanese medical schools only offer students a short time to study Kampo medicine, and the impetus to include Kampo medicine in their curricula varies among schools. Future Kampo education at medical schools requires solving several problems, including curriculum standardization. PMID:23122050

  7. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  8. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  9. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart SVD embedded in the time domain method (TDM) and the standard form of CG, the M-PCG with a proper regularization matrix has many advantages, such as better adaptability and greater robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, and this makes M-PCG a preferred choice for field MFI applications.
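
    For reference, the baseline preconditioned conjugate gradient iteration that M-PCG builds on is sketched below; the paper's modified Gram-Schmidt step and regularization matrix L are not included, and the SPD test system with a Jacobi preconditioner is an assumption for illustration.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, x0=None, tol=1e-10, maxiter=500):
        """Standard preconditioned conjugate gradient for SPD systems A x = b,
        with the preconditioner applied as z = M_inv(r).  This is only the
        baseline iteration; the paper's modified Gram-Schmidt step and the
        regularization matrix L are not included in this sketch."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for j in range(maxiter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, j + 1
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, maxiter

    # Small SPD test system with a Jacobi (diagonal) preconditioner.
    rng = np.random.default_rng(0)
    B = rng.normal(size=(100, 100))
    A = B @ B.T + 100 * np.eye(100)
    b = rng.normal(size=100)
    x, iters = pcg(A, b, M_inv=lambda r: r / np.diag(A))
    print(iters, np.linalg.norm(A @ x - b))
    ```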

  10. Program Manager: The Journal of the Defense Systems Management College. Volume 15, Number 4, July-August 1986.

    DTIC Science & Technology

    1986-08-01

    [Garbled OCR excerpt; recoverable topics include CAD/CAM, common engineering data bases, functional analysis and synthesis, the National Bureau of Standards, and a data storage and retrieval support simulator assembled at the Defense Systems Management College.]

  11. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
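
    A minimal pair-count sketch of a correlation-function estimate is given below, using the DD*RR/DR^2 - 1 estimator associated with this line of work; the point sets, binning and naive O(N^2) counting are toy choices, and a real survey analysis would also need the selection-function and weighting refinements discussed in the paper.

    ```python
    import numpy as np

    # Pair-count estimate of the two-point correlation function with the
    # DD*RR/DR^2 - 1 estimator.  Toy uniform catalogues and naive O(N^2) counting.
    def pair_counts(a, b, edges, same=False):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        if same:
            d = d[np.triu_indices(len(a), k=1)]     # unique pairs, no self-pairs
        counts, _ = np.histogram(d, bins=edges)
        return counts.astype(float)

    rng = np.random.default_rng(0)
    data = rng.random((300, 3))                     # substitute a clustered catalogue here
    rand = rng.random((1200, 3))                    # random catalogue tracing the same volume
    edges = np.linspace(0.02, 0.3, 15)

    DD = pair_counts(data, data, edges, same=True) / (len(data) * (len(data) - 1) / 2)
    RR = pair_counts(rand, rand, edges, same=True) / (len(rand) * (len(rand) - 1) / 2)
    DR = pair_counts(data, rand, edges) / (len(data) * len(rand))

    xi = DD * RR / DR**2 - 1.0                      # ~0 for an unclustered catalogue
    print(np.round(xi, 3))
    ```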

  12. Problems of Automation and Management Principles Information Flow in Manufacturing

    NASA Astrophysics Data System (ADS)

    Grigoryuk, E. N.; Bulkin, V. V.

    2017-07-01

    Automated control systems for technological processes are complex systems characterized by elements with a common purpose, by the systemic nature of the algorithms implemented for the exchange and processing of information, and by a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, draws parallels between them, and identifies their strengths and weaknesses. A non-standard process control system is also proposed.

  13. Recent Trends in Computerized Medical Information Systems for Hospital Departments

    PubMed Central

    Maturi, Vincent F.; DuBois, Richard M.

    1980-01-01

    The authors have re-examined the current state of commercially-available department-specific medical information systems and their relationship to the hospital-wide communications systems. The current state was compared to the state two years ago when the authors made their first survey. The changes in the trend, the number of problems that hospital administrators or department directors are faced with when purchasing or using department-specific systems, and the activity in standardization were studied.

  14. Combat Service Support Model Development: BRASS - TRANSLOG - Army 21

    DTIC Science & Technology

    1984-07-01

    throughout’the system. Transitional problems may address specific hardware and related software , such as the Standard Army Ammunition System ( SAAS ...FILE. 00 Cabat Service Support Model Development .,PASS TRANSLOG -- ARMY 21 0 Contract Number DAAK11-84-D-0004 Task Order #1 DRAFT REPOkT July 1984 D...Armament Systems, Inc. 211 West Bel Air Avenue P.O. Box 158 Aberdeen, MD 21001 8 8 8 2 1 S CORMIT SERVICE SUPPORT MODEL DEVELOPMENT BRASS -- TRANSLOG

  15. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional X-ray tomography is formulated. In this approach, the Fourier transform is not used, which allows the creation of practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  16. The Army’s Green Warriors: Environmental Considerations in Contingency Operations

    DTIC Science & Technology

    2008-01-01

    The relationship between the Army and the environment is a two-way street. On the one hand, soldiers and ... poor sanitation can cause debilitating shorter-term illness and can also sometimes cause longer-term health problems, such as increased cancer

  17. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
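
    A hedged sketch of the surrogate objective follows: compute the mean of the monthly standard deviations of retrieved CO2 and pick a single-feature threshold that minimizes it while retaining a required fraction of soundings. The synthetic data, the feature, the threshold grid, and the 30% throughput floor are all assumptions.

    ```python
    import numpy as np

    # Mean Monthly STDEV (MMS) surrogate: choose a one-feature threshold filter that
    # minimises the mean of the monthly standard deviations of retrieved CO2 while
    # keeping at least a target fraction of soundings.  Synthetic data, the feature,
    # and the threshold grid are assumptions.
    rng = np.random.default_rng(0)
    n = 5000
    month = rng.integers(0, 12, size=n)
    feature = rng.random(n)                              # e.g. an aerosol/quality proxy
    co2 = 400 + rng.normal(0, 1, n) + 6 * feature * rng.normal(0, 1, n)  # noisier at high feature

    def mms(values, months):
        return np.mean([values[months == m].std() for m in np.unique(months)])

    best = None
    for thr in np.linspace(0.2, 1.0, 41):                # candidate thresholds
        keep = feature <= thr
        if keep.mean() < 0.3:                            # require >= 30% throughput
            continue
        score = mms(co2[keep], month[keep])
        if best is None or score < best[1]:
            best = (thr, score, keep.mean())

    print("threshold %.2f  MMS %.3f  kept %.0f%%" % (best[0], best[1], 100 * best[2]))
    ```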

  18. A trust-based sensor allocation algorithm in cooperative space search problems

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the used sensor resource, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem with trust-in-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.

  19. A public health perspective to environmental barriers and accessibility problems for senior citizens living in ordinary housing.

    PubMed

    Granbom, Marianne; Iwarsson, Susanne; Kylberg, Marianne; Pettersson, Cecilia; Slaug, Björn

    2016-08-11

    Housing environments that hinder performance of daily activities and impede participation in social life have negative health consequences, particularly for the older segment of the population. From a public health perspective, accessible housing that supports active and healthy ageing is therefore crucial. The objective of the present study was to make an inventory of environmental barriers and investigate accessibility problems in the ordinary housing stock in Sweden as related to the functional capacity of senior citizens. Particular attention was paid to differences between housing types and building periods and to identifying environmental barriers generating the most accessibility problems for sub-groups of senior citizens. Data on environmental barriers in dwellings from three databases on housing and health in old age was analysed (N = 1021). Four functional profiles representing large groups of senior citizens were used in analyses of the magnitude and severity of potential accessibility problems. Differences in terms of type of housing and building period were examined. High proportions of one-family houses as well as multi-dwellings had substantial numbers of environmental barriers, with significantly lower numbers in later building periods. Accessibility problems occurred even for senior citizens with few functional limitations, but were more profound for those dependent on mobility devices. The most problematic housing sections were entrances in one-family houses and kitchens of multi-dwellings. Despite a high housing standard in the Swedish ordinary housing stock, the results show substantial accessibility problems for senior citizens with functional limitations. To make housing accessible, large-scale and systematic efforts are required.

  20. Recurrence of retinal vein thrombosis with Pycnogenol® or Aspirin® supplementation: a registry study.

    PubMed

    Rodriguez, P; Belcaro, G; Dugall, M; Hu, S; Luzzi, R; Ledda, A; Ippolito, E; Corsi, M; Ricci, A; Feragalli, B; Cornelli, U; Gizzi, C; Hosoi, M

    2015-09-01

    The aim of this study was to use Pycnogenol® to reduce the recurrence of retinal vein thrombosis (RVT) after a first episode. Pycnogenol® is an anti-inflammatory, anti-edema and an antiplatelet agent with a "mild" antithrombotic activity. The registry, using Pycnogenol®, was aimed at reducing the number of repeated episodes of RVT. Possible management options--chosen by patients--were: standard management; standard management + oral Aspirin® 100 mg once/day (if there were no tolerability problems before admission); standard management + Pycnogenol® two 50 mg capsules per day (for a total of 100 mg/day). Number of subjects, age, sex distribution, percentage of smokers, and vision were comparable. Recurrent RVT was seen in 17.39% of controls and in 3.56% of subjects supplemented with Pycnogenol® (P<0.05 vs. controls). There was RVT in 15.38% of the subjects using Aspirin®. The incidence of RVT was 4.88 times higher with standard management in comparison with the supplement group and 4.32 times lower with Pycnogenol® supplementation in comparison with Aspirin®. Vision level was better with Pycnogenol® (20/25 at nine months; P<0.05). With Pycnogenol®, edema at the retinal level was also significantly reduced compared to the other groups. Pycnogenol® has a very good safety profile. In the Aspirin® group, 26 subjects completed 9 months and 6 subjects dropped out for tolerability problems. In the Aspirin® group, 2 minor, subclinical, retinal, hemorrhagic episodes during the follow-up were observed (2 subjects out of 26, equivalent to 7.69%). This pilot registry indicates that Pycnogenol® seems to reduce the recurrence of RVT without side effects. It does not induce new hemorrhagic episodes that may be theoretically linked to the use of Aspirin® (or other antiplatelets). Larger studies should be planned involving a wider range of conditions, diseases and risk factors associated with RVT and with its recurrence.

  1. An Effective Hybrid Evolutionary Algorithm for Solving the Numerical Optimization Problems

    NASA Astrophysics Data System (ADS)

    Qian, Xiaohong; Wang, Xumei; Su, Yonghong; He, Liu

    2018-04-01

    There are many different algorithms for solving complex optimization problems. Each algorithm has been applied successfully to some optimization problems, but not efficiently to others. In this paper the Cauchy mutation and the multi-parent crossover operator are combined to propose a communication-based hybrid evolutionary algorithm (Mixed Evolutionary Algorithm based on Communication), hereinafter referred to as CMEA. The basic idea of CMEA is that the initial population is divided into two subpopulations, which are evolved in parallel, one by Cauchy mutation operators and the other by multi-parent crossover operators, until the stopping conditions are met. When the subpopulations are periodically reorganized, individuals, together with the information they carry, are exchanged between them. The algorithm flow is given and the performance of the algorithm is compared using a number of standard test functions. Simulation results have shown that this algorithm converges significantly faster than the FEP (Fast Evolutionary Programming) algorithm, has good global convergence and stability, and is superior to the other compared algorithms.
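
    As a rough illustration of the structure described above (and not the authors' code), the following minimal Python sketch evolves two subpopulations in parallel, one by Cauchy mutation and one by multi-parent crossover, and periodically exchanges their best individuals; the sphere test function, population sizes, and exchange interval are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)

def sphere(x):                              # illustrative test function, not from the paper
    return np.sum(x ** 2)

def cauchy_mutation(pop, scale=0.1):
    # heavy-tailed Cauchy perturbation of every individual
    return pop + scale * np.random.standard_cauchy(pop.shape)

def multi_parent_crossover(pop, n_parents=3):
    # each child is a random convex combination of several parents
    children = np.empty_like(pop)
    for i in range(len(pop)):
        parents = pop[np.random.choice(len(pop), n_parents, replace=False)]
        children[i] = np.random.dirichlet(np.ones(n_parents)) @ parents
    return children

def select_best(pop, f, size):
    # keep the `size` fittest individuals, sorted best-first
    return pop[np.argsort([f(ind) for ind in pop])[:size]]

def cmea_like(f, dim=10, size=30, gens=200, exchange_every=10, n_swap=3):
    sub_a = np.random.uniform(-5, 5, (size, dim))   # evolved by Cauchy mutation
    sub_b = np.random.uniform(-5, 5, (size, dim))   # evolved by multi-parent crossover
    for g in range(gens):
        sub_a = select_best(np.vstack([sub_a, cauchy_mutation(sub_a)]), f, size)
        sub_b = select_best(np.vstack([sub_b, multi_parent_crossover(sub_b)]), f, size)
        if (g + 1) % exchange_every == 0:           # communication: swap best individuals
            sub_a[-n_swap:], sub_b[-n_swap:] = sub_b[:n_swap].copy(), sub_a[:n_swap].copy()
    best = select_best(np.vstack([sub_a, sub_b]), f, 1)[0]
    return best, f(best)

best_x, best_f = cmea_like(sphere)
print("best objective value found:", best_f)
```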

  2. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  3. An efficient numerical method for the solution of the problem of elasticity for 3D-homogeneous elastic medium with cracks and inclusions

    NASA Astrophysics Data System (ADS)

    Kanaun, S.; Markov, A.

    2017-06-01

    An efficient numerical method for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shape is considered. The problem is reduced to a system of surface integral equations for crack opening vectors and volume integral equations for stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used; the method based on these functions is mesh free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D-integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz structure, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.
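
    The last point, that a Toeplitz-structured discretized system allows FFT-based matrix-vector products, can be illustrated with a small hand-rolled sketch (assumptions: a 1D Toeplitz matrix defined by its first column and row, embedded into a circulant matrix of twice the size; this is generic numerical linear algebra, not the authors' implementation):

```python
import numpy as np

def toeplitz_matvec_fft(c, r, x):
    """Multiply a Toeplitz matrix T (first column c, first row r) by x
    using circulant embedding and the FFT, in O(n log n) operations."""
    n = len(x)
    # embed T into a 2n x 2n circulant matrix whose first column is
    # [c, 0, reversed tail of r]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# check against a dense Toeplitz product
n = 6
c = np.random.rand(n)                                 # first column
r = np.concatenate([[c[0]], np.random.rand(n - 1)])   # first row (r[0] == c[0])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
x = np.random.rand(n)
assert np.allclose(T @ x, toeplitz_matvec_fft(c, r, x))
print("FFT-based Toeplitz product matches the dense product")
```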

  4. Multi-template image matching using alpha-rooted biquaternion phase correlation with application to logo recognition

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2011-06-01

    Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multicomponent hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or image components. This joint processing capability can provide improved exploitation of the information contained in the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier domain alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the mathematical properties which justify use of ARBPC as an image matcher. We present numerical performance results of a logo verification problem using real-world logo data, demonstrating the performance improvement obtained using the hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching approaches.
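
    For readers unfamiliar with the baseline that ARBPC generalizes, the sketch below implements classical (real-valued) phase correlation for translation estimation; the biquaternion and alpha-rooting extensions of the paper are not reproduced, and the image size and shift are illustrative.

```python
import numpy as np

def phase_correlation(f, g, eps=1e-12):
    """Classical phase correlation: estimate the translation between two
    equally sized images f and g from the peak of the normalized
    cross-power spectrum."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    return np.unravel_index(np.argmax(corr), corr.shape)  # (row, col) shift mod image size

# toy example: shift an image by (5, 12) and recover the shift
img = np.random.rand(64, 64)
shifted = np.roll(np.roll(img, 5, axis=0), 12, axis=1)
print(phase_correlation(shifted, img))   # -> (5, 12)
```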

  5. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders of magnitude have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow $\mathcal{O}(N^{-1/2})$ convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
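
    A minimal quadrature example, under the assumption of a hand-rolled van der Corput low-discrepancy sequence and a simple 1D integrand (not the τ-leaping setting of the paper), illustrates the typical error advantage of quasi-Monte Carlo over plain Monte Carlo:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

f = lambda x: np.exp(x)          # test integrand; exact integral on [0, 1] is e - 1
exact = np.e - 1.0
N = 4096
mc_pts = np.random.rand(N)       # plain Monte Carlo: i.i.d. uniform points
qmc_pts = van_der_corput(N)      # quasi-Monte Carlo: low-discrepancy points
print("MC error :", abs(f(mc_pts).mean() - exact))
print("QMC error:", abs(f(qmc_pts).mean() - exact))
```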

  6. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and Analysis-of-Variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse-PDD with PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
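
    The regression step at the heart of this strategy can be sketched generically: evaluate the model at a modest number of random inputs, assemble a design matrix of basis polynomials, and solve a least-squares problem for the coefficients. The sketch below uses a 1D Legendre basis purely for illustration; it is not the authors' adaptive sparse-PDD procedure.

```python
import numpy as np
from numpy.polynomial import legendre

def regression_coefficients(model, degree, n_samples, seed=0):
    """Estimate polynomial expansion coefficients of `model` over a uniform
    input on [-1, 1] by least-squares regression on random samples."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    # design matrix: Legendre polynomials P_0..P_degree evaluated at the samples
    A = legendre.legvander(x, degree)
    y = model(x)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

model = lambda x: np.exp(x) + 0.5 * x**3   # illustrative "deterministic model"
coeffs = regression_coefficients(model, degree=6, n_samples=50)
# the constant coefficient approximates the mean of the model output
print("estimated mean of the output:", coeffs[0])
```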

  7. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method, based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
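
    The basic mechanism, many cheap low-fidelity samples corrected by a few coupled high-fidelity ones, can be illustrated with a two-level toy estimator (the quantity of interest and sample sizes below are invented for illustration and have nothing to do with the flow and transport models of the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def qoi(omega, n_cells):
    """Toy quantity of interest: midpoint-rule approximation of the integral
    of sin(pi * omega * x) over [0, 1] on a grid with n_cells cells."""
    x = (np.arange(n_cells) + 0.5) / n_cells
    return np.mean(np.sin(np.pi * omega * x))

def two_level_estimate(n_coarse=10000, n_fine=100, coarse=8, fine=256):
    # level 0: many cheap low-resolution samples
    w0 = rng.uniform(0.5, 1.5, n_coarse)
    level0 = np.mean([qoi(w, coarse) for w in w0])
    # correction: few samples of the (fine - coarse) difference, using the SAME
    # random input for both resolutions so the difference has small variance
    w1 = rng.uniform(0.5, 1.5, n_fine)
    correction = np.mean([qoi(w, fine) - qoi(w, coarse) for w in w1])
    return level0 + correction

print("two-level estimate of E[QoI]:", two_level_estimate())
```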

  8. Fully Dynamic Bin Packing

    NASA Astrophysics Data System (ADS)

    Ivković, Zoran; Lloyd, Errol L.

    Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
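
    For context, the sketch below implements the classic static first-fit heuristic that dynamic bin packing generalizes; a fully dynamic algorithm would additionally support Delete operations and bounded repacking, which this illustrative sketch omits.

```python
def first_fit(items, capacity=1.0):
    """Classic static first-fit heuristic: place each item into the first
    open bin that still has room, opening a new bin if none does."""
    bins = []                      # remaining free space in each open bin
    packing = []                   # packing[b] = list of items placed in bin b
    for item in items:
        for b, free in enumerate(bins):
            if item <= free + 1e-12:
                bins[b] -= item
                packing[b].append(item)
                break
        else:                      # no open bin fits: open a new one
            bins.append(capacity - item)
            packing.append([item])
    return packing

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))
```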

  9. [Overweight and obesity prevalence estimates in a population from Zaragoza by using different growth references].

    PubMed

    Lasarte-Velillas, J J; Hernández-Aguilar, M T; Martínez-Boyero, T; Soria-Cabeza, G; Soria-Ruiz, D; Bastarós-García, J C; Gil-Hernández, I; Pastor-Arilla, C; Lasarte-Sanz, I

    2015-03-01

    To investigate the prevalence of overweight and obesity in our pediatric population and to observe whether the use of different growth references for classification produces significant differences. A total of 35824 boys and girls aged between 2 and 14 years were included. Body mass index (BMI) was used to calculate the prevalence of overweight-obesity by age and sex. Prevalence was obtained by using a set of national references (Hernández standards) and the references of the World Health Organization (WHO standards). Prevalences were compared for each age and sex subset, as well as with the percentage of patients who had an overweight-obesity diagnosis in the clinical record. The overall prevalence of overweight-obesity among children aged 2 to 14 years was 17.0% (95% CI; 16.1%-18.0%) according to the Hernández standards vs 30.8% (95% CI; 29.9%-31.7%) according to the WHO standards (10.1% vs 12.2% obese, and 6.9% vs 18.6% overweight). It was significantly higher in boys, by both standards, due to the higher prevalence of obesity. Using the Hernández standards, the prevalence was significantly lower than using the WHO standards for all ages and for both sexes. A low percentage of patients were found to have an obesity-overweight diagnosis in the clinical record (from 3% to 22% at the ages of 2 and 14 years, respectively). The prevalence of overweight-obesity in our population is high, especially among boys. Using the Hernández standards leads to an under-estimation of the problem, especially because it detects fewer overweight patients; we therefore recommend using the WHO standards in our daily practice. The low number of overweight-obesity diagnoses in the clinical records might reflect that there is little awareness of the problem among professionals. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  10. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program, which is also capable of solving complex multidisciplinary problems, are presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis, yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines pertaining to a number of terminals are also available.

  11. Beyond Λ CDM: Problems, solutions, and the road ahead

    NASA Astrophysics Data System (ADS)

    Bull, Philip; Akrami, Yashar; Adamek, Julian; Baker, Tessa; Bellini, Emilio; Beltrán Jiménez, Jose; Bentivegna, Eloisa; Camera, Stefano; Clesse, Sébastien; Davis, Jonathan H.; Di Dio, Enea; Enander, Jonas; Heavens, Alan; Heisenberg, Lavinia; Hu, Bin; Llinares, Claudio; Maartens, Roy; Mörtsell, Edvard; Nadathur, Seshadri; Noller, Johannes; Pasechnik, Roman; Pawlowski, Marcel S.; Pereira, Thiago S.; Quartin, Miguel; Ricciardone, Angelo; Riemer-Sørensen, Signe; Rinaldi, Massimiliano; Sakstein, Jeremy; Saltas, Ippocratis D.; Salzano, Vincenzo; Sawicki, Ignacy; Solomon, Adam R.; Spolyar, Douglas; Starkman, Glenn D.; Steer, Danièle; Tereno, Ismael; Verde, Licia; Villaescusa-Navarro, Francisco; von Strauss, Mikael; Winther, Hans A.

    2016-06-01

    Despite its continued observational successes, there is a persistent (and growing) interest in extending cosmology beyond the standard model, Λ CDM. This is motivated by a range of apparently serious theoretical issues, involving such questions as the cosmological constant problem, the particle nature of dark matter, the validity of general relativity on large scales, the existence of anomalies in the CMB and on small scales, and the predictivity and testability of the inflationary paradigm. In this paper, we summarize the current status of Λ CDM as a physical theory, and review investigations into possible alternatives along a number of different lines, with a particular focus on highlighting the most promising directions. While the fundamental problems are proving reluctant to yield, the study of alternative cosmologies has led to considerable progress, with much more to come if hopes about forthcoming high-precision observations and new theoretical ideas are fulfilled.

  12. The European community and its standardization efforts in medical informatics

    NASA Astrophysics Data System (ADS)

    Mattheus, Rudy A.

    1992-07-01

    A summary of the CEN TC 251/4 ''Medical Imaging and Multi-Media'' activities will be given. CEN is the European standardization institute; TC 251 deals with medical informatics. Standardization is a condition for the wide-scale use of health care and medical informatics and for the creation of a common market. In the last two years, three important groups of actors--namely, the Commission of the European Communities with their programs and mandates, the medical informaticians through their European professional federation, and the national normalization institutes through the European committee--have shown themselves to be aware of this problem and have taken action. As a result, a number of AIM (Advanced Informatics in Medicine) CEC-sponsored projects, the CEC mandates to CEN and EWOS, the EFMI working group on standardization, the technical committee of CEN, and the working groups and project teams of CEN and EWOS are working on the subject. An overview of the CEN TC 251/4 ''Medical Imaging and Multi-Media'' activities will be given, including their relation to other work.

  13. Improving Accuracy and Relevance of Race/Ethnicity Data: Results of a Statewide Collaboration in Hawaii.

    PubMed

    Pellegrin, Karen L; Miyamura, Jill B; Ma, Carolyn; Taniguchi, Ronald

    2016-01-01

    Current race/ethnicity categories established by the U.S. Office of Management and Budget are neither reliable nor valid for understanding health disparities or for tracking improvements in this area. In Hawaii, statewide hospitals have collaborated to collect race/ethnicity data using a standardized method consistent with recommended practices that overcome the problems with the federal categories. The purpose of this observational study was to determine the impact of this collaboration on key measures of race/ethnicity documentation. After this collaborative effort, the number of standardized categories available across hospitals increased from 6 to 34, and the percent of inpatients with documented race/ethnicity increased from 88 to 96%. This improved standardized methodology is now the foundation for tracking population health indicators statewide and focusing quality improvement efforts. The approach used in Hawaii can serve as a model for other states and regions. Ultimately, the ability to standardize data collection methodology across states and regions will be needed to track improvements nationally.

  14. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  15. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  16. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  17. Probabilistic reasoning in data analysis.

    PubMed

    Sirovich, Lawrence

    2011-09-20

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
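
    As a small illustration of the Markovian-arrivals material mentioned above (a textbook construction, not the lecture's own code), exponential waiting times can be summed to simulate a Poisson process, whose counts then match the Poisson mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 3.0          # expected arrivals per unit time
T = 1.0             # observation window

def poisson_arrivals(rate, T):
    """Simulate arrival times in [0, T] by summing exponential waiting times,
    which are memoryless (independent of prior history)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > T:
            return np.array(times)
        times.append(t)

counts = [len(poisson_arrivals(rate, T)) for _ in range(20000)]
print("simulated mean count   :", np.mean(counts), " (theory:", rate * T, ")")
print("simulated count variance:", np.var(counts), " (theory:", rate * T, ")")
```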

  18. Integral definition of the logarithmic function and the derivative of the exponential function in calculus

    NASA Astrophysics Data System (ADS)

    Vaninsky, Alexander

    2015-04-01

    Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
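
    For reference, the standard integral definition under discussion and the resulting derivative of the exponential can be stated compactly (textbook material, included only to make the objects concrete):

```latex
\[
  \ln x \;=\; \int_{1}^{x} \frac{dt}{t}, \qquad x > 0,
  \qquad\text{so that}\qquad \frac{d}{dx}\ln x = \frac{1}{x}.
\]
Defining $\exp$ as the inverse of $\ln$ and differentiating the identity
$\ln(\exp x) = x$ with the chain rule gives
\[
  \frac{1}{\exp x}\,\frac{d}{dx}\exp x = 1
  \quad\Longrightarrow\quad
  \frac{d}{dx}\exp x = \exp x ,
\]
with the number $e$ characterized by $\ln e = \int_{1}^{e} dt/t = 1$.
```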

  19. Monotone Approximation for a Nonlinear Size and Class Age Structured Epidemic Model

    DTIC Science & Technology

    2006-02-22

    ... follows from standard results, given the fact that they are all linear problems with local boundary conditions for Sinko-Streifer type systems. [Fragment; cited works include J. Franklin Inst., 297 (1974), 325-333, and K. E. Howard, "A size and maturity structured model of cell dwarfism exhibiting chaotic behavior".]

  20. Strategic and Tactical Decision-Making Under Uncertainty

    DTIC Science & Technology

    2006-01-03

    ... message passing algorithms. In recent work we applied this method to the problem of joint decoding of a low-density parity-check (LDPC) code and a partial-response channel ("Joint Decoding of LDPC Codes and Partial-Response Channels," IEEE Transactions on Communications, Vol. 54, No. 7, 1149-1153, 2006). [Fragment; the remainder of the abstract is not recoverable.]

  1. Cyber Deterrence: The Wrong Question for the Wrong Problem

    DTIC Science & Technology

    2018-04-20

  2. Anomalous leptonic U(1) symmetry: Syndetic origin of the QCD axion, weak-scale dark matter, and radiative neutrino mass

    NASA Astrophysics Data System (ADS)

    Ma, Ernest; Restrepo, Diego; Zapata, Óscar

    2018-01-01

    The well-known leptonic U(1) symmetry of the Standard Model (SM) of quarks and leptons is extended to include a number of new fermions and scalars. The resulting theory has an invisible QCD axion (thereby solving the strong CP problem), a candidate for weak-scale dark matter (DM), as well as radiative neutrino masses. A possible key connection is a color-triplet scalar, which may be produced and detected at the Large Hadron Collider.

  3. The Math Gap: a description of the mathematics performance of preschool-aged deaf/hard-of-hearing children.

    PubMed

    Pagliaro, Claudia M; Kritzer, Karen L

    2013-04-01

    Over decades and across grade levels, deaf/hard-of-hearing (d/hh) student performance in mathematics has shown a gap in achievement. It is unclear, however, exactly when this gap begins to emerge and in what areas. This study describes preschool d/hh children's knowledge of early mathematics concepts. Both standardized and nonstandardized measures were used to assess understanding in number, geometry, measurement, problem solving, and patterns, reasoning and algebra. Results present strong evidence that d/hh students' difficulty in mathematics may begin prior to the start of formal schooling. Findings also show areas of strength (geometry) and weakness (problem solving and measurement) for these children. Evidence of poor foundational performance may relate to later academic achievement.

  4. Korean association of medical journal editors at the forefront of improving the quality and indexing chances of its member journals.

    PubMed

    Suh, Chang-Ok; Oh, Se Jeong; Hong, Sung-Tae

    2013-05-01

    The article overviews some achievements and problems of Korean medical journals published in a highly competitive journal environment. Activities of the Korean Association of Medical Journal Editors (KAMJE) are viewed as instrumental for improving the quality of Korean articles, indexing a large number of local journals in prestigious bibliographic databases, and launching new abstract and citation tracking databases or platforms (e.g., KoreaMed, KoreaMed Synapse, the Western Pacific Regional Index Medicus [WPRIM]). KAMJE encourages its member journals to upgrade science editing standards and to legitimately increase citation rates, primarily by publishing more great articles with global influence. Experience gained by KAMJE and problems faced by Korean editors may have global implications.

  5. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
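
    As a point of comparison, the sketch below sets up the standard second-order pointwise discretization of a 1D Helmholtz problem on a uniform grid, i.e., the baseline representation that the weighted-average and Padé-type stencils of the paper improve upon; the manufactured solution and wave number are illustrative.

```python
import numpy as np

def helmholtz_1d(k, f, n, length=1.0):
    """Standard second-order finite differences for u'' + k(x)^2 u = f(x)
    on (0, L) with homogeneous Dirichlet boundary conditions u(0) = u(L) = 0."""
    h = length / (n + 1)
    x = np.linspace(h, length - h, n)        # interior grid points
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0 / h**2 + k(x[i])**2   # pointwise representation of k^2 u
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < n - 1:
            A[i, i + 1] = 1.0 / h**2
    return x, np.linalg.solve(A, f(x))

# manufactured solution u = sin(pi x) with constant wave number k0
k0 = 10.0
u_exact = lambda x: np.sin(np.pi * x)
rhs = lambda x: (k0**2 - np.pi**2) * np.sin(np.pi * x)
x, u = helmholtz_1d(lambda x: k0, rhs, n=200)
print("max error:", np.max(np.abs(u - u_exact(x))))
```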

  6. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
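
    Once a fission matrix has been estimated, the criticality calculation reduces to finding its dominant eigenpair, for which simple power iteration suffices; the sketch below uses an invented 4-region matrix purely for illustration and is not the authors' code.

```python
import numpy as np

def dominant_eigenpair(F, tol=1e-10, max_iter=10000):
    """Power iteration for the dominant eigenvalue (k_eff) and eigenvector
    (stationary fission source) of a fission matrix F."""
    s = np.ones(F.shape[0]) / F.shape[0]      # initial flat fission source
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum() / s.sum()         # eigenvalue estimate
        s_new /= s_new.sum()                  # normalize the source
        if abs(k_new - k) < tol and np.max(np.abs(s_new - s)) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# illustrative 4-region fission matrix (made up, loosely coupled regions)
F = np.array([[0.90, 0.05, 0.00, 0.00],
              [0.05, 0.85, 0.05, 0.00],
              [0.00, 0.05, 0.85, 0.05],
              [0.00, 0.00, 0.05, 0.90]])
k_eff, source = dominant_eigenpair(F)
print("k_eff =", k_eff)
print("fission source =", source)
```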

  7. Obstructions to Existence in Fast-Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ana; Vazquez, Juan L.

    The study of nonlinear diffusion equations produces a number of peculiar phenomena not present in the standard linear theory. Thus, in the sub-field of very fast diffusion it is known that the Cauchy problem can be ill-posed, either because of non-uniqueness, or because of non-existence of solutions with small data. The equations we consider take the general form $u_t = (D(u, u_x)\,u_x)_x$ or its several-dimensional analogue. Fast diffusion means that $D \to \infty$ at some values of the arguments, typically as $u \to 0$ or $u_x \to 0$. Here, we describe two different types of non-existence phenomena. Some fast-diffusion equations with very singular D do not allow for solutions with sign changes, while other equations admit only monotone solutions, no oscillations being allowed. The examples we give for both types of anomaly are closely related. The most typical examples are $v_t = (v_x/|v|)_x$ and $u_t = u_{xx}/|u_x|$. For these equations, we investigate what happens to the Cauchy problem when we take incompatible initial data and perform a standard regularization. It is shown that the limit gives rise to an initial layer where the data become admissible (positive or monotone, respectively), followed by a standard evolution for all $t > 0$, once the obstruction has been removed.

  8. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
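
    A minimal usage sketch, assuming the pmlb Python package (with its fetch_data helper and dataset name lists) and scikit-learn, shows how a benchmark dataset can be pulled and scored; the dataset and baseline classifier choices are illustrative.

```python
# Assumes `pip install pmlb scikit-learn`; API names are those documented for the
# pmlb package, and the model choice is illustrative only.
from pmlb import fetch_data, classification_dataset_names
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# pull one benchmark dataset and score a baseline classifier on it
name = classification_dataset_names[0]
X, y = fetch_data(name, return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("%s accuracy: %.3f +/- %.3f" % (name, scores.mean(), scores.std()))
```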

  9. Object-based neglect in number processing

    PubMed Central

    2013-01-01

    Recent evidence suggests that neglect patients have particular problems representing relatively smaller numbers corresponding to the left part of the mental number line. However, while this indicates space-based neglect for representational number space, little is known about whether and, if so, how object-based neglect influences number processing. To evaluate influences of object-based neglect in numerical cognition, a group of neglect patients and two control groups had to compare two-digit numbers to an internally represented standard. Conceptualizing two-digit numbers as objects of which the left part (i.e., the tens digit) should be specifically neglected, we were able to evaluate object-based neglect for number magnitude processing. Object-based neglect was indicated by a larger unit-decade compatibility effect, actually reflecting impaired processing of the leftward tens digits. Additionally, faster processing of within- as compared to between-decade items provided further evidence suggesting particular difficulties in integrating tens and units into the place-value structure of the Arabic number system. In summary, the present study indicates that, in addition to the spatial representation of number magnitude, the processing of place-value information of multi-digit numbers also seems specifically impaired in neglect patients. PMID:23343126

  10. Performativity Double Standards and the Sexual Orientation Climate at a Southern Liberal Arts University.

    PubMed

    Byron, Reginald A; Lowe, Maria R; Billingsley, Brianna; Tuttle, Nathan

    2017-01-01

    This study employs quantitative and qualitative methods to examine how heterosexual, bisexual, and gay students rate and describe a Southern, religiously affiliated university's sexual orientation climate. Using qualitative data, queer theory, and the concept tyranny of sexualized spaces, we explain why non-heterosexual students have more negative perceptions of the university climate than heterosexual male students, in both bivariate and multivariate analyses. Although heterosexual students see few problems with the campus sexual orientation climate, bisexual men and women describe being challenged on the authenticity of their orientation, and lesbian and, to a greater extent, gay male students report harassment and exclusion in a number of settings. These distinct processes are influenced by broader heteronormative standards. We also shed much-needed light on how gendered sexual performativity double standards within an important campus microclimate (fraternity parties) contribute to creating a tyrannical sexualized space and negatively affect overall campus climate perceptions.

  11. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1990-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.
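
    To make the breakdown issue concrete, the sketch below implements the standard two-sided (nonsymmetric) Lanczos recurrence without look-ahead and flags the near-vanishing inner product w'^T v' that the look-ahead variant is designed to skip over; it is a generic textbook version, not the authors' implementation.

```python
import numpy as np

def two_sided_lanczos(A, b, c, m, tol=1e-12):
    """Standard nonsymmetric Lanczos biorthogonalization (no look-ahead).
    Builds bases V, W with W^T V = I; reports a 'breakdown' flag when the
    inner product w'^T v' (nearly) vanishes."""
    n = A.shape[0]
    V = np.zeros((n, m)); W = np.zeros((n, m))
    alpha = np.zeros(m); beta = np.zeros(m); gamma = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    W[:, 0] = c / (c @ V[:, 0])               # enforce w_1^T v_1 = 1
    for j in range(m):
        Av, Atw = A @ V[:, j], A.T @ W[:, j]
        alpha[j] = W[:, j] @ Av
        v = Av - alpha[j] * V[:, j]
        w = Atw - alpha[j] * W[:, j]
        if j > 0:
            v -= beta[j] * V[:, j - 1]
            w -= gamma[j] * W[:, j - 1]
        if j == m - 1:
            break
        delta = w @ v
        if abs(delta) < tol * np.linalg.norm(v) * np.linalg.norm(w):
            return V[:, :j + 1], W[:, :j + 1], alpha, beta, gamma, True   # breakdown
        gamma[j + 1] = np.sqrt(abs(delta))
        beta[j + 1] = delta / gamma[j + 1]
        V[:, j + 1] = v / gamma[j + 1]
        W[:, j + 1] = w / beta[j + 1]
    return V, W, alpha, beta, gamma, False

A = np.random.rand(50, 50)
V, W, a, b_, g, broke = two_sided_lanczos(A, np.random.rand(50), np.random.rand(50), 10)
print("breakdown encountered:", broke)
print("biorthogonality error:", np.max(np.abs(W.T @ V - np.eye(V.shape[1]))))
```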

  12. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.

  13. Comparison of Standardized Test Scores from Traditional Classrooms and Those Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Needham, Martha Elaine

    2010-01-01

    This research compares differences between standardized test scores in problem-based learning (PBL) classrooms and a traditional classroom for 6th grade students using a mixed-method, quasi-experimental and qualitative design. The research shows that problem-based learning is as effective as traditional teaching methods on standardized tests. The…

  14. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
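
    A compact way to illustrate this analysis is a truncated SVD: retain only the parameter combinations whose singular values exceed the stated ratio of standard deviations, and form the resolution matrix from the retained eigenvectors. The toy matrix and data below are invented for illustration.

```python
import numpy as np

def truncated_svd_solution(G, d, data_std, model_std):
    """Solve the discrete linear inverse problem G m = d keeping only the
    parameter combinations (right singular vectors) whose singular values
    exceed the ratio of observation standard deviation to allowable model
    standard deviation."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > (data_std / model_std)          # k retained combinations
    Uk, sk, Vk = U[:, keep], s[keep], Vt[keep].T
    m_est = Vk @ ((Uk.T @ d) / sk)             # truncated generalized inverse
    R = Vk @ Vk.T                              # parameter resolution matrix
    return m_est, R, keep.sum()

# toy example: 8 observations of 5 parameters (matrix and data are illustrative)
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 5))
m_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
d = G @ m_true + rng.normal(scale=0.01, size=8)
m_est, R, k = truncated_svd_solution(G, d, data_std=0.01, model_std=1.0)
print("retained combinations k =", k)
print("estimated parameters    =", m_est.round(3))
print("diagonal of resolution  =", np.diag(R).round(3))
```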

  15. The modeler's influence on calculated solubilities for performance assessments at the Aspo Hard-rock Laboratory

    USGS Publications Warehouse

    Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.

  16. Facilitating students' application of the integral and the area under the curve concepts in physics problems

    NASA Astrophysics Data System (ADS)

    Nguyen, Dong-Hai

    This research project investigates the difficulties students encounter when solving physics problems involving the integral and the area under the curve concepts and the strategies to facilitate students learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism. In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics of mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total number of 140 one-hour interviews were conducted in this phase, which provided insights into students' difficulties when solving the problems involving the integral and the area under the curve concepts and the hints to help students overcome those difficulties. In phase II of the project, tutorials were created to facilitate students' learning to solve physics problems involving the integral and the area under the curve concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints to target the difficulties that students expressed in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve concepts to physics problems. The results of this project provide broader and deeper insights into students' problem solving with the integral and the area under the curve concepts and suggest strategies to facilitate students' learning to apply these concepts to physics problems. This study also has significant implications for further research, curriculum development and instruction.

  17. Supersymmetry models and phenomenology

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.

    We present several models of supersymmetry breaking and explore their phenomenological consequences. First, we build models utilizing the supersymmetry breaking formalism of anomaly mediation. Our first model consists of the minimal supersymmetric standard model plus a singlet, anomaly-mediated soft masses and a Dirac mass which marries the bino to the singlet. The Dirac mass does not affect the so-called "UV insensitivity" of the other soft parameters to running or supersymmetric thresholds and thus flavor physics at intermediate scales would not reintroduce the flavor problem. The Dirac bino is integrated out at a few TeV and produces finite and positive contributions to all hyper-charged scalars at one loop thus producing positive squared slepton masses. Our second model approaches anomaly mediation from the point of view of the mu problem. We present a minimal method for generating a mu term while still generating a viable spectrum. We introduce a new operator involving a hidden sector U(1) gauge field which is then canceled against a Giudice-Masiero-like mu term. No new flavor violating operators are allowed. This procedure produces viable electroweak symmetry breaking in the Higgs sector. Only a single pair of new vector-like messenger fields is needed to correct the slepton masses by deflecting them from their anomaly mediated trajectories. Finally we attempt to solve the Higgs mass tuning problem in the MSSM; both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer the mass of the Higgs boson to be around the Z mass. However, LEP II rules out a standard model-like Higgs lighter than 114.4 GeV. We show that supersymmetric models with R parity violation have a large range of parameter space in which the Higgs effectively decays to six jets (for Baryon number violation) or four jets plus taus and/or missing energy (for Lepton number violation). These decays are much more weakly constrained by current LEP analyses and could be probed by new exclusive channel analyses as well as a combined "model independent" Higgs search analysis by all experiments.

  18. The contribution of neighbouring countries to pesticide levels in Dutch surface waters.

    PubMed

    Van 'T Zelfde, M; Tamis, W L M; Vijver, M G; De Snoo, G R

    2011-01-01

    Compared with other European countries, Dutch consumption of pesticides is high, particularly in agriculture, with many of the compounds found in surface waters in high concentrations and various standards being exceeded. Surface water quality is routinely monitored and the data obtained are published in the Dutch Pesticides Atlas. One important mechanism for reducing pesticide levels in surface waters is authorisation policy, which proceeds on the assumption that the pollution concerned has taken place in the Netherlands. The country straddles the delta of several major European rivers, however, and as river basins do not respect national borders some of the water quality problems will derive from neighbouring countries. Against this background the general question addressed in this article is the following: To what extent do countries neighbouring on the Netherlands contribute to pesticide pollution of Dutch surface waters? To answer this question, data from the Pesticides Atlas for the period 2005-2009 were used. Border zones with Belgium and Germany were defined and the data for these zones compared with Dutch data. In the analyses, due allowance was also made for authorised and non-authorised compounds and for differences between flowing and stagnant waters. Monitoring efforts in the border zones and in the Netherlands were also characterised, showing that efforts in the former are similar to those in the rest of the country. In the border zone with Belgium the relative number of non-authorised pesticides exceeding the standards is clearly higher than in the rest of the Netherlands. These exceedances are observed mainly in flowing waters. In contrast, there is no difference in the relative number of standard-exceeding measurements between the border zones and the rest of the Netherlands. In the boundary zones the array of standard-exceeding compounds clearly deviates from that in the rest of the Netherlands, with compounds authorised in the neighbouring countries but not in the Netherlands, such as flufenacet, featuring prominently. The share of the neighbouring countries in the total number of exceedances in the Netherlands is roughly proportional to the relative area of the border zones. Although there is a certain influx of pesticides from across national borders, the magnitude of the problem appears to be limited.

  19. Automatic peak selection by a Benjamini-Hochberg-based algorithm.

    PubMed

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
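
    The selection step itself is the standard Benjamini-Hochberg step-up procedure; a generic sketch (assuming p-values have already been derived from peak volumes or intensities, the part specific to the paper and not reproduced here) is:

```python
import numpy as np

def benjamini_hochberg_select(p_values, fdr=0.05):
    """Benjamini-Hochberg step-up procedure: given p-values for candidate
    peaks, return the indices selected at false discovery rate `fdr`."""
    p = np.asarray(p_values)
    order = np.argsort(p)                      # sort p-values ascending
    m = len(p)
    thresholds = fdr * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])           # largest rank i with p_(i) <= (i/m) * fdr
    return order[:k + 1]                       # all candidates up to that rank

# toy example: a few 'true' peaks with tiny p-values among noise
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(0, 1e-4, 5), rng.uniform(0, 1, 95)])
print("selected candidate indices:", benjamini_hochberg_select(p, fdr=0.05))
```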

  20. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    PubMed Central

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID:23308147

  1. Embedding Number-Combinations Practice Within Word-Problem Tutoring

    PubMed Central

    Powell, Sarah R.; Fuchs, Lynn S.; Fuchs, Douglas

    2012-01-01

    Two aspects of mathematics with which students with mathematics learning difficulty (MLD) often struggle are word problems and number-combination skills. This article describes a math program in which students receive instruction on using algebraic equations to represent the underlying problem structure for three word-problem types. Students also learn counting strategies for answering number combinations that they cannot retrieve from memory. Results from randomized-control trials indicated that embedding the counting strategies for number combinations produces superior word-problem and number-combination outcomes for students with MLD beyond tutoring programs that focus exclusively on number combinations or word problems. PMID:22661880

  2. Embedding Number-Combinations Practice Within Word-Problem Tutoring.

    PubMed

    Powell, Sarah R; Fuchs, Lynn S; Fuchs, Douglas

    2010-09-01

    Two aspects of mathematics with which students with mathematics learning difficulty (MLD) often struggle are word problems and number-combination skills. This article describes a math program in which students receive instruction on using algebraic equations to represent the underlying problem structure for three word-problem types. Students also learn counting strategies for answering number combinations that they cannot retrieve from memory. Results from randomized-control trials indicated that embedding the counting strategies for number combinations produces superior word-problem and number-combination outcomes for students with MLD beyond tutoring programs that focus exclusively on number combinations or word problems.

  3. A Natural Fit: Problem-based Learning and Technology Standards.

    ERIC Educational Resources Information Center

    Sage, Sara M.

    2000-01-01

    Discusses the use of problem-based learning to meet technology standards. Highlights include technology as a tool for locating and organizing information; the Wolf Wars problem for elementary and secondary school students that provides resources, including Web sites, for information; Web-based problems; and technology as assessment and as a…

  4. Assessing oral health-related quality of life in general dental practice in Scotland: validation of the OHIP-14.

    PubMed

    Fernandes, Marcelo José; Ruta, Danny Adolph; Ogden, Graham Richard; Pitts, Nigel Berry; Ogston, Simon Alexander

    2006-02-01

    To validate the Oral Health Impact Profile (OHIP)-14 in a sample of patients attending general dental practice. Patients with pathology-free impacted wisdom teeth were recruited from six general dental practices in Tayside, Scotland, and followed for a year to assess the development of problems related to impaction. The OHIP-14 was completed at baseline and at 1-year follow-up, and analysed using three different scoring methods: a summary score, a weighted and standardized score and the total number of problems reported. Instrument reliability was measured by assessing internal consistency and test-retest reliability. Construct validity was assessed using a number of variables. Linear regression was then used to model the relationship between OHIP-14 and all significantly correlated variables. Responsiveness was measured using the standardized response mean (SRM). Adjusted R²s and SRMs were calculated for each of the three scoring methods. Estimates for the differences between adjusted R²s and the differences between SRMs were obtained with 95% confidence intervals. A total of 278 and 169 patients completed the questionnaire at baseline and follow-up, respectively. Reliability - Cronbach's alpha coefficients ranged from 0.30 to 0.75. Alpha coefficients for all 14 items were 0.88 and 0.87 for baseline and follow-up, respectively. Test-retest coefficients ranged from 0.72 to 0.78. Validity - OHIP-14 scores were significantly correlated with number of teeth, education, main activity, the use of mouthwash, frequency of seeing a dentist, the reason for the last dental appointment, smoking, alcohol intake, pain and symptoms. Adjusted R²s ranged from 0.123 to 0.202 and there were no statistically significant differences between those for the three different scoring methods. Responsiveness - The SRMs ranged from 0.37 to 0.56 and there was a statistically significant difference between the summary scores method and the total number of problems method for symptomatic patients. The OHIP-14 is a valid and reliable measure of oral health-related quality of life in general dental practice and is responsive to third molar clinical change. The summary score method demonstrated performance as good as, or better than, the other methods studied.
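
    The two summary statistics driving the reliability and responsiveness results, Cronbach's alpha and the standardized response mean, can be sketched directly from their textbook definitions (the toy scores below are invented and unrelated to the OHIP-14 data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def standardized_response_mean(baseline, follow_up):
    """SRM = mean change / standard deviation of change."""
    change = np.asarray(follow_up, dtype=float) - np.asarray(baseline, dtype=float)
    return change.mean() / change.std(ddof=1)

# toy data: 6 subjects, 4 items scored 0-4 at baseline and follow-up
baseline = np.array([[1, 2, 1, 0],
                     [3, 3, 2, 2],
                     [0, 1, 0, 1],
                     [2, 2, 3, 2],
                     [4, 3, 4, 3],
                     [1, 1, 2, 1]])
follow_up = baseline + np.array([[1], [0], [1], [2], [0], [1]])  # per-subject change
print("Cronbach's alpha:", round(cronbach_alpha(baseline), 2))
print("SRM of summary scores:",
      round(standardized_response_mean(baseline.sum(axis=1), follow_up.sum(axis=1)), 2))
```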

  5. Integrative review of research on general health status and prevalence of common physical health conditions of women after childbirth.

    PubMed

    Cheng, Ching-Yu; Li, Qing

    2008-01-01

    Postpartum mothers experience certain physical health conditions that may affect their quality of life, future health, and health of their children. Yet, the physical health of postpartum mothers is relatively neglected in both research and practice. The purpose of this review is to describe the general health status and prevalence of common physical health conditions of postpartum mothers. The review followed standard procedures for integrative literature reviews. Twenty-two articles were reviewed from searches in scientific databases, reference lists, and an up-to-date survey. Three tables were designed to answer review questions. In general, postpartum mothers self-rate their health as good. They experience certain physical conditions such as fatigue/physical exhaustion, sleep-related problems, pain, sex-related concerns, hemorrhoids/constipation, and breast problems. Despite a limited number of studies, the findings provide a glimpse of the presence of a number of physical health conditions experienced by women in the 2 years postpartum. In the articles reviewed, physical health conditions and postpartum period were poorly defined, no standard scales existed, and the administration of surveys varied widely in time. Those disparities prevented systematic comparisons of results and made it difficult to gain a coherent understanding of the physical health conditions of postpartum mothers. More longitudinal research is needed that focuses on the etiology, predictors, and management of the health conditions most prevalent among postpartum mothers. Instruments are needed that target a broader range of physical conditions in respect to type and severity.

  6. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be implemented effectively by placing sensors in the water supply network. However, locating the source of pollution from the detection data obtained by water-quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which together make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., the water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially dynamic, driven by fluctuations in consumers' water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and detected sensor values. Simulation experiments were conducted using two urban water supply networks of different sizes, and the experimental results were compared with those of the standard genetic algorithm.
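
    To make the optimization objective concrete, the following Python sketch (an illustration, not the authors' code) shows the kind of discrepancy function such an algorithm would minimize: a candidate source location and injection concentration are scored by the mean squared mismatch between simulated and observed sensor readings, averaged over random demand scenarios drawn from the Gaussian demand model. The function simulate_concentrations is a hypothetical stand-in for a hydraulic and water-quality solver.

        import numpy as np

        def sample_demand(mean_demand, std_demand, rng):
            """Stochastic nodal water demand (the Gaussian model mentioned in the
            abstract; the autoregressive model would be handled analogously)."""
            return rng.normal(mean_demand, std_demand)

        def objective(candidate, observed, simulate_concentrations,
                      mean_demand, std_demand, n_scenarios=20, seed=0):
            """Score a candidate (source node, injection concentration) by the mean
            squared mismatch between simulated and observed sensor values,
            averaged over random demand scenarios."""
            rng = np.random.default_rng(seed)
            source_node, concentration = candidate
            errors = []
            for _ in range(n_scenarios):
                demand = sample_demand(mean_demand, std_demand, rng)
                simulated = simulate_concentrations(source_node, concentration, demand)
                errors.append(np.mean((np.asarray(simulated) - np.asarray(observed)) ** 2))
            return float(np.mean(errors))

    A genetic algorithm (as in the baseline comparison) or any other metaheuristic would then search over candidate (node, concentration) pairs for the minimum of this objective.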

  7. Irrational exuberance for resolved species trees.

    PubMed

    Hahn, Matthew W; Nakhleh, Luay

    2016-01-01

    Phylogenomics has largely succeeded in its aim of accurately inferring species trees, even when there are high levels of discordance among individual gene trees. These resolved species trees can be used to ask many questions about trait evolution, including the direction of change and number of times traits have evolved. However, the mapping of traits onto trees generally uses only a single representation of the species tree, ignoring variation in the gene trees used to construct it. Recognizing that genes underlie traits, these results imply that many traits follow topologies that are discordant with the species topology. As a consequence, standard methods for character mapping will incorrectly infer the number of times a trait has evolved. This phenomenon, dubbed "hemiplasy," poses many problems in analyses of character evolution. Here we outline these problems, explaining where and when they are likely to occur. We offer several ways in which the possible presence of hemiplasy can be diagnosed, and discuss multiple approaches to dealing with the problems presented by underlying gene tree discordance when carrying out character mapping. Finally, we discuss the implications of hemiplasy for general phylogenetic inference, including the possible drawbacks of the widespread push for "resolved" species trees. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  8. Validation of an association rule mining-based method to infer associations between medications and problems.

    PubMed

    Wright, A; McCoy, A; Henkin, S; Flaherty, M; Sittig, D

    2013-01-01

    In a prior study, we developed methods for automatically identifying associations between medications and problems using association rule mining on a large clinical data warehouse and validated these methods at a single site which used a self-developed electronic health record. To demonstrate the generalizability of these methods by validating them at an external site. We received data on medications and problems for 263,597 patients from the University of Texas Health Science Center at Houston Faculty Practice, an ambulatory practice that uses the Allscripts Enterprise commercial electronic health record product. We then conducted association rule mining to identify associated pairs of medications and problems and characterized these associations with five measures of interestingness: support, confidence, chi-square, interest and conviction and compared the top-ranked pairs to a gold standard. 25,088 medication-problem pairs were identified that exceeded our confidence and support thresholds. An analysis of the top 500 pairs according to each measure of interestingness showed a high degree of accuracy for highly-ranked pairs. The same technique was successfully employed at the University of Texas and accuracy was comparable to our previous results. Top associations included many medications that are highly specific for a particular problem as well as a large number of common, accurate medication-problem pairs that reflect practice patterns.
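
    For orientation, the five interestingness measures named in the abstract have standard definitions in association rule mining; the sketch below (an illustration, not the authors' implementation) computes them for a single medication -> problem rule from simple co-occurrence counts across patients.

        from math import inf

        def interestingness(n_both, n_med, n_prob, n_total):
            """Standard association-rule measures for the rule medication -> problem,
            given the number of patients with both, with the medication, with the
            problem, and in total."""
            p_med = n_med / n_total
            p_prob = n_prob / n_total
            support = n_both / n_total                     # P(med and problem)
            confidence = n_both / n_med                    # P(problem | med)
            interest = support / (p_med * p_prob)          # lift above independence
            conviction = ((1 - p_prob) / (1 - confidence)
                          if confidence < 1 else inf)
            # Chi-square statistic of the 2x2 medication/problem contingency table.
            chi2 = 0.0
            for med in (True, False):
                for prob in (True, False):
                    observed = (n_both if med and prob else
                                n_med - n_both if med else
                                n_prob - n_both if prob else
                                n_total - n_med - n_prob + n_both)
                    expected = ((p_med if med else 1 - p_med)
                                * (p_prob if prob else 1 - p_prob) * n_total)
                    chi2 += (observed - expected) ** 2 / expected
            return {"support": support, "confidence": confidence, "chi_square": chi2,
                    "interest": interest, "conviction": conviction}

        # Purely illustrative counts, not taken from the study:
        print(interestingness(n_both=500, n_med=900, n_prob=4000, n_total=263597))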

  9. Optimal Sampling to Provide User-Specific Climate Information.

    NASA Astrophysics Data System (ADS)

    Panturat, Suwanna

    The weather-related problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study aims at designing optimal sampling networks based on customer value systems and at abstracting from data sets the information that is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art Non Linear Program (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of Type I error (alpha), the probability of Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined, given information from the optimal sampling network as defined by the study.

  10. Generating and using truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2012-01-01

    The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary: Program title: TRQS. Catalogue identifier: AEKA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7924. No. of bytes in distributed program, including test data, etc.: 88 651. Distribution format: tar.gz. Programming language: Mathematica, C. Computer: requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a machine supporting a recent version of Mathematica. Operating system: any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: case dependent. Classification: 4.15. Nature of problem: generation of random density matrices. Solution method: use of a physical quantum random number generator. Running time: generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
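
    The package itself is written in Mathematica and draws its randomness from the Quantis hardware; as a rough illustration of the underlying task ("generation of random density matrices"), the following Python sketch uses the common Ginibre-ensemble construction, with NumPy's pseudo-random generator standing in for the hardware QRNG. This is a stand-in under those assumptions, not the package's algorithm.

        import numpy as np

        def random_density_matrix(dim, rng=None):
            """Random density matrix via the Ginibre construction:
            rho = G G^dagger / Tr(G G^dagger).  A hardware QRNG such as Quantis
            would replace `rng` as the source of the underlying samples."""
            rng = np.random.default_rng() if rng is None else rng
            g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
            rho = g @ g.conj().T
            return rho / np.trace(rho).real

        rho = random_density_matrix(4)
        assert np.allclose(rho, rho.conj().T)            # Hermitian
        assert abs(np.trace(rho).real - 1.0) < 1e-12     # unit trace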

  11. Report on JANNAF panel on shotgun/relative quickness testing

    NASA Technical Reports Server (NTRS)

    Gould, R. A.

    1980-01-01

    As the need for more energetic solid propellants continues, a number of problems arise. One of these is the tendency of high energy propellants to transition from burning (deflagration) to detonation in regions where the propellant is present in small particle sizes; e.g., in case bonding areas of a motor after a rapid depressurization causes a shear zone at the bond interface as the stressed propellant and motor case relax at different rates. In an effort to determine the susceptibility of propellants to high-strain-rate breakup (friability), and subsequent DDT, the propulsion community uses the shotgun/relative quickness test as one of a number of screening tests for new propellant formulations. Efforts to standardize test techniques and equipment are described.

  12. Prevention and treatment of long-term social disability amongst young people with emerging severe mental illness with social recovery therapy (The PRODIGY Trial): study protocol for a randomised controlled trial.

    PubMed

    Fowler, David; French, Paul; Banerjee, Robin; Barton, Garry; Berry, Clio; Byrne, Rory; Clarke, Timothy; Fraser, Rick; Gee, Brioney; Greenwood, Kathryn; Notley, Caitlin; Parker, Sophie; Shepstone, Lee; Wilson, Jon; Yung, Alison R; Hodgekins, Joanne

    2017-07-11

    Young people who have social disability associated with severe and complex mental health problems are an important group in need of early intervention. Their problems often date back to childhood and become chronic at an early age. Without intervention, the long-term prognosis is often poor and the economic costs very large. There is a major gap in the provision of evidence-based interventions for this group, and therefore new approaches to detection and intervention are needed. This trial provides a definitive evaluation of a new approach to early intervention with young people with social disability and severe and complex mental health problems using social recovery therapy (SRT) over a period of 9 months to improve mental health and social recovery outcomes. This is a pragmatic, multi-centre, single blind, superiority randomised controlled trial. It is conducted in three sites in the UK: Sussex, Manchester and East Anglia. Participants are aged 16 to 25 and have both persistent and severe social disability (defined as engaged in less than 30 hours per week of structured activity) and severe and complex mental health problems. The target sample size is 270 participants, providing 135 participants in each trial arm. Participants are randomised 1:1 using a web-based randomisation system and allocated to either SRT plus optimised treatment as usual (enhanced standard care) or enhanced standard care alone. The primary outcome is time use, namely hours spent in structured activity per week at 15 months post-randomisation. Secondary outcomes assess typical mental health problems of the group, including subthreshold psychotic symptoms, negative symptoms, depression and anxiety. Time use, secondary outcomes and health economic measures are assessed at 9, 15 and 24 months post-randomisation. This definitive trial will be the first to evaluate a novel psychological treatment for social disability and mental health problems in young people presenting with social disability and severe and complex non-psychotic mental health problems. The results will have important implications for policy and practice in the detection and early intervention for this group in mental health services. Trial Registry: International Standard Randomised Controlled Trial Number (ISRCTN) Registry. ISRCTN47998710 (registered 29/11/2012).

  13. Standardized Definitions for Code Verification Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack (www.github.com/lanl/exactpack).

  14. The effect of problem structure on problem-solving: an fMRI study of word versus number problems.

    PubMed

    Newman, Sharlene D; Willoughby, Gregory; Pruce, Benjamin

    2011-09-02

    It has long been thought that word problems are more difficult to solve than number/equation problems. However, recent findings have begun to bring this broadly believed idea into question. The current study examined the processing differences between these two types of problems. The behavioral results presented here failed to show an overwhelming advantage for number problems. In fact, there were more errors for the number problems than the word problems. The neuroimaging results reported demonstrate that there is significant overlap in the processing of what, on the surface, appears to be completely different problems that elicit different problem-solving strategies. Word and number problems rely on a general network responsible for problem-solving that includes the superior posterior parietal cortex, the horizontal segment of the intraparietal sulcus which is hypothesized to be involved in problem representation and calculation as well as the regions that have been linked to executive aspects of working memory such as the pre-SMA and basal ganglia. While overlap was observed, significant differences were also found primarily in language processing regions such as Broca's and Wernicke's areas for the word problems and the horizontal segment of the intraparietal sulcus for the number problems. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Protein folding: the optically induced electronic excitations model

    NASA Astrophysics Data System (ADS)

    Jeknić-Dugić, J.

    2009-07-01

    The problem of conformational transitions in large molecules (the 'protein folding problem') is an open issue of active current research, of fundamental importance for a number of modern scientific disciplines as well as for nanotechnology. Here, we elaborate the recently proposed quantum-decoherence-based approach to the issue. First, we emphasize the need to identify the elementary quantum mechanical processes (whose combinations may give a proper description of realistic experimental situations), and then we design such a model. In contrast to the standard approach, which deals with the conformation system, we investigate optically induced transitions in the molecule's electron system that, in effect, may give rise to a conformational change in the molecule. Our conclusion is that such a model may describe the comparatively slow conformational transitions.

  16. [General organizational issues in disaster health response].

    PubMed

    Pacifici, L E; Riccardo, F; De Rosa, A G; Pacini, A; Nardi, L; Russo, G; Scaroni, E

    2007-01-01

    Recent studies show that in the 2004-2005 period natural disasters increased by 18% worldwide. According to a renowned author, planning for disaster response is only as valid as its starting hypothesis. The study of an inductive mental process in disaster response planning is the key to avoiding reinventing the wheel for each emergency. Research in this field, however, is hampered by several factors, one of which is data collection, which during disaster response requires specific training. Standardization of data collection models, with a limit on the number of variables, is required, as is taking into account problems related to population migration, the sampling problems that follow from it, and retrospective analysis. Moreover, poor attention to the training of the volunteers employed in the field is an issue to be considered.

  17. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level  leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
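
    The BLMA itself is described in the article; the sketch below only illustrates the nested structure that makes bilevel problems expensive: every upper-level (leader) candidate must be evaluated by first solving the lower-level (follower) problem, so function evaluations multiply. The toy solver and test functions here are hypothetical, not the algorithm or benchmarks from the paper.

        import numpy as np
        from scipy.optimize import minimize

        def lower_level(x_upper, f_lower, y0):
            """Follower problem: best response y*(x) for a fixed leader decision x."""
            res = minimize(lambda y: f_lower(x_upper, y), y0, method="Nelder-Mead")
            return res.x, res.fun

        def nested_bilevel(f_upper, f_lower, x0, y0, iters=50, step=0.3, seed=0):
            """Toy nested solver: random local search at the upper level, local
            search at the lower level.  Note that each upper-level trial costs a
            full lower-level optimization."""
            rng = np.random.default_rng(seed)
            x_best = np.asarray(x0, dtype=float)
            y_best, _ = lower_level(x_best, f_lower, y0)
            best = f_upper(x_best, y_best)
            for _ in range(iters):
                x_try = x_best + rng.normal(scale=step, size=x_best.shape)
                y_try, _ = lower_level(x_try, f_lower, y_best)
                val = f_upper(x_try, y_try)
                if val < best:
                    x_best, y_best, best = x_try, y_try, val
            return x_best, y_best, best

        # Tiny illustrative instance: the leader minimizes (x - 1)^2 + y^2 while the
        # follower minimizes (y - x)^2, so the follower's best response is y*(x) = x.
        f_u = lambda x, y: float((x[0] - 1.0) ** 2 + y[0] ** 2)
        f_l = lambda x, y: float((y[0] - x[0]) ** 2)
        print(nested_bilevel(f_u, f_l, x0=[0.0], y0=[0.0]))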

  18. Development of a database of health insurance claims: standardization of disease classifications and anonymous record linkage.

    PubMed

    Kimura, Shinya; Sato, Toshihiko; Ikeda, Shunya; Noda, Mitsuhiko; Nakayama, Takeo

    2010-01-01

    Health insurance claims (ie, receipts) record patient health care treatments and expenses and, although created for the health care payment system, are potentially useful for research. Combining different types of receipts generated for the same patient would dramatically increase the utility of these receipts. However, technical problems, including standardization of disease names and classifications, and anonymous linkage of individual receipts, must be addressed. In collaboration with health insurance societies, all information from receipts (inpatient, outpatient, and pharmacy) was collected. To standardize disease names and classifications, we developed a computer-aided post-entry standardization method using a disease name dictionary based on International Classification of Diseases (ICD)-10 classifications. We also developed an anonymous linkage system by using an encryption code generated from a combination of hash values and stream ciphers. Using different sets of the original data (data set 1: insurance certificate number, name, and sex; data set 2: insurance certificate number, date of birth, and relationship status), we compared the percentage of successful record matches obtained by using data set 1 to generate key codes with the percentage obtained when both data sets were used. The dictionary's automatic conversion of disease names successfully standardized 98.1% of approximately 2 million new receipts entered into the database. The percentage of anonymous matches was higher for the combined data sets (98.0%) than for data set 1 (88.5%). The use of standardized disease classifications and anonymous record linkage substantially contributed to the construction of a large, chronologically organized database of receipts. This database is expected to aid in epidemiologic and health services research using receipt information.
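
    The paper's linkage scheme combines hash values with a stream cipher; as a simplified illustration of the general idea (a keyed, one-way derivation of an anonymous linkage code from identifying fields), the sketch below uses HMAC-SHA256. This is not the authors' exact construction, and the field values are hypothetical.

        import hmac
        import hashlib

        def linkage_key(secret_key: bytes, *fields: str) -> str:
            """Derive an anonymous, deterministic linkage code from identifying
            fields (e.g. insurance certificate number, date of birth, relationship
            status).  The same person yields the same code across receipt types,
            but the identifiers cannot be recovered from the code."""
            message = "|".join(fields).encode("utf-8")
            return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

        key = b"held-by-a-trusted-party"      # illustrative secret key
        inpatient = linkage_key(key, "CERT-12345", "1960-04-01", "self")
        pharmacy = linkage_key(key, "CERT-12345", "1960-04-01", "self")
        assert inpatient == pharmacy          # the two receipts link anonymously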

  19. Double-blind, placebo-controlled food challenge in adults in everyday clinical practice: a reappraisal of their limitations and real indications.

    PubMed

    Asero, Riccardo; Fernandez-Rivas, Montserrat; Knulst, André C; Bruijnzeel-Koomen, Carla Afm

    2009-08-01

    The double-blind, placebo-controlled food challenge (DBPCFC) is widely considered as the 'gold standard' for the diagnosis of food allergy. However, in adult patients, this procedure is rather rarely performed outside the academic context. This review article aims to reappraise the pros and cons of DBPCFC and to elicit some critical thoughts and discussions about the real indications of this diagnostic procedure in adult patients in everyday practice. There are many data showing that the DBPCFC poses a number of critical problems that are difficult to overcome in normal outpatient clinics and hospitals, and that are generally not addressed in most articles dealing with this issue. Performing DBPCFC poses a number of practical problems and has several pitfalls, which make its routine use in normal clinical settings generally impossible. This review article shows that the need for this procedure in adult patients seems in effect very little and specifies new, more limited indications to its use in everyday practice. Further, it suggests a role for the open challenge, which lacks several of the disadvantages of DBPCFC.

  20. Do early internalizing and externalizing problems predict later irritability in adolescents with attention-deficit/hyperactivity disorder?

    PubMed

    Mulraney, Melissa; Zendarski, Nardia; Mensah, Fiona; Hiscock, Harriet; Sciberras, Emma

    2017-04-01

    Irritable mood is common in children with attention-deficit/hyperactivity disorder. Research to date has primarily comprised cross-sectional studies; thus, little is known about the antecedents of irritability. Furthermore, existing cross-sectional studies generally focus on the association between irritability and comorbidities and do not examine broader aspects of functioning. Finally, previous research has neglected to include child-report of irritability. This study aimed to address these gaps using data from a longitudinal study of children with attention-deficit/hyperactivity disorder. Children aged 5-13 years (mean = 10.2; standard deviation = 1.9) with attention-deficit/hyperactivity disorder were recruited from pediatric practices across Victoria, Australia. This study reports on those who had reached adolescence (12 years or older, mean = 13.8; standard deviation = 1.2) at the 3-year follow-up (n = 140). Internalizing and externalizing problems were measured using the Strengths and Difficulties Questionnaire. At follow-up, parent-reported and adolescent self-reported irritability was assessed using the Affective Reactivity Index. Parent and adolescent outcomes measured at follow-up included attention-deficit/hyperactivity disorder symptom severity, sleep, behavior and parent mental health. Children with externalizing problems at age 10 had higher parent-reported irritability (β = 0.31, 95% confidence interval = [0.17, 0.45], p = 0.001) in adolescence. Cross-sectional analyses found that irritability was associated with increased attention-deficit/hyperactivity disorder symptom severity and sleep problems; poorer emotional, behavioral and social functioning; and poorer parent mental health. Our findings highlight the importance of assessing for and managing early conduct problems in children with attention-deficit/hyperactivity disorder, as these predict ongoing irritability which, in turn, is associated with poorer functioning across a number of domains.

  1. Optimization of cDNA microarrays procedures using criteria that do not rely on external standards.

    PubMed

    Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid

    2007-10-18

    The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis it may be a problem to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process both in laboratory practices and in data processing using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well characterized organisms such as man, mouse and rat, and to cDNA microarrays, which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish.
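
    The heart of HCSSM is estimating a false discovery rate for the high-contrast comparison from the null distribution supplied by the self-self hybridization. A minimal Python sketch of that estimate (an illustration of the general idea, not the authors' pipeline) might look as follows, where the inputs are per-gene test statistics from the two experiments.

        import numpy as np

        def estimated_fdr(contrast_stats, null_stats, threshold):
            """Estimated FDR at a threshold: the number of genes exceeding it in the
            self-self (null) experiment divided by the number exceeding it in the
            high-contrast experiment."""
            called = np.sum(np.abs(contrast_stats) >= threshold)
            false_calls = np.sum(np.abs(null_stats) >= threshold)
            return 1.0 if called == 0 else min(1.0, false_calls / called)

        def count_de_genes(contrast_stats, null_stats, target_fdr=0.05):
            """Largest number of genes that can be called differentially expressed
            while the estimated FDR stays below the target -- the quantity HCSSM
            maximizes when comparing protocol variants."""
            best = 0
            for t in np.sort(np.abs(contrast_stats))[::-1]:
                if estimated_fdr(contrast_stats, null_stats, t) <= target_fdr:
                    best = int(np.sum(np.abs(contrast_stats) >= t))
                else:
                    break
            return best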

  2. Optimization of cDNA microarrays procedures using criteria that do not rely on external standards

    PubMed Central

    Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Lægreid, Astrid

    2007-01-01

    Background The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis it may be a problem to evaluate whether changes made to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process both in laboratory practices and in data processing using criteria that do not rely on external standards. Results We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background correction methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. Conclusion The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well characterized organisms such as man, mouse and rat, and to cDNA microarrays, which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish. PMID:17949480

  3. Setting the stage for chronic health problems: cumulative childhood adversity among homeless adults with mental illness in Vancouver, British Columbia.

    PubMed

    Patterson, Michelle L; Moniruzzaman, Akm; Somers, Julian M

    2014-04-12

    It is well documented that childhood abuse, neglect and household dysfunction are disproportionately present in the backgrounds of homeless adults, and that these experiences adversely impact child development and a wide range of adult outcomes. However, few studies have examined the cumulative impact of adverse childhood experiences on homeless adults with mental illness. This study examines adverse events in childhood as predictors of duration of homelessness, psychiatric and substance use disorders, and physical health in a sample of homeless adults with mental illness. This study was conducted using baseline data from a randomized controlled trial in Vancouver, British Columbia for participants who completed the Adverse Childhood Experiences (ACE) scale at 18 months follow-up (n=364). Primary outcomes included current mental disorders; substance use including type, frequency and severity; physical health; duration of homelessness; and vocational functioning. In multivariable regression models, ACE total score independently predicted a range of mental health, physical health, and substance use problems, and marginally predicted duration of homelessness. Adverse childhood experiences are overrepresented among homeless adults with complex comorbidities and chronic homelessness. Our findings are consistent with a growing body of literature indicating that childhood traumas are potent risk factors for a number of adult health and psychiatric problems, particularly substance use problems. Results are discussed in the context of cumulative adversity and self-trauma theory. This trial has been registered with the International Standard Randomized Control Trial Number Register and assigned ISRCTN42520374.

  4. An Overview of the Challenges With and Proposed Solutions for the Ingest and Distribution Processes for Airborne Data Management

    NASA Technical Reports Server (NTRS)

    Beach, Aubrey; Northup, Emily; Early, Amanda; Wang, Dali; Kusterer, John; Quam, Brandi; Chen, Gao

    2015-01-01

    The current data management practices for NASA airborne field projects have successfully served science team data needs over the past 30 years to achieve project science objectives; however, users have discovered a number of issues in terms of data reporting and format. The ICARTT format, a NASA standard since 2010, is currently the most popular among the airborne measurement community. Although easy for humans to use, the format standard is not sufficiently rigorous to be machine-readable. This makes data use and management tedious and resource intensive, and also creates problems in Distributed Active Archive Center (DAAC) data ingest procedures and distribution. Further, most DAACs use metadata models that concentrate on satellite data observations, making them less prepared to deal with airborne data.

  5. The heavy top quark and supersymmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, L.J.

    1997-01-01

    Three aspects of supersymmetric theories are discussed: electroweak symmetry breaking, the issues of flavor, and gauge unification. The heavy top quark plays an important, sometimes dominant, role in each case. Additional symmetries lead to extensions of the Standard Model which can provide an understanding for many of the outstanding problems of particle physics. A broken supersymmetric extension of spacetime allows electroweak symmetry breaking to follow from the dynamics of the heavy top quark; an extension of isospin provides a constrained framework for understanding the pattern of quark and lepton masses; and a grand unified extension of the Standard Model gauge group provides an elegant understanding of the gauge quantum numbers of the components of a generation. Experimental signatures for each of these additional symmetries are discussed.

  6. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer-run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.

  7. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
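
    For orientation, the moment sequences referred to above have standard closed forms (quoted here as textbook definitions, not taken from the paper): the Raney numbers and their Fuss-Catalan special case are

        % Raney numbers (two parameters p, r) and the Fuss-Catalan case r = 1;
        % p = 2, r = 1 recovers the ordinary Catalan numbers C_n
        R_{p,r}(n) = \frac{r}{pn + r}\binom{pn + r}{n}, \qquad
        \mathrm{FC}_p(n) = R_{p,1}(n) = \frac{1}{pn + 1}\binom{pn + 1}{n}, \qquad
        \mathrm{FC}_2(n) = C_n .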

  8. Replica analysis of overfitting in regression models for time-to-event data

    NASA Astrophysics Data System (ADS)

    Coolen, A. C. C.; Barrett, J. E.; Paga, P.; Perez-Vicente, C. J.

    2017-09-01

    Overfitting, which happens when the number of parameters in a model is too large compared to the number of data points available for determining these parameters, is a serious and growing problem in survival analysis. While modern medicine presents us with data of unprecedented dimensionality, these data cannot yet be used effectively for clinical outcome prediction. Standard error measures in maximum likelihood regression, such as p-values and z-scores, are blind to overfitting, and even for Cox’s proportional hazards model (the main tool of medical statisticians), one finds in literature only rules of thumb on the number of samples required to avoid overfitting. In this paper we present a mathematical theory of overfitting in regression models for time-to-event data, which aims to increase our quantitative understanding of the problem and provide practical tools with which to correct regression outcomes for the impact of overfitting. It is based on the replica method, a statistical mechanical technique for the analysis of heterogeneous many-variable systems that has been used successfully for several decades in physics, biology, and computer science, but not yet in medical statistics. We develop the theory initially for arbitrary regression models for time-to-event data, and verify its predictions in detail for the popular Cox model.
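
    Since Cox's proportional hazards model is central to the discussion, its standard form is worth recalling (textbook definition, not reproduced from the paper): the hazard for a subject with covariate vector x is h(t | x) = h_0(t) exp(beta^T x), and beta is estimated by maximizing the partial likelihood

        % Cox partial likelihood over observed event times t_i (no ties), with
        % R(t_i) the risk set of subjects still event-free just before t_i
        \mathcal{L}(\beta) = \prod_{i \,:\, \text{event at } t_i}
            \frac{\exp(\beta^{\top} x_i)}{\sum_{j \in R(t_i)} \exp(\beta^{\top} x_j)}

    Overfitting of the kind analysed in the paper arises when the dimension of beta is not small compared with the number of observed events.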

  9. Energy configuration optimization of submerged propeller in oxidation ditch based on CFD

    NASA Astrophysics Data System (ADS)

    Wu, S. Y.; Zhou, D. Q.; Zheng, Y.

    2012-11-01

    The submerged propeller is presented as an important source of drive power in an oxidation ditch. To keep the activated sludge from settling, adequate drive power is necessary; otherwise, problems such as poor mixing and excessive energy consumption arise. At present, optimizing the installation of submerged propellers in an oxidation ditch depends mostly on experience. It is therefore necessary to use modern design methods to optimize the installation position and number of submerged propellers and to study the flow field characteristics of the submerged propeller. The internal flow of the submerged propeller is simulated using the CFD software FLUENT 6.3. Based on the Navier-Stokes equations and the standard k-ɛ turbulence model, the flow was simulated using the SIMPLE algorithm. The results indicate that changing the installation position of the submerged propeller can avoid the back-mixing caused by an excessively strong drive. In addition, the problems of sludge deposition and low velocity in the bend, caused by attenuation of the drive power, can be solved. By adjusting the number of submerged propellers, the minimum power density needed for mixing can be determined and energy savings achieved. The study can provide theoretical guidance for optimizing the installation position and determining the number of submerged propellers.
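
    The "standard k-ɛ turbulence model" invoked above is the usual two-equation closure; its commonly quoted incompressible form (textbook version with the standard constants, not taken from this paper) is

        % Transport equations of the standard k-epsilon model, with
        % nu_t = C_mu k^2 / epsilon, C_mu = 0.09, C_1e = 1.44, C_2e = 1.92,
        % sigma_k = 1.0, sigma_e = 1.3, and P_k the production of k
        \frac{\partial k}{\partial t} + u_j \frac{\partial k}{\partial x_j}
          = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
            \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon,
        \qquad
        \frac{\partial \varepsilon}{\partial t} + u_j \frac{\partial \varepsilon}{\partial x_j}
          = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
            \frac{\partial \varepsilon}{\partial x_j}\right]
            + C_{1\varepsilon}\frac{\varepsilon}{k} P_k
            - C_{2\varepsilon}\frac{\varepsilon^{2}}{k}.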

  10. The Epidemiological Scale of Alzheimer’s Disease

    PubMed Central

    Cornutiu, Gavril

    2015-01-01

    Alzheimer’s disease (AD) has increased from a few cases per country at the beginning of the 20th century to an incidence of one new case recorded every 7 seconds worldwide. From a rare disease it has risen into the top eight major health problems in the world. One of the epidemiological problems of AD is that authors from different countries use different reporting units: some report numbers per 100,000 inhabitants, others per 1,000 inhabitants, and others report the total number of cases in a country. Standardization of these reports is strictly necessary. The rise in incidence and prevalence with age is known, but it is interesting that incidence and prevalence do not rise in parallel with age, as simple logic would assume. Between the ages of 60 and 90, the incidence in men increases 2-fold and in women 41-fold, while prevalence increases 55.25-fold in men and 77-fold in women. In terms of the women/men ratio, the age-related increase in incidence is 20.5-fold greater, whereas for prevalence it is merely 1.3936-fold greater. These numbers raise concerns about the evolution of the disease. Regarding mild cognitive impairment (MCI)/AD ratio, only about 1 in 2 people get AD (raising?) issues about the pathogenic disease relatedness. PMID:26251678

  11. Comparison of self-reported emotional and behavioural problems in adolescents from Greece and Finland.

    PubMed

    Kapi, Aikaterini; Veltsista, Alexandra; Sovio, Ulla; Järvelin, Marjo-Riitta; Bakoula, Chryssa

    2007-08-01

    To compare self-reported emotional and behavioural problems among Greek and Finnish adolescents. Youth Self-Report scores were analysed for 3373 Greek adolescents aged 18 years and 7039 Finnish adolescents aged 15-16 years from the general population in both countries. The impact of country, gender, place of residence, socioeconomic status (SES) and family stability on the scores was evaluated. Only country and gender yielded small to medium effect on the scores. Greek boys scored significantly higher than Finns on 10 of the 11 YSR syndromes, particularly on the anxious/depressed scale. Greek girls scored significantly lower than Finnish girls on the somatic complaints and delinquent behaviour scales. In general, girls scored higher than boys on both internalising and externalising problems. The gender by country interaction revealed that Finnish girls reported more externalising problems. The main differences marked in this comparison were the higher level of anxiety and depression in Greeks than Finns and the higher level of externalising problems in Finnish girls than boys. Cultural standards could play an important role in explaining these differences. Overall, it seems that only a small number of differences exist between a northern and southern European region.

  12. Youth Top Problems: using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy.

    PubMed

    Weisz, John R; Chorpita, Bruce F; Frye, Alice; Ng, Mei Yi; Lau, Nancy; Bearman, Sarah Kate; Ugueto, Ana M; Langer, David A; Hoagwood, Kimberly E

    2011-06-01

    To complement standardized measurement of symptoms, we developed and tested an efficient strategy for identifying (before treatment) and repeatedly assessing (during treatment) the problems identified as most important by caregivers and youths in psychotherapy. A total of 178 outpatient-referred youths, 7-13 years of age, and their caregivers separately identified the 3 problems of greatest concern to them at pretreatment and then rated the severity of those problems weekly during treatment. The Top Problems measure thus formed was evaluated for (a) whether it added to the information obtained through empirically derived standardized measures (e.g., the Child Behavior Checklist [CBCL; Achenbach & Rescorla, 2001] and the Youth Self-Report [YSR; Achenbach & Rescorla, 2001]) and (b) whether it met conventional psychometric standards. The problems identified were significant and clinically relevant; most matched CBCL/YSR items while adding specificity. The top problems also complemented the information yield of the CBCL/YSR; for example, for 41% of caregivers and 79% of youths, the identified top problems did not correspond to any items of any narrowband scales in the clinical range. Evidence on test-retest reliability, convergent and discriminant validity, sensitivity to change, slope reliability, and the association of Top Problems slopes with standardized measure slopes supported the psychometric strength of the measure. The Top Problems measure appears to be a psychometrically sound, client-guided approach that complements empirically derived standardized assessment; the approach can help focus attention and treatment planning on the problems that youths and caregivers consider most important and can generate evidence on trajectories of change in those problems during treatment. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  13. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  14. Proficiency Standards and Cut-Scores for Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…

  15. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

    A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…

  16. Cross support overview and operations concept for future space missions

    NASA Technical Reports Server (NTRS)

    Stallings, William; Kaufeler, Jean-Francois

    1994-01-01

    Ground networks must respond to the requirements of future missions, which include smaller sizes, tighter budgets, increased numbers, and shorter development schedules. The Consultative Committee for Space Data Systems (CCSDS) is meeting these challenges by developing a general cross support concept, reference model, and service specifications for Space Link Extension services for space missions involving cross support among Space Agencies. This paper identifies and bounds the problem, describes the need to extend Space Link services, gives an overview of the operations concept, and introduces complementary CCSDS work on standardizing Space Link Extension services.

  17. Coordinated Guidance Strategy for Multiple USVs During Maritime Interdiction Operations

    DTIC Science & Technology

    2017-09-01

    The views expressed in this thesis are those of the... The remaining retrievable fragments concern a pure proportional navigation (PPN) guidance law: an equation (4) involving the navigation gain N; the problem of having a moving target, as opposed to a stationary one; a limiting angle that takes a particular value when N = 2 and approaches θ0 when N → ∞; and a remark that, using the standard PPN, a significant portion of the angular...

  18. Inductive High Power Transfer Technologies for Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Madzharov, Nikolay D.; Tonchev, Anton T.

    2014-03-01

    Problems associated with how to charge the battery pack of an electric vehicle become more important with every passing day. The most logical solution currently is contactless charging, which possesses a number of advantages over standard contact-based charging methods. This article focuses on methods for inductive, high-power, contactless transfer of energy over relatively small distances, and on their advantages and disadvantages. A developed Inductive Power Transfer (IPT) system is described for fast charging of electric vehicles, with a nominal power of 30 kW across a 7 to 9 cm air gap.

  19. A Study of the Efficacy of the Five Phase Recovery Process as a Method of Maximizing Reimbursements under the Third Party Collection Program

    DTIC Science & Technology

    1991-07-01

    MTF, or establishing a standard CGR should be done cautiously since health care is a local phenomenon and a number of factors influence (1) the...goals should be formulated locally. That is, each MTF must identify for itself those beneficiaries in its catchment area with billable insurance who...this, managers must know the work they supervise. 8. Drive out fear. Employees should not be afraid to point out problems for fear of argument or being

  20. When procedures discourage insight: epistemological consequences of prompting novice physics students to construct force diagrams

    NASA Astrophysics Data System (ADS)

    Kuo, Eric; Hallinen, Nicole R.; Conlin, Luke D.

    2017-05-01

    One aim of school science instruction is to help students become adaptive problem solvers. Though successful at structuring novice problem solving, step-by-step problem-solving frameworks may also constrain students' thinking. This study utilises a paradigm established by Heckler [(2010). Some consequences of prompting novice physics students to construct force diagrams. International Journal of Science Education, 32(14), 1829-1851] to test how cuing the first step in a standard framework affects undergraduate students' approaches and evaluation of solutions in physics problem solving. Specifically, prompting the construction of a standard diagram before problem solving increases the use of standard procedures, decreasing the use of a conceptual shortcut. Providing a diagram prompt also lowers students' ratings of informal approaches to similar problems. These results suggest that reminding students to follow typical problem-solving frameworks limits their views of what counts as good problem solving.

  1. Systematic Standardized and Individualized Assessment of Masticatory Cycles Using Electromagnetic 3D Articulography and Computer Scripts

    PubMed Central

    Arias, Alain; Lezcano, María Florencia; Saravia, Diego; Dias, Fernando José

    2017-01-01

    Masticatory movements have been studied for decades in odontology; a better understanding of them could improve dental treatments. The aim of this study was to describe an innovative, accurate, and systematic method of analyzing masticatory cycles, generating comparable quantitative data. The masticatory cycles of 5 volunteers (Class I, 19 ± 1.7 years) without articular or dental occlusion problems were evaluated using 3D electromagnetic articulography supported by MATLAB software. The method allows the trajectory morphology of the set of chewing cycles to be analyzed from different views and angles. It was also possible to individualize the trajectory of each cycle providing accurate quantitative data, such as number of cycles, cycle areas in frontal view, and the ratio between each cycle area and the frontal mandibular border movement area. There was a moderate negative correlation (−0.61) between the area and the number of cycles: the greater the cycle area, the smaller the number of repetitions. Finally, it was possible to evaluate the area of the cycles over time, which did not reveal a standardized behavior. The proposed method provided reproducible, intelligible, and accurate quantitative and graphical data, suggesting that it is promising and may be applied in different clinical situations and treatments. PMID:29075647
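
    The study's MATLAB scripts are not reproduced in the abstract; as an illustration of the per-cycle quantities it describes (frontal-view cycle area, the area ratio against the border movement area, and the area-versus-count correlation), a minimal Python sketch might look as follows. Function and variable names are hypothetical.

        import numpy as np

        def cycle_area(x, z):
            """Area enclosed by one chewing cycle in the frontal (x-z) plane,
            computed with the shoelace formula on the sampled trajectory."""
            return 0.5 * abs(np.dot(x, np.roll(z, -1)) - np.dot(z, np.roll(x, -1)))

        def cycle_summary(cycles, border_area):
            """Per-subject summary: number of cycles, each cycle's area, and the
            ratio of each area to the frontal mandibular border movement area."""
            areas = np.array([cycle_area(c[:, 0], c[:, 1]) for c in cycles])
            return {"n_cycles": len(cycles), "areas": areas,
                    "area_ratio": areas / border_area}

        def area_count_correlation(summaries):
            """Correlation between mean cycle area and number of cycles across
            subjects (the abstract reports a moderate negative value, about -0.61)."""
            mean_areas = [s["areas"].mean() for s in summaries]
            counts = [s["n_cycles"] for s in summaries]
            return float(np.corrcoef(mean_areas, counts)[0, 1])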

  2. Systematic Standardized and Individualized Assessment of Masticatory Cycles Using Electromagnetic 3D Articulography and Computer Scripts.

    PubMed

    Fuentes, Ramón; Arias, Alain; Lezcano, María Florencia; Saravia, Diego; Kuramochi, Gisaku; Dias, Fernando José

    2017-01-01

    Masticatory movements have been studied for decades in odontology; a better understanding of them could improve dental treatments. The aim of this study was to describe an innovative, accurate, and systematic method of analyzing masticatory cycles, generating comparable quantitative data. The masticatory cycles of 5 volunteers (Class I, 19 ± 1.7 years) without articular or dental occlusion problems were evaluated using 3D electromagnetic articulography supported by MATLAB software. The method allows the trajectory morphology of the set of chewing cycles to be analyzed from different views and angles. It was also possible to individualize the trajectory of each cycle providing accurate quantitative data, such as number of cycles, cycle areas in frontal view, and the ratio between each cycle area and the frontal mandibular border movement area. There was a moderate negative correlation (-0.61) between the area and the number of cycles: the greater the cycle area, the smaller the number of repetitions. Finally, it was possible to evaluate the area of the cycles over time, which did not reveal a standardized behavior. The proposed method provided reproducible, intelligible, and accurate quantitative and graphical data, suggesting that it is promising and may be applied in different clinical situations and treatments.

  3. Hours of sleep in adolescents and its association with anxiety, emotional concerns, and suicidal ideation.

    PubMed

    Sarchiapone, Marco; Mandelli, Laura; Carli, Vladimir; Iosue, Miriam; Wasserman, Camilla; Hadlaczky, Gergö; Hoven, Christina W; Apter, Alan; Balazs, Judit; Bobes, Julio; Brunner, Romuald; Corcoran, Paul; Cosman, Doina; Haring, Christian; Kaess, Michael; Keeley, Helen; Keresztény, Agnes; Kahn, Jean-Pierre; Postuvan, Vita; Mars, Urša; Saiz, Pilar A; Varnik, Peter; Sisask, Merike; Wasserman, Danuta

    2014-02-01

    Anxiety and concerns in daily life may result in sleep problems and consistent evidence suggests that inadequate sleep has several negative consequences on cognitive performance, physical activity, and health. The aim of our study was to evaluate the association between mean hours of sleep per night, psychologic distress, and behavioral concerns. A cross-sectional analysis of the correlation between the number of hours of sleep per night and the Zung Self-rating Anxiety Scale (Z-SAS), the Paykel Suicidal Scale (PSS), and the Strengths and Difficulties Questionnaire (SDQ), was performed on 11,788 pupils (mean age±standard deviation [SD], 14.9±0.9; 55.8% girls) from 11 different European countries enrolled in the SEYLE (Saving and Empowering Young Lives in Europe) project. The mean number of reported hours of sleep per night during school days was 7.7 (SD, ±1.3), with moderate differences across countries (r=0.06; P<.001). A reduced number of sleeping hours (less than the average) was more common in girls (β=0.10 controlling for age) and older pupils (β=0.10 controlling for sex). Reduced sleep was found to be associated with increased scores on SDQ subscales of emotional (β=-0.13) and peer-related problems (β=-0.06), conduct (β=-0.07), total SDQ score (β=-0.07), anxiety (Z-SAS scores, β=-10), and suicidal ideation (PSS, β=-0.16). In a multivariate model including all significant variables, older age, emotional and peer-related problems, and suicidal ideation were the variables most strongly associated with reduced sleep hours, though female gender, conduct problems measured by the SDQ, and anxiety only showed modest effects (β=0.03-0.04). Our study supports evidence that reduced hours of sleep are associated with potentially severe mental health problems in adolescents. Because sleep problems are common among adolescents partly due to maturational processes and changes in sleep patterns, parents, other adults, and adolescents should pay more attention to their sleep patterns and implement interventions, if needed. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Superstitious Beliefs and Problem Gambling Among Thai Lottery Gamblers: The Mediation Effects of Number Search and Gambling Intensity.

    PubMed

    Pravichai, Sunisa; Ariyabuddhiphongs, Vanchai

    2015-12-01

    Thai lottery gamblers won prizes after betting on numbers they obtained from newspaper stories. We hypothesized that Thai lottery gamblers' superstitious beliefs were related to their problem gambling through the mediation of number search and gambling intensity. In a study among 380 Thai lottery gamblers, superstitious beliefs were operationally defined as the beliefs in events or objects that seemed to reveal numbers, number search as an attempt to identify numbers to bet, gambling intensity as the frequency and amounts of lottery gambling, and problem gambling as the symptoms of problems relating to lottery gambling. Results support the hypotheses. There is a statistically significant indirect relationship between Thai lottery gamblers' superstitious beliefs and their problem gambling through the mediation of number search and gambling intensity. Thai lottery gamblers need to be reminded that their superstitious beliefs and number search are precursors of their problem gambling.

  5. Modified expression for bulb-tracer depletion—Effect on argon dating standards

    USGS Publications Warehouse

    Fleck, Robert J.; Calvert, Andrew T.

    2014-01-01

    40Ar/39Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of 40Ar/39Ar age standards may provide a close approximation of those values, but is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and concentrations of sequential tracers. In the modified expression the depletion constant does not appear in the exponent; the exponent varies only as an integer with tracer number. Evaluation of the expressions demonstrates that systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and resulting depletion constants are large. Traditional use of large reservoir-to-tracer volume ratios and the resulting small depletion constants has kept errors well below experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
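
    The abstract does not reproduce the corrected equations, but the qualitative point can be illustrated with a hedged sketch. Assuming each tracer draw removes a fixed fraction d (tracer volume over reservoir volume) of the gas remaining in the reservoir, the exact concentration after n draws scales as (1 - d)^n, with the integer n in the exponent, whereas the long-used approximation is exp(-n*d); the two diverge as d grows, i.e. for small reservoirs. The depletion constants below are illustrative.

      import numpy as np

      # Hypothetical depletion constants d = tracer volume / reservoir volume.
      for d in (0.001, 0.01, 0.1):
          n = np.arange(0, 101)
          exact = (1.0 - d) ** n      # depletion constant outside the exponent
          approx = np.exp(-d * n)     # long-used exponential approximation
          worst = np.max(np.abs(approx - exact) / exact)
          print(f"d = {d:5.3f}: max relative error over 100 draws = {worst:.2%}")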

  6. A Simulation to Study Speed Distributions in a Solar Plasma

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Alvarellos, Jose Luis

    1999-01-01

    We have carried out a numerical simulation of a plasma with characteristics similar to those found in the core of the Sun. Particular emphasis is placed on the Coulomb interaction between the ions and electrons, which could result in a relative velocity distribution different from the Maxwell-Boltzmann (MB) distribution generally assumed for a plasma. The fact that the distribution may not exactly follow the MB distribution could have very important consequences for a variety of problems in solar physics, especially the neutrino problem. Very briefly, the neutrino problem is that the observed neutrino detections from the Sun are fewer than what the standard solar theory predicts. In Section I we introduce the problem and in section II we discuss the approach to try to solve the problem: i.e., a molecular dynamics approach. In section III we provide details about the integration method, and any simplifications that can be applied to the problem. In section IV (the core of this report) we state our results, first for the specific case of 1000 particles and then for other cases with different numbers of particles. In section V we summarize our findings and state our conclusions. Sections VI, VII, and VIII provide the list of figures, reference material, and acknowledgements, respectively.

  7. Evaluating Usability of Radiology Information Systems in Hospitals of Tabriz University of Medical Sciences.

    PubMed

    Rezaei-Hachesu, Peyman; Pesianian, Esmaeil; Mohammadian, Mohsen

    2016-02-01

    A radiology information system (RIS) must be well designed in order to reduce workload and improve the quality of services. Heuristic evaluation is one of the methods that identifies usability problems with the least time, cost, and resources. The aim of the present study was to evaluate the usability of RISs in hospitals. This is a cross-sectional descriptive study (2015) that used the heuristic evaluation method to evaluate the usability of the RISs used in 3 hospitals of Tabriz city. The data were collected using a standard checklist based on the 13 principles of the Nielsen heuristic evaluation method. Usability of the RISs was assessed from the number of components satisfying the Nielsen principles, and usability problems from the number of components that were not observed, non-existent, or unrecognizable. In hospitals 1, 2, and 3, the total numbers of observed components were 173, 202, and 196, respectively. On average, 190 of the 291 components related to the 13 Nielsen principles were observed, giving a usability of 65.41%; usability problems amounted to 26.35%. The established and visible nature of some components, such as application response time, visual feedback, colors, and the view, design, and arrangement of software objects, draws more attention to these components as principal considerations in designing UI software. Also, incorrect analysis before system design leads to a lack of attention to secondary needs such as Help facilities and security issues.

  8. 3D hierarchical interface-enriched finite element method: Implementation and applications

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Ahmadian, Hossein

    2015-10-01

    A hierarchical interface-enriched finite element method (HIFEM) is proposed for the mesh-independent treatment of 3D problems with intricate morphologies. The HIFEM implements a recursive algorithm for creating enrichment functions that capture gradient discontinuities in nonconforming finite elements cut by arbitrary number and configuration of materials interfaces. The method enables the mesh-independent simulation of multiphase problems with materials interfaces that are in close proximity or contact while providing a straightforward general approach for evaluating the enrichments. In this manuscript, we present a detailed discussion on the implementation issues and required computational geometry considerations associated with the HIFEM approximation of thermal and mechanical responses of 3D problems. A convergence study is provided to investigate the accuracy and convergence rate of the HIFEM and compare them with standard FEM benchmark solutions. We will also demonstrate the application of this mesh-independent method for simulating the thermal and mechanical responses of two composite materials systems with complex microstructures.

  9. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    NASA Astrophysics Data System (ADS)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown considerably, and there is a need to create lists of music that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the candidate lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show better results from the Differential Evolution approach with the optimized parameter values.
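
    A minimal sketch of Differential Evolution (DE/rand/1/bin) on one standard test function, the sphere function. The population size and the F (differential weight) and CR (crossover rate) values below are illustrative assumptions, not the parameter set selected in the paper.

      import numpy as np

      def sphere(x):
          return float(np.sum(x ** 2))

      def differential_evolution(f, dim=10, pop_size=30, F=0.5, CR=0.9,
                                 generations=200, bounds=(-5.0, 5.0), seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fitness = np.array([f(ind) for ind in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  # DE/rand/1 mutation: three distinct individuals, none equal to i.
                  a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                       size=3, replace=False)
                  mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                  # Binomial crossover, forcing at least one mutant component.
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True
                  trial = np.where(cross, mutant, pop[i])
                  # Greedy selection: keep the trial vector if it is no worse.
                  ft = f(trial)
                  if ft <= fitness[i]:
                      pop[i], fitness[i] = trial, ft
          return pop[np.argmin(fitness)], float(np.min(fitness))

      best, best_val = differential_evolution(sphere)
      print(best_val)   # should be close to the sphere optimum of 0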

  10. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
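
    For contrast with the order-statistics and maximum-likelihood resolution proposed in the paper, the traditional PERT point estimates built from {a, m, b} are the well-known three-point formulas; a small sketch with hypothetical activity-duration estimates:

      def pert_estimates(a, m, b):
          """Traditional PERT approximations for the mean and standard deviation.

          a: optimistic (smallest likely) value
          m: most likely value (mode)
          b: pessimistic (largest likely) value
          """
          mean = (a + 4 * m + b) / 6.0
          sd = (b - a) / 6.0
          return mean, sd

      # Hypothetical activity-duration estimates (days).
      print(pert_estimates(a=4, m=7, b=16))  # -> (8.0, 2.0)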

  11. Heritage House Maintenance Using 3d City Model Application Domain Extension Approach

    NASA Astrophysics Data System (ADS)

    Mohd, Z. H.; Ujang, U.; Liat Choon, T.

    2017-11-01

    The heritage house is a highly valued part of the architectural heritage of Malaysia. The Department of Heritage has made many efforts to preserve these houses, such as monitoring their damage problems, which may be caused by wood decay, roof leakage, and exfoliation of walls. One of the initiatives for maintaining and documenting heritage houses is through three-dimensional (3D) technology. 3D city models are now widely used by researchers for management and analysis. CityGML is a standard that researchers usually use to exchange, store, and manage virtual 3D city models, covering both geometric and semantic information. Moreover, it represents 3D models at multiple scales in five levels of detail (LoDs), whereby each level serves a distinct function. The Application Domain Extension of CityGML was recently introduced and can be used for monitoring damage problems and recording the number of inhabitants of a house.

  12. Sustainable knowledge development across cultural boundaries: Experiences from the EU-project SILMAS (Toolbox for conflict solving instruments in Alpine Lake Management)

    NASA Astrophysics Data System (ADS)

    Fegerl, Michael; Wieden, Wilfried

    2013-04-01

    Increasingly, people have to communicate knowledge across cultural and language boundaries. Even though recent technologies offer powerful communication facilities, people often feel confronted with barriers which clearly reduce their chances of making their interaction a success. Concrete evidence concerning such problems derives from a number of projects, where generated knowledge often results in dead-end products. In the Alpine Space project SILMAS (Sustainable Instruments for Lake Management in Alpine Space), in which both authors were involved, a special approach (syneris®) was taken to avoid this problem and to manage project knowledge in sustainable form. Under this approach, knowledge input and output are handled interactively: relevant knowledge can be developed continuously, and users can always access the latest state of expertise. Use of the respective tools and procedures can also assist in closing knowledge gaps and in developing innovative responses to familiar or novel problems. This contribution describes ways and means which have been found to increase the chances of success of knowledge communication across cultural boundaries. The process of trans-cultural discussion among experts to find a standardized solution is highlighted, as well as the problem of disseminating expert knowledge to different stakeholders. Finally, lessons learned are made accessible; a main task lies in the creation of a toolbox of conflict-solving instruments as a demonstrable result of the project and for the time thereafter. The interactive web-based toolbox enables lake managers to access best-practice instruments in standardized, explicit, and cross-linguistic form.

  13. The quality of paper-based versus electronic nursing care plan in Australian aged care homes: A documentation audit study.

    PubMed

    Wang, Ning; Yu, Ping; Hailey, David

    2015-08-01

    The nursing care plan plays an essential role in supporting care provision in Australian aged care. The implementation of electronic systems in aged care homes was anticipated to improve documentation quality. Standardized nursing terminologies, developed to improve communication and advance the nursing profession, are not required in aged care practice. The language used by nurses in the nursing care plan and the effect of the electronic system on documentation quality in residential aged care need to be investigated. To describe documentation practice for the nursing care plan in Australian residential aged care homes and to compare the quantity and quality of documentation in paper-based and electronic nursing care plans. A nursing documentation audit was conducted in seven residential aged care homes in Australia. One hundred and eleven paper-based and 194 electronic nursing care plans, conveniently selected, were reviewed. The quantity of documentation in a care plan was determined by the number of phrases describing a resident problem and the number of goals and interventions. The quality of documentation was measured using 16 relevant questions in an instrument developed for the study. There was a tendency to omit 'nursing problem' or 'nursing diagnosis' in the nursing process by changing these terms (used in the paper-based care plan) to 'observation' in the electronic version. The electronic nursing care plan documented more signs and symptoms of resident problems and evaluation of care than the paper-based format (48.30 vs. 47.34 out of 60, P<0.01), but had a lower total mean quality score. The electronic care plan contained fewer problem or diagnosis statements, contributing factors and resident outcomes than the paper-based system (P<0.01). Both types of nursing care plan were weak in documenting measurable and concrete resident outcomes. The overall quality of documentation content for the nursing process was no better in the electronic system than in the paper-based system. Omission of the nursing problem or diagnosis from the nursing process may reflect a range of factors behind the practice that need to be understood. Further work is also needed on qualitative aspects of the nurse care plan, nurses' attitudes towards standardized terminologies and the effect of different documentation practice on care quality and resident outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Psychosocial interventions to reduce alcohol consumption in concurrent problem alcohol and illicit drug users.

    PubMed

    Klimas, Jan; Tobin, Helen; Field, Catherine-Anne; O'Gorman, Clodagh S M; Glynn, Liam G; Keenan, Eamon; Saunders, Jean; Bury, Gerard; Dunne, Colum; Cullen, Walter

    2014-12-03

    Problem alcohol use is common among illicit drug users and is associated with adverse health outcomes. It is also an important factor contributing to a poor prognosis among drug users with hepatitis C virus (HCV) as it impacts on progression to hepatic cirrhosis or opiate overdose in opioid users. To assess the effects of psychosocial interventions for problem alcohol use in illicit drug users (principally problem drug users of opiates and stimulants). We searched the Cochrane Drugs and Alcohol Group trials register (June 2014), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library, Issue 11, June 2014), MEDLINE (1966 to June 2014); EMBASE (1974 to June 2014); CINAHL (1982 to June 2014); PsycINFO (1872 to June 2014) and the reference lists of eligible articles. We also searched: 1) conference proceedings (online archives only) of the Society for the Study of Addiction, International Harm Reduction Association, International Conference on Alcohol Harm Reduction and American Association for the Treatment of Opioid Dependence; 2) online registers of clinical trials: Current Controlled Trials, Clinical Trials.org, Center Watch and the World Health Organization International Clinical Trials Registry Platform. Randomised controlled trials comparing psychosocial interventions with another therapy (other psychosocial treatment, including non-pharmacological therapies, or placebo) in adult (over the age of 18 years) illicit drug users with concurrent problem alcohol use. We used the standard methodological procedures expected by The Cochrane Collaboration. Four studies, involving 594 participants, were included. Half of the trials were rated as having a high or unclear risk of bias. The studies considered six different psychosocial interventions grouped into four comparisons: (1) cognitive-behavioural coping skills training versus 12-step facilitation (one study; 41 participants), (2) brief intervention versus treatment as usual (one study; 110 participants), (3) group or individual motivational interviewing (MI) versus hepatitis health promotion (one study; 256 participants) and (4) brief motivational intervention (BMI) versus assessment-only (one study; 187 participants). Differences between studies precluded any data pooling. 
Findings are described for each trial individually.

Comparison 1 (low-quality evidence; no significant difference for any of the outcomes considered): alcohol abstinence as maximum number of weeks of consecutive alcohol abstinence during treatment: mean difference (MD) 0.40 (95% confidence interval (CI) -1.14 to 1.94); illicit drug abstinence as maximum number of weeks of consecutive abstinence from cocaine during treatment: MD 0.80 (95% CI -0.70 to 2.30); alcohol abstinence as number achieving three or more weeks of consecutive alcohol abstinence during treatment: risk ratio (RR) 1.96 (95% CI 0.43 to 8.94); illicit drug abstinence as number achieving three or more weeks of consecutive abstinence from cocaine during treatment: RR 1.10 (95% CI 0.42 to 2.88); alcohol abstinence during follow-up year: RR 2.38 (95% CI 0.10 to 55.06); illicit drug abstinence as abstinence from cocaine during follow-up year: RR 0.39 (95% CI 0.04 to 3.98), moderate-quality evidence.

Comparison 2 (low-quality evidence; no significant difference for all the outcomes considered): alcohol use as AUDIT scores at three months: MD 0.80 (95% CI -1.80 to 3.40); alcohol use as AUDIT scores at nine months: MD 2.30 (95% CI -0.58 to 5.18); alcohol use as number of drinks per week at three months: MD 0.70 (95% CI -3.85 to 5.25); alcohol use as number of drinks per week at nine months: MD -0.30 (95% CI -4.79 to 4.19); alcohol use as decreased alcohol use at three months: RR 1.13 (95% CI 0.67 to 1.93); alcohol use as decreased alcohol use at nine months: RR 1.34 (95% CI 0.69 to 2.58), moderate-quality evidence.

Comparison 3 (group and individual MI; low-quality evidence; no significant difference for all outcomes). Group MI: number of standard drinks consumed per day over the past month: MD -0.40 (95% CI -2.03 to 1.23); frequency of drug use: MD 0.00 (95% CI -0.03 to 0.03); composite drug score (frequency*severity for all drugs taken): MD 0.00 (95% CI -0.42 to 0.42); greater than 50% reduction in number of standard drinks consumed per day over the last 30 days: RR 1.10 (95% CI 0.82 to 1.48); abstinence from alcohol over the last 30 days: RR 0.88 (95% CI 0.49 to 1.58). Individual MI: number of standard drinks consumed per day over the past month: MD -0.10 (95% CI -1.89 to 1.69); frequency of drug use (as measured using the Addiction Severity Index (ASI) drug score): MD 0.00 (95% CI -0.03 to 0.03); composite drug score (frequency*severity for all drugs taken): MD -0.10 (95% CI -0.46 to 0.26); greater than 50% reduction in number of standard drinks consumed per day over the last 30 days: RR 0.92 (95% CI 0.68 to 1.26); abstinence from alcohol over the last 30 days: RR 0.97 (95% CI 0.56 to 1.67).

Comparison 4: more people reduced alcohol use (by seven or more days in the past month at 6 months) in the BMI group than in the control group (RR 1.67; 95% CI 1.08 to 2.60), moderate-quality evidence. No significant difference was reported for all other outcomes: number of days in the past 30 days with alcohol use at one month: MD -0.30 (95% CI -3.38 to 2.78); number of days in the past month with alcohol use at six months: MD -1.50 (95% CI -4.56 to 1.56); 25% reduction of drinking days in the past month: RR 1.23 (95% CI 0.96 to 1.57); 50% reduction of drinking days in the past month: RR 1.27 (95% CI 0.96 to 1.68); 75% reduction of drinking days in the past month: RR 1.21 (95% CI 0.84 to 1.75); one or more drinking days' reduction in the past month: RR 1.12 (95% CI 0.91 to 1.38).
There is low-quality evidence to suggest that there is no difference in effectiveness between different types of interventions to reduce alcohol consumption in concurrent problem alcohol and illicit drug users and that brief interventions are not superior to assessment-only or to treatment as usual. No firm conclusions can be made because of the paucity of the data and the low quality of the retrieved studies.
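
    As a reminder of how the risk ratios quoted above are read, here is a hedged sketch computing a risk ratio and its 95% confidence interval from hypothetical two-by-two counts, using the standard log-normal approximation (this is not the review authors' software, and the counts are invented):

      import math

      def risk_ratio(events_a, n_a, events_b, n_b):
          """Risk ratio of group A vs. group B with a 95% CI (log-normal approximation)."""
          rr = (events_a / n_a) / (events_b / n_b)
          se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
          lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
          hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
          return rr, (lo, hi)

      # Hypothetical counts: 30/90 reduced drinking with the intervention vs. 18/90 in control.
      print(risk_ratio(30, 90, 18, 90))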

  15. I. Aspects of the Dark Matter Problem. II. Fermion Balls

    NASA Astrophysics Data System (ADS)

    Tetradis, Nikolaos Athanassiou

    The first part of this thesis deals with the dark matter problem. A simple non-supersymmetric extension of the standard model is presented, which provides dark matter candidates not excluded by the existing dark matter searches. The simplest candidate is the neutral component of a zero hypercharge triplet, with vector gauge interactions. The upper bound on its mass is a few TeV. We also discuss possible modifications of the standard freeze-out scenario, induced by the presence of a phase transition. More specifically, if the critical temperature of the electroweak phase transition is sufficiently small, it can change the final abundances of heavy dark matter particles, by keeping them massless for a long time. Recent experimental bounds on the Higgs mass from LEP imply that this is not the case in the minimal standard model. In the second part we discuss non-trivial configurations, involving fermions which obtain their mass through Yukawa interactions with a scalar field. Under certain conditions, the vacuum expectation value of the scalar field is shifted from the minimum of the effective potential, in regions of high fermion density. This may result in the formation of fermion bound states. We study two such cases: (a) Using the non-linear SU(3)_L × SU(3)_R chiral Lagrangian coupled to a field theory of nuclear forces, we show that a bound state of baryons with a well defined surface may conceivably form in the presence of kaon condensation. This state is of similar density to ordinary nuclei, but has net strangeness equal to about two thirds the baryon number. We discuss the properties of lumps of strange baryon matter with baryon number between ~20 and ~10^57, where gravitational effects become important. (b) The Higgs field near a very heavy top quark or any other heavy fermion is expected to be significantly deformed. By computing explicit solutions of the classical equations of motion for a spherically symmetric configuration without gauge fields, we show that in the standard model this cannot happen without violating either vacuum stability or perturbation theory at energies very close to the top quark mass.

  16. Intensive motivational interviewing for women with concurrent alcohol problems and methamphetamine dependence.

    PubMed

    Korcha, Rachael A; Polcin, Douglas L; Evans, Kristy; Bond, Jason C; Galloway, Gantt P

    2014-02-01

    Motivational interviewing (MI) for the treatment of alcohol and drug problems is typically conducted over 1 to 3 sessions. The current work evaluates an intensive 9-session version of MI (Intensive MI) compared to a standard single MI session (Standard MI) using 163 methamphetamine (MA) dependent individuals. The primary purpose of this paper is to report the unexpected finding that women with co-occurring alcohol problems in the Intensive MI condition reduced the severity of their alcohol problems significantly more than women in the Standard MI condition at the 6-month follow-up. Stronger perceived alliance with the therapist was inversely associated with alcohol problem severity scores. Findings indicate that Intensive MI is a beneficial treatment for alcohol problems among women with MA dependence. © 2013.

  17. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
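
    A hedged sketch of the general flavour of correlation-driven clustering: a simple greedy agglomerative merge on average Pearson correlation over synthetic data. This is not the maximum-likelihood algorithm of the paper; the threshold and the data are illustrative assumptions.

      import numpy as np

      def correlation_clusters(data, threshold=0.5):
          """Greedily merge items whose average pairwise Pearson correlation
          exceeds `threshold`. data: (n_items, n_observations) array."""
          corr = np.corrcoef(data)
          clusters = [[i] for i in range(data.shape[0])]
          merged = True
          while merged and len(clusters) > 1:
              merged = False
              best, best_val = None, threshold
              for i in range(len(clusters)):
                  for j in range(i + 1, len(clusters)):
                      pairs = [corr[a, b] for a in clusters[i] for b in clusters[j]]
                      avg = float(np.mean(pairs))
                      if avg > best_val:
                          best, best_val = (i, j), avg
              if best is not None:
                  i, j = best
                  clusters[i].extend(clusters[j])
                  del clusters[j]
                  merged = True
          return clusters

      # Hypothetical data: 6 noisy time series forming two correlated groups.
      rng = np.random.default_rng(1)
      base1, base2 = rng.standard_normal(200), rng.standard_normal(200)
      data = np.vstack([base1 + 0.3 * rng.standard_normal(200) for _ in range(3)] +
                       [base2 + 0.3 * rng.standard_normal(200) for _ in range(3)])
      print(correlation_clusters(data))   # expect the two correlated groups to be recovered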

  18. Standardization of quantum key distribution and the ETSI standardization initiative ISG-QKD

    NASA Astrophysics Data System (ADS)

    Länger, Thomas; Lenhart, Gaby

    2009-05-01

    In recent years, quantum key distribution (QKD) has been the object of intensive research activities and of rapid progress, and it is now developing into a competitive industry with commercial products. Once QKD systems are transferred from the controlled environment of physical laboratories into a real-world environment for practical use, a number of practical security, compatibility and connectivity issues need to be resolved. In particular, comprehensive security evaluation and watertight security proofs need to be addressed to increase trust in QKD. System interoperability with existing infrastructures and applications as well as conformance with specific user requirements have to be assured. Finding common solutions to these problems involving all actors can provide an advantage for the commercialization of QKD as well as for further technological development. The ETSI industry specification group for QKD (ISG-QKD) offers a forum for creating such universally accepted standards and will promote significant leverage effects on coordination, cooperation and convergence in research, technical development and business application of QKD.

  19. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    PubMed Central

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to adjust online the weights that set the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822

  20. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grows. However, well-meaning scientists that report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only get exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions than can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and towards a culture that values full disclose of methodologies and higher standards for data analysis.

  1. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    PubMed

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    Aiming at the problem of network congestion caused by the large number of data transmissions in the wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to adjust online the weights that set the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.
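
    A hedged sketch of the single-neuron incremental PID idea described in these two records: the controller output is updated from a weighted sum of proportional, integral and differential error terms, and the weights are adapted online. The class name, constants, and Hebb-style update rule below are illustrative assumptions; the PSO initialization and WSN simulation details of the paper are omitted.

      import numpy as np

      class SingleNeuronPID:
          """Incremental PID whose three gains are neuron weights adapted online
          (a common single-neuron scheme; the constants here are illustrative)."""

          def __init__(self, K=0.2, etas=(1e-5, 1e-5, 1e-5), w0=(0.3, 0.3, 0.3)):
              self.K, self.etas = K, np.asarray(etas)
              self.w = np.asarray(w0, dtype=float)
              self.e_prev = self.e_prev2 = 0.0
              self.u = 0.0

          def step(self, target, measured_queue_len):
              e = target - measured_queue_len
              x = np.array([e - self.e_prev,                      # proportional term
                            e,                                    # integral term
                            e - 2 * self.e_prev + self.e_prev2])  # differential term
              # Incremental PID output with normalized neuron weights.
              self.u += self.K * float(np.dot(self.w / np.sum(np.abs(self.w)), x))
              # Hebb-style online weight adaptation (illustrative update rule).
              self.w += self.etas * e * self.u * x
              self.e_prev2, self.e_prev = self.e_prev, e
              return self.u   # regulated sending/forwarding rate

      # Example: drive a hypothetical node queue toward a 50-packet set point.
      ctrl = SingleNeuronPID()
      for q in (0, 10, 25, 40, 48, 52, 50):   # hypothetical queue-length measurements
          rate = ctrl.step(target=50, measured_queue_len=q)
      print(round(rate, 3))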

  2. Air Pollution over the States

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1972

    1972-01-01

    State plans for implementing air quality standards are evaluated together with problems in modeling procedures and enforcement. Monitoring networks, standards, air quality regions, and industrial problems are also discussed. (BL)

  3. SGML-Based Markup for Literary Texts: Two Problems and Some Solutions.

    ERIC Educational Resources Information Center

    Barnard, David; And Others

    1988-01-01

    Identifies the Standard Generalized Markup Language (SGML) as the best basis for a markup standard for encoding literary texts. Outlines solutions to problems using SGML and discusses the problem of maintaining multiple views of a document. Examines several ways of reducing the burden of markups. (GEA)

  4. Magnetic Reconnection and Particle Acceleration in the Solar Corona

    NASA Astrophysics Data System (ADS)

    Neukirch, Thomas

    Reconnection plays a major role for the magnetic activity of the solar atmosphere, for example solar flares. An interesting open problem is how magnetic reconnection acts to redistribute the stored magnetic energy released during an eruption into other energy forms, e.g. generating bulk flows, plasma heating and non-thermal energetic particles. In particular, finding a theoretical explanation for the observed acceleration of a large number of charged particles to high energies during solar flares is presently one of the most challenging problems in solar physics. One difficulty is the vast difference between the microscopic (kinetic) and the macroscopic (MHD) scales involved. Whereas the phenomena observed to occur on large scales are reasonably well explained by the so-called standard model, this does not seem to be the case for the small-scale (kinetic) aspects of flares. Over the past years, observations, in particular by RHESSI, have provided evidence that a naive interpretation of the data in terms of the standard solar flare/thick target model is problematic. As a consequence, the role played by magnetic reconnection in the particle acceleration process during solar flares may have to be reconsidered.

  5. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR). It is the ratio of the observed to the expected number of cases in an area, and it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there is no observed bladder cancer in an area.
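
    A hedged sketch of the SMR calculation described above (observed over expected cases per area), with a simple log-scale shrinkage standing in for the WinBUGS log-normal model fitted in the paper; the counts and the shrinkage weight are hypothetical.

      import numpy as np

      # Hypothetical area-level data: observed and expected bladder cancer cases.
      observed = np.array([0, 3, 7, 12, 2])
      expected = np.array([1.8, 2.5, 6.1, 9.4, 3.0])

      # Classical relative-risk estimate: the Standardized Morbidity Ratio.
      smr = observed / expected
      print("SMR:", np.round(smr, 2))   # note the zero where no cases were observed

      # Simple log-normal-style smoothing: shrink log-SMRs toward their mean.
      # (A stand-in for the Bayesian log-normal model; 0.5 is added to avoid log(0).)
      log_smr = np.log((observed + 0.5) / expected)
      weight = 0.7                                  # hypothetical shrinkage weight
      smoothed = np.exp(weight * log_smr + (1 - weight) * log_smr.mean())
      print("Smoothed RR:", np.round(smoothed, 2))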

  6. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    PubMed

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively incorporate tangent space intrinsic manifold regularization. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  7. A design study to develop young children's understanding of multiplication and division

    NASA Astrophysics Data System (ADS)

    Bicknell, Brenda; Young-Loveridge, Jenny; Nguyen, Nhung

    2016-12-01

    This design study investigated the use of multiplication and division problems to help 5-year-old children develop an early understanding of multiplication and division. One teacher and her class of 15 5-year-old children were involved in a collaborative partnership with the researchers. The design study was conducted over two 4-week periods in May-June and October-November. The focus in this article is on three key aspects of classroom teaching: instructional tasks, the use of representations, and discourse, including the mathematics register. Results from selected pre- and post-assessment tasks within a diagnostic interview showed that there were improvements in addition and subtraction as well as multiplication and division, even though the teaching had used multiplication and division problems. Students made progress on all four operational domains, with effect sizes ranging from approximately two thirds of a standard deviation to 2 standard deviations. Most of the improvement in students' number strategies was in moving from 'counting all' to 'counting on' and 'skip counting'. The findings challenge the idea that learning experiences in addition and subtraction should precede those in multiplication and division as suggested in some curriculum documents.
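
    Effect sizes expressed in standard-deviation units can be read as standardized mean differences (Cohen's d). The abstract does not state the exact effect-size formula used, so the sketch below, with made-up scores, is only illustrative.

      import statistics

      def cohens_d(pre, post):
          """Standardized mean difference using the pooled standard deviation."""
          n1, n2 = len(pre), len(post)
          s1, s2 = statistics.stdev(pre), statistics.stdev(post)
          pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
          return (statistics.mean(post) - statistics.mean(pre)) / pooled

      # Hypothetical task scores for 15 children before and after the teaching blocks.
      pre = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 3, 2, 1, 3]
      post = [4, 5, 3, 6, 4, 5, 3, 3, 5, 4, 6, 5, 4, 3, 5]
      print(round(cohens_d(pre, post), 2))   # roughly 2 SDs for these made-up data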

  8. A minimal scale invariant axion solution to the strong CP-problem

    NASA Astrophysics Data System (ADS)

    Tokareva, Anna

    2018-05-01

    We present a scale-invariant extension of the Standard Model allowing for the Kim-Shifman-Vainshtein-Zakharov (KSVZ) axion solution of the strong CP problem in QCD. We add the minimal number of new particles and show that the Peccei-Quinn scalar might be identified with the complex dilaton field. Scale invariance, together with the Peccei-Quinn symmetry, is broken spontaneously near the Planck scale before inflation, which is driven by the Standard Model Higgs field. We present a set of general conditions which makes this scenario viable and an explicit example of an effective theory possessing spontaneous breaking of scale invariance. We show that this description works both for inflation and low-energy physics in the electroweak vacuum. This scenario can provide a self-consistent inflationary stage and, at the same time, successfully avoid the cosmological bounds on the axion. Our general predictions are the existence of a colored TeV-mass fermion and the QCD axion. The latter has all the properties of the KSVZ axion but does not contribute to dark matter. This axion can be searched for via its mixing with a photon in an external magnetic field.

  9. Hopping in the Crowd to Unveil Network Topology.

    PubMed

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-13

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  10. Hopping in the Crowd to Unveil Network Topology

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-01

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  11. Inconsistency in the Reporting of Adverse Events in Total Ankle Arthroplasty: A Systematic Review of the Literature.

    PubMed

    Mercer, Jeff; Penner, Murray; Wing, Kevin; Younger, Alastair S E

    2016-02-01

    Systems for classifying complications have been proposed for many surgical subspecialties. The goal of this systematic review was to analyze the number and frequency of different terms used to identify complications in total ankle arthroplasty. We hypothesized that this terminology would be highly variable, supporting a need for a standardized system of reporting. Studies that met predefined inclusion/exclusion criteria were analyzed to identify terminology used to describe adverse events. All terms were then tabulated and quantified with regard to diversity and frequency of use across all included studies. Terms were also grouped into 10 categories, and the number of reported occurrences of each adverse event was calculated. A reporting tool was then developed. Of 572 unique terms used to describe adverse outcomes in 117 studies, 55.9% (320/572) were used in only a single study. The category that was most frequently reported was revision surgery, with 86% of papers reporting on this event using 115 different terms. Other categories included "additional non-revision surgeries" (74% of papers, 93 terms), "loosening/osteolysis" (63% of papers, 86 terms), "fractures" (60% of papers, 53 terms), "wound problems" (52% of papers, 27 terms), "infection" (52% of papers, 27 terms), "implant problems" (50% of papers, 57 terms), "soft tissue injuries" (31% of papers, 30 terms), "heterotopic ossification" (22% of papers, 17 terms), and "pain" (18% of papers, 11 terms). The reporting of complications and adverse outcomes for total ankle arthroplasty was highly variable. This lack of consistency impedes the accurate reporting and interpretation of data required for the development of cohesive, evidence-based treatment guidelines for end-stage ankle arthritis. Standardized reporting tools are urgently needed. This study presents a prototype worksheet for the standardized assessment and reporting of adverse events. Level-III, decision analyses, systematic review of Level III studies and above. © The Author(s) 2015.

  12. Why good accountants do bad audits.

    PubMed

    Bazerman, Max H; Loewenstein, George; Moore, Don A

    2002-11-01

    On July 30, President Bush signed into law the Sarbanes-Oxley Act addressing corporate accountability. A response to recent financial scandals, the law tightened federal controls over the accounting industry and imposed tough new criminal penalties for fraud. The president proclaimed, "The era of low standards and false profits is over." If only it were that easy. The authors don't think corruption is the main cause of bad audits. Rather, they claim, the problem is unconscious bias. Without knowing it, we all tend to discount facts that contradict the conclusions we want to reach, and we uncritically embrace evidence that supports our positions. Accountants might seem immune to such distortions because they work with seemingly hard numbers and clear-cut standards. But the corporate-auditing arena is particularly fertile ground for self-serving biases. Because of the often subjective nature of accounting and the close relationships between accounting firms and their corporate clients, even the most honest and meticulous of auditors can unintentionally massage the numbers in ways that mask a company's true financial status, thereby misleading investors, regulators, and even management. Solving this problem will require far more aggressive action than the U.S. government has taken thus far. What's needed are practices and regulations that recognize the existence of bias and moderate its effects. True auditor independence will entail fundamental changes to the way the accounting industry operates, including full divestiture of consulting and tax services, rotation of auditing firms, and fixed-term contracts that prohibit client companies from firing their auditors. Less tangibly, auditors must come to appreciate the profound impact of self-serving biases on their judgment.

  13. On multigrid solution of the implicit equations of hydrodynamics. Experiments for the compressible Euler equations in general coordinates

    NASA Astrophysics Data System (ADS)

    Kifonidis, K.; Müller, E.

    2012-08-01

    Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations, are presented to evaluate the convergence behavior, and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.

  14. Aerodynamic coefficients in generalized unsteady thin airfoil theory

    NASA Technical Reports Server (NTRS)

    Williams, M. H.

    1980-01-01

    Two cases are considered: (1) rigid body motion of an airfoil-flap combination consisting of vertical translation of given amplitude, rotation of given amplitude about a specified axis, and rotation of given amplitude of the control surface alone about its hinge; the upwash for this problem is defined mathematically; and (2) sinusoidal gust of given amplitude and wave number, for which the upwash is defined mathematically. Simple universal formulas are presented for the most important aerodynamic coefficients in unsteady thin airfoil theory. The lift and moment induced by a generalized gust are evaluated explicitly in terms of the gust wavelength. Similarly, in the control surface problem, the lift, moment, and hinge moments are given as explicit algebraic functions of hinge location. These results can be used together with any of the standard numerical inversion routines for the elementary loads (pitch and heave).

  15. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  16. Inverse Scattering and Local Observable Algebras in Integrable Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Alazzawi, Sabina; Lechner, Gandalf

    2017-09-01

    We present a solution method for the inverse scattering problem for integrable two-dimensional relativistic quantum field theories, specified in terms of a given massive single particle spectrum and a factorizing S-matrix. An arbitrary number of massive particles transforming under an arbitrary compact global gauge group is allowed, thereby generalizing previous constructions of scalar theories. The two-particle S-matrix S is assumed to be an analytic solution of the Yang-Baxter equation with standard properties, including unitarity, TCP invariance, and crossing symmetry. Using methods from operator algebras and complex analysis, we identify sufficient criteria on S that imply the solution of the inverse scattering problem. These conditions are shown to be satisfied in particular by so-called diagonal S-matrices, but presumably also in other cases such as the O(N)-invariant nonlinear σ-models.

  17. Behavior problems and placement change in a national child welfare sample: a prospective study.

    PubMed

    Aarons, Gregory A; James, Sigrid; Monn, Amy R; Raghavan, Ramesh; Wells, Rebecca S; Leslie, Laurel K

    2010-01-01

    There is ongoing debate regarding the impact of youth behavior problems on placement change in child welfare compared to the impact of placement change on behavior problems. Existing studies provide support for both perspectives. The purpose of this study was to prospectively examine the relations of behavior problems and placement change in a nationally representative sample of youths in the National Survey of Child and Adolescent Well-Being. The sample consisted of 500 youths in the child welfare system with out-of-home placements over the course of the National Survey of Child and Adolescent Well-Being study. We used a prospective cross-lag design and path analysis to examine reciprocal effects of behavior problems and placement change, testing an overall model and models examining effects of age and gender. In the overall model, out of a total of eight path coefficients, behavior problems significantly predicted placement changes for three paths and placement change predicted behavior problems for one path. Internalizing and externalizing behavior problems at baseline predicted placement change between baseline and 18 months. Behavior problems at an older age and externalizing behavior at 18 months appear to confer an increased risk of placement change. Of note, among female subjects, placement changes later in the study predicted subsequent internalizing and externalizing behavior problems. In keeping with recommendations from a number of professional bodies, we suggest that initial and ongoing screening for internalizing and externalizing behavior problems be instituted as part of standard practice for youths entering or transitioning in the child welfare system.

  18. CLIPSITS - CLIPS INTELLIGENT TUTORING SYSTEM

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The CLIPS Intelligent Tutoring System (CLIPSITS) is designed to be used to learn CLIPS, the C-language Integrated Production System expert system shell developed by the Software Technology Branch at Johnson Space Center. The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. This version of CLIPSITS is compatible with the Version 4.2 and 4.3 CLIPS User's Guide. However, the program does not cover any new features of CLIPS v4.3 that were added since the release of v4.2. The chapter numbers in the CLIPS User's Guide correspond directly with the lesson numbers in CLIPSITS. Each lesson in the program contains anywhere from 1 to 10 problems. Most of these have multiple parts. The student is given a subset of these problems from each lesson to work. The actual number of problems presented depends on how well the student masters the previous problem(s). The progression through these lessons is maintained in a personalized file under the student's name. As with most computer languages, there is usually more than one way to solve a problem. CLIPSITS attempts to be as flexible as possible and to allow as many correct solutions as possible. CLIPSITS gives the student the option of setting his/her own colors for the screen interface and the option of redefining special keystroke combinations used within the program. CLIPSITS requires an IBM PC compatible with 640K RAM and optional 2 or 3 button mouse. A 286- or 386-based machine is preferable. Performance will be somewhat slower on an XT class machine. The program must be installed on a hard disk with 825 KB space available. The program was developed in 1989. The standard distribution media is three 5.25" IBM PC DOS format diskettes. The program is also sold bundled with CLIPS for a special combined price as COS-10025. NOTE: Only the executable code is distributed. Supporting documentation is included on the diskettes. IBM, IBM PC and XT are registered trademarks of International Business Machines Corporation.

  19. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
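
    The abstract above does not spell out ERGC's internals, so the following is only a generic sketch of the reference-based idea it builds on: store the positions where a target genome diverges from a shared reference instead of the full sequence. The toy diff below is illustrative Python, not the ERGC algorithm.

      # Toy reference-based encoding (illustration only; real tools such as ERGC
      # also handle insertions, deletions and unaligned segments).
      def diff_against_reference(reference: str, target: str):
          """Record only the positions where target differs from reference."""
          return [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]

      def reconstruct(reference: str, edits) -> str:
          seq = list(reference)
          for i, base in edits:
              seq[i] = base
          return "".join(seq)

      ref = "ACGTACGTACGT"
      tgt = "ACGTTCGTACGA"
      edits = diff_against_reference(ref, tgt)
      print(edits)                          # [(4, 'T'), (11, 'A')]
      assert reconstruct(ref, edits) == tgt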

  20. Extensibility and limitations of FDDI

    NASA Technical Reports Server (NTRS)

    Game, David; Maly, Kurt J.

    1990-01-01

    Recently two standards for Metropolitan Area Networks (MANs), Fiber Distributed Data Interface (FDDI) and Distributed Queue Dual Bus (DQDB), have emerged as the primary competitors for the MAN arena. Great interest exists in building higher speed networks which support large numbers of node and greater distance, and it is not clear what types of protocols are needed for this type of environment. There is some question as to whether or not these MAN standards can be extended to such environments. The extensibility of FDDI to the Gbps range and a long distance environment is investigated. Specification parameters which affect performance are shown and a measure is provided for predicting utilization of FDDI. A comparison of FDDI at 100 Mbps and 1 Gbps is presented. Some specific problems with FDDI are addressed and modifications which improve the viability of FDDI in such high speed networks are investigated.

  1. A New High-Order Spectral Difference Method for Simulating Viscous Flows on Unstructured Grids with Mixed Elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mao; Qiu, Zihua; Liang, Chunlei

    In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for 2D wave equation and Euler equations. Our present results demonstrated that the SDRT method is stable and high-order accurate for a number of test problems by using triangular-, quadrilateral-, and mixed-element meshes.

  2. Mechanism for thermal relic dark matter of strongly interacting massive particles.

    PubMed

    Hochberg, Yonit; Kuflik, Eric; Volansky, Tomer; Wacker, Jay G

    2014-10-24

    We present a new paradigm for achieving thermal relic dark matter. The mechanism arises when a nearly secluded dark sector is thermalized with the standard model after reheating. The freeze-out process is a number-changing 3→2 annihilation of strongly interacting massive particles (SIMPs) in the dark sector, and points to sub-GeV dark matter. The couplings to the visible sector, necessary for maintaining thermal equilibrium with the standard model, imply measurable signals that will allow coverage of a significant part of the parameter space with future indirect- and direct-detection experiments and via direct production of dark matter at colliders. Moreover, 3→2 annihilations typically predict sizable 2→2 self-interactions which naturally address the "core versus cusp" and "too-big-to-fail" small-scale structure formation problems.

  3. Rational design of gold nanoparticle toxicology assays: a question of exposure scenario, dose and experimental setup.

    PubMed

    Taylor, Ulrike; Rehbock, Christoph; Streich, Carmen; Rath, Detlef; Barcikowski, Stephan

    2014-09-01

    Many studies have evaluated the toxicity of gold nanoparticles, although reliable predictions based on these results are rare. In order to overcome this problem, this article highlights strategies to improve comparability and standardization of nanotoxicological studies. To this end, it is proposed that we should adapt the nanomaterial to the addressed exposure scenario, using ligand-free nanoparticle references in order to differentiate ligand effects from size effects. Furthermore, surface-weighted particle dosing referenced to the biologically relevant parameter (e.g., cell number or organ mass) is proposed as the gold standard. In addition, it is recommended that we should shift the focus of toxicological experiments from 'live-dead' assays to the assessment of cell function, as this strategy allows observation of bioresponses at lower doses that are more relevant for in vivo scenarios.

  4. Chameleon field dynamics during inflation

    NASA Astrophysics Data System (ADS)

    Saba, Nasim; Farhoudi, Mehrdad

    By studying the chameleon model during inflation, we investigate whether it can be a successful inflationary model, wherein we employ the common typical potential usually used in the literature. Thus, in the context of the slow-roll approximations, we obtain the e-folding number for the model to verify the ability of resolving the problems of standard big bang cosmology. Meanwhile, we apply the constraints on the form of the chosen potential and also on the equation of state parameter coupled to the scalar field. However, the results of the present analysis show that there is not much chance of having the chameleonic inflation. Hence, we suggest that if through some mechanism the chameleon model can be reduced to the standard inflationary model, then it may cover the whole era of the universe from the inflation up to the late time.

  5. An Improved Quantitative Real-Time PCR Assay for the Enumeration of Heterosigma akashiwo (Raphidophyceae) Cysts Using a DNA Debris Removal Method and a Cyst-Based Standard Curve.

    PubMed

    Kim, Joo-Hwan; Kim, Jin Ho; Wang, Pengbin; Park, Bum Soo; Han, Myung-Soo

    2016-01-01

    The identification and quantification of Heterosigma akashiwo cysts in sediments by light microscopy can be difficult due to the small size and morphology of the cysts, which are often indistinguishable from those of other types of algae. Quantitative real-time PCR (qPCR) based assays represent a potentially efficient method for quantifying the abundance of H. akashiwo cysts, although standard curves must be based on cyst DNA rather than on vegetative cell DNA due to differences in gene copy number and DNA extraction yield between these two cell types. Furthermore, qPCR on sediment samples can be complicated by the presence of extracellular DNA debris. To solve these problems, we constructed a cyst-based standard curve and developed a simple method for removing DNA debris from sediment samples. This cyst-based standard curve was compared with a standard curve based on vegetative cells, as vegetative cells may have twice the gene copy number of cysts. To remove DNA debris from the sediment, we developed a simple method involving dilution with distilled water and heating at 75°C. A total of 18 sediment samples were used to evaluate this method. Cyst abundance determined using the qPCR assay without DNA debris removal yielded results up to 51-fold greater than with direct counting. By contrast, a highly significant correlation was observed between cyst abundance determined by direct counting and the qPCR assay in conjunction with DNA debris removal (r2 = 0.72, slope = 1.07, p < 0.001). Therefore, this improved qPCR method should be a powerful tool for the accurate quantification of H. akashiwo cysts in sediment samples.

  6. Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication

    NASA Astrophysics Data System (ADS)

    Kadlec, J.

    2013-12-01

    The growing need for sharing and integration of hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated set-up and deployment. Many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a webhosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture of a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet webhosting services. The architecture and design is validated by developing Hydroserver Lite: a PHP and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for Advancement of Hydrologic Sciences (CUAHSI) hydrologic information system. It is already being used for management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the Hydroserver Lite software has been installed on multiple different free and low-cost webhosting sites including Godaddy, Bluehost and 000webhost. The number of steps required to set-up the server is compared with the number of steps required to set-up other standards-compliant hydrologic data hosting systems including THREDDS, IstSOS and MapServer SOS.

  7. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
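
    As a hedged illustration of the kind of search loop evaluated above (not the authors' code), a minimal genetic algorithm with truncation selection, one-point crossover and mutation on an invented multi-modal landscape might look like this:

      import random

      def fitness(genes):
          # Invented multi-modal test landscape: two "hills" per gene (illustration only).
          return sum(max(1 - abs(g - 0.3), 1 - abs(g - 0.8)) for g in genes)

      def ga(n_genes=5, pop_size=40, generations=100, mut_rate=0.1, seed=1):
          random.seed(seed)
          pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              next_pop = pop[:2]                                  # elitism
              while len(next_pop) < pop_size:
                  a, b = random.sample(pop[:pop_size // 2], 2)    # truncation selection
                  cut = random.randrange(1, n_genes)
                  child = a[:cut] + b[cut:]                       # one-point crossover
                  if random.random() < mut_rate:
                      child[random.randrange(n_genes)] = random.random()
                  next_pop.append(child)
              pop = next_pop
          best = max(pop, key=fitness)
          return best, fitness(best)

      print(ga())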

  8. Mission Mathematics: Linking Aerospace and the NCTM Standards, K-6.

    ERIC Educational Resources Information Center

    Hynes, Mary Ellen, Ed.

    This book is designed to present mathematical problems and tasks that focus on the National Council of Teachers of Mathematics (NCTM) curriculum and evaluation standards in the context of aerospace activities. It aims at actively engaging students in NCTM's four process standards: (1) problem solving; (2) mathematical reasoning; (3) communicating…

  9. ISO WD 1856. Guideline for radiation exposure of nonmetallic materials. Present status

    NASA Astrophysics Data System (ADS)

    Briskman, B. A.

    In the framework of the International Organization for Standardization (ISO) activity we started development of international standard series for space environment simulation at on-ground tests of materials. The proposal was submitted to ISO Technical Committee 20 (Aircraft and Space Vehicles), Subcommittee 14 (Space Systems and Operations) and was approved as Working Draft 15856 at the Los-Angeles meeting (1997). A draft of the first international standard "Space Environment Simulation for Radiation Tests of Materials" (1st version) was presented at the 7th International Symposium on Materials in Space Environment (Briskman et al, 1997). The 2nd version of the standard was limited to nonmetallic materials and presented at the 20th Space Simulation Conference (Briskman and Borson, 1998). It covers the testing of nonmetallic materials embracing also polymer composite materials including metal components (metal matrix composites) to simulated space radiation. The standard does not cover semiconductor materials. The types of simulated radiation include charged particles (electrons and protons), solar ultraviolet radiation, and soft X-radiation of solar flares. Synergistic interactions of the radiation environment are covered only for these natural and some induced environmental effects. This standard outlines the recommended methodology and practices for the simulation of space radiation on materials. Simulation methods are used to reproduce the effects of the space radiation environment on materials that are located on surfaces of space vehicles and behind shielding. It was discovered that the problem of radiation environment simulation is very complex and the approaches of different specialists and countries to the problem are sometimes quite opposite. To the present moment we developed seven versions of the standard. The last version is a compromise between these approaches. It was approved at the last ISO TC20/SC14/WG4 meeting in Houston, October 2002. At a splinter meeting of Int. Conference on Materials in a Space Environment, Noordwijk, Netherlands, ESA, June 2003, the experts from ESA, USA, France, Russia and Japan discussed the last version of the draft and approved it with a number of notes. A revised version of the standard will be presented this May at ISO TC20/SC14 meeting in Russia.

  10. On Making a Distinguished Vertex Minimum Degree by Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes

    For directed and undirected graphs, we study the problem to make a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". On the contrary, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.

  11. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  12. [Topical problems of social policy with regard to young families].

    PubMed

    Gerdzhikova, T

    1990-01-01

    The problem of housing is in the focus of the difficulties of young families. The subjective and objective factors cause inadequate effectiveness of housing legislation for young families. A committee in charge of housing policy has had serious shortcomings in lawmaking. The law spelling out who is entitled to an apartment applied to a 4-member family, however, this definition could also refer to a mother with 3 children (sometimes from different biological fathers). Objective factors of the inadequacies include the insufficient number of apartments and the low quality of newly constructed housing. The last accounting of the housing stock of Bulgaria made in December 1985 showed that despite positive results of housing construction the available number of units did not meet the demand. Bulgaria falls short of the number of apartments/1000 population in comparison to developed countries, and this housing stock does not correspond to contemporary standards. There is also a lack of qualified workers in the industry. The material security and welfare of young families is another concern. 82.2% of young families have 1-2 children compared to 26.4% of the rest of the families. About 80% of young families receive help from their parents. The general appraisal of the effectiveness of legislation for young households indicates that women under 30 gave birth to the majority of children born. 124,582 live births occurred in 1967; 149,196 births in 1974; and only 122,303 births in 1984. Mothers up to the age of 30 were responsible for 105,757 births in 1967; 132,006 births in 1974; and 95,593 births in 1984. Marital fertility increased during 1967-74 among women aged 15-30 as a result of a pronatalist policy in existence during 1967-73, but a reversal was apparent in the following years because of the decline of the living standard of young families.

  13. Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations

    NASA Astrophysics Data System (ADS)

    Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans

    2017-01-01

    Nowadays, micromagnetic simulations are a common tool for studying a wide range of different magnetic phenomena, including the ferromagnetic resonance. A technique for evaluating reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite difference and finite element numerical methods, with OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for evaluation of new simulator tools, and to help with numerical calculation of ferromagnetic resonance spectra and modes in general.
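
    The spectrum-extraction step described above (Fourier transforming the spatially averaged magnetization response to a small excitation) can be sketched generically; the ringdown signal below is synthetic stand-in data, not OOMMF or Nmag output:

      import numpy as np

      # Synthetic stand-in for the spatially averaged magnetization component
      # sampled every dt after a small excitation (replace with simulator output).
      dt = 5e-12                      # 5 ps sampling interval, illustrative
      t = np.arange(0, 20e-9, dt)
      m_y = 0.01 * np.exp(-t / 5e-9) * np.sin(2 * np.pi * 8.25e9 * t)   # ~8.25 GHz ringdown

      spectrum = np.abs(np.fft.rfft(m_y)) ** 2
      freqs = np.fft.rfftfreq(len(m_y), dt)
      print(f"dominant mode near {freqs[np.argmax(spectrum[1:]) + 1] / 1e9:.2f} GHz")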

  14. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    NASA Astrophysics Data System (ADS)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680 beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width 340 and circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient advanced quantum-computation techniques are developed, they nevertheless provide a valid baseline for research targeting a reduction of the algorithmic-level resource requirements, implying that a reduction by many orders of magnitude is necessary for the algorithm to become practical.

  15. Addressing Beyond Standard Model physics using cosmology

    NASA Astrophysics Data System (ADS)

    Ghalsasi, Akshay

    We have consensus models for both particle physics (i.e. the standard model) and cosmology (i.e. ΛCDM). Given certain assumptions about the initial conditions of the universe, the marriage of the standard model (SM) of particle physics and ΛCDM cosmology has been phenomenally successful in describing the universe we live in. However, it is quite clear that all is not well. The three biggest problems that the SM faces today are baryogenesis, dark matter and dark energy. These problems, along with the problem of neutrino masses, indicate the existence of physics beyond the SM. Evidence of baryogenesis, dark matter and dark energy all comes from astrophysical and cosmological observations. Cosmology also provides the best (model dependent) constraints on neutrino masses. In this thesis I will try to address the following problems: 1) addressing the origin of dark energy (DE) using non-standard neutrino cosmology and exploring the effects of the non-standard neutrino cosmology on terrestrial and cosmological experiments; 2) addressing the matter-antimatter asymmetry of the universe.

  16. Divide et impera: subgoaling reduces the complexity of probabilistic inference and problem solving

    PubMed Central

    Maisto, Domenico; Donnarumma, Francesco; Pezzulo, Giovanni

    2015-01-01

    It has long been recognized that humans (and possibly other animals) usually break problems down into smaller and more manageable problems using subgoals. Despite a general consensus that subgoaling helps problem solving, it is still unclear what the mechanisms guiding online subgoal selection are during the solution of novel problems for which predefined solutions are not available. Under which conditions does subgoaling lead to optimal behaviour? When is subgoaling better than solving a problem from start to finish? Which is the best number and sequence of subgoals to solve a given problem? How are these subgoals selected during online inference? Here, we present a computational account of subgoaling in problem solving. Following Occam's razor, we propose that good subgoals are those that permit planning solutions and controlling behaviour using less information resources, thus yielding parsimony in inference and control. We implement this principle using approximate probabilistic inference: subgoals are selected using a sampling method that considers the descriptive complexity of the resulting sub-problems. We validate the proposed method using a standard reinforcement learning benchmark (four-rooms scenario) and show that the proposed method requires less inferential steps and permits selecting more compact control programs compared to an equivalent procedure without subgoaling. Furthermore, we show that the proposed method offers a mechanistic explanation of the neuronal dynamics found in the prefrontal cortex of monkeys that solve planning problems. Our computational framework provides a novel integrative perspective on subgoaling and its adaptive advantages for planning, control and learning, such as for example lowering cognitive effort and working memory load. PMID:25652466

  17. Cooperative Solutions in Multi-Person Quadratic Decision Problems: Finite-Horizon and State-Feedback Cost-Cumulant Control Paradigm

    DTIC Science & Technology

    2007-01-01

    ... cooperative cost-cumulant control regime for the class of multi-person single-objective decision problems characterized by quadratic random costs and ... finite-horizon integral quadratic cost associated with a linear stochastic system. Since this problem formation is parameterized by the number of cost ...

  18. Quality of blood culture testing - a survey in intensive care units and microbiological laboratories across four European countries

    PubMed Central

    2013-01-01

    Introduction Blood culture (BC) testing before initiation of antimicrobial therapy is recommended as a standard of care in international sepsis guidelines and has been shown to reduce intensive care unit (ICU) stay, antibiotic use, and costs in hospitalized patients. Whereas microbiological laboratory practice has been highly standardized, shortfalls in the preanalytic procedures in the ICU (that is indication, time-to-incubation, blood volume and numbers of BC sets) have a significant effect on the diagnostic yield. The objective of this study was to gain insights into current practices regarding BC testing in intensive care units. Methods Qualitative survey, data collection by 138 semi-structured telephone interviews in four European countries (Italy, UK, France and Germany) between September and November 2009 in 79 clinical microbiology laboratories (LABs) and 59 ICUs. Results Whereas BC testing is expected to remain the gold standard for sepsis diagnostics in all countries, there are substantial differences regarding preanalytic procedures. The decision to launch BC testing is carried out by physicians vs. ICU nurses in the UK in 92 vs. 8%, in France in 75 vs. 25%, in Italy in 88 vs. 12% and in Germany in 92 vs. 8%. Physicians vs. nurses collect BCs in the UK in 77 vs. 23%, in France in 0 vs. 100%, in Italy in 6 vs. 94% and in Germany in 54 vs. 46%. The mean time from blood collection to incubation in the UK is 2 h, in France 3 h, in Italy 4 h, but 20 h in German remote LABs (2 h in in-house LABs), due to the large number of remote nonresident microbiological laboratories in Germany. There were major differences between the perception of the quality of BC testing between ICUs and LABs. Among German ICU respondents, 62% reported that they have no problems with BC testing, 15% reported time constraints, 15% cost pressure, and only 8% too long time to incubation. However, the corresponding LABs of these German ICUs reported too many false positive results due to preanalytical contaminations (49%), insufficient numbers of incoming BC sets (47%), long transportation time (41%) or cost pressure (18%). Conclusions There are considerable differences in the quality of BC testing across European countries. In Germany, time to incubation is a considerable problem due to the increasing number of remote LABs. This is a major issue of concern to physicians aiming to implement sepsis guidelines in the ICUs. PMID:24144084

  19. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.

  20. Incorporating the Common Core's Problem Solving Standard for Mathematical Practice into an Early Elementary Inclusive Classroom

    ERIC Educational Resources Information Center

    Fletcher, Nicole

    2014-01-01

    Mathematics curriculum designers and policy decision makers are beginning to recognize the importance of problem solving, even at the earliest stages of mathematics learning. The Common Core includes sense making and perseverance in solving problems in its standards for mathematical practice for students at all grade levels. Incorporating problem…

  1. Promoting Access to Common Core Mathematics for Students with Severe Disabilities through Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Spooner, Fred; Saunders, Alicia; Root, Jenny; Brosh, Chelsi

    2017-01-01

    There is a need to teach the pivotal skill of mathematical problem solving to students with severe disabilities, moving beyond basic skills like computation to higher level thinking skills. Problem solving is emphasized as a Standard for Mathematical Practice in the Common Core State Standards across grade levels. This article describes a…

  2. Analyzing Multilevel Data: An Empirical Comparison of Parameter Estimates of Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2011-01-01

    Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…

  3. Water cycle algorithm: A detailed standard code

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA), has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, whose performance and efficiency have been demonstrated for solving optimization problems. The WCA has an interesting and simple concept, and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
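
    The open source code referenced above is the authoritative implementation; purely as a rough, simplified sketch of the water-cycle idea (streams drift toward rivers, rivers toward the sea, with evaporation and raining restarts), a minimizer for a test function could look like this:

      import numpy as np

      def wca_sketch(f, dim=2, pop=30, n_rivers=4, iters=200, lb=-5.0, ub=5.0,
                     d_max=1e-3, seed=0):
          """Very simplified water-cycle-style search; illustration only, not the WCA reference code."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(lb, ub, (pop, dim))
          for _ in range(iters):
              x = x[np.argsort([f(p) for p in x])]      # best solution first
              sea, rivers = x[0], x[1:1 + n_rivers]
              # streams flow toward a randomly assigned river
              for i in range(1 + n_rivers, pop):
                  river = rivers[rng.integers(n_rivers)]
                  x[i] += rng.random(dim) * 2.0 * (river - x[i])
              # rivers flow toward the sea; restart (rain) if a river reaches the sea
              for j in range(1, 1 + n_rivers):
                  x[j] += rng.random(dim) * 2.0 * (sea - x[j])
                  if np.linalg.norm(sea - x[j]) < d_max:
                      x[j] = rng.uniform(lb, ub, dim)
              x = np.clip(x, lb, ub)
          best = min(x, key=f)
          return best, f(best)

      print(wca_sketch(lambda p: float(np.sum(p ** 2))))   # sphere test function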

  4. Constrained-pairing mean-field theory. IV. Inclusion of corresponding pair constraints and connection to unrestricted Hartree-Fock theory.

    PubMed

    Tsuchimochi, Takashi; Henderson, Thomas M; Scuseria, Gustavo E; Savin, Andreas

    2010-10-07

    Our previously developed constrained-pairing mean-field theory (CPMFT) is shown to map onto an unrestricted Hartree-Fock (UHF) type method if one imposes a corresponding pair constraint to the correlation problem that forces occupation numbers to occur in pairs adding to one. In this new version, CPMFT has all the advantages of standard independent particle models (orbitals and orbital energies, to mention a few), yet unlike UHF, it can dissociate polyatomic molecules to the correct ground-state restricted open-shell Hartree-Fock atoms or fragments.

  5. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
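
    The method above modifies the standard EM algorithm for Gaussian mixtures; the baseline being modified (plain 1-D EM, not the Mercer-kernel feature-space variant) is sketched below for reference:

      import numpy as np

      def gmm_em_1d(x, k=2, iters=100, seed=0):
          """Plain EM for a 1-D Gaussian mixture: the baseline the paper modifies."""
          rng = np.random.default_rng(seed)
          n = len(x)
          mu = rng.choice(x, size=k, replace=False)
          var = np.full(k, np.var(x))
          pi = np.full(k, 1.0 / k)
          for _ in range(iters):
              # E-step: responsibilities of each component for each point
              dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              resp = pi * dens
              resp /= resp.sum(axis=1, keepdims=True)
              # M-step: update weights, means and variances
              nk = resp.sum(axis=0)
              pi = nk / n
              mu = (resp * x[:, None]).sum(axis=0) / nk
              var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return pi, mu, var

      x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 0.5, 200)])
      print(gmm_em_1d(x))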

  6. Evolution of a standard microprocessor-based space computer

    NASA Technical Reports Server (NTRS)

    Fernandez, M.

    1980-01-01

    An existing in-inventory computer hardware/software package (B-1 RFS/ECM) was repackaged and applied to multiple missile/space programs. Concurrent with the application efforts, low-risk modifications were made to the computer from program to program to take advantage of newer, advanced technology and to meet increasingly more demanding requirements (computational and memory capabilities, longer life, and fault tolerant autonomy). It is concluded that microprocessors hold promise in a number of critical areas for future space computer applications. However, the benefits of the DoD VHSIC Program are required and the old proliferation problem must be revisited.

  7. A proposed atom interferometry determination of G at 10^-5 using a cold atomic fountain

    NASA Astrophysics Data System (ADS)

    Rosi, G.

    2018-02-01

    In precision metrology, the determination of the Newtonian gravity constant G represents a real problem, since its history is plagued by huge unknown discrepancies between a large number of independent experiments. In this paper, we propose a novel experimental setup for measuring G with a relative accuracy of 10^-5, using a standard cold atomic fountain and matter wave interferometry. We discuss in detail the major sources of systematic errors, and provide the expected statistical uncertainty. The feasibility of determining G at the 10^-6 level is also discussed.

  8. Risks associated with preweaning mortality in 855 litters on 39 commercial outdoor pig farms in England.

    PubMed

    KilBride, A L; Mendl, M; Statham, P; Held, S; Harris, M; Marchant-Forde, J N; Booth, H; Green, L E

    2014-11-01

    A prospective longitudinal study was carried out on 39 outdoor breeding pig farms in England in 2003 and 2004 to investigate the risks associated with mortality in liveborn preweaning piglets. Researchers visited each farm and completed a questionnaire with the farmer and made observations of the paddocks, huts and pigs. The farmer recorded the number of piglets born alive and stillborn, fostered on and off and the number of piglets that died before weaning for 20 litters born after the visit. Data were analysed from a cohort of 9424 liveborn piglets from 855 litters. Overall 1274 liveborn piglets (13.5%) died before weaning. A mixed effect binomial model was used to investigate the associations between preweaning mortality and farm and litter level factors, controlling for litter size and number of piglets stillborn and fostered. Increased risk of mortality was associated with fostering piglets over 24h of age, organic certification or membership of an assurance scheme with higher welfare standards, farmer's perception that there was a problem with pest birds, use of medication to treat coccidiosis and presence of lame sows on the farm. Reduced mortality was associated with insulated farrowing huts and door flaps, women working on the farm and the farmer reporting a problem with foxes.

  9. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and minimize the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised in which the fuzzy expert experience controller (FEEC) is integrated with the automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of the five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing using a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the proposed optimization algorithm performs more efficiently and effectively than the standard genetic algorithm in the mixed assembly line sequencing model. PMID:24982962

  10. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. The framework has been implemented in two different systems. One system is based on the Open Software Foundation/Distributed Computing Environment and the other is based on the World Wide Web. This framework applies traditional methods of library classification and cataloging in new ways. It is compatible with two traditional styles of selecting information: searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library and information science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources.
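
    A minimal sketch of the two-level resolution described above: a call number maps to a virtual-shelf (subject class) identifier, and a location directory then maps that identifier to whatever server currently hosts the shelf. All identifiers below are invented for illustration.

      # Hypothetical mappings: a location-independent call number resolves first to a
      # virtual-shelf (subject class) identifier, and only then to an actual host.
      call_number_to_shelf = {
          "QZ-200-0042": "shelf:oncology",       # call numbers built from standard vocabulary codes
          "WG-100-0007": "shelf:cardiology",
      }
      location_directory = {
          "shelf:oncology": "http://server-a.example.org/",    # can change without touching call numbers
          "shelf:cardiology": "http://server-b.example.org/",
      }

      def resolve(call_number: str) -> str:
          shelf = call_number_to_shelf[call_number]
          return location_directory[shelf] + call_number

      print(resolve("QZ-200-0042"))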

  11. Visual function, driving safety, and the elderly.

    PubMed

    Keltner, J L; Johnson, C A

    1987-09-01

    The authors have conducted a survey of the Departments of Motor Vehicles in all 50 states, the District of Columbia, and Puerto Rico requesting information about the visual standards, accidents, and conviction rates for different age groups. In addition, we have reviewed the literature on visual function and traffic safety. Elderly drivers have a greater number of vision problems that affect visual acuity and/or peripheral visual fields. Although the elderly are responsible for a small percentage of the total number of traffic accidents, the types of accidents they are involved in (e.g., failure to yield the right-of-way, intersection collisions, left turns onto crossing streets) may be related to peripheral and central visual field problems. Because age-related changes in performance occur at different rates for various individuals, licensing of the elderly driver should be based on functional abilities rather than age. Based on information currently available, we can make the following recommendations: (1) periodic evaluations of visual acuity and visual fields should be performed every 1 to 2 years in the population over age 65; (2) drivers of any age with multiple accidents or moving violations should have visual acuity and visual fields evaluated; and (3) a system should be developed for physicians to report patients with potentially unsafe visual function. The authors believe that these recommendations may help to reduce the number of traffic accidents that result from peripheral visual field deficits.

  12. Comparative Study on High-Order Positivity-preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry V.; Yee, Helen M.; Sjogreen, Bjorn Axel

    2013-01-01

    The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for the case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see (LeVeque & Yee 1990; Griffiths et al. 1992; Lafon & Yee 1996; Yee et al. 2012). This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.

  13. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including energy charge and demand charge. The potential value of BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative, and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with the ones using mathematical programming based methods for validation.

  14. 3 Tesla breast MR imaging as a problem-solving tool: Diagnostic performance and incidental lesions

    PubMed Central

    Spick, Claudio; Szolar, Dieter H. M.; Preidler, Klaus W.; Reittner, Pia; Rauch, Katharina; Brader, Peter; Tillich, Manfred

    2018-01-01

    Purpose To investigate the diagnostic performance and incidental lesion yield of 3T breast MRI if used as a problem-solving tool. Methods This retrospective, IRB-approved, cross-sectional, single-center study comprised 302 consecutive women (mean: 50±12 years; range: 20–79 years) who were undergoing 3T breast MRI between 03/2013-12/2014 for further workup of conventional and clinical breast findings. Images were read by experienced, board-certified radiologists. The reference standard was histopathology or follow-up ≥ two years. Sensitivity, specificity, PPV, and NPV were calculated. Results were stratified by conventional and clinical breast findings. Results The reference standard revealed 53 true-positive, 243 true-negative, 20 false-positive, and two false-negative breast MRI findings, resulting in a sensitivity, specificity, PPV, and NPV of 96.4% (53/55), 92.4% (243/263), 72.6% (53/73), and 99.2% (243/245), respectively. In 5.3% (16/302) of all patients, incidental MRI lesions classified BI-RADS 3–5 were detected, 37.5% (6/16) of which were malignant. Breast composition and the imaging findings that had led to referral had no significant influence on the diagnostic performance of breast MR imaging (p>0.05). Conclusion 3T breast MRI yields excellent diagnostic results if used as a problem-solving tool independent of referral reasons. The number of suspicious incidental lesions detected by MRI is low, but is associated with a substantial malignancy rate. PMID:29293582
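
    The reported diagnostic metrics follow directly from the 2x2 counts given in the abstract; a short check in Python reproduces them:

      # Recompute sensitivity, specificity, PPV and NPV from the counts reported
      # in the abstract (53 TP, 243 TN, 20 FP, 2 FN).
      tp, tn, fp, fn = 53, 243, 20, 2

      sensitivity = tp / (tp + fn)   # 53/55   = 96.4%
      specificity = tn / (tn + fp)   # 243/263 = 92.4%
      ppv = tp / (tp + fp)           # 53/73   = 72.6%
      npv = tn / (tn + fn)           # 243/245 = 99.2%

      print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
            f"PPV={ppv:.1%}, NPV={npv:.1%}")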

  15. Area Under the Curve as a Novel Metric of Behavioral Economic Demand for Alcohol

    PubMed Central

    Amlung, Michael; Yurasek, Ali; McCarty, Kayleigh N.; MacKillop, James; Murphy, James G.

    2015-01-01

    Behavioral economic purchase tasks can be readily used to assess demand for a number of addictive substances including alcohol, tobacco and illicit drugs. However, several methodological limitations associated with the techniques used to quantify demand may reduce the utility of demand measures. In the present study, we sought to introduce area under the curve (AUC), commonly used to quantify degree of delay discounting, as a novel index of demand. A sample of 207 heavy drinking college students completed a standard alcohol purchase task and provided information about typical weekly drinking patterns and alcohol-related problems. Level of alcohol demand was quantified using AUC – which reflects the entire amount of consumption across all drink prices - as well as the standard demand indices (e.g., intensity, breakpoint, Omax, Pmax, and elasticity). Results indicated that AUC was significantly correlated with each of the other demand indices (rs = .42–.92), with particularly strong associations with Omax (r = .92). In regression models, AUC and intensity were significant predictors of weekly drinking quantity and AUC uniquely predicted alcohol-related problems, even after controlling for drinking level. In a parallel set of analyses, Omax also predicted drinking quantity and alcohol problems, although Omax was not a unique predictor of the latter. These results offer initial support for using AUC as an index of alcohol demand. Additional research is necessary to further validate this approach and to examine its utility in quantifying demand for other addictive substances such as tobacco and illicit drugs. PMID:25895013
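
    As a hedged sketch of the AUC index described above (not the authors' scoring code), the area under a price-consumption curve can be computed with the trapezoidal rule; the prices and consumption values below are invented:

      import numpy as np

      # Hypothetical purchase-task data: price per drink vs. reported consumption.
      prices = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
      drinks = np.array([10.0, 10.0, 9.0, 8.0, 5.0, 2.0, 0.0])

      # Trapezoidal area under the demand curve; normalizing by the largest possible
      # rectangle keeps the index comparable across participants (one common convention).
      auc_raw = float(np.sum(0.5 * (drinks[1:] + drinks[:-1]) * np.diff(prices)))
      auc_norm = auc_raw / (prices.max() * drinks.max())
      print(round(auc_raw, 2), round(auc_norm, 3))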

  16. Behavioral Treatment of Voice Disorders in Teachers

    PubMed Central

    Ziegler, Aaron; Gillespie, Amanda I.; Verdolini Abbott, Katherine

    2010-01-01

    Introduction The purpose of this paper is to review the literature on the behavioral treatment of voice disorders in teachers. The focus is on phonogenic disorders, that is voice disorders thought to be caused by voice use. Methods Review of the literature and commentary. Results The review exposes distinct holes in the literature on the treatment of voice problems in teachers. However, emerging trends in treatment are noted. For example, most studies identified for review implemented a multiple-therapy approach in a group setting, in contrast to only a few studies that assessed a single-therapy approach with individual patients. Although the review reveals that the evidence around behavioral treatment of voice disorders in teachers is mixed, a growing body of data provides some indicators on how effectively rehabilitation of teachers with phonogenic voice problems might be approached. Specifically, voice amplification demonstrates promise as a beneficial type of indirect therapy and vocal function exercises as well as resonant voice therapy show possible benefits as direct therapies. Finally, only a few studies identified even remotely begin to meet guidelines of the Consolidated Standards of Reporting Trials statement, a finding that emphasizes the need to increase the number of investigations that adhere to strict research standards. Conclusions Although data on the treatment of voice problems in teachers are still limited in the literature, emerging trends are noted. The accumulation of sufficient studies will ultimately provide useful evidence about this societally important issue. PMID:20093840

  17. The Problem of Correspondence of Educational and Professional Standards (Results of Empirical Research)

    ERIC Educational Resources Information Center

    Piskunova, Elena; Sokolova, Irina; Kalimullin, Aydar

    2016-01-01

    In the article, the problem of correspondence of educational standards of higher pedagogical education and teacher professional standards in Russia is actualized. Modern understanding of the quality of vocational education suggests that in the process of education the student develops a set of competencies that will enable him or her to carry out…

  18. Status and analysis of test standard for on-board charger

    NASA Astrophysics Data System (ADS)

    Hou, Shuai; Liu, Haiming; Jiang, Li; Chen, Xichen; Ma, Junjie; Zhao, Bing; Wu, Zaiyuan

    2018-05-01

    This paper analyzes the test standards for on-board chargers (OBC). In the process of testing, several problems were found in the test methods and functional status, such as failure to follow the latest test standards, loose estimation, and uncertainty and inconsistency in rectification. Finally, the paper puts forward its own viewpoints on these problems.

  19. A simple method to relate microwave radiances to upper tropospheric humidity

    NASA Astrophysics Data System (ADS)

    Buehler, S. A.; John, V. O.

    2005-01-01

    A brightness temperature (BT) transformation method can be applied to microwave data to retrieve Jacobian weighted upper tropospheric relative humidity (UTH) in a broad layer centered roughly between 6 and 8 km altitude. The UTH bias is below 4% RH, and the relative UTH bias below 20%. The UTH standard deviation is between 2 and 6.5% RH in absolute numbers, or between 10 and 27% in relative numbers. The standard deviation is dominated by the regression noise, resulting from vertical structure not accounted for by the simple transformation relation. The UTH standard deviation due to radiometric noise alone has a relative standard deviation of approximately 7% for a radiometric noise level of 1 K. The retrieval performance was shown to be of almost constant quality for all viewing angles and latitudes, except for problems at high latitudes due to surface effects. A validation of AMSU UTH against radiosonde UTH shows reasonable agreement if known systematic differences between AMSU and radiosonde are taken into account. When the method is applied to supersaturation studies, regression noise and radiometric noise could lead to an apparent supersaturation even if there were no supersaturation. For a radiometer noise level of 1 K the drop-off slope of the apparent supersaturation is 0.17 %RH^-1, for a noise level of 2 K the slope is 0.12 %RH^-1. The main conclusion from this study is that the BT transformation method is very well suited for microwave data. Its particular strength is in climatological applications where the simplicity and the a priori independence are key advantages.
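
    The transformation itself is a simple log-linear regression of Jacobian-weighted UTH on brightness temperature. The sketch below shows the general form of such a retrieval; the coefficients A and B are placeholders chosen only so the output looks plausible and are not the published regression values, which depend on channel and viewing angle.

        import numpy as np

        # Illustrative (placeholder) regression coefficients; operational values are
        # fitted from training data per channel and viewing angle.
        A, B = 23.1, -0.08   # ln(UTH[%RH]) = A + B * Tb[K]  (hypothetical numbers)

        def uth_from_bt(tb_kelvin):
            """Brightness-temperature transformation: warm (dry) scenes map to low UTH."""
            return np.exp(A + B * np.asarray(tb_kelvin, dtype=float))

        for tb in (240.0, 250.0, 260.0):
            print(f"Tb = {tb:5.1f} K  ->  UTH ~ {uth_from_bt(tb):5.1f} %RH")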

  20. Survey on air pollution and cardiopulmonary mortality in shiraz from 2011 to 2012: an analytical-descriptive study.

    PubMed

    Dehghani, Mansooreh; Anushiravani, Amir; Hashemi, Hassan; Shamsedini, Narges

    2014-06-01

    Expanding cities with rapid economic development have seen increased energy consumption, leading to numerous environmental problems for their residents. The aim of this study was to investigate the correlation between air pollution and the mortality rate due to cardiovascular and respiratory diseases in Shiraz. This is an analytical cross-sectional study in which the correlation between major air pollutants (including carbon monoxide [CO], sulfur dioxide [SO2], nitrogen dioxide [NO2] and particulate matter with a diameter of less than 10 μm [PM10]) and climatic parameters (temperature and relative humidity) with the number of deaths from cardiopulmonary disease in Shiraz from March 2011 to January 2012 was investigated. Data on the concentration of air pollutants were obtained from the Shiraz Environmental Organization. Information about climatic parameters was collected from the database of Iran's Meteorological Organization. The numbers of deaths from cardiopulmonary disease in Shiraz were provided by the Department of Health, Shiraz University of Medical Sciences. We used a non-parametric correlation test to analyze the relationship between these parameters. The results demonstrated that, across all the recorded data, the average monthly Pollutant Standards Index (PSI) values for PM10 were higher than the standard limits, while the average monthly PSI values for NO2 were lower than the standard. There was no significant relationship between the number of deaths from cardiopulmonary disease and the air pollutants (P > 0.05). Air pollution can aggravate chronic cardiopulmonary disease. In the current study, one of the most important air pollutants in Shiraz was the PM10 component. Mechanical processes, such as wind blowing from neighboring countries, are the most important factor raising PM10 in Shiraz to alarming levels. The average monthly variations in PSI values of air pollutants such as NO2, CO, and SO2 were lower than the standard limits. Moreover, there was no significant correlation between the average monthly variation in PSI of NO2, CO, PM10, and SO2 and the number of deaths from cardiopulmonary disease in Shiraz.

  1. Strategies of Pre-Service Primary School Teachers for Solving Addition Problems with Negative Numbers

    ERIC Educational Resources Information Center

    Almeida, Rut; Bruno, Alicia

    2014-01-01

    This paper analyses the strategies used by pre-service primary school teachers for solving simple addition problems involving negative numbers. The findings reveal six different strategies that depend on the difficulty of the problem and, in particular, on the unknown quantity. We note that students use negative numbers in those problems they find…

  2. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
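
    As a purely illustrative companion to the formulation above, the toy script below minimizes a regularized least-squares misfit with a BFGS-type optimizer (scipy's L-BFGS-B), showing how the regularization weight enters the cost function; the paper's actual implementation uses continuous FEM discretizations, cross-gradient coupling and a Hessian-based preconditioner, none of which are reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n = 50
        G = rng.normal(size=(30, n))                    # toy linear forward operator
        m_true = np.sin(np.linspace(0, 3 * np.pi, n))
        d_obs = G @ m_true + 0.05 * rng.normal(size=30)
        L = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # first-difference (smoothing) operator
        w = 1.0                                         # regularization weight (trade-off parameter)

        def cost_and_grad(m):
            r = G @ m - d_obs
            s = L @ m
            cost = 0.5 * r @ r + 0.5 * w * s @ s
            grad = G.T @ r + w * L.T @ s
            return cost, grad

        res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
        print("converged:", res.success, " misfit:", round(float(res.fun), 4))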

  3. Using Vegetation Barriers to Improving Wireless Network Isolation and Security

    NASA Astrophysics Data System (ADS)

    Cuiñas, Iñigo; Gómez, Paula; Sánchez, Manuel García; Alejos, Ana Vázquez

    The increasing number of wireless LANs using the same spectrum allocation can induce multiple interferences and can also force the active LANs to continuously retransmit data in order to overcome them; this overloads the spectrum bands and collapses the LAN transmission capacity. This upcoming problem can be mitigated by using different techniques, site shielding being one of them. If radio systems could be safeguarded against radiation from transmitters outside the specific network, frequency reuse would be improved and, as a consequence, the number of WLANs sharing the same area could increase while maintaining the required quality standards. The proposal of this paper is to use bushes as a hurdle to attenuate signals from other networks and thereby shield one's own wireless system from outside interference. A measurement campaign was performed in order to test this application of vegetal elements. The campaign focused on determining the attenuation induced by several specimens of seven different vegetal species. Then, the relation between the induced attenuation and the interference from adjacent networks was computed in terms of the separation between networks. The proposed technique could also improve network protection against unauthorized outside access.

  4. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
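
    A minimal sketch of the surrogate idea is given below for a one-dimensional toy problem, under the assumption of a Gaussian-process regressor standing in for the expensive forward model inside a single ensemble-smoother update; the adaptive refinement of base points and the iteration over multiple updates described in the paper are omitted.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)

        def forward(k):
            """Stand-in for an expensive forward model (e.g. a flow simulation)."""
            return np.exp(-0.5 * k)

        k_true, obs_err = 1.3, 0.01
        d_obs = forward(k_true) + obs_err * rng.normal()

        # A few expensive runs at base points train the GP surrogate
        k_base = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(k_base, forward(k_base[:, 0]))

        # Large ensemble evaluated through the cheap surrogate instead of the full model
        k_ens = rng.normal(1.0, 0.5, size=2000)
        d_ens = gp.predict(k_ens.reshape(-1, 1))

        # One Kalman-type ensemble-smoother update with perturbed observations
        c_md = np.cov(k_ens, d_ens)[0, 1]
        c_dd = float(np.var(d_ens))
        gain = c_md / (c_dd + obs_err**2)
        k_upd = k_ens + gain * (d_obs + obs_err * rng.normal(size=k_ens.size) - d_ens)
        print(f"posterior mean k ~ {k_upd.mean():.3f} (true value {k_true})")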

  5. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The Non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, focused on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
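
    For readers who want to experiment with the baseline (non-parallel) filter, the snippet below applies a standard non-local means implementation to a noisy synthetic image; it illustrates the patch-size and search-window parameters discussed above but is not the GPU/hybrid implementation proposed in the paper, and the test image is a stand-in rather than medical data.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma

        rng = np.random.default_rng(0)
        clean = img_as_float(data.camera())              # stand-in for a medical slice
        noisy = clean + 0.08 * rng.normal(size=clean.shape)

        sigma = float(np.mean(estimate_sigma(noisy)))    # rough noise-level estimate
        denoised = denoise_nl_means(
            noisy,
            patch_size=5,        # side of the patches being compared
            patch_distance=6,    # radius of the search window around each pixel
            h=0.8 * sigma,       # filtering strength, tied to the noise level
            fast_mode=True,
            sigma=sigma,
        )
        print("residual std:", float(np.std(denoised - clean)))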

  6. Instrumentation For Measuring Finish, Defects And Gloss

    NASA Astrophysics Data System (ADS)

    Whitehouse, David J.

    1985-09-01

    The measurement of fine surfaces, optical finishes and flaws is becoming more important because of a number of factors. One of these is the hunt for better quality of conformance; another is the smoother surfaces required in present-day applications such as those found in the computer and video industries. Defects such as scratches, pits or cracks can not only impair the cosmetic appearance of the object, they can actually cause premature failure, as in fatigue or corrosion. These new measuring criteria have caused some real problems for instrument makers. In the case of defects the problem is that of spatial bandwidth; that is, the problem of searching for a small scratch over a wide area. When measuring fine surfaces the problem is usually the signal-to-noise ratio of the instrument itself. In many instances the search for defects or the measurement of fine surfaces has been left to human judgement, a powerful if unpredictable measuring tool. This is becoming unsatisfactory because standards based upon the eye have sometimes been built into commercial evaluations of quality. This is rather unfortunate; it ties the hands of the instrument maker, who for compatibility has to try to simulate the eye or use indirect measurements.

  7. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  8. Research in Theoretical High Energy Physics- Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Nobuchika

    PI Dr. Okada’s research interests are centered on phenomenological aspects of particle physics. It has been abundantly clear in recent years that an extension of the Standard Model (SM), i.e. new physics beyond the SM, is needed to explain a number of experimental observations such as the neutrino oscillation data, the existence of non-baryonic dark matter, and the observed baryon asymmetry of the Universe. In addition, the SM suffers from several theoretical/conceptual problems, such as the gauge hierarchy problem, the fermion mass hierarchy problem, and the origin of the electroweak symmetry breaking. It is believed that these problems can also be solved by new physics beyond the SM. The main purpose of Dr. Okada’s research is a theoretical investigation of new physics opportunities from various phenomenological points of view, based on the recent progress of experiments/observations in particle physics and cosmology. There are many possibilities to go beyond the SM and many new physics models have been proposed. The major goal of the project is to understand the current status of possible new physics models and obtain the future prospects of new physics phenomena toward their discoveries.

  9. Optimal solution of full fuzzy transportation problems using total integral ranking

    NASA Astrophysics Data System (ADS)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    The full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are expressed in the form of fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy number parameters must be converted to crisp numbers, a step called defuzzification. In this work, a new total integral ranking method is applied to fuzzy numbers obtained by converting trapezoidal fuzzy numbers to hexagonal fuzzy numbers, and the consistency of the defuzzification is examined for symmetrical hexagonal and non-symmetrical type-2 fuzzy numbers as well as triangular fuzzy numbers. To calculate the optimum solution of the FTP, a fuzzy transportation algorithm with the least cost method is used. From this optimum solution, it is found that using the total integral ranking of fuzzy numbers with an index of optimism gives different optimum values. In addition, the total integral ranking value using hexagonal fuzzy numbers yields an optimal value better than the total integral ranking value using trapezoidal fuzzy numbers.
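
    For concreteness, the sketch below shows the classic total-integral-value defuzzification for a trapezoidal fuzzy number with an index of optimism, which is the general idea the paper extends to hexagonal fuzzy numbers; the hexagonal formula itself is not reproduced here and the cost values are hypothetical.

        def total_integral_value(a, b, c, d, alpha=0.5):
            """Total integral value of a trapezoidal fuzzy number (a, b, c, d).

            alpha is the index of optimism: 0 = pessimistic (left integral only),
            1 = optimistic (right integral only), 0.5 = moderate.
            """
            left = (a + b) / 2.0     # left integral value
            right = (c + d) / 2.0    # right integral value
            return alpha * right + (1.0 - alpha) * left

        # Hypothetical fuzzy transport costs for two routes, ranked by crisp value
        routes = {"route_1": (2, 4, 5, 7), "route_2": (3, 4, 4, 6)}
        for alpha in (0.0, 0.5, 1.0):
            ranking = {r: total_integral_value(*p, alpha=alpha) for r, p in routes.items()}
            print(f"alpha={alpha}: {ranking}")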

  10. Design of a syndemic based intervention to facilitate care for men who have sex with men with high risk behaviour: the syn.bas.in randomized controlled trial.

    PubMed

    Achterbergh, Roel C A; van der Helm, Jannie J; van den Brink, Wim; de Vries, Henry J C

    2017-06-06

    Men who have sex with men (MSM) constitute a risk group for sexually transmitted infections (STIs), including HIV. Despite counselling interventions, risk behaviour remains high. Syndemic theory holds that psychosocial problems often co-occur, interact and mutually reinforce each other, thereby increasing high risk behaviours and co-occurring diseases. Therefore, if co-occurring psychosocial problems were assessed and treated simultaneously, this might decrease high risk behaviour and disease. An open label randomized controlled trial will be conducted among 150 MSM with high risk behaviour recruited from the STI clinic of Amsterdam. Inclusion criteria are: HIV-negative MSM with two STIs and/or PEP treatment in the last 24 months, or HIV-positive MSM with one STI in the last 24 months. All participants complete questionnaires on the following syndemic domains: ADHD, depression, anxiety disorder, alexithymia and sex and drug addiction. Participants in the control group receive standard care: STI screenings every three months and motivational interviewing based counselling. Participants in the experimental group receive standard care plus feedback based on the results of the questionnaires. All participants can be referred to co-located mental health or addiction services. The primary outcome is help-seeking behaviour for mental health problems and/or drug use problems. The secondary outcomes are STI incidence and changes in sexual risk behaviour (i.e. condom use, number of anal sex partners, drug use during sex). This study will provide information on syndemic domains among MSM who show high risk behaviour and on the effect of screening and referral on help-seeking behaviour and health (behaviour) outcomes. Trial registration at clinicaltrials.gov, identifier NCT02859935.

  11. Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

    NASA Astrophysics Data System (ADS)

    Alemany, Kristina

    Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. 
Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.

  12. Individualized Math Problems in Whole Numbers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this set require computations involving whole numbers.…

  13. How do reference montage and electrodes setup affect the measured scalp EEG potentials?

    NASA Astrophysics Data System (ADS)

    Hu, Shiang; Lai, Yongxiu; Valdes-Sosa, Pedro A.; Bringas-Vega, Maria L.; Yao, Dezhong

    2018-04-01

    Objective. Human scalp electroencephalogram (EEG) is widely applied in cognitive neuroscience and clinical studies due to its non-invasiveness and ultra-high time resolution. However, the representativeness of the measured EEG potentials for the underlying neural activities is still a problem under debate. This study aims to investigate systematically how both reference montage and electrodes setup affect the accuracy of EEG potentials. Approach. First, the standard EEG potentials are generated by the forward calculation with a single dipole in the neural source space, for eleven channel numbers (10, 16, 21, 32, 64, 85, 96, 128, 129, 257, 335). Here, the reference is the ideal infinity implicitly determined by forward theory. Then, the standard EEG potentials are transformed to recordings with different references including five mono-polar references (left earlobe, Fz, Pz, Oz, Cz), and three re-references (linked mastoids (LM), average reference (AR) and reference electrode standardization technique (REST)). Finally, the relative errors between the standard EEG potentials and the transformed ones are evaluated in terms of channel number, scalp regions, electrodes layout, dipole source position and orientation, as well as sensor noise and head model. Main results. Mono-polar reference recordings usually show large distortions; thus, a re-reference after online mono-polar recording should be adopted in general to mitigate this effect. Among the three re-references, REST is generally superior to AR for all factors compared, and LM performs worst. REST is insensitive to head model perturbation. AR is sensitive to electrode coverage and dipole orientation but shows no close relation with channel number. Significance. These results indicate that REST would be the first choice of re-reference and AR may be an alternative option in cases of high sensor noise. Our findings may provide helpful suggestions on how to obtain EEG potentials as accurately as possible for cognitive neuroscientists and clinicians.
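
    The mono-polar-to-re-reference transformations compared in the study are linear operations on the channel-by-time data matrix. The sketch below shows the two simplest ones (average reference and linked mastoids) on synthetic data; REST additionally requires a lead-field matrix from a head model and is only indicated by a comment. The channel indices used here are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        n_ch, n_t = 32, 1000
        v = rng.normal(size=(n_ch, n_t))        # channels x time, recorded against one electrode

        # Average reference (AR): subtract the instantaneous mean over all channels
        v_ar = v - v.mean(axis=0, keepdims=True)

        # Linked mastoids (LM): subtract the mean of the two mastoid channels
        m1, m2 = 30, 31                          # hypothetical mastoid channel indices
        v_lm = v - 0.5 * (v[m1] + v[m2])

        # REST would instead map the data to an approximate infinity reference,
        # v_rest = R @ v, where R is built from the head-model lead field (not shown).

        print("AR column means ~ 0:", np.allclose(v_ar.mean(axis=0), 0))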

  14. Sigma Routing Metric for RPL Protocol.

    PubMed

    Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian

    2018-04-21

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures a better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
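
    The exact SIGMA-ETX formula is not given in the abstract, but the underlying idea of preferring routes whose per-hop ETX values are uniform can be sketched as below, ranking candidate paths by the standard deviation of their link ETX values instead of (or alongside) the usual additive ETX; the example routes are hypothetical.

        import statistics

        # Hypothetical candidate routes: lists of per-hop ETX values toward the root
        routes = {
            "via_A": [1.1, 1.2, 1.1, 1.3],   # many uniform, short hops
            "via_B": [1.0, 3.5],             # fewer hops, but one very lossy long hop
        }

        def mrhof_rank(etx):          # standard MRHOF-style metric: total expected transmissions
            return sum(etx)

        def sigma_etx_rank(etx):      # SIGMA-ETX-style metric: penalize uneven hop quality
            return statistics.pstdev(etx)

        for name, etx in routes.items():
            print(f"{name}: sum ETX = {mrhof_rank(etx):.2f}, sigma ETX = {sigma_etx_rank(etx):.2f}")
        # MRHOF would pick via_B (smaller sum), while a deviation-based metric prefers via_A.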

  15. Sigma Routing Metric for RPL Protocol

    PubMed Central

    Rojas, Aldo; Fernandez, Luis

    2018-01-01

    This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures a better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption. PMID:29690524

  16. Confocal endomicroscopy: Is it time to move on?

    PubMed

    Robles-Medranda, Carlos

    2016-01-10

    Confocal laser endomicroscopy permits in-vivo microscopic evaluation during endoscopy procedures. It can be used in all parts of the gastrointestinal tract, including the esophagus, stomach, small bowel, colon, the biliary tract through endoscopic retrograde cholangiopancreatography, and the pancreas through needles during endoscopic ultrasound procedures. Many studies have demonstrated a high correlation between confocal laser endomicroscopy results and histopathology in the diagnosis of gastrointestinal lesions, with accuracy of about 86% to 96%. Moreover, although histopathology remains the gold-standard technique for the final diagnosis of any disease, a considerable misdiagnosis rate may be present due to many factors such as interpretation mistakes, biopsy site inaccuracy, or the number of biopsies. Theoretically, the diagnostic accuracy of confocal laser endomicroscopy could help in daily practice to improve the diagnosis and treatment management of patients. However, it is still not routinely used in clinical practice due to many factors such as the cost of the procedure, lack of codification and reimbursement in some countries, absence of standard-of-care indications, availability, physician image-interpretation training, medico-legal problems, and the role of the pathologist. These limitations are relative, and solutions could be found based on new research focused on overcoming these barriers.

  17. Numerical Coupling and Simulation of Point-Mass System with the Turbulent Fluid Flow

    NASA Astrophysics Data System (ADS)

    Gao, Zheng

    A computational framework that combines the Eulerian description of the turbulence field with a Lagrangian point-mass ensemble is proposed in this dissertation. Depending on the Reynolds number, the turbulence field is simulated using Direct Numerical Simulation (DNS) or an eddy viscosity model. Meanwhile, the particle systems, such as spring-mass systems and cloud droplets, are modeled using ordinary differential equations, which are stiff and hence pose a challenge to the stability of the entire system. This computational framework is applied to the numerical study of parachute deceleration and cloud microphysics. These two distinct problems can be uniformly modeled with Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs), and numerically solved in the same framework. For the parachute simulation, a novel porosity model is proposed to simulate the porous effects of the parachute canopy. This model is easy to implement with the projection method and is able to reproduce Darcy's law observed in the experiment. Moreover, the impacts of using different versions of the k-epsilon turbulence model in the parachute simulation have been investigated, and it is concluded that the standard and Re-Normalisation Group (RNG) models may overestimate the turbulence effects when the Reynolds number is small, while the Realizable model performs consistently at both large and small Reynolds numbers. For another application, cloud microphysics, the cloud entrainment-mixing problem is studied in the same numerical framework. Three sets of DNS are carried out with both decaying and forced turbulence. The numerical results suggest a new way to parameterize the cloud mixing degree using dynamical measures. The numerical experiments also verify the negative relationship between the droplet number concentration and the vorticity field. The results imply that gravity has less impact on forced turbulence than on decaying turbulence. In summary, the proposed framework can be used to solve physics problems that involve a turbulence field and a point-mass system, and therefore has broad applications.

  18. Guidelines for obstetrical practice in Japan: Japan Society of Obstetrics and Gynecology (JSOG) and Japan Association of Obstetricians and Gynecologists (JAOG) 2014 edition.

    PubMed

    Minakami, Hisanori; Maeda, Tsugio; Fujii, Tomoyuki; Hamada, Hiromi; Iitsuka, Yoshinori; Itakura, Atsuo; Itoh, Hiroaki; Iwashita, Mitsutoshi; Kanagawa, Takeshi; Kanai, Makoto; Kasuga, Yoshio; Kawabata, Masakiyo; Kobayashi, Kosuke; Kotani, Tomomi; Kudo, Yoshiki; Makino, Yasuo; Matsubara, Shigeki; Matsuda, Hideo; Miura, Kiyonori; Murakoshi, Takeshi; Murotsuki, Jun; Ohkuchi, Akihide; Ohno, Yasumasa; Ohshiba, Yoko; Satoh, Shoji; Sekizawa, Akihiko; Sugiura, Mayumi; Suzuki, Shunji; Takahashi, Tsuneo; Tsukahara, Yuki; Unno, Nobuya; Yoshikawa, Hiroyuki

    2014-06-01

    The 'Clinical Guidelines for Obstetrical Practice, 2011 edition' were revised and published as a 2014 edition (in Japanese) in April 2014 by the Japan Society of Obstetrics and Gynecology and the Japan Association of Obstetricians and Gynecologists. The aims of this publication include the determination of current standard care practices for pregnant women in Japan, the widespread use of standard care practices, the enhancement of safety in obstetrical practice, the reduction of burdens associated with medico-legal and medico-economical problems, and a better understanding between pregnant women and maternity-service providers. The number of Clinical Questions and Answers items increased from 87 in the 2011 edition to 104 in the 2014 edition. The Japanese 2014 version included a Discussion, a List of References, and some Tables and Figures following the Answers to the 104 Clinical Questions; these additional sections covered common problems and questions encountered in obstetrical practice, helping Japanese readers to achieve a comprehensive understanding. Each answer with a recommendation level of A, B or C was prepared based principally on 'evidence' or a consensus among Japanese obstetricians in situations where 'evidence' was weak or lacking. Answers with a recommendation level of A or B represent current standard care practices in Japan. All 104 Clinical Questions and Answers items, with the omission of the Discussion, List of References, and Tables and Figures, are presented herein to promote a better understanding among English readers of the current standard care practices for pregnant women in Japan. © 2014 The Authors. Journal of Obstetrics and Gynaecology Research © 2014 Japan Society of Obstetrics and Gynecology.

  19. On redundant variables in Lagrangian mechanics, with applications to perturbation theory and KS regularization. [Kustaanheimo-Stiefel two body problem

    NASA Technical Reports Server (NTRS)

    Broucke, R.; Lass, H.

    1975-01-01

    It is shown that it is possible to make a change of variables in a Lagrangian in such a way that the number of variables is increased. The Euler-Lagrange equations in the redundant variables are obtained in the standard way (without the use of Lagrange multipliers). These equations are not independent but they are all valid and consistent. In some cases they are simpler than if the minimum number of variables is used. The redundant variables are supposed to be related to each other by several constraints (not necessarily holonomic), but these constraints are not used in the derivation of the equations of motion. The method is illustrated with the well-known Kustaanheimo-Stiefel regularization. Some interesting applications to perturbation theory are also described.
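
    In notation chosen here purely for illustration (following the abstract's description rather than the paper's exact symbols), a configuration described by n+k redundant variables subject to k constraints obeys the usual Euler-Lagrange equations written for every variable, with no multipliers:

        \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0,
        \qquad i = 1, \dots, n+k,
        \qquad \text{with } \phi_j(q_1, \dots, q_{n+k}) = 0, \quad j = 1, \dots, k,

    where the constraint relations \phi_j define the change of variables but, as stated above, are not used in deriving the equations of motion.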

  20. Neutrino experiments

    DOE PAGES

    Lesko, K. T.

    2004-02-24

    This review examines a wide variety of experiments investigating neutrino interactions and neutrino properties from a variety of neutrino sources. We have witnessed remarkable progress in the past two years in settling long-standing problems in neutrino physics and uncovering the first evidence for physics beyond the Standard Model in nearly 30 years. This paper briefly reviews this recent progress in the field of neutrino physics and highlights several significant experimental arenas and topics of particular interest for the coming decade. These highlighted experiments include the precision determination of oscillation parameters, including θ13, θ12, Δm12^2 and Δm23^2, as well as a number of fundamental properties likely to be probed, including the nature of the neutrino (Majorana versus Dirac), the number of neutrino families and the neutrino’s absolute mass.

  1. Qualitative review of usability problems in health information systems for radiology.

    PubMed

    Dias, Camila Rodrigues; Pereira, Marluce Rodrigues; Freire, André Pimenta

    2017-12-01

    Radiology processes are commonly supported by Radiology Information System (RIS), Picture Archiving and Communication System (PACS) and other software for radiology. However, these information technologies can present usability problems that affect the performance of radiologists and physicians, especially considering the complexity of the tasks involved. The purpose of this study was to extract, classify and analyze qualitatively the usability problems in PACS, RIS and other software for radiology. A systematic review was performed to extract usability problems reported in empirical usability studies in the literature. The usability problems were categorized as violations of Nielsen and Molich's usability heuristics. The qualitative analysis indicated the causes and the effects of the identified usability problems. From the 431 papers initially identified, 10 met the study criteria. The analysis of the papers identified 90 instances of usability problems, classified into categories corresponding to established usability heuristics. The five heuristics with the highest number of instances of usability problems were "Flexibility and efficiency of use", "Consistency and standards", "Match between system and the real world", "Recognition rather than recall" and "Help and documentation", respectively. These problems can make the interaction time consuming, causing delays in tasks, dissatisfaction, frustration, preventing users from enjoying all the benefits and functionalities of the system, as well as leading to more errors and difficulties in carrying out clinical analyses. Furthermore, the present paper showed a lack of studies performed on systems for radiology, especially usability evaluations using formal methods of evaluation involving the final users. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Minimum Information about T Regulatory Cells: A Step toward Reproducibility and Standardization.

    PubMed

    Fuchs, Anke; Gliwiński, Mateusz; Grageda, Nathali; Spiering, Rachel; Abbas, Abul K; Appel, Silke; Bacchetta, Rosa; Battaglia, Manuela; Berglund, David; Blazar, Bruce; Bluestone, Jeffrey A; Bornhäuser, Martin; Ten Brinke, Anja; Brusko, Todd M; Cools, Nathalie; Cuturi, Maria Cristina; Geissler, Edward; Giannoukakis, Nick; Gołab, Karolina; Hafler, David A; van Ham, S Marieke; Hester, Joanna; Hippen, Keli; Di Ianni, Mauro; Ilic, Natasa; Isaacs, John; Issa, Fadi; Iwaszkiewicz-Grześ, Dorota; Jaeckel, Elmar; Joosten, Irma; Klatzmann, David; Koenen, Hans; van Kooten, Cees; Korsgren, Olle; Kretschmer, Karsten; Levings, Megan; Marek-Trzonkowska, Natalia Maria; Martinez-Llordella, Marc; Miljkovic, Djordje; Mills, Kingston H G; Miranda, Joana P; Piccirillo, Ciriaco A; Putnam, Amy L; Ritter, Thomas; Roncarolo, Maria Grazia; Sakaguchi, Shimon; Sánchez-Ramón, Silvia; Sawitzki, Birgit; Sofronic-Milosavljevic, Ljiljana; Sykes, Megan; Tang, Qizhi; Vives-Pi, Marta; Waldmann, Herman; Witkowski, Piotr; Wood, Kathryn J; Gregori, Silvia; Hilkens, Catharien M U; Lombardi, Giovanna; Lord, Phillip; Martinez-Caceres, Eva M; Trzonkowski, Piotr

    2017-01-01

    Cellular therapies with CD4+ T regulatory cells (Tregs) hold promise of efficacious treatment for a variety of autoimmune and allergic diseases as well as posttransplant complications. Nevertheless, current manufacturing of Tregs as a cellular medicinal product varies between different laboratories, which in turn hampers precise comparisons of the results between the studies performed. While the number of clinical trials testing Tregs is already substantial, it seems crucial to provide some standardized characteristics of Treg products in order to minimize the problem. We have previously developed reporting guidelines called minimum information about tolerogenic antigen-presenting cells, which allow the comparison between different preparations of tolerance-inducing antigen-presenting cells. Having this experience, here we describe the minimum information about Tregs (MITREG). It is important to note that MITREG does not dictate how investigators should generate or characterize Tregs, but it does require investigators to report their Treg data in a consistent and transparent manner. We hope this will, therefore, be a useful tool facilitating standardized reporting on the manufacturing of Tregs, either for research purposes or for clinical application. This way MITREG might also be an important step toward more standardized and reproducible testing of Treg preparations in clinical applications.

  3. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.

    PubMed

    Glover, Jack L; Hudson, Lawrence T

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
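
    The core of the proposed scoring can be illustrated with an off-the-shelf Radon transform: a straight wire maps to a localized peak in the sinogram, so detection reduces to a one-dimensional peak test against a threshold. The snippet below is a schematic stand-in for the standard's procedure, with a synthetic test image and an arbitrary illustrative threshold.

        import numpy as np
        from skimage.transform import radon

        # Synthetic "useful penetration" image: faint vertical wire on a noisy background
        rng = np.random.default_rng(0)
        img = rng.normal(0.0, 0.05, size=(128, 128))
        img[:, 64] += 0.2                      # faint wire, one pixel wide

        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(img, theta=theta, circle=False)

        # A straight wire concentrates into a sharp peak at one (offset, angle) cell
        score = (sinogram.max() - sinogram.mean()) / sinogram.std()
        threshold = 8.0                        # arbitrary illustrative threshold
        print(f"peak score = {score:.1f} -> wire {'detected' if score > threshold else 'not detected'}")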

  4. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    PubMed Central

    Glover, Jack L.; Hudson, Lawrence T.

    2016-01-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586

  5. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Glover, Jack L.; Hudson, Lawrence T.

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.

  6. Pupils' Visual Representations in Standard and Problematic Problem Solving in Mathematics: Their Role in the Breach of the Didactical Contract

    ERIC Educational Resources Information Center

    Deliyianni, Eleni; Monoyiou, Annita; Elia, Iliada; Georgiou, Chryso; Zannettou, Eleni

    2009-01-01

    This study investigated the modes of representations generated by kindergarteners and first graders while solving standard and problematic problems in mathematics. Furthermore, it examined the influence of pupils' visual representations on the breach of the didactical contract rules in problem solving. The sample of the study consisted of 38…

  7. Application of a Mixed Consequential Ethical Model to a Problem Regarding Test Standards.

    ERIC Educational Resources Information Center

    Busch, John Christian

    The work of the ethicist Charles Curran and the problem-solving strategy of the mixed consequentialist ethical model are applied to a traditional social science measurement problem--that of how to adjust a recommended standard in order to be fair to the test-taker and society. The focus is on criterion-referenced teacher certification tests.…

  8. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
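
    As background for the efficiency discussion, a minimal DE/rand/1/bin loop on a standard test function is sketched below; the paper's actual method wraps a Navier-Stokes solver and parallel evaluation around this same basic structure, which is not reproduced here.

        import numpy as np

        def rosenbrock(x):
            return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

        def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=300, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(lo)
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)
            fit = np.array([f(p) for p in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)   # DE/rand/1 mutation
                    cross = rng.random(dim) < CR                               # binomial crossover
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = f(trial)
                    if f_trial <= fit[i]:                                      # greedy selection
                        pop[i], fit[i] = trial, f_trial
            best = int(np.argmin(fit))
            return pop[best], fit[best]

        bounds = np.array([[-2.0, 2.0]] * 5)
        x_best, f_best = differential_evolution(rosenbrock, bounds)
        print("best f =", round(f_best, 6))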

  9. Entanglement-assisted quantum feedback control

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; Mikami, Tomoaki

    2017-07-01

    The main advantage of quantum metrology relies on the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance over the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by the controller with a coherent probe field and furthermore beats the controller with an optimized squeezed probe field.

  10. A multi-criteria decision analysis perspective on the health economic evaluation of medical interventions.

    PubMed

    Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik

    2014-09-01

    A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
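
    The NMB special case is simple enough to state in a few lines: with a single willingness-to-pay threshold, an intervention's incremental net monetary benefit is the threshold times its health gain minus its incremental cost, and the decision turns on the sign of that quantity. The numbers below are hypothetical; SMAA would instead sample the criterion weights and report acceptability indices, which is not shown.

        def incremental_nmb(delta_qaly, delta_cost, wtp):
            """Incremental net monetary benefit: NMB = wtp * delta_QALY - delta_cost."""
            return wtp * delta_qaly - delta_cost

        delta_qaly, delta_cost = 0.15, 4000.0     # hypothetical effect and cost increments
        for wtp in (10_000, 20_000, 50_000):      # candidate willingness-to-pay thresholds
            nmb = incremental_nmb(delta_qaly, delta_cost, wtp)
            print(f"wtp={wtp:>6}: incremental NMB = {nmb:>8.0f} -> {'adopt' if nmb > 0 else 'reject'}")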

  11. Generalized Grover's Algorithm for Multiple Phase Inversion States

    NASA Astrophysics Data System (ADS)

    Byrnes, Tim; Forster, Gary; Tessler, Louis

    2018-02-01

    Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on N states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of √(D/M^α), where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.

  12. A platform for exploration into chaining of web services for clinical data transformation and reasoning.

    PubMed

    Maldonado, José Alberto; Marcos, Mar; Fernández-Breis, Jesualdo Tomás; Parcero, Estíbaliz; Boscá, Diego; Legaz-García, María Del Carmen; Martínez-Salvador, Begoña; Robles, Montserrat

    2016-01-01

    The heterogeneity of clinical data is a key problem in the sharing and reuse of Electronic Health Record (EHR) data. We approach this problem through the combined use of EHR standards and semantic web technologies, concretely by means of clinical data transformation applications that convert EHR data in proprietary format, first into clinical information models based on archetypes, and then into RDF/OWL extracts which can be used for automated reasoning. In this paper we describe a proof-of-concept platform to facilitate the (re)configuration of such clinical data transformation applications. The platform is built upon a number of web services dealing with transformations at different levels (such as normalization or abstraction), and relies on a collection of reusable mappings designed to solve specific transformation steps in a particular clinical domain. The platform has been used in the development of two different data transformation applications in the area of colorectal cancer.

  13. Measurement-device-independent quantum key distribution.

    PubMed

    Lo, Hoi-Kwong; Curty, Marcos; Qi, Bing

    2012-03-30

    How to remove detector side channel attacks has been a notoriously hard problem in quantum cryptography. Here, we propose a simple solution to this problem--measurement-device-independent quantum key distribution (QKD). It not only removes all detector side channels, but also doubles the secure distance with conventional lasers. Our proposal can be implemented with standard optical components with low detection efficiency and highly lossy channels. In contrast to the previous solution of full device independent QKD, the realization of our idea does not require detectors of near unity detection efficiency in combination with a qubit amplifier (based on teleportation) or a quantum nondemolition measurement of the number of photons in a pulse. Furthermore, its key generation rate is many orders of magnitude higher than that based on full device independent QKD. The results show that long-distance quantum cryptography over say 200 km will remain secure even with seriously flawed detectors.

  14. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.

  15. “Putting It All Together” to Improve Resuscitation Quality

    PubMed Central

    Sutton, Robert M.; Nadkarni, Vinay; Abella, Benjamin S.

    2013-01-01

    Cardiac arrest is a major public health problem affecting thousands of individuals each year in both the prehospital and in-hospital settings. However, although the scope of the problem is large, the quality of care provided during resuscitation attempts frequently does not meet quality of care standards, despite evidence-based cardiopulmonary resuscitation (CPR) guidelines, extensive provider training, and provider credentialing in resuscitation medicine. Although this fact may be disappointing, it should not be surprising. Resuscitation of the cardiac arrest victim is a highly complex task requiring coordination between various levels and disciplines of care providers during a stressful and relatively infrequent clinical situation. Moreover, it requires a targeted, high-quality response to improve clinical outcomes of patients. Therefore, solutions to improve care provided during resuscitation attempts must be multifaceted and targeted to the diverse number of care providers to be successful. PMID:22107978

  16. Chinese ethics review system and Chinese medicine ethical review: past, present, and future.

    PubMed

    Li, En-Chang; Du, Ping; Ji, Ke-Zhou; Wang, Zhen

    2011-11-01

    The Chinese medical ethics committee and the ethical review system have made the following achievements: (1) enabled the institutionalization of medical ethics, (2) carried out the ethics review of Chinese medicine (CM) and integrative medicine extensively, (3) trained a large number of ethical professionals, (4) supported and protected the interests of patients and subjects, and (5) ensured the correct direction of biological research and provided ethical defense for the publication of its results. However, at the same time, they are also faced with some new problems and difficulties that need to be resolved in the following ways: (1) to refine the relevant rules of ethical review, (2) to develop the relevant standards of the CM and integrative medicine ethical review, (3) to enhance the independence and authority of ethics committee, (4) to emphasize innovation and to discover and solve new problems, and (5) to increase international exchanges and improve relevant research.

  17. Standardization in software conversion of (ROM) estimating

    NASA Technical Reports Server (NTRS)

    Roat, G. H.

    1984-01-01

    Technical problems and their solutions comprise by far the majority of work involved in space simulation engineering. Fixed price contracts with schedule award fees are becoming more and more prevalent. Accurate estimation of these jobs is critical to maintain costs within limits and to predict realistic contract schedule dates. Computerized estimating may hold the answer to these new problems, though up to now computerized estimating has been complex, expensive, and geared to the business world, not to technical people. The objective of this effort was to provide a simple program on a desk top computer capable of providing a Rough Order of Magnitude (ROM) estimate in a short time. This program is not intended to provide a highly detailed breakdown of costs to a customer, but to provide a number which can be used as a rough estimate on short notice. With more debugging and fine tuning, a more detailed estimate can be made.

  18. Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.

    PubMed

    Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich

    2018-04-25

    A recently developed automatic reduction method for systems of chemical kinetics, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimensions of a homogeneous combustion system. The results of application of the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g., the selection criteria for QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling invariant method for a global analysis of the system timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
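    For readers who want a concrete picture of the QSSA idea that GQL is being compared against, the following minimal Python sketch (an illustration under simplifying assumptions, not the authors' code or the hydrogen-air mechanism) applies the quasi-steady-state assumption to a toy two-step mechanism A -> B -> C with a fast intermediate B:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy quasi-steady-state reduction: for A -> B -> C with k2 >> k1,
        # setting dB/dt ≈ 0 gives B ≈ k1*A/k2 and removes one dimension.
        k1, k2 = 1.0, 100.0

        def full(t, y):
            A, B = y
            return [-k1 * A, k1 * A - k2 * B]

        def reduced(t, y):
            (A,) = y
            return [-k1 * A]          # B follows algebraically: B = k1*A/k2

        t_eval = np.linspace(0.0, 5.0, 50)
        y_full = solve_ivp(full, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval).y
        A_red = solve_ivp(reduced, (0.0, 5.0), [1.0], t_eval=t_eval).y[0]
        B_qss = k1 * A_red / k2
        print("max |B_full - B_qss| =", np.abs(y_full[1] - B_qss).max())

    The drawback highlighted in the abstract is already visible in this toy case: choosing B as a QSS species is only justified because k2 >> k1, i.e., the selection depends on the rate parameters and initial conditions.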

  19. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Rodney O.; Passalacqua, Alberto

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models were proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using extended quadrature method of moments (EQMOM) and extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low order statistics, and required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effects of uncertainty on the disperse-phase volume fraction, the phase velocities and the pressure drop inside the fluidized bed are examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
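    As a concrete picture of the non-intrusive sampling idea (a sketch under stated assumptions, not the QBUQ/EQMOM implementation in MFIX), the snippet below propagates a single Gaussian uncertain input through a black-box model by evaluating it at Gauss-Hermite quadrature nodes and forming low-order moments of the response; the function names and the toy quantity of interest are made up for illustration.

        import numpy as np

        def gauss_hermite_moments(model, mu, sigma, n_nodes=7):
            """Propagate a Gaussian uncertain input through `model` and return the
            mean and variance of the response, using Gauss-Hermite quadrature
            (probabilists' weight exp(-x^2/2))."""
            nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
            weights = weights / weights.sum()      # normalise to a probability measure
            samples = mu + sigma * nodes           # quadrature nodes in physical space
            responses = np.array([model(x) for x in samples])
            mean = np.dot(weights, responses)
            var = np.dot(weights, (responses - mean) ** 2)
            return mean, var

        # Toy stand-in for a simulated quantity of interest, e.g. pressure drop
        # as a function of an uncertain mean particle size.
        qoi = lambda d: 1.0 / d + 0.1 * d ** 2
        m, v = gauss_hermite_moments(qoi, mu=1.0, sigma=0.05)
        print(f"mean = {m:.4f}, std = {np.sqrt(v):.4f}")

    Reconstructing a full response PDF from such moments, rather than stopping at mean and variance, is what EQMOM/ECQMOM add on top of this basic moment propagation.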

  20. Could one of the most widely prescribed antibiotics amoxicillin/clavulanate "augmentin" be a risk factor for autism?

    PubMed

    Fallon, Joan

    2005-01-01

    Autism is an ever-increasing problem in the United States. Characterized by multiple deficits in the areas of communication, development, and behavior, autistic children are found in every community in this country and abroad. Recent findings point to a significant increase in autism which cannot be accounted for by means such as misclassification. The state of California recently reported a 273% increase in the number of cases between 1987 and 1998. Many possible causes have been proposed which range from genetics to environment, with a combination of the two most likely. Since the introduction of clavulanate/amoxicillin in the 1980s there has been an increase in the number of cases of autism. In this study 206 children under the age of three years with autism were screened by means of a detailed case history. A significant commonality was discerned, that being the level of chronic otitis media. These children were found to have a mean of 9.96 bouts of otitis media (with a standard error of the mean of +/-1.83). This represents a sum total for all 206 children of 2052 bouts of otitis media. These children received a mean of 12.04 courses of antibiotics (standard error of the mean of +/-0.125). The sum total number of courses of antibiotics given to all 206 children was 2480. Of those, 893 courses were Augmentin, with 362 of these Augmentin courses administered under the age of one year. A proposed mechanism whereby the production of clavulanate may yield high levels of urea/ammonia in the child is presented. Further, an examination of this mechanism needs to be undertaken to determine if a subset of children are at risk for neurotoxicity from the use of clavulanic acid in pharmaceutical preparations.

  1. Methodological Issues in Antifungal Susceptibility Testing of Malassezia pachydermatis

    PubMed Central

    Peano, Andrea; Pasquetti, Mario; Tizzani, Paolo; Chiavassa, Elisa; Guillot, Jacques; Johnson, Elizabeth

    2017-01-01

    Reference methods for antifungal susceptibility testing of yeasts have been developed by the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST). These methods are intended to test the main pathogenic yeasts that cause invasive infections, namely Candida spp. and Cryptococcus neoformans, while testing other yeast species introduces several additional problems in standardization not addressed by these reference procedures. As a consequence, a number of procedures have been employed in the literature to test the antifungal susceptibility of Malassezia pachydermatis. This has led to conflicting results. The aim of the present study is to review the procedures and the technical parameters (growth media, inoculum preparation, temperature and length of incubation, method of reading) employed for susceptibility testing of M. pachydermatis, and when possible, to propose recommendations for or against their use. Such information may be useful for the future development of a reference assay. PMID:29371554

  2. Car seat safety: literature review.

    PubMed

    Lincoln, Michelle

    2005-01-01

    After staggering numbers of infants were killed in automotive crashes in the 1970s, the American Academy of Pediatrics (AAP) recommended in 1974 universal use of car seats for all infants. However, positional problems were reported when car seats were used with premature infants less than 37 weeks gestational age, as a result of head slouching and its sequelae. In 1990, the AAP responded with another policy statement introducing car seat testing. It recommended that any infant at or under 37 weeks gestational age be observed in a car seat prior to discharge from the hospital. The AAP did not give specific guidelines on type of car seat, length of testing, equipment, or personnel proficiency, however. Few nurseries have standard policies to evaluate car seats, to teach parents about car seats, or to position newborns in them, and not all hospitals actually conduct car seat challenges or have common standards for testing that is performed.

  3. [Single or double moral standards? Professional ethics of psychiatrists regarding self-determination, rights of third parties and involuntary treatment].

    PubMed

    Pollmächer, T

    2015-09-01

    The current intensive discussion on the legal and moral aspects of involuntary treatment of psychiatric patients raises a number of ethical issues. Physicians are unambiguously obligated to protect patient welfare and autonomy; however, in psychiatric patients disease-related restrictions in the capacity of self-determination and behaviors endangering the rights of third parties can seriously challenge this unambiguity. Therefore, psychiatry is assumed to have a double function and is also obligated to third parties and to society in general. Acceptance of such a kind of double obligation carries the risk of double moral standards, placing the psychiatrist ethically outside the community of physicians and questioning the unrestricted obligation towards the patient. The present article formulates a moral position, which places the psychiatrist, like all other physicians, exclusively on the side of the patient in terms of professional ethics and discusses the practical problems arising from this moral position.

  4. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard of more than qualitative accuracy with these codes, it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.

  5. Are field quanta real objects? Some remarks on the ontology of quantum field theory

    NASA Astrophysics Data System (ADS)

    Bigaj, Tomasz

    2018-05-01

    One of the key philosophical questions regarding quantum field theory is whether it should be given a particle or field interpretation. The particle interpretation of QFT is commonly viewed as being undermined by the well-known no-go results, such as the Malament, Reeh-Schlieder and Hegerfeldt theorems. These theorems all focus on the localizability problem within the relativistic framework. In this paper I would like to go back to the basics and ask the simple-minded question of how the notion of quanta appears in the standard procedure of field quantization, starting with the elementary case of the finite numbers of harmonic oscillators, and proceeding to the more realistic scenario of continuous fields with infinitely many degrees of freedom. I will try to argue that the way the standard formalism introduces the talk of field quanta does not justify treating them as particle-like objects with well-defined properties.

  6. A comparison of the Method of Lines to finite difference techniques in solving time-dependent partial differential equations. [with applications to Burger equation and stream function-vorticity problem

    NASA Technical Reports Server (NTRS)

    Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.

    1978-01-01

    Steady state solutions to two time dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward time central space explicit method, and (2) the stream function - vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
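    The following short Python sketch (an illustration under assumptions, not the paper's original code) shows what the Method of Lines amounts to for the viscous Burgers' equation: discretise in space with finite differences and hand the resulting system of ODEs to a general-purpose stiff integrator.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Method of Lines for u_t + u*u_x = nu*u_xx on [0, 1] with u(0) = u(1) = 0:
        # semi-discretise in space, then integrate the ODE system in time.
        N, nu = 101, 0.01
        x = np.linspace(0.0, 1.0, N)
        dx = x[1] - x[0]

        def rhs(t, u):
            dudt = np.zeros_like(u)
            u_x = (u[2:] - u[:-2]) / (2 * dx)                  # central first derivative
            u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2    # second derivative
            dudt[1:-1] = -u[1:-1] * u_x + nu * u_xx
            return dudt                                        # boundaries stay fixed at 0

        u0 = np.sin(np.pi * x)
        sol = solve_ivp(rhs, (0.0, 1.0), u0, method="BDF", rtol=1e-6)
        print("u at final time, mid-domain:", sol.y[N // 2, -1])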

  7. Optimizing available network resources to address questions in environmental biogeochemistry

    USGS Publications Warehouse

    Hinckley, Eve-Lyn; Suzanne Andersen,; Baron, Jill S.; Peter Blanken,; Gordon Bonan,; William Bowman,; Sarah Elmendorf,; Fierer, Noah; Andrew Fox,; Keli Goodman,; Katherine Jones,; Danica Lombardozzi,; Claire Lunch,; Jason Neff,; Michael SanClements,; Katherine Suding,; Will Wieder,

    2016-01-01

    An increasing number of network observatories have been established globally to collect long-term biogeochemical data at multiple spatial and temporal scales. Although many outstanding questions in biogeochemistry would benefit from network science, the ability of the earth- and environmental-sciences community to conduct synthesis studies within and across networks is limited, and such studies are seldom done satisfactorily. We identify the ideal characteristics of networks, common problems with using data, and key improvements to strengthen intra- and internetwork compatibility. We suggest that targeted improvements to existing networks should include promoting standardization in data collection, developing incentives to promote rapid data release to the public, and increasing the ability of investigators to conduct their own studies across sites. Internetwork efforts should include identifying a standard measurement suite—we propose profiles of plant canopy and soil properties—and an online, searchable data portal that connects network, investigator-led, and citizen-science projects.

  8. [Organisational problems in hospitals as risk factors].

    PubMed

    Jansen, Christoph

    2008-01-01

    The organisational responsibility in a hospital lies with the individual who is actually (co-) responsible for the error (for example, the senior consultant, medical director, nursing manager, administrative director or manager of a hospital). According to the Federal Court of Justice (BGH), staff shortages are no excuse for the failure to adhere to the standard of care. According to a judgement of the Labour Court in Wilhelmshaven the Senior Consultant of a hospital is entitled to be provided with the necessary number of staff by the hospital owner who is obliged to provide a round-the-clock specialist care standard. Care should be taken that no employees be deployed who are overtired from working the previous night shift. Timely information of the follow-up physician about therapeutic issues resulting from the hospital treatment is demanded. Risk prevention strategies developed by an expert group as a form of risk management are reasonable and also requested by some liability insurances.

  9. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping is shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make a direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.
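    To make the grouping step concrete, here is a minimal Python sketch (my own illustration under assumptions, not the authors' FBG implementation): the objective is assumed to be given as a sum of terms, variables co-occurring in a nonseparable term are linked, and the connected components of that interaction graph become the subcomponents that a cooperative-coevolution algorithm would optimise separately.

        # Group variables of a white-box objective written as a sum of terms.
        # `terms` is a list of sets of variable indices, one set per additive term.
        def group_variables(terms, n_vars):
            parent = list(range(n_vars))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i

            def union(i, j):
                parent[find(i)] = find(j)

            for term in terms:
                term = sorted(term)
                for v in term[1:]:
                    union(term[0], v)               # variables in one term interact

            groups = {}
            for v in range(n_vars):
                groups.setdefault(find(v), []).append(v)
            return list(groups.values())

        # f(x) = (x0*x1)^2 + sin(x2) + x3*x4 + x4*x5  ->  {x0,x1}, {x2}, {x3,x4,x5}
        print(group_variables([{0, 1}, {2}, {3, 4}, {4, 5}], 6))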

  10. Residential ventilation in the United Kingdom: An overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolliscroft, M.

    1997-12-31

    This paper describes the background to residential ventilation in the U.K. and its origin in the character of the housing stock, predominantly single-family dwellings and usually terraced or semi-detached but with an increasing proportion of detached houses. Houses in the U.K. have traditionally been leaky by international standards, except by comparison with houses in parts of the US. Current data and trends are presented. Inside temperatures have generally been low by international standards (again recent data are presented), which, combined with high absolute humidity, has led to a major problem of condensation and mold, with the latter affecting several million dwellings or 17% of the total stock. Thirty-five percent of dwellings are affected by condensation. Residential ventilation in recent years in the U.K. has been largely directed toward this problem. Earlier, when much of the existing stock was actually built, the use of coal fires and leaky dwellings overcame these problems but created other problems. A comparison is made of fuel costs and indoor air temperatures between the U.K. and a number of other countries, and the consequences for the choice of residential ventilation systems are considered. Recent changes in U.K. building regulations are described concerning both ventilation (e.g., extract ventilation from wet areas both active and passive) and insulation and airtightness, and some evidence from the English House Condition Survey (EHCS) and other research on the effects of these changes is presented. Increasing concern about other pollutants--notably nitrogen dioxide (NO₂), carbon monoxide (CO), and dust mites--is described together with the consequences for combustion appliances, for example. Future problems due to tighter, more highly insulated houses are considered. Some interesting new developments are also considered, such as through-the-wall combined supply and extract units with heat recovery.

  11. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models with respect to tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface, developed by NCTR, to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

    For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a python package. ExactPack consists of python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack’s code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
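    As a sketch of the verification workflow such a library supports (illustrative only; the functions and the placeholder "exact" profile below are assumptions, not ExactPack's actual API), one evaluates the exact solution on the code's grid at a chosen time and reports an error norm that can be tracked under grid refinement:

        import numpy as np

        # Hypothetical verification step: compare a solver's profile against an
        # exact solution evaluated on the same grid, and report an RMS error that
        # can be tracked under refinement to estimate the code's convergence order.
        def rms_error(r, numerical, exact_solution, t):
            return np.sqrt(np.mean((numerical - exact_solution(r, t)) ** 2))

        # Placeholder "exact" profile standing in for a real solution (e.g. Noh, Sedov)
        exact = lambda r, t: np.where(r < 0.5 * t, 4.0, 1.0)

        r = np.linspace(0.0, 1.0, 200)                              # code's spatial grid
        numerical = exact(r, 1.0) + 0.01 * np.random.randn(r.size)  # stand-in code output
        print("RMS error at t = 1.0:", rms_error(r, numerical, exact, t=1.0))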

  13. MORE: mixed optimization for reverse engineering--an application to modeling biological networks response via sparse systems of nonlinear differential equations.

    PubMed

    Sambo, Francesco; de Oca, Marco A Montes; Di Camillo, Barbara; Toffolo, Gianna; Stützle, Thomas

    2012-01-01

    Reverse engineering is the problem of inferring the structure of a network of interactions between biological variables from a set of observations. In this paper, we propose an optimization algorithm, called MORE, for the reverse engineering of biological networks from time series data. The model inferred by MORE is a sparse system of nonlinear differential equations, complex enough to realistically describe the dynamics of a biological system. MORE tackles separately the discrete component of the problem, the determination of the biological network topology, and the continuous component of the problem, the strength of the interactions. This approach allows us both to enforce system sparsity, by globally constraining the number of edges, and to integrate a priori information about the structure of the underlying interaction network. Experimental results on simulated and real-world networks show that the mixed discrete/continuous optimization approach of MORE significantly outperforms standard continuous optimization and that MORE is competitive with the state of the art in terms of accuracy of the inferred networks.

  14. Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.

    2012-06-21

    We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.

  15. Convergence of the standard RLS method and UDU^T factorisation of covariance matrix for solving the algebraic Riccati equation of the DLQR via heuristic approximate dynamic programming

    NASA Astrophysics Data System (ADS)

    Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.

    2015-08-01

    The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system as well as the algebra of Kronecker product assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The methodology used contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
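    For orientation, the snippet below is a plain recursive least squares estimator of the kind taken as the baseline here (a generic Python sketch, not the authors' actor-critic/ADP formulation); the covariance matrix P updated in the loop is the quantity whose potential ill-conditioning motivates the UDU^T-factorised variant.

        import numpy as np

        # Standard RLS: estimate theta from regressors phi and observations y.
        def rls(phi, y, lam=1.0, delta=1e3):
            n = phi.shape[1]
            theta = np.zeros(n)
            P = delta * np.eye(n)                          # initial covariance
            for phi_k, y_k in zip(phi, y):
                denom = lam + phi_k @ P @ phi_k
                K = P @ phi_k / denom                      # gain
                theta = theta + K * (y_k - phi_k @ theta)  # parameter update
                P = (P - np.outer(K, phi_k @ P)) / lam     # covariance update
            return theta, P

        rng = np.random.default_rng(0)
        true_theta = np.array([2.0, -1.0, 0.5])
        Phi = rng.normal(size=(200, 3))
        Y = Phi @ true_theta + 0.01 * rng.normal(size=200)
        est, P = rls(Phi, Y)
        print("estimate:", est, " cond(P):", np.linalg.cond(P))

    In the factorised variant, P is instead carried as U D U^T, with U unit upper triangular and D diagonal, and the update is applied to the factors so that symmetry and positive definiteness are preserved numerically.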

  16. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  17. Payload and General Support Computer (PGSC) Detailed Test Objective (DTO) number 795 postflight report: STS-41

    NASA Technical Reports Server (NTRS)

    Adolf, Jurine A.; Beberness, Benjamin J.; Holden, Kritina L.

    1991-01-01

    Since 1983, the Space Transportation System (STS) had routinely flown the GRiD 1139 (80286) laptop computer as a portable onboard computing resource. In the spring of 1988, the GRiD 1530, an 80386 based machine, was chosen to replace the GRiD 1139. Human factors ground evaluations and detailed test objectives (DTO) examined the usability of the available display types under different lighting conditions and various angle deviations. All proved unsuitable due to either flight qualification or usability problems. In 1990, an Electroluminescent (EL) display for the GRiD 1530 became flight qualified and another DTO was undertaken to examine this display on-orbit. Under conditions of indirect sunlight and low ambient light, the readability of the text and graphics was only limited by the observer's distance from the display. Although a problem of direct sunlight viewing still existed, there were no problems with large angular deviations nor dark adaptation. No further evaluations were deemed necessary. The GRiD 1530 with the EL display was accepted by the STS program as the new standard for the PGSC.

  18. Example Problems in LES Combustion

    DTIC Science & Technology

    2016-09-26

    AFRL-RW-EG-TP-2016-002: Example Problems in LES Combustion. Douglas V. Nance, Air Force Research Laboratory, Munitions Directorate.

  19. Authentication: A Standard Problem or a Problem of Standards?

    PubMed

    Capes-Davis, Amanda; Neve, Richard M

    2016-06-01

    Reproducibility and transparency in biomedical sciences have been called into question, and scientists have been found wanting as a result. Putting aside deliberate fraud, there is evidence that a major contributor to lack of reproducibility is insufficient quality assurance of reagents used in preclinical research. Cell lines are widely used in biomedical research to understand fundamental biological processes and disease states, yet most researchers do not perform a simple, affordable test to authenticate these key resources. Here, we provide a synopsis of the problems we face and how standards can contribute to an achievable solution.

  20. Crowdsourced Mapping - Letting Amateurs Into the Temple?

    NASA Astrophysics Data System (ADS)

    McCullagh, M.; Jackson, M.

    2013-05-01

    The rise of crowdsourced mapping data is well documented and attempts to integrate such information within existing or potential NSDIs [National Spatial Data Infrastructures] are increasingly being examined. The results of these experiments, however, have been mixed and have left many researchers uncertain and unclear of the benefits of integration and of solutions to problems of use for such combined and potentially synergistic mapping tools. This paper reviews the development of the crowdsource mapping movement and discusses the applications that have been developed and some of the successes achieved thus far. It also describes the problems of integration and ways of estimating success, based partly on a number of on-going studies at the University of Nottingham that look at different aspects of the integration problem: iterative improvement of crowdsource data quality, comparison between crowdsourced data and prior knowledge and models, development of trust in such data, and the alignment of variant ontologies. Questions of quality arise, particularly when crowdsource data are combined with pre-existing NSDI data. The latter is usually stable, meets international standards and often provides national coverage for use at a variety of scales. The former is often partial, without defined quality standards, patchy in coverage, but frequently addresses themes very important to some grass roots group and often to society as a whole. This group might be of regional, national, or international importance that needs a mapping facility to express its views, and therefore should combine with local NSDI initiatives to provide valid mapping. Will both groups use ISO (International Organisation for Standardisation) and OGC (Open Geospatial Consortium) standards? Or might some extension or relaxation be required to accommodate the mostly less rigorous crowdsourced data? So, can crowdsourced data ever be safely and successfully merged into an NSDI? Should it be simply a separate mapping layer? Is full integration possible providing quality standards are fully met, and methods of defining levels of quality agreed? Frequently crowdsourced data sets are anarchic in composition, and based on new and sometimes unproved technologies. Can an NSDI exhibit the necessary flexibility and speed to deal with such rapid technological and societal change?

  1. Bin Packing, Number Balancing, and Rescaling Linear Programs

    NASA Astrophysics Data System (ADS)

    Hoberg, Rebecca

    This thesis deals with several important algorithmic questions using techniques from diverse areas including discrepancy theory, machine learning and lattice theory. In Chapter 2, we construct an improved approximation algorithm for a classical NP-complete problem, the bin packing problem. In this problem, the goal is to pack items of sizes s_i ∈ [0,1] into as few bins as possible, where a set of items fits into a bin provided the sum of the item sizes is at most one. We give a polynomial-time rounding scheme for a standard linear programming relaxation of the problem, yielding a packing that uses at most OPT + O(log OPT) bins. This makes progress towards one of the "10 open problems in approximation algorithms" stated in the book of Shmoys and Williamson. In fact, based on related combinatorial lower bounds, Rothvoss conjectures that Θ(log OPT) may be a tight bound on the additive integrality gap of this LP relaxation. In Chapter 3, we give a new polynomial-time algorithm for linear programming. Our algorithm is based on the multiplicative weights update (MWU) method, which is a general framework that is currently of great interest in theoretical computer science. An algorithm for linear programming based on MWU was known previously, but was not polynomial time--we remedy this by alternating between a MWU phase and a rescaling phase. The rescaling methods we introduce improve upon previous methods by reducing the number of iterations needed until one can rescale, and they can be used for any algorithm with a similar rescaling structure. Finally, we note that the MWU phase of the algorithm has a simple interpretation as gradient descent of a particular potential function, and we show we can speed up this phase by walking in a direction that decreases both the potential function and its gradient. In Chapter 4, we show that an approximate oracle for Minkowski's Theorem gives an approximate oracle for the number balancing problem, and conversely. Number balancing is the problem of minimizing |〈a,x〉| over x ∈ {-1,0,1}^n \ {0}, given a ∈ [0,1]^n. While an application of the pigeonhole principle shows that there always exists x with |〈a,x〉| ≤ O(√n/2^n), the best known algorithm only guarantees |〈a,x〉| ≤ n^(-Θ(log n)). We show that an oracle for Minkowski's Theorem with approximation factor ρ would give an algorithm for NBP that guarantees |〈a,x〉| ≤ 2^(-n^Θ(1/ρ)). In particular, this would beat the bound of Karmarkar and Karp provided ρ ≤ O(log n/log log n). In the other direction, we prove that any polynomial time algorithm for NBP that guarantees a solution of difference at most 2^(√n)/2^n would give a polynomial approximation for Minkowski as well as a polynomial factor approximation algorithm for the Shortest Vector Problem.
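    For context on the Karmarkar-Karp bound referred to above, the following small Python sketch (illustrative, not taken from the thesis) implements their largest-differencing heuristic for number balancing: repeatedly replace the two largest values by their difference, which corresponds to giving them opposite signs; the single remaining value is an achievable |〈a,x〉|, and back-tracking the merges recovers the sign vector x.

        import heapq

        # Karmarkar-Karp largest-differencing heuristic for number balancing.
        def karmarkar_karp(a):
            heap = [-v for v in a]          # max-heap via negation
            heapq.heapify(heap)
            while len(heap) > 1:
                largest = -heapq.heappop(heap)
                second = -heapq.heappop(heap)
                heapq.heappush(heap, -(largest - second))   # assign opposite signs
            return -heap[0]                 # achievable |<a, x>|

        print(karmarkar_karp([0.31, 0.62, 0.55, 0.17, 0.29, 0.08]))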

  2. Medical professionalism of foreign-born and foreign-trained physicians under close scrutiny: A qualitative study with stakeholders in Germany.

    PubMed

    Klingler, Corinna; Ismail, Fatiha; Marckmann, Georg; Kuehlmeyer, Katja

    2018-01-01

    Hospitals in Germany employ increasing numbers of foreign-born and foreign-trained (FB&FT) physicians. Studies have investigated how FB&FT physicians experience their professional integration into the German healthcare system, however, the perspectives of stakeholders working with and shaping the work experiences of FB&FT physicians in German hospitals have so far been neglected. This study explores relevant stakeholders' opinions and attitudes towards FB&FT physicians-which likely influence how these physicians settle in-and how these opinions were formed. We conducted a qualitative interview study with 25 stakeholders working in hospitals or in health policy development. The interviews were analyzed within a constructivist research paradigm using methods derived from Grounded Theory (situational analysis as well as open, axial and selective coding). We found that stakeholders tended to focus on problems in FB&FT physicians' work performance. Participants criticized FB&FT physicians' work for deviating from presumably shared professional standards (skill or knowledge and behavioral standards). The professional standards invoked to justify problem-focused statements comprised the definition of an ideal behavior, attitude or ability and a tolerance range that was adapted in a dynamic process. Behavior falling outside the tolerance range was criticized as unacceptable, requiring action to prevent similar deviations in the future. Furthermore, we derived three strategies (minimization, homogenization and quality management) proposed by participants to manage deviations from assumed professional standards by FB&FT physicians. We critically reflect on the social processes of evaluation and problematization and question the legitimacy of professional standards invoked. We also discuss discriminatory tendencies visible in evaluative statements of some participants as well as in some of the strategies proposed. We suggest it will be key to develop and implement better support strategies for FB&FT physicians while also addressing problematic attitudes within the receiving system to further professional integration.

  3. Medical professionalism of foreign-born and foreign-trained physicians under close scrutiny: A qualitative study with stakeholders in Germany

    PubMed Central

    Ismail, Fatiha; Marckmann, Georg; Kuehlmeyer, Katja

    2018-01-01

    Hospitals in Germany employ increasing numbers of foreign-born and foreign-trained (FB&FT) physicians. Studies have investigated how FB&FT physicians experience their professional integration into the German healthcare system, however, the perspectives of stakeholders working with and shaping the work experiences of FB&FT physicians in German hospitals have so far been neglected. This study explores relevant stakeholders’ opinions and attitudes towards FB&FT physicians—which likely influence how these physicians settle in—and how these opinions were formed. We conducted a qualitative interview study with 25 stakeholders working in hospitals or in health policy development. The interviews were analyzed within a constructivist research paradigm using methods derived from Grounded Theory (situational analysis as well as open, axial and selective coding). We found that stakeholders tended to focus on problems in FB&FT physicians’ work performance. Participants criticized FB&FT physicians’ work for deviating from presumably shared professional standards (skill or knowledge and behavioral standards). The professional standards invoked to justify problem-focused statements comprised the definition of an ideal behavior, attitude or ability and a tolerance range that was adapted in a dynamic process. Behavior falling outside the tolerance range was criticized as unacceptable, requiring action to prevent similar deviations in the future. Furthermore, we derived three strategies (minimization, homogenization and quality management) proposed by participants to manage deviations from assumed professional standards by FB&FT physicians. We critically reflect on the social processes of evaluation and problematization and question the legitimacy of professional standards invoked. We also discuss discriminatory tendencies visible in evaluative statements of some participants as well as in some of the strategies proposed. We suggest it will be key to develop and implement better support strategies for FB&FT physicians while also addressing problematic attitudes within the receiving system to further professional integration. PMID:29447259

  4. Operationalizing Principle-Based Standards for Animal Welfare-Indicators for Climate Problems in Pig Houses.

    PubMed

    Vermeer, Herman M; Hopster, Hans

    2018-03-23

    The Dutch animal welfare law includes so-called principle-based standards. This means that the objective is described in abstract terms, enabling farmers to comply with the law in their own way. Principle-based standards are, however, difficult for the inspection agency to enforce because strict limits are missing. This pilot project aimed at developing indicators (measurements) to assess the climate in pig houses, thus enabling the enforcement of principle-based standards. In total, 64 farms with weaners and 32 farms with growing-finishing pigs were visited. On each farm, a set of climate-related measurements was collected in six pens. For each of these measurements, a threshold value was set, and exceeding this threshold indicated a welfare risk. Farm inspections were carried out during winter and spring, thus excluding situations with heat stress. Assessment of the variation and correlation between measurements reduced the dataset from 39 to 12 measurements. Using a principal components analysis helped to select five major measurements as warning signals. The number of exceeded thresholds per pen and per farm was calculated for both the large (12) and small (five) sets of measurements. CO₂ and NH₃ concentrations were related to the outside temperature. On colder days, there was less ventilation, and thus CO₂ and NH₃ concentrations increased. Air quality, reflected in the CO₂ and NH₃ concentrations, was associated with respiratory problems. Eye scores were positively correlated with both pig and pen fouling, and pig and pen fouling were closely related. We selected five signal indicators: CO₂, NH₃, and tail and eye score for weaners and finishers, and added ear score for weaners and pig fouling for growing-finishing pigs. The results indicate that pig farms can be ranked based on five signal indicators related to reduced animal welfare caused by climatic conditions. This approach could be adopted to other principle-based standards for pigs as well as for other species.

  5. Measuring systems of hard to get objects: problems with analysis of measurement results

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2005-02-01

    The problem of limited access to the metrological parameters of objects arises in many measurements, especially for biological objects, whose parameters are very often determined on the basis of indirect research. When access to the measured object is very limited, the random component dominates the formation of the measurement results. Every measuring process is subject to conditions that limit how it can be handled (e.g., increasing the number of measurement repetitions to decrease the random limiting error): these may be time or financial constraints or, in the case of biological objects, small sample volume, the influence of the measuring tool and observers on the object, or fatigue effects, e.g., in a patient. Taking these difficulties into consideration, the author worked out and checked the practical application of methods for rejecting outlying observations and, subsequently, innovative methods for eliminating measured data with excess variance, in order to decrease the mean standard deviation of the measured data with a limited amount of data and an accepted level of confidence. The elaborated methods were verified on the basis of measurement results of knee-joint space width obtained from radiographs. The measurements were carried out by an indirect method on the digital images of the radiographs. The results of the examination confirmed the legitimacy of using the elaborated methodology and measurement procedures. Such a methodology is of special importance when standard scientific approaches do not bring the expected results.

  6. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  7. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
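    To illustrate the general principle behind reference-based compression (a toy sketch of the idea only, not the ERGC algorithm, which also handles insertions, deletions, segment placement and entropy coding of the differences), one can store just the positions at which the target sequence differs from a shared reference:

        # Toy reference-based encoding: store only single-base substitutions
        # as (position, base) pairs relative to a reference sequence.
        def encode(reference, target):
            assert len(reference) == len(target)        # simplification: no indels
            return [(i, t) for i, (r, t) in enumerate(zip(reference, target)) if r != t]

        def decode(reference, diffs):
            seq = list(reference)
            for i, base in diffs:
                seq[i] = base
            return "".join(seq)

        ref = "ACGTACGTACGT"
        tgt = "ACGTACCTACGA"
        diffs = encode(ref, tgt)
        print(diffs)                                    # [(6, 'C'), (11, 'A')]
        assert decode(ref, diffs) == tgt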

  8. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    NASA Astrophysics Data System (ADS)

    Smith, David J.

    2018-04-01

    The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly use basic linear algebra operations, with a full implementation being provided on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.

  9. Genetic Algorithms for Multiple-Choice Problems

    NASA Astrophysics Data System (ADS)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
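    As a deliberately simplified illustration of the kind of genetic algorithm discussed (a sketch with made-up data, not the thesis' co-evolutionary or indirect decoder approaches), the Python snippet below evolves assignments for a toy multiple-choice problem and handles infeasibility with a penalty term, i.e. the feasibility-versus-cost balance described above:

        import random

        # Each gene picks one of N_OPTIONS for an item; an option may serve at most
        # CAPACITY items. Fitness = total value minus a penalty for overused options.
        random.seed(1)
        N_ITEMS, N_OPTIONS, CAPACITY = 12, 4, 3
        VALUE = [[random.randint(1, 9) for _ in range(N_OPTIONS)] for _ in range(N_ITEMS)]

        def fitness(chrom):
            value = sum(VALUE[i][g] for i, g in enumerate(chrom))
            overuse = sum(max(0, chrom.count(o) - CAPACITY) for o in range(N_OPTIONS))
            return value - 10 * overuse              # penalise infeasible assignments

        def evolve(pop_size=40, generations=200):
            pop = [[random.randrange(N_OPTIONS) for _ in range(N_ITEMS)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]       # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N_ITEMS)
                    child = a[:cut] + b[cut:]        # one-point crossover
                    if random.random() < 0.2:        # mutation
                        child[random.randrange(N_ITEMS)] = random.randrange(N_OPTIONS)
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print(best, fitness(best))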

  10. Divide et impera: subgoaling reduces the complexity of probabilistic inference and problem solving.

    PubMed

    Maisto, Domenico; Donnarumma, Francesco; Pezzulo, Giovanni

    2015-03-06

    It has long been recognized that humans (and possibly other animals) usually break problems down into smaller and more manageable problems using subgoals. Despite a general consensus that subgoaling helps problem solving, it is still unclear what the mechanisms guiding online subgoal selection are during the solution of novel problems for which predefined solutions are not available. Under which conditions does subgoaling lead to optimal behaviour? When is subgoaling better than solving a problem from start to finish? Which is the best number and sequence of subgoals to solve a given problem? How are these subgoals selected during online inference? Here, we present a computational account of subgoaling in problem solving. Following Occam's razor, we propose that good subgoals are those that permit planning solutions and controlling behaviour using less information resources, thus yielding parsimony in inference and control. We implement this principle using approximate probabilistic inference: subgoals are selected using a sampling method that considers the descriptive complexity of the resulting sub-problems. We validate the proposed method using a standard reinforcement learning benchmark (four-rooms scenario) and show that the proposed method requires less inferential steps and permits selecting more compact control programs compared to an equivalent procedure without subgoaling. Furthermore, we show that the proposed method offers a mechanistic explanation of the neuronal dynamics found in the prefrontal cortex of monkeys that solve planning problems. Our computational framework provides a novel integrative perspective on subgoaling and its adaptive advantages for planning, control and learning, such as for example lowering cognitive effort and working memory load. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  11. Using FDA reports to inform a classification for health information technology safety problems

    PubMed Central

    Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2011-01-01

    Objective To expand an emerging classification for problems with health information technology (HIT) using reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database. Design HIT events submitted to MAUDE were retrieved using a standardized search strategy. Using an emerging classification with 32 categories of HIT problems, a subset of relevant events were iteratively analyzed to identify new categories. Two coders then independently classified the remaining events into one or more categories. Free-text descriptions were analyzed to identify the consequences of events. Measurements Descriptive statistics by number of reported problems per category and by consequence; inter-rater reliability analysis using the κ statistic for the major categories and consequences. Results A search of 899 768 reports from January 2008 to July 2010 yielded 1100 reports about HIT. After removing duplicate and unrelated reports, 678 reports describing 436 events remained. The authors identified four new categories to describe problems with software functionality, system configuration, interface with devices, and network configuration; the authors' classification with 32 categories of HIT problems was expanded by the addition of these four categories. Examination of the 436 events revealed 712 problems, 96% were machine-related, and 4% were problems at the human–computer interface. Almost half (46%) of the events related to hazardous circumstances. Of the 46 events (11%) associated with patient harm, four deaths were linked to HIT problems (0.9% of 436 events). Conclusions Only 0.1% of the MAUDE reports searched were related to HIT. Nevertheless, Food and Drug Administration reports did prove to be a useful new source of information about the nature of software problems and their safety implications with potential to inform strategies for safe design and implementation. PMID:21903979

  12. Space debris mitigation - engineering strategies

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hammond, M.

    The problem of space debris pollution is acknowledged to be of growing concern by space agencies, leading to recent activities in the field of space debris mitigation. A review of the current (and near-future) mitigation guidelines, handbooks, standards and licensing procedures has identified a number of areas where further work is required. In order for space debris mitigation to be implemented in spacecraft manufacture and operation, the authors suggest that debris-related criteria need to become design parameters (following the same process as applied to reliability and radiation). To meet these parameters, spacecraft manufacturers and operators will need processes (supported by design tools and databases and implementation standards). A particular aspect of debris mitigation, as compared with conventional requirements (e.g. radiation and reliability) is the current and near-future national and international regulatory framework and associated liability aspects. A framework for these implementation standards is presented, in addition to results of in-house research and development on design tools and databases (including collision avoidance in GTO and SSTO and evaluation of failure criteria on composite and aluminium structures).

  13. A Compendium on the NIST Radionuclidic Assays of the Massic Activity of 63Ni and 55Fe Solutions Used for an International Intercomparison of Liquid Scintillation Spectrometry Techniques

    PubMed Central

    Collé, R.; Zimmerman, B. E.

    1997-01-01

    The National Institute of Standards and Technology recently participated in an international measurement intercomparison for 63Ni and 55Fe, which was conducted amongst principal national radionuclidic metrology laboratories. The intercomparison was sponsored by EUROMET, and was primarily intended to evaluate the capabilities of liquid scintillation (LS) spectrometry techniques for standardizing nuclides that decay by low-energy β-emission (like 63Ni) and by low-Z (atomic number) electron capture (like 55Fe). The intercomparison findings exhibit a very good agreement for 63Ni amongst the various participating laboratories, including that for NIST, which suggests that the presently invoked LS methodologies are very capable of providing internationally-compatible standardizations for low-energy β-emitters. The results for 55Fe are in considerably poorer agreement, and demonstrated the existence of several unresolved problems. It has thus become apparent that there is a need for the various international laboratories to conduct rigorous, systematic evaluations of their LS capabilities in assaying radionuclides that decay by low-Z electron capture. PMID:27805141

  14. Australians are not equally protected from industrial air pollution

    NASA Astrophysics Data System (ADS)

    Dobbie, B.; Green, D.

    2015-05-01

    Australian air pollution standards are set at national and state levels for a number of chemicals harmful to human health. However, these standards do not need to be met when ad hoc pollution licences are issued by state environment agencies. This situation results in a highly unequal distribution of air pollution between towns and cities, and across the country. This paper examines these pollution regulations through two case studies, specifically considering the ability of the regulatory regime to protect human health from lead and sulphur dioxide pollution in the communities located around smelters. It also considers how the proposed National Clean Air Agreement, once enacted, might serve to reduce this pollution equity problem. Through the case studies we show that there are at least three discrete concerns relating to the current licencing system. They are: non-onerous emission thresholds for polluting industry; temporal averaging thresholds masking emission spikes; and ineffective penalties for breaching licence agreements. In conclusion, we propose that a set of new, legally-binding national minimum standards for industrial air pollutants must be developed and enforced, which can only be modified by more (not less) stringent state licence arrangements.

  15. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
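
    A minimal sketch of the propagation-of-distributions idea described above: correlated Gaussian inputs are sampled and pushed through a measurement model, here reduced to the resistance ratio W = R(t)/R(tpw). All numerical values and the simplified model are placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Placeholder estimates (ohms): [R(t), R(tpw)] for a 25-ohm SPRT, with correlated
# standard uncertainties; the numbers are illustrative only.
mu = np.array([35.721, 25.319])
u = np.array([4e-5, 3e-5])
corr = 0.8
cov = np.array([[u[0]**2, corr * u[0] * u[1]],
                [corr * u[0] * u[1], u[1]**2]])

M = 200_000                                      # number of Monte Carlo trials
samples = rng.multivariate_normal(mu, cov, size=M)
W = samples[:, 0] / samples[:, 1]                # resistance ratio W = R(t)/R(tpw)

w_mean = W.mean()
w_std = W.std(ddof=1)
lo, hi = np.percentile(W, [2.5, 97.5])           # 95 % coverage interval
print(f"W = {w_mean:.7f}  u(W) = {w_std:.2e}  95% interval = [{lo:.7f}, {hi:.7f}]")
```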

  16. An efficient approach to improve the usability of e-learning resources: the role of heuristic evaluation.

    PubMed

    Davids, Mogamat Razeen; Chikte, Usuf M E; Halperin, Mitchell L

    2013-09-01

    Optimizing the usability of e-learning materials is necessary to maximize their potential educational impact, but this is often neglected when time and other resources are limited, leading to the release of materials that cannot deliver the desired learning outcomes. As clinician-teachers in a resource-constrained environment, we investigated whether heuristic evaluation of our multimedia e-learning resource by a panel of experts would be an effective and efficient alternative to testing with end users. We engaged six inspectors, whose expertise included usability, e-learning, instructional design, medical informatics, and the content area of nephrology. They applied a set of commonly used heuristics to identify usability problems, assigning severity scores to each problem. The identification of serious problems was compared with problems previously found by user testing. The panel completed their evaluations within 1 wk and identified a total of 22 distinct usability problems, 11 of which were considered serious. The problems violated the heuristics of visibility of system status, user control and freedom, match with the real world, intuitive visual layout, consistency and conformity to standards, aesthetic and minimalist design, error prevention and tolerance, and help and documentation. Compared with user testing, heuristic evaluation found most, but not all, of the serious problems. Combining heuristic evaluation and user testing, with each involving a small number of participants, may be an effective and efficient way of improving the usability of e-learning materials. Heuristic evaluation should ideally be used first to identify the most obvious problems and, once these are fixed, should be followed by testing with typical end users.

  17. Effect of Transitioning from Standard Reference Material 2806a to Standard Reference Material 2806b for Light Obscuration Particle Counting

    DTIC Science & Technology

    2016-04-01

    Report documentation page (Standard Form 298) residue only: "Effect of Transitioning from Standard Reference Material 2806a to Standard Reference Material 2806b for Light Obscuration Particle Counting," Joel Schmitigal, April 2016, report number 27809, unclassified. No abstract text is recoverable from this record.

  18. Qualitative Differences in Real-Time Solution of Standardized Figural Analogies.

    ERIC Educational Resources Information Center

    Schiano, Diane J.; And Others

    Performance on standardized figural analogy tests is considered highly predictive of academic success. While information-processing models of analogy solution attribute performance differences to quantitative differences in processing parameters, the problem-solving literature suggests that qualitative differences in problem representation and…

  19. Assembling Appliances Standards from a Basket of Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siderious, Hans-Paul; Meier, Alan

    2014-08-11

    Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.

  20. [Ethical aspects of pharmacological cognition enhancement and the use of psychostimulants by children and young persons].

    PubMed

    Walcher-Andris, Elfriede

    2006-03-01

    Pharmacological cognition enhancement aims at an improvement of cognitive activity and performance in healthy people by means of appropriate drugs. Ethical implications of this kind of cognition enhancement stand in need of reflection. For a number of reasons, the distinction between treatment and enhancement is fuzzy with regard to Attention Deficit Hyperactivity Disorder (ADHD). In consideration of the growing number of methylphenidate prescriptions, one question addressed in this article is whether or not psychostimulants are used not only for therapy but also for cognitive enhancement by children and young people. The possibility of a "grey zone" between treatment and enhancement seems to open the field for medicalization of social and pedagogical problems as well as for "hidden enhancement." In clinical practice, the use of stimulants is associated with certain ethical problems concerning diagnosis, treatment and prevention of ADHD. Some of these problems are associated with the possibility of cognition enhancement. In order to evaluate ethical problems of pharmacological cognition enhancement, short-term and long-term consequences of stimulant use need to be taken into account. This refers to the level of transmitter balance in the learning process, to the level of individual learning strategies as well as to the level of interaction. This raises the question (1) of how well adapted the means of enhancement are with regard to the end of a comprehensive education and socialization, and (2) whether there are justifiable limits to the standardization of behavior and knowledge. (3) Moreover, stipulating an autonomous decision as a minimum prerequisite for legitimate cognition enhancement seems inadequate in the case of children and young persons. Considering the evidence and the many open questions associated with pharmacological cognition enhancement for children and young persons, it is concluded that it is indeed a morally problematic technique.

  1. Number of Psychosocial Strengths Predicts Reduced HIV Sexual Risk Behaviors Above and Beyond Syndemic Problems Among Gay and Bisexual Men.

    PubMed

    Hart, Trevor A; Noor, Syed W; Adam, Barry D; Vernon, Julia R G; Brennan, David J; Gardner, Sandra; Husbands, Winston; Myers, Ted

    2017-10-01

    Syndemics research shows the additive effect of psychosocial problems on high-risk sexual behavior among gay and bisexual men (GBM). Psychosocial strengths may predict less engagement in high-risk sexual behavior. In a study of 470 ethnically diverse HIV-negative GBM, regression models were computed using number of syndemic psychosocial problems, number of psychosocial strengths, and serodiscordant condomless anal sex (CAS). The number of syndemic psychosocial problems correlated with serodiscordant CAS (RR = 1.51, 95% CI 1.18-1.92; p = 0.001). When adding the number of psychosocial strengths to the model, the effect of syndemic psychosocial problems became non-significant, but the number of strengths-based factors remained significant (RR = 0.67, 95% CI 0.53-0.86; p = 0.002). Psychosocial strengths may operate additively in the same way as syndemic psychosocial problems, but in the opposite direction. Consistent with theories of resilience, psychosocial strengths may be an important set of variables predicting sexual risk behavior that is largely missing from the current HIV behavioral literature.

  2. 77 FR 9239 - California State Motor Vehicle and Nonroad Engine Pollution Control Standards; Truck Idling...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... Pollution Control Standards; Truck Idling Requirements; Notice of Decision AGENCY: Environmental Protection... to meet its serious air pollution problems. Likewise, EPA has consistently recognized that California... and high concentrations of automobiles, create serious pollution problems.'' \\37\\ Furthermore, no...

  3. Learning to Write about Mathematics

    ERIC Educational Resources Information Center

    Parker, Renee; Breyfogle, M. Lynn

    2011-01-01

    Beginning in third grade, Pennsylvania students are required to take the Pennsylvania State Standardized Assessment (PSSA), which presents multiple-choice mathematics questions and open-ended mathematics problems. Consistent with the Communication Standard of the National Council of Teachers of Mathematics, while solving the open-ended problems,…

  4. A Successful Senior Seminar: Unsolved Problems in Number Theory

    ERIC Educational Resources Information Center

    Styer, Robert

    2014-01-01

    The "Unsolved Problems in Number Theory" book by Richard Guy provides nice problems suitable for a typical math major. We give examples of problems that have worked well in our senior seminar course and some nice results that senior math majors can obtain.

  5. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted pareto fronts and a degradation in efficiency for problems with convoluted pareto fronts. The most difficult problems --multi-mode search spaces with a large number of genes and convoluted pareto fronts-- require a large number of function evaluations for GA convergence, but always converge.
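
    A small helper illustrating the Pareto-dominance bookkeeping that underlies any multi-objective genetic algorithm of the kind evaluated above; the paper's binning selection algorithm and gene-space transformation are not reproduced here.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objectives: List[Sequence[float]]) -> List[int]:
    """Indices of the non-dominated members of a population."""
    front = []
    for i, a in enumerate(objectives):
        if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i):
            front.append(i)
    return front

# Example: three candidate designs evaluated on two objectives to minimize.
print(pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]))  # -> [0, 1]
```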

  6. Domain General Mediators of the Relation between Kindergarten Number Sense and First-Grade Mathematics Achievement

    PubMed Central

    Hassinger-Das, Brenna; Jordan, Nancy C.; Glutting, Joseph; Irwin, Casey; Dyson, Nancy

    2013-01-01

    Domain general skills that mediate the relation between kindergarten number sense and first-grade mathematics skills were investigated. Participants were 107 children who displayed low number sense in the fall of kindergarten. Controlling for background variables, multiple regression analyses showed that attention problems and executive functioning both were unique predictors of mathematics outcomes. Attention problems were more important for predicting first-grade calculation performance while executive functioning was more important for predicting first-grade performance on applied problems. Moreover, both executive functioning and attention problems were unique partial mediators of the relationship between kindergarten and first-grade mathematics skills. The results provide empirical support for developing interventions that target executive functioning and attention problems in addition to instruction in number skills for kindergartners with initial low number sense. PMID:24237789

  7. Domain-general mediators of the relation between kindergarten number sense and first-grade mathematics achievement.

    PubMed

    Hassinger-Das, Brenna; Jordan, Nancy C; Glutting, Joseph; Irwin, Casey; Dyson, Nancy

    2014-02-01

    Domain-general skills that mediate the relation between kindergarten number sense and first-grade mathematics skills were investigated. Participants were 107 children who displayed low number sense in the fall of kindergarten. Controlling for background variables, multiple regression analyses showed that both attention problems and executive functioning were unique predictors of mathematics outcomes. Attention problems were more important for predicting first-grade calculation performance, whereas executive functioning was more important for predicting first-grade performance on applied problems. Moreover, both executive functioning and attention problems were unique partial mediators of the relationship between kindergarten and first-grade mathematics skills. The results provide empirical support for developing interventions that target executive functioning and attention problems in addition to instruction in number skills for kindergartners with initial low number sense. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Sentinel Lymph Node Biopsy in Uterine Cervical Cancer Patients: Ready for Clinical Use? A Review of the Literature

    PubMed Central

    Palla, Viktoria-Varvara; Karaolanis, Georgios; Moris, Demetrios; Antsaklis, Aristides

    2014-01-01

    Sentinel lymph node biopsy has been widely studied in a number of cancer types. As far as cervical cancer is concerned, this technique has already been used, revealing both positive results and several issues to be solved. The debate on the role of sentinel lymph node biopsy in cervical cancer is still open although most of the studies have already revealed its superiority over complete lymphadenectomy and the best handling possible of the emerging practical problems. Further research should be made in order to standardize this method and include it in the clinical routine. PMID:24527233

  9. Diagnosis of Electric Submersible Centrifugal Pump

    NASA Astrophysics Data System (ADS)

    Kovalchuk, M. S.; Poddubniy, D. A.

    2018-01-01

    The paper deals with the development of a system for operational diagnostics of electric submersible pumps (ESP). At the initial stage of the study, current methods of ESP diagnosis were reviewed and the existing problems of their diagnosis were examined. As a result, a number of main standard ESP faults were identified: mechanical faults such as wear of the bearings, of the protective sleeves of the shaft and of the hubs of the guide vanes, as well as misalignment and imbalance of the shafts, which cause breakdown of the bottom or top bases of the stator. All this leads to electromagnetic faults: rotor eccentricity, weakening of the pressing of the steel packs, wire breakage or a short circuit in the stator winding, etc., leading to changes in the consumed current.

  10. Solar cell circuit and method for manufacturing solar cells

    NASA Technical Reports Server (NTRS)

    Mardesich, Nick (Inventor)

    2010-01-01

    The invention is a novel manufacturing method for making multi-junction solar cell circuits that addresses current problems associated with such circuits by allowing the formation of integral diodes in the cells and allows for a large number of circuits to readily be placed on a single silicon wafer substrate. The standard Ge wafer used as the base for multi-junction solar cells is replaced with a thinner layer of Ge or a III-V semiconductor material on a silicon/silicon dioxide substrate. This allows high-voltage cells with multiple multi-junction circuits to be manufactured on a single wafer, resulting in less array assembly mass and simplified power management.

  11. First results from KamLAND: evidence for reactor antineutrino disappearance.

    PubMed

    Eguchi, K; Enomoto, S; Furuno, K; Goldman, J; Hanada, H; Ikeda, H; Ikeda, K; Inoue, K; Ishihara, K; Itoh, W; Iwamoto, T; Kawaguchi, T; Kawashima, T; Kinoshita, H; Kishimoto, Y; Koga, M; Koseki, Y; Maeda, T; Mitsui, T; Motoki, M; Nakajima, K; Nakajima, M; Nakajima, T; Ogawa, H; Owada, K; Sakabe, T; Shimizu, I; Shirai, J; Suekane, F; Suzuki, A; Tada, K; Tajima, O; Takayama, T; Tamae, K; Watanabe, H; Busenitz, J; Djurcic, Z; McKinny, K; Mei, D-M; Piepke, A; Yakushev, E; Berger, B E; Chan, Y D; Decowski, M P; Dwyer, D A; Freedman, S J; Fu, Y; Fujikawa, B K; Heeger, K M; Lesko, K T; Luk, K-B; Murayama, H; Nygren, D R; Okada, C E; Poon, A W P; Steiner, H M; Winslow, L A; Horton-Smith, G A; McKeown, R D; Ritter, J; Tipton, B; Vogel, P; Lane, C E; Miletic, T; Gorham, P W; Guillian, G; Learned, J G; Maricic, J; Matsuno, S; Pakvasa, S; Dazeley, S; Hatakeyama, S; Murakami, M; Svoboda, R C; Dieterle, B D; DiMauro, M; Detwiler, J; Gratta, G; Ishii, K; Tolich, N; Uchida, Y; Batygov, M; Bugg, W; Cohn, H; Efremenko, Y; Kamyshkov, Y; Kozlov, A; Nakamura, Y; De Braeckeleer, L; Gould, C R; Karwowski, H J; Markoff, D M; Messimore, J A; Nakamura, K; Rohm, R M; Tornow, W; Young, A R; Wang, Y-F

    2003-01-17

    KamLAND has measured the flux of ν̄_e's from distant nuclear reactors. We find fewer ν̄_e events than expected from standard assumptions about ν̄_e propagation at the 99.95% C.L. In a 162 ton·yr exposure the ratio of the observed inverse beta-decay events to the expected number without ν̄_e disappearance is 0.611 ± 0.085(stat) ± 0.041(syst) for ν̄_e energies >3.4 MeV. In the context of two-flavor neutrino oscillations with CPT invariance, all solutions to the solar neutrino problem except for the "large mixing angle" region are excluded.

  12. Separation Potential for Multicomponent Mixtures: State-of-the Art of the Problem

    NASA Astrophysics Data System (ADS)

    Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.

    2017-03-01

    Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that all known potentials do not satisfy the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal search method for optimal parameters of the separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.

  13. Los Alamos Science, Number 25 -- 1997: Celebrating the Neutrino

    DOE R&D Accomplishments Database

    Cooper, N. G. ed.

    1997-01-01

    This issue is devoted to the neutrino and its remaining mysteries. It is divided into the following areas: (1) The Reines-Cowan experiment -- detecting the poltergeist; (2) The oscillating neutrino -- an introduction to neutrino masses and mixing; (3) A brief history of neutrino experiments at LAMPF; (4) A thousand eyes -- the story of LSND (Los Alamos neutrino oscillation experiment); (5) The evidence for oscillations; (6) The nature of neutrinos in muon decay and physics beyond the Standard Model; (7) Exorcising ghosts -- in pursuit of the missing solar neutrinos; (8) MSW -- a possible solution to the solar neutrino problem; (9) Neutrinos and supernovae; and (10) Dark matter and massive neutrinos.

  14. On metric structure of ultrametric spaces

    NASA Astrophysics Data System (ADS)

    Nechaev, S. K.; Vasilyev, O. A.

    2004-03-01

    In our work we have reconsidered the old problem of diffusion at the boundary of an ultrametric tree from a 'number theoretic' point of view. Namely, we use the modular functions (in particular, the Dedekind η-function) to construct the 'continuous' analogue of the Cayley tree isometrically embedded in the Poincaré upper half-plane. Later we work with this continuous Cayley tree as with a standard function of a complex variable. In the framework of our approach, the results of Ogielsky and Stein on dynamics in ultrametric spaces are reproduced semi-analytically or semi-numerically. The speculation on the new 'geometrical' interpretation of the replica n → 0 limit is proposed.

  15. Formation of power management strategy at the industrial enterprises

    NASA Astrophysics Data System (ADS)

    Akimova, Elena

    2017-10-01

    The article is dedicated to energy efficiency problems. The main recommendations offered in the research for developing a system of strategic power management at an industrial enterprise include a number of principles aimed at increasing the role of human resources in the information-analytical and innovative functions of power management. Based on an analysis of the current situation, the author suggests using specific human resource indicators, as they can contribute to the formation of energy efficiency. A system of standardization is considered to be the basis for implementing strategic power management at enterprises.

  16. Topological numbering of features on a mesh

    NASA Technical Reports Server (NTRS)

    Atallah, Mikhail J.; Hambrusch, Susanne E.; Tewinkel, Lynn E.

    1988-01-01

    Assume an n×n binary image is given containing horizontally convex features; i.e., for each feature, its pixels on each row form an interval of that row. The problem of assigning topological numbers to such features is considered; i.e., assign a number to every feature f so that all features to the left of f have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. A parallel algorithm to solve the topological numbering problem in O(n) time on an n×n mesh of processors is presented. The key idea of the solution is to create a tree from which the topological numbers can be obtained even though the tree does not uniquely represent the 'to the left of' relationship of the features.
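
    A purely sequential sketch of the numbering task itself (the paper's contribution is the O(n) mesh-parallel algorithm, which is not reproduced here): features are given as row intervals, a "left of" relation is built, and Kahn's topological sort assigns the numbers. The input format and the assumption that the relation is acyclic are conventions of this example.

```python
from collections import defaultdict, deque

def topological_numbers(features):
    """features: dict name -> {row: (col_start, col_end)} of row intervals.
    Returns a numbering in which every feature left of f gets a smaller number."""
    left_of = defaultdict(set)                 # edge u -> v means u is left of v
    names = list(features)
    for u in names:
        for v in names:
            if u == v:
                continue
            for row, (us, ue) in features[u].items():
                if row in features[v]:
                    vs, ve = features[v][row]
                    if ue < vs:                # u's interval ends before v's begins
                        left_of[u].add(v)
                        break
    indeg = {v: 0 for v in names}
    for u in left_of:
        for v in left_of[u]:
            indeg[v] += 1
    queue = deque(v for v in names if indeg[v] == 0)
    number, order = 1, {}
    while queue:                               # Kahn's topological sort
        u = queue.popleft()
        order[u] = number
        number += 1
        for v in left_of[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

# Two features sharing row 1: A occupies columns 0-1, B occupies columns 3-4.
print(topological_numbers({"A": {1: (0, 1)}, "B": {1: (3, 4)}}))  # {'A': 1, 'B': 2}
```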

  17. Dark energy from the string axiverse.

    PubMed

    Kamionkowski, Marc; Pradler, Josef; Walker, Devin G E

    2014-12-19

    String theories suggest the existence of a plethora of axionlike fields with masses spread over a huge number of decades. Here, we show that these ideas lend themselves to a model of quintessence with no super-Planckian field excursions and in which all dimensionless numbers are order unity. The scenario addresses the "Why now?" problem (i.e., why has accelerated expansion begun only recently?) by suggesting that the onset of dark-energy domination occurs randomly with a slowly decreasing probability per unit logarithmic interval in cosmic time. The standard axion potential requires us to postulate a rapid decay of most of the axion fields that do not become dark energy. The need for these decays is averted, though, with the introduction of a slightly modified axion potential. In either case, a universe like ours arises in roughly 1 in 100 universes. The scenario may have a host of observable consequences.

  18. Avionics Tether Operations Control

    NASA Technical Reports Server (NTRS)

    Glaese, John R.

    2001-01-01

    The activities described in this Final Report were authorized and performed under Purchase Order Number H32835D, issued as part of NASA contract number NAS8-00114. The period of performance of this PO was from March 1 to September 30, 2001. The primary work activity was the continued development and updating of the tether dynamic simulation tools GTOSS (Generalized Tethered Object System Simulation) and TSSIM (Tethered Satellite System) and use of these and other tools in the analysis of various tether dynamics problems. Several updated versions of GTOSS were delivered during the period of performance by the author of the simulation, Lang Associates' David Lang. These updates had mainly to do with updated documentation and an updated coordinate system definition to the J2000 standards. This Final Report is organized by the months in which the activities described were performed. The following sections review the Statement of Work (SOW) and activities performed to satisfy it.

  19. Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Luo, Li-Shi

    2007-01-01

    In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
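
    A small numerical check of the statement that the number of linearly independent velocity moments cannot exceed the number of discrete velocities, using the standard D2Q9 set and monomial moments; the paper's group-theoretic decomposition is not reproduced here.

```python
import numpy as np
from itertools import product

# Standard D2Q9 discrete velocity set.
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]

def moment_rank(max_order):
    """Rank of the matrix whose rows are the monomial moments c_x^m * c_y^n
    (m + n <= max_order) evaluated on the D2Q9 velocities."""
    rows = []
    for m, n in product(range(max_order + 1), repeat=2):
        if m + n <= max_order:
            rows.append([cx**m * cy**n for cx, cy in velocities])
    return len(rows), np.linalg.matrix_rank(np.array(rows, dtype=float))

for order in range(1, 6):
    n_moments, rank = moment_rank(order)
    print(f"order <= {order}: {n_moments} monomial moments, rank {rank}")
# The rank saturates at 9, the number of discrete velocities, so higher-order
# moments become linear combinations of lower-order ones.
```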

  20. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious feature and allows again to reduce substantially the computational cost in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  1. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives

    PubMed Central

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-01-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). PMID:27097747
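
    For concreteness, a toy sketch of the change-in-estimate (CIE) screening rule discussed above, run on synthetic data: a covariate is retained if dropping it changes the exposure coefficient by more than a conventional 10%. This illustrates the traditional strategy being reviewed, not the authors' recommended refinements; the data-generating model and cut-off are assumptions for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Synthetic data: exposure x, two potential confounders z1 and z2, outcome y.
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
x = 0.5 * z1 + rng.normal(size=n)              # z1 is a true confounder
y = 1.0 * x + 0.8 * z1 + rng.normal(size=n)    # z2 is irrelevant to y

def exposure_coef(covariates):
    """OLS coefficient of the exposure, adjusting for the given covariates."""
    X = sm.add_constant(np.column_stack([x] + covariates))
    return sm.OLS(y, X).fit().params[1]        # column 1 is the exposure

full = exposure_coef([z1, z2])
for name, remaining in [("z1", [z2]), ("z2", [z1])]:
    reduced = exposure_coef(remaining)
    change = abs(reduced - full) / abs(full)
    keep = change > 0.10                       # conventional 10% CIE cut-off
    print(f"dropping {name}: change in exposure estimate = {change:.1%}, keep = {keep}")
```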

  2. Advancing Neurologic Care in the Neonatal Intensive Care Unit with a Neonatal Neurologist

    PubMed Central

    Mulkey, Sarah B.; Swearingen, Christopher J.

    2014-01-01

    Neonatal neurology is a growing sub-specialty area. Given the considerable amount of neurologic problems present in the neonatal intensive care unit, a neurologist with expertise in neonates is becoming more important. We sought to evaluate the change in neurologic care in the neonatal intensive care unit at our tertiary care hospital by having a dedicated neonatal neurologist. The period post-neonatal neurologist showed a greater number of neurology consultations (P<0.001), number of neurology encounters per patient (P<0.001), a wider variety of diagnoses seen, and an increase in the use of video-electroencephalography (P=0.022), compared to the pre-neonatal neurologist period. The neonatologists expressed appreciation for having a dedicated neurologist available. Standardized protocols for treating hypoxic-ischemic encephalopathy and neonatal seizures were also developed. Overall, by having a neonatal neurologist, neurology became part of the multi-disciplinary team providing focused neurologic care to newborns. PMID:23271754

  3. Detection of Cheating by Decimation Algorithm

    NASA Astrophysics Data System (ADS)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We expand the item response theory to study the case of "cheating students" for a set of exams, trying to detect them by applying a greedy algorithm of inference. This extended model is closely related to the Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance with only a small number of training data sets. The key point is the sparseness of the interactions in our problem in the context of the Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach to infer the sparse interactions in the Boltzmann machine learning to our greedy algorithm and we find the latter to be superior in several aspects.

  4. Dynamical Electroweak Symmetry Breaking with a Heavy Fermion in Light of Recent LHC Results

    DOE PAGES

    Hung, Pham Q.

    2013-01-01

    The recent announcement of a discovery of a possible Higgs-like particle—its spin and parity are yet to be determined—at the LHC with a mass of 126 GeV necessitates a fresh look at the nature of the electroweak symmetry breaking, in particular if this newly-discovered particle will turn out to have the quantum numbers of a Standard Model Higgs boson. Even if it were a 0+ scalar with the properties expected for a SM Higgs boson, there is still the quintessential hierarchy problem that one has to deal with and which, by itself, suggests a new physics energy scale around 1 TeV. This paper presents a minireview of one possible scenario: the formation of a fermion-antifermion condensate coming from a very heavy fourth generation, carrying the quantum number of the SM Higgs field, and thus breaking the electroweak symmetry.

  5. Research on output feedback control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Kramer, F. S.

    1985-01-01

    In designing fixed order compensators, an output feedback formulation has been adopted by suitably augmenting the system description to include the compensator states. However, the minimization of the performance index over the range of possible compensator descriptions was impeded due to the nonuniqueness of the compensator transfer function. A controller canonical form of the compensator was chosen to reduce the number of free parameters to its minimal number in the optimization. In the MIMO case, the controller form requires a prespecified set of ascending controllability indices. This constraint on the compensator structure is rather innocuous in relation to the increase in convergence rate of the optimization. Moreover, the controller form is easily relatable to a unique controller transfer function description. This structure of the compensator does not require penalizing the compensator states for a nonzero or coupled solution, a problem that occurs when following a standard output feedback synthesis formulation.

  6. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes

    PubMed Central

    March, Christopher A.; Scholl, Gretchen; Dversdal, Renee K.; Richards, Matthew; Wilson, Leah M.; Mohan, Vishnu; Gold, Jeffrey A.

    2016-01-01

    Background With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content. PMID:27168894

  7. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes.

    PubMed

    March, Christopher A; Scholl, Gretchen; Dversdal, Renee K; Richards, Matthew; Wilson, Leah M; Mohan, Vishnu; Gold, Jeffrey A

    2016-05-01

    Background With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content.

  8. Area under the curve as a novel metric of behavioral economic demand for alcohol.

    PubMed

    Amlung, Michael; Yurasek, Ali; McCarty, Kayleigh N; MacKillop, James; Murphy, James G

    2015-06-01

    Behavioral economic purchase tasks can be readily used to assess demand for a number of addictive substances, including alcohol, tobacco, and illicit drugs. However, several methodological limitations associated with the techniques used to quantify demand may reduce the utility of demand measures. In the present study, we sought to introduce area under the curve (AUC), commonly used to quantify degree of delay discounting, as a novel index of demand. A sample of 207 heavy-drinking college students completed a standard alcohol purchase task and provided information about typical weekly drinking patterns and alcohol-related problems. Level of alcohol demand was quantified using AUC--which reflects the entire amount of consumption across all drink prices--as well as the standard demand indices (e.g., intensity, breakpoint, Omax, Pmax, and elasticity). Results indicated that AUC was significantly correlated with each of the other demand indices (rs = .42-.92), with particularly strong associations with Omax (r = .92). In regression models, AUC and intensity were significant predictors of weekly drinking quantity, and AUC uniquely predicted alcohol-related problems, even after controlling for drinking level. In a parallel set of analyses, Omax also predicted drinking quantity and alcohol problems, although Omax was not a unique predictor of the latter. These results offer initial support for using AUC as an index of alcohol demand. Additional research is necessary to further validate this approach and to examine its utility in quantifying demand for other addictive substances such as tobacco and illicit drugs.
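
    A minimal sketch of how an AUC index can be computed from purchase-task data with the trapezoidal rule, with both axes normalized so the value lies in [0, 1]; the data and the exact normalization are assumptions for illustration and may differ from the scoring used in the study.

```python
import numpy as np

# Hypothetical alcohol purchase task data: price per drink and reported consumption.
prices = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0, 15.0])
drinks = np.array([10.0, 10.0, 9.0, 8.0, 7.0, 6.0, 4.0, 2.0, 1.0, 0.0])

# Normalize both axes so the area is bounded by [0, 1], as with discounting AUC.
x = prices / prices.max()
y = drinks / drinks.max()

# Trapezoidal rule over the normalized demand curve.
auc = float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))
print(f"demand AUC = {auc:.3f}")
```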

  9. Self-reported problems and wishes for plastic surgery after bariatric surgery.

    PubMed

    Wagenblast, Anne Lene; Laessoe, Line; Printzlau, Andreas

    2014-04-01

    In the affluent part of the world, there is an increasing occurrence of obesity with Body Mass Index (BMI) above 40, which has resulted in an increasing number of operations such as gastric bypass (GB). After massive weight loss there will often be a need for subsequent plastic surgical correction, since some of the patients will experience problems due to excess skin. Foreign studies estimate that ∼30% of all bariatric surgery patients will at some point seek plastic surgical correction of excess skin. The aim of this study is to investigate to what extent the GB patients themselves consider plastic surgery for removal of excess skin, and their reasons and motivations for this. The investigation was performed as an anonymous questionnaire handed out to 150 patients at the 1-year standard consultation for GB patients at a private hospital. The questionnaire contained information about demographic data, patient habits, earlier or present comorbidity, physical problems, psychological problems, and cosmetic problems due to excess skin. Also, it contained information about what anatomical area bothered the patient the most. One hundred and thirty-eight patients responded to the questionnaire, and the investigation showed that 89.9% of the patients had a wish for plastic surgery for several different reasons. This patient demand showed to have no correlation to age, gender, smoking habits, or earlier comorbidity.

  10. A comparison of university student and community gamblers: Motivations, impulsivity, and gambling cognitions

    PubMed Central

    Marmurek, Harvey H. C.; Switzer, Jessica; D’Alvise, Joshua

    2014-01-01

    Background and aims: The present study tested whether the associations among motivational, cognitive, and personality correlates of problem gambling severity differed across university student gamblers (n = 123) and gamblers in the general adult community (n = 113). Methods: The participants completed a survey that included standardized measures of gambling motivation, gambling related cognitions, and impulsivity. The survey also asked participants to report the forms of gambling in which they engaged to test whether gambling involvement (number of different forms of gambling) was related to problem gambling severity. After completing the survey, participants played roulette online to examine whether betting patterns adhered to the gambler’s fallacy. Results: Gambling involvement was significantly related to problem gambling severity for the community sample but not for the student sample. A logistic regression analysis that tested the involvement, motivation, impulsivity and cognitive correlates showed that money motivation and gambling related cognitions were the only significant independent predictors of gambling severity. Adherence to the gambler’s fallacy was stronger for students than for the community sample, and was associated with gambling related cognitions. Discussion: The motivational, impulsivity, and cognitive correlates of problem gambling function similarly in university student gamblers and in gamblers from the general adult community. Interventions for both groups should focus on the financial and cognitive supports of problem gambling. PMID:25215214

  11. Constraints in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.

    1996-01-01

    Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.

  12. Virtual shelves in a digital library: a framework for access to networked information sources.

    PubMed Central

    Patrick, T B; Springer, G K; Mitchell, J A; Sievert, M E

    1995-01-01

    OBJECTIVE: Develop a framework for collections-based access to networked information sources that addresses the problem of location-dependent access to information sources. DESIGN: This framework uses a metaphor of a virtual shelf. A virtual shelf is a general-purpose server that is dedicated to a particular information subject class. The identifier of one of these servers identifies its subject class. Location-independent call numbers are assigned to information sources. Call numbers are based on standard vocabulary codes. The call numbers are first mapped to the location-independent identifiers of virtual shelves. When access to an information resource is required, a location directory provides a second mapping of these location-independent server identifiers to actual network locations. RESULTS: The framework has been implemented in two different systems. One system is based on the Open System Foundation/Distributed Computing Environment and the other is based on the World Wide Web. CONCLUSIONS: This framework applies in new ways traditional methods of library classification and cataloging. It is compatible with two traditional styles of selecting information searching and browsing. Traditional methods may be combined with new paradigms of information searching that will be able to take advantage of the special properties of digital information. Cooperation between the library-informational science community and the informatics community can provide a means for a continuing application of the knowledge and techniques of library science to the new problems of networked information sources. PMID:8581554
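
    A toy sketch of the two-level lookup described above: a location-independent call number maps to a subject-class ("virtual shelf") server identifier, and a separate location directory maps that identifier to its current network address. Every identifier and URL below is an invented placeholder.

```python
call_number_to_shelf = {
    "WI-700-001": "shelf:gastroenterology",
    "WG-210-042": "shelf:cardiology",
}

location_directory = {
    "shelf:gastroenterology": "https://shelf-gi.example.org",
    "shelf:cardiology": "https://shelf-cardio.example.org",
}

def resolve(call_number: str) -> str:
    shelf = call_number_to_shelf[call_number]      # first mapping (stable)
    return location_directory[shelf]               # second mapping (may change)

print(resolve("WI-700-001"))
# If the gastroenterology shelf moves hosts, only location_directory changes;
# call numbers assigned to information sources remain valid.
```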

  13. Extension of modified power method to two-dimensional problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Peng (Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan 44919); Lee, Hyunsuk

    2016-09-01

    In this study, the generalized modified power method was extended to two-dimensional problems. A direct application of the method to two-dimensional problems was shown to be unstable when the number of requested eigenmodes is larger than a certain problem-dependent number. The root cause of this instability has been identified as the degeneracy of the transfer matrix. In order to resolve this instability, the number of sub-regions for the transfer matrix was increased to be larger than the number of requested eigenmodes, and a new transfer matrix was introduced accordingly which can be calculated by the least squares method. The stability of the new method has been successfully demonstrated with a neutron diffusion eigenvalue problem and the 2D C5G7 benchmark problem.
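
    For background, a sketch of the textbook power method that the modified method generalizes; the sub-region transfer matrix and its least-squares construction from the paper are not reproduced here.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10_000, seed=0):
    """Standard power iteration: dominant eigenvalue and eigenvector of matrix A."""
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])
    x /= np.linalg.norm(x)
    eigval = 0.0
    for _ in range(max_iter):
        y = A @ x
        new_eigval = x @ y                  # Rayleigh quotient estimate (x has unit norm)
        x = y / np.linalg.norm(y)
        if abs(new_eigval - eigval) < tol:
            break
        eigval = new_eigval
    return eigval, x

# Small symmetric test matrix; compare against numpy's dense eigensolver.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, vec = power_method(A)
print(lam, np.linalg.eigvalsh(A).max())
```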

  14. [Development of a software standardizing optical density with operation settings related to several limitations].

    PubMed

    Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei

    2012-12-01

    To develop software for standardizing optical density that normalizes the procedures and results of standardization, in order to effectively solve several problems encountered during standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during standardization. A Matlab GUI was used as the development tool. The software was tested with results from the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (Windows XP/Win 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational effects of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including reliability of the standard curve, the applicable scope of samples, and determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on the I-STOD method. It is easy to operate and can effectively standardize the test results of indirect ELISA.

  15. Adaptive macro finite elements for the numerical solution of monodomain equations in cardiac electrophysiology.

    PubMed

    Heidenreich, Elvio A; Ferrero, José M; Doblaré, Manuel; Rodríguez, José F

    2010-07-01

    Many problems in biology and engineering are governed by anisotropic reaction-diffusion equations with a very rapidly varying reaction term. This usually implies the use of very fine meshes and small time steps in order to accurately capture the propagating wave while avoiding the appearance of spurious oscillations in the wave front. This work develops a family of macro finite elements amenable for solving anisotropic reaction-diffusion equations with stiff reactive terms. The developed elements are incorporated on a semi-implicit algorithm based on operator splitting that includes adaptive time stepping for handling the stiff reactive term. A linear system is solved on each time step to update the transmembrane potential, whereas the remaining ordinary differential equations are solved uncoupled. The method allows solving the linear system on a coarser mesh thanks to the static condensation of the internal degrees of freedom (DOF) of the macroelements while maintaining the accuracy of the finer mesh. The method and algorithm have been implemented in parallel. The accuracy of the method has been tested on two- and three-dimensional examples demonstrating excellent behavior when compared to standard linear elements. The better performance and scalability of different macro finite elements against standard finite elements have been demonstrated in the simulation of a human heart and a heterogeneous two-dimensional problem with reentrant activity. Results have shown a reduction of up to four times in computational cost for the macro finite elements with respect to equivalent (same number of DOF) standard linear finite elements as well as good scalability properties.
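
    A minimal 1D sketch of the operator-splitting idea described above: an implicit (backward Euler) diffusion solve per time step and an explicitly sub-stepped reaction update for the stiff source term. The cubic reaction term and all parameters are placeholders for a real ionic model; the paper's macro finite elements and adaptive time stepping are not reproduced here.

```python
import numpy as np

N, L = 200, 1.0
dx = L / (N - 1)
D, dt, steps = 1e-3, 0.05, 100
reaction_substeps = 10

def reaction(u):
    # Bistable cubic source term, a stand-in for a stiff ionic model.
    return 50.0 * u * (1.0 - u) * (u - 0.1)

# Backward-Euler diffusion matrix (I - dt*D*Laplacian) with zero-flux boundaries.
r = D * dt / dx**2
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 1.0 + 2.0 * r
    if i > 0:
        A[i, i - 1] = -r
    if i < N - 1:
        A[i, i + 1] = -r
A[0, 1] = A[-1, -2] = -2.0 * r          # mirror nodes enforce zero flux at the ends

u = np.zeros(N)
u[:20] = 1.0                             # initial stimulus at the left end

for _ in range(steps):
    u = np.linalg.solve(A, u)            # implicit diffusion half of the split step
    for _ in range(reaction_substeps):   # explicit sub-steps handle the stiff reaction
        u += (dt / reaction_substeps) * reaction(u)

print(f"depolarization front near x = {np.argmax(u < 0.5) * dx:.2f}")
```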

  16. “Measure Your Gradient”: A New Way to Measure Gradients in High Performance Liquid Chromatography by Mass Spectrometric or Absorbance Detection

    PubMed Central

    Magee, Megan H.; Manulik, Joseph C.; Barnes, Brian B.; Abate-Pella, Daniel; Hewitt, Joshua T.; Boswell, Paul G.

    2014-01-01

    The gradient produced by an HPLC is never the same as the one it is programmed to produce, but non-idealities in the gradient can be taken into account if they are measured. Such measurements are routine, yet only one general approach has been described to make them: both HPLC solvents are replaced with water, solvent B is spiked with 0.1% acetone, and the gradient is measured by UV absorbance. Despite the widespread use of this procedure, we found a number of problems and complications with it, mostly stemming from the fact that it measures the gradient under abnormal conditions (e.g. both solvents are water). It is also generally not amenable to MS detection, leaving those with only an MS detector no way to accurately measure their gradients. We describe a new approach called “Measure Your Gradient” that potentially solves these problems. One runs a test mixture containing 20 standards on a standard stationary phase and enters their gradient retention times into open-source software available at www.measureyourgradient.org. The software uses the retention times to back-calculate the gradient that was truly produced by the HPLC. Here we present a preliminary investigation of the new approach. We found that gradients measured this way are comparable to those measured by a more accurate, albeit impractical, version of the conventional approach. The new procedure worked with different gradients, flow rates, column lengths, inner diameters, on two different HPLCs, and with six different batches of the standard stationary phase. PMID:25441073

  17. The classical dynamic symmetry for the U(1) -Kepler problems

    NASA Astrophysics Data System (ADS)

    Bouarroudj, Sofiane; Meng, Guowu

    2018-01-01

    For the Jordan algebra of hermitian matrices of order n ≥ 2, we let X be its submanifold consisting of rank-one semi-positive definite elements. The composition of the cotangent bundle map π_X: T*X → X with the canonical map X → CP^{n-1} (i.e., the map that sends a given hermitian matrix to its column space) pulls back the Kähler form of the Fubini-Study metric on CP^{n-1} to a real closed differential two-form ω_K on T*X. Let ω_X be the canonical symplectic form on T*X and μ a real number. A standard fact says that ω_μ := ω_X + 2μ ω_K turns T*X into a symplectic manifold, hence a Poisson manifold with Poisson bracket {·,·}_μ. In this article we exhibit a Poisson realization of the simple real Lie algebra su(n, n) on the Poisson manifold (T*X, {·,·}_μ), i.e., a Lie algebra homomorphism from su(n, n) to (C^∞(T*X, ℝ), {·,·}_μ). Consequently one obtains the Laplace-Runge-Lenz vector for the classical U(1)-Kepler problem of level n and magnetic charge μ. Since the McIntosh-Cisneros-Zwanziger-Kepler problems (MICZ-Kepler problems) are the U(1)-Kepler problems of level 2, the work presented here is a direct generalization of the work by A. Barut and G. Bornzin (1971) on the classical dynamic symmetry for the MICZ-Kepler problems.

  18. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
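
    In the same spirit as the shortest-path formulation above, a toy dynamic program over a one-dimensional camera arrangement can jointly pick the number and positions of reference views; the per-reference rate cost and the gap-dependent distortion cost below are invented placeholders, not the metric of the paper:

      import math

      # Toy DP: choose reference views along a 1D camera arrangement so that the sum
      # of a per-reference coding cost and a synthesis-distortion cost (a made-up
      # function of the gap between references) is minimized.
      n_views = 10
      ref_cost = 3.0                                   # hypothetical rate cost of coding a reference
      def gap_cost(i, j):                              # hypothetical distortion of synthesizing views i+1..j-1
          return 0.5 * (j - i - 1) ** 2

      # dist[j] = best cost of covering views 0..j with view j chosen as a reference
      dist = [math.inf] * n_views
      prev = [None] * n_views
      dist[0] = ref_cost
      for j in range(1, n_views):
          for i in range(j):
              c = dist[i] + gap_cost(i, j) + ref_cost
              if c < dist[j]:
                  dist[j], prev[j] = c, i

      # backtrack the selected reference positions (first and last views forced as references)
      refs, j = [], n_views - 1
      while j is not None:
          refs.append(j); j = prev[j]
      print("selected reference views:", sorted(refs), "total cost:", dist[-1])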

  19. Solving standard traveling salesman problem and multiple traveling salesman problem by using branch-and-bound

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Wan Jaafar, Wan Nurhadani; Jamil, Siti Jasmida

    2013-04-01

    The standard Traveling Salesman Problem (TSP) is the classical single-salesman problem, while the Multiple Traveling Salesman Problem (MTSP) is an extension of the TSP in which more than one salesman is involved. The objective is to find the least costly routes the salesmen can take if each of a list of n cities is to be visited exactly once before returning to the home city. There are a few methods that can be used to solve the MTSP. The objective of this research is to implement an exact method, the Branch-and-Bound (B&B) algorithm. Briefly, the idea of the B&B algorithm is to start with the associated Assignment Problem (AP). A Breadth-First-Search (BFS) branching strategy is applied to both the TSP and the MTSP. Both problems are implemented on 11 city nodes and the solutions are presented.
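
    For readers who want to experiment, a compact branch-and-bound for the single-salesman case can be written as follows; the 5-city distance matrix and the simple lower bound (current cost plus each remaining city's cheapest outgoing edge) are illustrative choices, not the assignment-problem relaxation used in the paper:

      import math

      # Depth-first branch-and-bound for a small TSP instance (the paper branches
      # breadth-first; depth-first is used here only for brevity).
      D = [[0, 3, 1, 5, 8],
           [3, 0, 6, 7, 9],
           [1, 6, 0, 4, 2],
           [5, 7, 4, 0, 3],
           [8, 9, 2, 3, 0]]
      n = len(D)
      best_cost, best_tour = math.inf, None
      min_out = [min(D[i][j] for j in range(n) if j != i) for i in range(n)]

      def branch(path, cost):
          global best_cost, best_tour
          if len(path) == n:                             # close the tour back to the home city
              total = cost + D[path[-1]][path[0]]
              if total < best_cost:
                  best_cost, best_tour = total, path[:]
              return
          remaining = [c for c in range(n) if c not in path]
          if cost + sum(min_out[c] for c in remaining) >= best_cost:
              return                                     # prune: bound already exceeds incumbent
          for c in remaining:
              branch(path + [c], cost + D[path[-1]][c])

      branch([0], 0)
      print("best tour:", best_tour, "cost:", best_cost)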

  20. FOURTH SEMINAR TO THE MEMORY OF D.N. KLYSHKO: Algebraic solution of the synthesis problem for coded sequences

    NASA Astrophysics Data System (ADS)

    Leukhin, Anatolii N.

    2005-08-01

    An algebraic solution of the 'complex' problem of synthesizing phase-coded (PC) sequences with zero side lobes of the cyclic autocorrelation function (ACF) is proposed. It is shown that the solution of the synthesis problem is connected with the existence of difference sets for a given code dimension. The problem of estimating the number of possible code combinations for a given code dimension is solved. It is pointed out that the problem of synthesizing PC sequences is related to fundamental problems of discrete mathematics and, above all, to a number of combinatorial problems which, like the number factorisation problem, can be solved by algebraic methods using the theory of Galois fields and groups.
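
    As a quick numerical illustration of the zero-side-lobe property (not the synthesis method of the paper), one can check a known phase-coded sequence, e.g. a Zadoff-Chu sequence of odd prime length:

      import numpy as np

      # Verify that the cyclic (periodic) autocorrelation of a Zadoff-Chu sequence
      # has zero side lobes; this sequence is used here only as a familiar example.
      N, u = 13, 1                                   # length and root, gcd(u, N) = 1
      n = np.arange(N)
      x = np.exp(-1j * np.pi * u * n * (n + 1) / N)

      acf = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x)))   # circular ACF via FFT
      print("main lobe:", abs(acf[0]))               # = N
      print("max side lobe:", max(abs(acf[1:])))     # ~ 0 up to round-off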

  1. Breaking Be: a sterile neutrino solution to the cosmological lithium problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvati, L.; Melchiorri, A.; Pagano, L.

    2016-08-01

    The possibility that the so-called 'lithium problem', i.e., the disagreement between the theoretical abundance predicted for primordial ^7Li assuming standard nucleosynthesis and the value inferred from astrophysical measurements, can be solved through a non-thermal Big Bang Nucleosynthesis (BBN) mechanism has been investigated by several authors. In particular, it has been shown that a MeV-mass particle, e.g. a sterile neutrino, decaying after BBN not only solves the lithium problem, but also satisfies cosmological and laboratory bounds, making such a scenario worth investigating in further detail. In this paper, we constrain the parameters of the model with a combination of current data, including Planck 2015 measurements of temperature and polarization anisotropies of the Cosmic Microwave Background (CMB), FIRAS limits on CMB spectral distortions, astrophysical measurements of primordial abundances and laboratory constraints. We find that a sterile neutrino with mass M_S = 4.35^{+0.13}_{-0.17} MeV (at 95% c.l.), a decay time τ_S = 1.8^{+2.5}_{-1.3} × 10^5 s (at 95% c.l.) and an initial density n̄_S / n̄_cmb = 1.7^{+3.5}_{-0.6} × 10^{-4} (at 95% c.l.), in units of the number density of CMB photons, perfectly accounts for the difference between the predicted and observed ^7Li primordial abundance. This model also predicts an increase of the effective number of relativistic degrees of freedom at the time of CMB decoupling, ΔN_eff^cmb ≡ N_eff^cmb − 3.046 = 0.34^{+0.16}_{-0.14} at 95% c.l. The required abundance of sterile neutrinos is incompatible with the standard thermal history of the Universe, but could be realized in a low reheating temperature scenario. We also provide forecasts for future experiments, finding that the combination of measurements from the COrE+ and PIXIE missions will allow a significant reduction of the permitted region for the sterile neutrino lifetime and density.

  2. Working in Dyads and Alone: Examining Process Variables in Solving Insight Problems

    ERIC Educational Resources Information Center

    Tidikis, Viktoria; Ash, Ivan K.

    2013-01-01

    This study investigated the effects of working in dyads and their associated gender composition on performance (solution rate and time) and process variables (number of impasses, number of passed solutions, and number of problem solving suggestions and interactions) in a set of classic insight problem solving tasks. Two types of insight problems…

  3. An uncertainty analysis of air pollution externalities from road transport in Belgium in 2010.

    PubMed

    Int Panis, L; De Nocker, L; Cornelis, E; Torfs, R

    2004-12-01

    Although stricter standards for vehicles will reduce emissions to air significantly by 2010, a number of problems will remain, especially related to particulate concentrations in cities, ground-level ozone, and CO(2). To evaluate the impacts of new policy measures, tools need to be available that assess the potential benefits of these measures in terms of the vehicle fleet, fuel choice, modal choice, kilometers driven, emissions, and the impacts on public health and related external costs. The ExternE accounting framework offers the most up to date and comprehensive methodology to assess marginal external costs of energy-related pollutants. It combines emission models, air dispersion models at local and regional scales with dose-response functions and valuation rules. Vito has extended this accounting framework with data and models related to the future composition of the vehicle fleet and transportation demand to evaluate the impact of new policy proposals on air quality and aggregated (total) external costs by 2010. Special attention was given to uncertainty analysis. The uncertainty for more than 100 different parameters was combined in Monte Carlo simulations to assess the range of possible outcomes and the main drivers of these results. Although the impacts from emission standards and total fleet mileage look dominant at first, a number of other factors were found to be important as well. This includes the number of diesel vehicles, inspection and maintenance (high-emitter cars), use of air conditioning, and heavy duty transit traffic.

  4. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The existing Gaussian Mixture Model (GMM), which is widely used in vehicle detection, suffers from inefficiency in detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. In order to overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), where the areas are determined by the frequency and the scale of vehicle access. For each area, different learning rates for the weight, mean and variance are applied to accelerate the elimination of shadows. At the same time, an adaptive adjustment of the Gaussian distributions is used to decrease the total number of distributions and save memory space effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that the learning speed and the accuracy of the model using the proposed algorithm surpass those of the traditional GMM. By roughly the 50th frame, interference from vehicles has been largely eliminated, the number of model components is only 35% to 43% of the standard, and the per-frame processing speed is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, and it can promote the development of intelligent transportation, which is also meaningful for other background modeling methods.
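
    The region-dependent learning-rate idea can be sketched without any video data; the following pure-NumPy toy keeps a single Gaussian per pixel (instead of a full mixture) and uses a larger update rate in a 'busy' half of the frame, with all frames synthetic:

      import numpy as np

      # Per-pixel background model with a region-dependent learning rate, so that
      # shadows and lighting changes are absorbed faster where traffic is frequent.
      h, w = 8, 8
      mean = np.zeros((h, w)); var = np.ones((h, w))
      alpha = np.full((h, w), 0.01)
      alpha[:, w // 2:] = 0.05                       # "busy" region gets a larger learning rate

      rng = np.random.default_rng(0)
      for t in range(100):
          frame = rng.normal(10.0, 1.0, (h, w))      # synthetic background with noise
          if t > 50:
              frame[2:5, 2:5] += 8.0                 # a "vehicle" appears
          fg = (frame - mean) ** 2 > 9.0 * var       # ~3-sigma foreground test
          mean += alpha * (frame - mean)             # background update (per-region rate)
          var += alpha * ((frame - mean) ** 2 - var)
      print("foreground pixels in last frame:", int(fg.sum()))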

  5. Historical geoscientific collections - requirements on digital cataloging and problems

    NASA Astrophysics Data System (ADS)

    Ehling, A.

    2011-12-01

    The Federal Institute for Geosciences and Natural Resources maintains comprehensive geoscientific collections: the historical collections of Prussian Geological Survey in Berlin (19th and 20th century; about 2 mio specimen) and the geoscientific collections of the 20th century in Hannover (about 800.000 specimen). Nowadays, where financial support is strictly bound to efficiency and rentability on one side and the soaring (among young people - nearly exclusive) use of the web for the research, it is mandatory to provide the information about the available stock of specimen on the web. The digital cataloging has being carried out since 20 years: up to now about 40 % of the stock has been documented in 20 access-databases. The experiences of 20 years digital cataloging as well as the contact with professional users allow to formulate the requirements on a modern digital database with all accordingly problems. The main problems are different kinds of specimen: minerals, rocks, fossils, drill cores with diverging descriptions; obsolescent names of minerals, rocks and geographical sites; generations of various inventory numbers; inhomogeneous data (quantity and quality). Out of it result requirements to much, well educated manpower on the one side and an intelligent digital solution on the other side: it should have an internationally useable standard considering all the described local problems.

  6. Pareto Joint Inversion of Love and Quasi Rayleigh's waves - synthetic study

    NASA Astrophysics Data System (ADS)

    Bogacz, Adrian; Dalton, David; Danek, Tomasz; Miernik, Katarzyna; Slawinski, Michael A.

    2017-04-01

    In this contribution, a specific application of Pareto joint inversion to a geophysical problem is presented. The Pareto criterion combined with Particle Swarm Optimization was used to solve geophysical inverse problems for Love and quasi-Rayleigh waves. The basic theory of the forward problem calculation for the chosen surface waves is described. To avoid computational problems, some simplifications were made; this allowed faster and more straightforward calculation without loss of generality of the solution. According to the restrictions of the solution scheme, the considered model must have exactly two layers: an elastic isotropic surface layer and an elastic isotropic half-space of infinite thickness. The aim of the inversion is to obtain the elastic parameters and the model geometry using dispersion data. In the calculations, different cases were considered, such as different numbers of modes for the different wave types and different frequencies. The implementation uses the OpenMP standard for parallel computing, which helps to reduce computational times. The results of experimental computations are presented and commented on. This research was performed in the context of The Geomechanics Project supported by Husky Energy. Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013, and by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
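
    The Pareto criterion itself reduces to a simple dominance test over the misfit vectors of candidate models; a minimal sketch with invented (Love, quasi-Rayleigh) misfit pairs:

      # A candidate dominates another if it is no worse in every objective and
      # strictly better in at least one; the Pareto front keeps the non-dominated set.
      def dominates(f_a, f_b):
          return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

      def pareto_front(points):
          return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

      misfits = [(0.9, 0.2), (0.4, 0.5), (0.3, 0.7), (0.6, 0.6), (0.2, 0.9)]   # invented values
      print(pareto_front(misfits))   # non-dominated (Love misfit, quasi-Rayleigh misfit) pairs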

  7. Computer Program for Calculation of Complex Chemical Equilibrium Compositions and Applications II. Users Manual and Program Description. 2; Users Manual and Program Description

    NASA Technical Reports Server (NTRS)

    McBride, Bonnie J.; Gordon, Sanford

    1996-01-01

    This users manual is the second part of a two-part report describing the NASA Lewis CEA (Chemical Equilibrium with Applications) program. The program obtains chemical equilibrium compositions of complex mixtures with applications to several types of problems. The topics presented in this manual are: (1) details for preparing input data sets; (2) a description of output tables for various types of problems; (3) the overall modular organization of the program with information on how to make modifications; (4) a description of the function of each subroutine; (5) error messages and their significance; and (6) a number of examples that illustrate various types of problems handled by CEA and that cover many of the options available in both input and output. Seven appendixes give information on the thermodynamic and thermal transport data used in CEA; some information on common variables used in or generated by the equilibrium module; and output tables for 14 example problems. The CEA program was written in ANSI standard FORTRAN 77. CEA should work on any system with sufficient storage. There are about 6300 lines in the source code, which uses about 225 kilobytes of memory. The compiled program takes about 975 kilobytes.

  8. Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuine stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service, that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters is underway. PMID:22506001

  9. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper, the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (the generalized traveling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP, we propose to use the mathematical model of Prof. Chentsov based on the concept of a megalopolis and dynamic programming.

  10. Boosting standard order sets utilization through clinical decision support.

    PubMed

    Li, Haomin; Zhang, Yinsheng; Cheng, Haixia; Lu, Xudong; Duan, Huilong

    2013-01-01

    Well-designed standard order sets have the potential to integrate and coordinate care by communicating best practices across multiple disciplines, levels of care, and services. However, several challenges have limited the benefits expected from standard order sets. To boost standard order set utilization, a problem-oriented knowledge delivery solution was proposed in this study to facilitate access to standard order sets and evaluation of their treatment effect. In this solution, standard order sets were created along with diagnostic rule sets that can trigger a CDS-based reminder to help clinicians quickly discover hidden clinical problems and the corresponding standard order sets during ordering. Those rule sets also provide indicators for targeted evaluation of standard order sets during treatment. A prototype system was developed based on this solution and will be presented at Medinfo 2013.

  11. The Relationship between Students' Performance on Conventional Standardized Mathematics Assessments and Complex Mathematical Modeling Problems

    ERIC Educational Resources Information Center

    Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.

    2016-01-01

    Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…

  12. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
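
    The 'linear problem plus total-variation constraint' formulation can be prototyped with an off-the-shelf convex solver; in the sketch below the pairwise timing-difference matrix, noise level and regularization weights are synthetic assumptions, so it only illustrates the structure of the estimation, not the TV-merge implementation:

      import numpy as np
      import cvxpy as cp

      # Estimate per-crystal timing offsets x from pairwise timing differences
      # (rows of A pick out crystal pairs), with a TV penalty for smoothness and a
      # tiny ridge term to fix the arbitrary global offset.
      n_crystals, n_pairs = 30, 300
      rng = np.random.default_rng(0)
      true = np.cumsum(rng.normal(0, 0.05, n_crystals))     # smooth "true" offsets (ns)
      A = np.zeros((n_pairs, n_crystals))
      for k in range(n_pairs):
          i, j = rng.choice(n_crystals, 2, replace=False)
          A[k, i], A[k, j] = 1.0, -1.0
      b = A @ true + rng.normal(0, 0.1, n_pairs)            # noisy measured time differences

      x = cp.Variable(n_crystals)
      cost = cp.sum_squares(A @ x - b) + 1.0 * cp.tv(x) + 1e-3 * cp.sum_squares(x)
      cp.Problem(cp.Minimize(cost)).solve()
      print("rms error vs truth:", float(np.sqrt(np.mean((x.value - (true - true.mean())) ** 2))))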

  13. Fractional Step Like Schemes for Free Surface Problems with Thermal Coupling Using the Lagrangian PFEM

    NASA Astrophysics Data System (ADS)

    Aubry, R.; Oñate, E.; Idelsohn, S. R.

    2006-09-01

    The method presented in Aubry et al. (Comput Struc 83:1459-1475, 2005) for the solution of an incompressible viscous fluid flow with heat transfer using a fully Lagrangian description of motion is extended to three dimensions (3D) with particular emphasis on mass conservation. A modified fractional step (FS) based on the pressure Schur complement (Turek 1999), and related to the class of algebraic splittings of Quarteroni et al. (Comput Methods Appl Mech Eng 188:505-526, 2000), is used and a new advantage of the splittings of the equations compared with the classical FS is highlighted for free surface problems. The temperature is semi-coupled with the displacement, which is the main variable in a Lagrangian description. Comparisons for various mesh Reynolds numbers are performed with the classical FS, an algebraic splitting and a monolithic solution, in order to illustrate the behaviour of the Uzawa operator and the mass conservation. As the classical fractional step is equivalent to one iteration of the Uzawa algorithm performed with a standard Laplacian as a preconditioner, it will behave well only in a mesh Reynolds number range where the preconditioner is efficient. Numerical results are provided to assess the superiority of the modified algebraic splitting to the classical FS.

  14. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  15. The psychological problems of north korean adolescent refugees living in South Korea.

    PubMed

    Lee, Young Mun; Shin, Ok Ja; Lim, Myung Ho

    2012-09-01

    As the number of North Korean adolescent refugees has drastically increased in South Korea, there is growing interest in them. Our study was conducted to evaluate the mental health of the North Korean adolescent refugees residing in South Korea. The subjects of this study were 102 North Korean adolescent refugees at Hangyeore Middle and High School, the public educational institution for North Korean adolescent refugees residing in South Korea, and 766 general adolescents in the same region. The Korean version of the Child Behavior Checklist (K-CBCL) standardized in South Korea was employed as the mental health evaluation tool. The adolescent refugee group showed significantly different scores from those of the normal control group on the K-CBCL subscales for sociality (t=29.67, p=0.000), academic performance (t=17.79, p=0.000), total social function (t=35.52, p=0.000), social withdrawal (t=18.01, p=0.000), somatic symptoms (t=28.85, p=0.000), depression/anxiety (t=13.08, p=0.000), thought problems (t=6.24, p=0.013), attention problems (t=4.14, p=0.042), internalized problems (t=26.54, p=0.000) and total problems (t=5.23, p=0.022). The mental health problems of the North Korean adolescent refugees were severe, particularly internalized problems, when compared with those of the general adolescents in South Korea. This result indicates the need for attention not only to the behavior of the North Korean adolescent refugees but also to their emotional problems.

  16. Incidence of behavior problems in toddlers and preschool children from families living in poverty.

    PubMed

    Holtz, Casey A; Fox, Robert A; Meurer, John R

    2015-01-01

    Few studies have examined the incidence of behavior problems in toddlers and preschool children from families living in poverty. The available research suggests behavior problems occur at higher rates in children living in poverty and may have long-term negative outcomes if not identified and properly treated. This study included an ethnically representative sample of 357 children, five years of age and younger, from a diverse, low-income, urban area. All families' incomes met the federal threshold for living in poverty. Behavior problems were assessed by parent report through a questionnaire specifically designed for low-income families. Boys and younger children were reported as demonstrating a higher rate of externalizing behaviors than girls and older children. The overall rate of children scoring at least one standard deviation above the sample's mean for challenging behaviors was 17.4% and was not related to the child's gender, age or ethnicity. This study also sampled children's positive behaviors, which is unique in studies of behavior problems. Gender and age were not related to the frequency of reported positive behaviors. Ethnicity did influence scores on the positive scale: African American children appeared to present their parents with more difficulty on items reflecting cooperative behaviors than Caucasian or Latino children. The implications of the study are discussed based on the recognized need for universal screening of behavior problems in young children and the small number of professional training programs targeting the identification and treatment of early childhood behavior problems, despite the availability of evidence-based treatment programs tailored to young children in low-income families.

  17. An Investigation of the Sequence of Catalan Numbers with Activities for Prospective Teachers.

    ERIC Educational Resources Information Center

    Koker, John; Kuenzi, Norbert J.; Oktac, Asuman; Carmony, Lowell; Leibowitz, Rochelle

    1998-01-01

    Investigates several problems with the sequences of numbers known as the Catalan numbers and the Bell numbers. Finds that the problems are appropriate for both pre- and in-service teachers, as well as students studying discrete mathematics. (Author/CCM)
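
    For readers who want to generate the two sequences mentioned above, a short sketch (standard recurrences, nothing specific to the article):

      # Catalan numbers via C(n+1) = C(n) * 2(2n+1) / (n+2); Bell numbers via the Bell triangle.
      def catalan(n):
          c = [1]
          for k in range(n - 1):
              c.append(c[-1] * 2 * (2 * k + 1) // (k + 2))
          return c

      def bell(n):
          out, row = [1], [1]
          for _ in range(n - 1):
              new = [row[-1]]
              for x in row:
                  new.append(new[-1] + x)   # each entry adds the element above it
              row = new
              out.append(row[0])
          return out

      print(catalan(6))  # [1, 1, 2, 5, 14, 42]
      print(bell(6))     # [1, 1, 2, 5, 15, 52]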

  18. The problem of epistemic jurisdiction in global governance: The case of sustainability standards for biofuels.

    PubMed

    Winickoff, David E; Mondou, Matthieu

    2017-02-01

    While there is ample scholarly work on regulatory science within the state, or single-sited global institutions, there is less on its operation within complex modes of global governance that are decentered, overlapping, multi-sectorial and multi-leveled. Using a co-productionist framework, this study identifies 'epistemic jurisdiction' - the power to produce or warrant technical knowledge for a given political community, topical arena or geographical territory - as a central problem for regulatory science in complex governance. We explore these dynamics in the arena of global sustainability standards for biofuels. We select three institutional fora as sites of inquiry: the European Union's Renewable Energy Directive, the Roundtable on Sustainable Biomaterials, and the International Organization for Standardization. These cases allow us to analyze how the co-production of sustainability science responds to problems of epistemic jurisdiction in the global regulatory order. First, different problems of epistemic jurisdiction beset different standard-setting bodies, and these problems shape both the content of regulatory science and the procedures designed to make it authoritative. Second, in order to produce global regulatory science, technical bodies must manage an array of conflicting imperatives - including scientific virtue, due process and the need to recruit adoptees to perpetuate the standard. At different levels of governance, standard drafters struggle to balance loyalties to country, to company or constituency and to the larger project of internationalization. Confronted with these sometimes conflicting pressures, actors across the standards system quite self-consciously maneuver to build or retain authority for their forum through a combination of scientific adjustment and political negotiation. Third, the evidentiary demands of regulatory science in global administrative spaces are deeply affected by 1) a market for standards, in which firms and states can choose the cheapest sustainability certification, and 2) the international trade regime, in which the long shadow of WTO law exerts a powerful disciplining function.

  19. An Assessment of the Need for Standard Variable Names for Airborne Field Campaigns

    NASA Astrophysics Data System (ADS)

    Beach, A. L., III; Chen, G.; Northup, E. A.; Kusterer, J.; Quam, B. M.

    2017-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. An airborne field campaign can involve multiple aircraft and a variety of instruments. It is quite common to have different instruments/techniques measure the same parameter on one or more aircraft platforms. This creates a need to allow instrument Principal Investigators (PIs) to name their variables in a way that would distinguish them across various data sets. A lack of standardization of variable names presents a challenge for data search tools in enabling discovery of similar data across airborne studies, aircraft platforms, and instruments. This was also identified by data users as one of the top issues in data use. One effective approach for mitigating this problem is to enforce variable name standardization, which can effectively map the unique PI variable names to fixed standard names. In order to ensure consistency amongst the standard names, it will be necessary to choose them from a controlled list. However, no such list currently exists despite a number of previous efforts to establish a sufficient list of atmospheric variable names. The Atmospheric Composition Variable Standard Name Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to solicit research community feedback to create a list of standard names that are acceptable to data providers and data users. This presentation will discuss the challenges and recommendations for standard variable names in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest, improve interoperability, and improve discoverability for a broader user community.

  20. Substance Identification Information from EPA's Substance Registry

    EPA Pesticide Factsheets

    The Substance Registry Services (SRS) is the authoritative resource for basic information about substances of interest to the U.S. EPA and its state and tribal partners. Substances, particularly chemicals, can have many valid synonyms. For example, toluene, methyl benzene, and phenyl methane are commonly used names for the same chemical. EPA programs collect environmental data for this chemical using each of these names, plus others. This diversity leads to problems when a user is looking for programmatic data for toluene but is unaware that the data is stored under the synonym methyl benzene. For each substance, the SRS identifies the statutes, EPA programs, as well as organizations external to EPA, that track or regulate that substance, and the synonym used by each statute, EPA program or external organization. Besides standardized information for each chemical, such as the Chemical Abstracts Service name, the Chemical Abstracts Number and the EPA Registry Name (the EPA standard name), the SRS also includes additional information, such as molecular weight and molecular formula. Additionally, an SRS Internal Tracking Number uniquely identifies each substance, enabling cross-walking between synonyms. EPA is providing a large .ZIP file containing the SRS data in CSV format, and a separate small metadata file in XML containing the field names and definitions.

  1. Division of methods for counting helminths' eggs and the problem of efficiency of these methods.

    PubMed

    Jaromin-Gleń, Katarzyna; Kłapeć, Teresa; Łagód, Grzegorz; Karamon, Jacek; Malicki, Jacek; Skowrońska, Agata; Bieganowski, Andrzej

    2017-03-21

    From the sanitary and epidemiological points of view, information concerning the developmental forms of intestinal parasites, especially helminth eggs present in our environment in water, soil, sandpits, sewage sludge and crops watered with wastewater, is very important. The methods described in the relevant literature may be classified in various ways, primarily according to the methodology of preparing samples from environmental matrices for analysis, and according to the counting methods and the chambers/instruments used for this purpose. In addition, the methods may be classified according to how and when the counted individuals are identified, or whether staining is required. Standard methods for the identification of helminth eggs in environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the applied method may be measured in two ways, either with an internal standard or with the 'Split/Spike' method. When the efficiency of the method and the number of eggs are measured simultaneously in an examined object, the 'actual' number of eggs may be calculated by multiplying the number of helminth eggs found by the inverse of the efficiency.
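
    The correction described in the last sentence is a one-line calculation; with invented numbers:

      # 'Actual' count = observed count x (1 / recovery efficiency); values are illustrative.
      observed_eggs = 24
      efficiency = 0.60            # 60% recovery measured by internal standard or Split/Spike
      estimated_actual = observed_eggs / efficiency
      print(f"estimated actual count: {estimated_actual:.0f} eggs")   # 40 eggs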

  2. Fuzzy α-minimum spanning tree problem: definition and solutions

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan

    2016-04-01

    In this paper, the minimum spanning tree problem is investigated on the graph with fuzzy edge weights. The notion of fuzzy α-minimum spanning tree is presented based on the credibility measure, and then the solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we respectively assume that all the edge weights are triangular fuzzy numbers and trapezoidal fuzzy numbers and prove that the fuzzy α-minimum spanning tree problem can be transformed to a classical problem on a crisp graph in these two cases, which can be solved by classical algorithms such as the Kruskal algorithm and the Prim algorithm in polynomial time. Subsequently, for the case that the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided for illustrating the effectiveness of the proposed solutions.
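
    The reduction to a crisp graph can be sketched with classical Kruskal once each fuzzy weight is replaced by a crisp surrogate; the alpha-weighted surrogate below is only a placeholder for the credibility-based ranking of the paper, and the edge list is invented:

      # Triangular fuzzy weights (a, b, c) are mapped to a crisp surrogate and fed to Kruskal.
      def crisp(tri, alpha=0.9):
          a, b, c = tri
          return (1 - alpha) * (a + c) / 2 + alpha * b   # placeholder defuzzification

      def kruskal(n, edges, alpha=0.9):
          parent = list(range(n))
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x
          tree, total = [], 0.0
          for u, v, w in sorted(edges, key=lambda e: crisp(e[2], alpha)):
              ru, rv = find(u), find(v)
              if ru != rv:
                  parent[ru] = rv
                  tree.append((u, v)); total += crisp(w, alpha)
          return tree, total

      edges = [(0, 1, (2, 3, 5)), (1, 2, (1, 2, 3)), (0, 2, (4, 6, 7)), (2, 3, (2, 4, 8))]
      print(kruskal(4, edges))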

  3. Performance evaluation of complete data transfer of physical layer according to IEEE 802.15.4 standard

    NASA Astrophysics Data System (ADS)

    Raju, Kota Solomon; Merugu, Naresh Babu; Neetu, Babu, E. Ram

    2016-03-01

    ZigBee is a well-accepted industrial standard for wireless sensor networks based on the IEEE 802.15.4 standard. Wireless sensor networks are a major communication concern these days; they consist of small battery-powered sensors with wireless communication. The communication between any two wireless nodes of a wireless sensor network is carried out through a protocol stack. This protocol stack has been designed by different vendors in various ways, and each vendor has its own protocol stack and algorithms, especially at the MAC layer. However, many applications require modifications of the algorithms at various layers, especially energy-efficient protocols at the MAC layer; such protocols are simulated in wireless sensor network simulators but are not tested in real-time systems, because vendors do not allow programmability of each layer in their protocol stacks. This problem can be described as a vendor-interoperability issue. The solution is to develop a programmable protocol stack in which applications can be designed as required. As a first part of this task, we implemented the physical layer and data transmission through it. This paper describes the transmission of the complete set of frame bytes at the physical layer according to the IEEE 802.15.4 standard.
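
    As a hedged illustration of the framing involved (field sizes follow the commonly cited O-QPSK PHY layout; consult the standard itself for normative definitions), a PHY protocol data unit can be assembled as follows:

      # Assemble a PPDU: 4-octet zero preamble, 0xA7 start-of-frame delimiter,
      # a length octet (PHR), then the PSDU (at most 127 octets).
      def build_ppdu(psdu: bytes) -> bytes:
          if len(psdu) > 127:
              raise ValueError("PSDU limited to 127 octets")
          preamble = bytes(4)           # synchronization header: 4 zero octets
          sfd = bytes([0xA7])           # start-of-frame delimiter
          phr = bytes([len(psdu)])      # PHY header: frame length
          return preamble + sfd + phr + psdu

      frame = build_ppdu(b"hello 802.15.4")
      print(len(frame), "octets:", frame.hex())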

  4. [Comparison and review on specifications of fermented Cordyceps sinensis products].

    PubMed

    Yang, Ping; Zhao, Xiao-Xia; Zhang, Yong-Wen

    2018-02-01

    Five kinds of fermented Cordyceps crude drugs and their preparations have been approved as medicines on the market. Since the initial strains of the crude drugs were all isolated from natural Cordyceps sinensis, they have similar names, chemical components and even clinical applications. However, because of the different strain species and fermentation processes, there are significant differences in quality. As a result, they should be clearly distinguished in clinical use. Most of the products were researched and developed during the 1980s and 1990s, so the quality standards differ among products, and the quality control of some products is not adequate. At present, some of the products are approved as Chinese medicines and others as chemical drugs, leading to confusion in product names, management and clinical application. In this paper, the approval numbers, quality standards, clinical applications and current problems of these products are summarized and compared; some suggestions are put forward, such as standardizing the product names, unifying the management of approval number categories, and adding specific quality control attributes, in order to provide a reference for standard implementation, quality control and drug regulation of fermented Cordyceps crude drugs and their preparations. Copyright© by the Chinese Pharmaceutical Association.

  5. The Impact of the Louisiana State University Physics Entrance Requirement on Secondary Physics in Louisiana

    NASA Astrophysics Data System (ADS)

    McCoy, Michael Hanson

    State Department of Education data was examined to determine the number of students enrolled in physics, the number of physics classes, the number of physics teachers, and physics teacher certification. Census data from public and nonpublic school teachers, principals, and superintendents was analyzed. Purposive sampling of seven public and four nonpublic schools was used for site visitation including observations of physics classes, interviews of teachers and principals, and document acquisition. The literature base was drawn from a call for an increase in academic requirements in the sciences by the National Commission on Excellence in Education, the Southern Regional Education Board, the American Association for Advancement in the Sciences, and numerous state boards of education. LSU is the only major state university to require physics as an academic admission standard. Curriculum changes which influenced general curriculum change were: leveling of physics classes; stressing concepts, algebra, and doing problems in level-one; stressing trigonometry and problem solving in level-two; and increased awareness of expectations for university admission. Certified physics teachers were positive toward the requirement. The majority adopted a "wait-and-see" attitude to see if the university would institute the physics standard. Some physics teachers, nonphysics majors, were opposed to the requirement. Those who were positive remained positive. Those who had taken the wait-and-see attitude adopted the leveled physics course concept in 1989 and became positive toward the requirement. College-bound physics was taught prior to the requirement. The State Department of Education leveled physics in 1989. Level-one physics was algebra- and concept-based, level-two physics was trigonometry-based, and a level-three, advanced placement physics course was added. Enrollment doubled in public schools and increased 40% in nonpublic schools. African-American enrollment almost doubled in public and nonpublic schools. Oriental enrollment increased 40% in public schools. Hispanic enrollment increased 120% in public schools. Female enrollment in public schools increased 27.6% and 10% in nonpublic schools. The number of physics faculty members increased 33% in public schools and 25% in nonpublic schools. Newly certified physics teachers increased 80% although demand exceeded teacher supply. The proportion of certified to noncertified public school physics teachers declined 12% and spiraled downward 25% for nonpublic school physics teachers.

  6. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, the use of a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the full channel set, to save computational time and improve the classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with much fewer channels compared to standard common spatial pattern with the whole channel set.
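
    A heavily simplified stand-in for this selection loop is sketched below: a small evolutionary search over binary channel masks with a toy objective mixing a synthetic error proxy and the relative channel count. It is not the authors' BSA implementation and uses no EEG data:

      import numpy as np

      # Binary channel-mask search; the "historical population" mixing loosely mimics
      # the backtracking idea, and the fitness is entirely synthetic.
      rng = np.random.default_rng(0)
      n_channels, pop_size, n_gen = 22, 30, 40
      useful = rng.random(n_channels) < 0.3          # hidden "informative" channels (synthetic)

      def fitness(mask):
          if mask.sum() == 0:
              return 1.0
          err = 0.5 - 0.4 * (mask & useful).sum() / max(1, useful.sum())   # toy error proxy
          return err + 0.1 * mask.sum() / n_channels                       # penalize many channels

      pop = rng.integers(0, 2, (pop_size, n_channels)).astype(bool)
      for _ in range(n_gen):
          hist = pop.copy()                                   # "historical" population
          flip = rng.random(pop.shape) < 0.1
          trial = np.where(flip, hist[rng.permutation(pop_size)], pop)     # mix with shuffled history
          better = np.array([fitness(t) < fitness(p) for t, p in zip(trial, pop)])
          pop[better] = trial[better]

      best = min(pop, key=fitness)
      print("selected channels:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))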

  7. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    NASA Astrophysics Data System (ADS)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for the accuracy and productivity of measuring the workpieces. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that as the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. In general, it has been established that it is possible to substantially reduce the number of control points while maintaining the required measurement accuracy.
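
    The Monte-Carlo idea can be reproduced on synthetic data in a few lines; the deviation field, point counts and trial count below are arbitrary choices used only to show how the mean and spread of a flatness estimate change with the number of control points:

      import numpy as np

      # Estimate how a flatness value computed from a limited number of control points
      # scatters around the dense-measurement value as the point count decreases.
      rng = np.random.default_rng(0)
      dense = rng.normal(0, 5e-3, 10_000)                 # dense deviation field, mm (hypothetical)
      true_flatness = dense.max() - dense.min()

      for n_points in (200, 50, 10):
          flatness = np.array([np.ptp(rng.choice(dense, n_points, replace=False))
                               for _ in range(2000)])     # 2000 Monte-Carlo trials
          print(f"{n_points:4d} points: mean {flatness.mean():.4f} mm, "
                f"std {flatness.std():.4f} mm (dense value {true_flatness:.4f} mm)")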

  8. Model-Based Evaluation of Strategies to Control Brucellosis in China.

    PubMed

    Li, Ming-Tao; Sun, Gui-Quan; Zhang, Wen-Yi; Jin, Zhen

    2017-03-12

    Brucellosis, the most common zoonotic disease worldwide, represents a great threat to animal husbandry with the potential to cause enormous economic losses. Brucellosis has become a major public health problem in China, and the number of human brucellosis cases has increased dramatically in recent years. In order to evaluate different intervention strategies to curb brucellosis transmission in China, a novel mathematical model with a general indirect transmission incidence rate was presented. By comparing the results of three models using national human disease data and 11 provinces with high case numbers, the best fitted model with standard incidence was used to investigate the potential for future outbreaks. Estimated basic reproduction numbers were highly heterogeneous, varying widely among provinces. The local basic reproduction numbers of provinces with an obvious increase in incidence were much larger than the average for the country as a whole, suggesting that environment-to-individual transmission was more common than individual-to-individual transmission. We concluded that brucellosis can be controlled through increasing animal vaccination rates, environment disinfection frequency, or elimination rates of infected animals. Our finding suggests that a combination of animal vaccination, environment disinfection, and elimination of infected animals will be necessary to ensure cost-effective control for brucellosis.
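
    A minimal sketch of such a compartmental model with both direct (standard-incidence) and indirect environmental transmission is given below; all parameter values are invented for illustration and are not the fitted values of the paper:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Susceptible/infected animals plus an environmental reservoir W; vaccination,
      # disinfection and culling rates can be varied to mimic the interventions.
      beta_d, beta_w = 0.6, 0.3        # direct / indirect transmission rates (made up)
      vacc, disinf, cull = 0.3, 0.4, 0.5
      A, mu, shed = 1000.0, 0.25, 2.0  # recruitment, removal rate, shedding rate (made up)

      def rhs(t, y):
          S, I, W = y
          N = S + I
          infect = beta_d * S * I / N + beta_w * S * W / (1.0 + W)   # standard + saturating indirect incidence
          dS = A - mu * S - (1 - vacc) * infect
          dI = (1 - vacc) * infect - (mu + cull) * I
          dW = shed * I - disinf * W
          return [dS, dI, dW]

      sol = solve_ivp(rhs, (0, 40), [4000.0, 10.0, 0.0])
      print("infected animals after 40 time units:", round(sol.y[1, -1], 1))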

  9. Model-Based Evaluation of Strategies to Control Brucellosis in China

    PubMed Central

    Li, Ming-Tao; Sun, Gui-Quan; Zhang, Wen-Yi; Jin, Zhen

    2017-01-01

    Brucellosis, the most common zoonotic disease worldwide, represents a great threat to animal husbandry with the potential to cause enormous economic losses. Brucellosis has become a major public health problem in China, and the number of human brucellosis cases has increased dramatically in recent years. In order to evaluate different intervention strategies to curb brucellosis transmission in China, a novel mathematical model with a general indirect transmission incidence rate was presented. By comparing the results of three models using national human disease data and 11 provinces with high case numbers, the best fitted model with standard incidence was used to investigate the potential for future outbreaks. Estimated basic reproduction numbers were highly heterogeneous, varying widely among provinces. The local basic reproduction numbers of provinces with an obvious increase in incidence were much larger than the average for the country as a whole, suggesting that environment-to-individual transmission was more common than individual-to-individual transmission. We concluded that brucellosis can be controlled through increasing animal vaccination rates, environment disinfection frequency, or elimination rates of infected animals. Our finding suggests that a combination of animal vaccination, environment disinfection, and elimination of infected animals will be necessary to ensure cost-effective control for brucellosis. PMID:28287496

  10. Procedural versus Content-Related Hints for Word Problem Solving: An Exploratory Study

    ERIC Educational Resources Information Center

    Kock, W. D.; Harskamp, E. G.

    2016-01-01

    For primary school students, mathematical word problems are often more difficult to solve than straightforward number problems. Word problems require reading and analysis skills, and in order to explain their situational contexts, the proper mathematical knowledge and number operations have to be selected. To improve students' ability in solving…

  11. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
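
    The two approaches are easy to compare numerically; the simulated linear instrument below (slope, intercept and noise level are arbitrary) shows the classical invert-the-forward-fit estimate next to the reverse-regression estimate:

      import numpy as np

      # Simulated instrument: reading = 2 + 0.5 * standard + noise.
      rng = np.random.default_rng(0)
      standards = np.linspace(0, 10, 11)                       # known reference values
      readings = 2.0 + 0.5 * standards + rng.normal(0, 0.1, standards.size)

      # classical: regress readings on standards, then invert to predict x from a new reading
      b1, b0 = np.polyfit(standards, readings, 1)
      classical = lambda y: (y - b0) / b1

      # reverse: regress standards on readings and use the fit directly
      c1, c0 = np.polyfit(readings, standards, 1)
      reverse = lambda y: c0 + c1 * y

      new_reading = 5.3
      print("classical estimate:", round(classical(new_reading), 3))
      print("reverse estimate:  ", round(reverse(new_reading), 3))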

  12. DICOMweb™: Background and Application of the Web Standard for Medical Imaging.

    PubMed

    Genereaux, Brad W; Dennison, Donald K; Ho, Kinson; Horn, Robert; Silver, Elliot Lewis; O'Donnell, Kevin; Kahn, Charles E

    2018-05-10

    This paper describes why and how DICOM, the standard that has been the basis for medical imaging interoperability around the world for several decades, has been extended into a full web technology-based standard, DICOMweb. At the turn of the century, healthcare embraced information technology, which created new problems and new opportunities for the medical imaging industry; at the same time, web technologies matured and began serving other domains well. This paper describes DICOMweb, how it extended the DICOM standard, and how DICOMweb can be applied to problems facing healthcare applications to address workflow and the changing healthcare climate.
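
    As a hedged example of what a DICOMweb interaction looks like in practice, the QIDO-RS study search below uses a placeholder server URL and omits authentication; only the request pattern and the application/dicom+json response handling are illustrated:

      import requests

      BASE = "https://example.org/dicom-web"            # hypothetical DICOMweb endpoint
      resp = requests.get(
          f"{BASE}/studies",
          params={"PatientID": "12345", "limit": "10"},
          headers={"Accept": "application/dicom+json"},
          timeout=10,
      )
      resp.raise_for_status()
      for study in resp.json():
          # 0020000D is the StudyInstanceUID tag in the DICOM JSON model
          print(study.get("0020000D", {}).get("Value", ["<missing>"])[0])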

  13. Working with low back pain: problem-solving orientation and function.

    PubMed

    Shaw, W S; Feuerstein, M; Haufler, A J; Berkowitz, S M; Lopez, M S

    2001-08-01

    A number of ergonomic, workplace and individual psychosocial factors and health behaviors have been associated with the onset, exacerbation and/or maintenance of low back pain (LBP). The functional impact of these factors may be influenced by how a worker approaches problems in general. The present study was conducted to determine whether problem-solving orientation was associated with physical and mental health outcomes in fully employed workers (soldiers) reporting a history of LBP in the past year. The sample consisted of 475 soldiers (446 male, 29 female; mean age 24.5 years) who worked in jobs identified as high risk for LBP-related disability and reported LBP symptoms in the past 12 months. The Social Problem-Solving Inventory and the Standard Form-12 (SF-12) were completed by all subjects. Hierarchical multiple regression analyses were used to predict the SF-12 physical health summary scale from interactions of LBP symptoms with each of five problem-solving subscales. Low scores on positive problem-solving orientation (F(1,457)=4.49), and high scores on impulsivity/carelessness (F(1,457)=9.11) were associated with a steeper gradient in functional loss related to LBP. Among those with a longer history of low-grade LBP, an avoidant approach to problem-solving was also associated with a steeper gradient of functional loss (three-way interaction; F(1,458)=4.58). These results suggest that the prolonged impact of LBP on daily function may be reduced by assisting affected workers to conceptualize LBP as a problem that can be overcome and using strategies that promote taking an active role in reducing risks for LBP. Secondary prevention efforts may be improved by addressing these factors.

  14. Filter design for the detection of compact sources based on the Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.

    2005-05-01

    This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). Given the a priori probability density function of the source amplitudes, these two parameters can be chosen so that the filter optimizes the performance of the detector, in the sense that it yields the maximum number of true detections for a fixed number density of spurious sources. The new filter includes the standard MF and the SAF as particular cases, and by design the BSAF outperforms them both. The combination of a detection scheme that includes curvature information and a flexible filter with two free parameters (one of them a scaling parameter) significantly increases the number of detections in some interesting cases. In particular, for weak sources embedded in white noise, the improvement over the standard MF is of the order of 40 per cent. Finally, an estimator of the source amplitude (its most probable value) is introduced and shown to be unbiased and of maximum efficiency. Numerical simulations of a practical example confirm that the results agree with the analytical predictions.
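
    As a rough illustration of the filter-then-threshold idea (a plain matched filter, not the BSAF), the sketch below filters a weak Gaussian profile embedded in white noise and flags local maxima above a threshold; the profile width, amplitude, and threshold are arbitrary choices.

        import numpy as np
        from scipy.signal import fftconvolve, find_peaks

        rng = np.random.default_rng(1)
        n, width, amp = 2048, 5.0, 2.0
        x = np.arange(n)

        # Simulated data: one compact Gaussian source plus unit-variance white noise.
        data = amp * np.exp(-0.5 * ((x - 900) / width) ** 2) + rng.normal(0.0, 1.0, n)

        # Matched filter for white noise: correlate with the unit-norm source profile.
        template = np.exp(-0.5 * ((x - n // 2) / width) ** 2)
        template /= np.sqrt(np.sum(template ** 2))
        filtered = fftconvolve(data, template[::-1], mode="same")

        # Detection: local maxima of the filtered field above a fixed threshold.
        peaks, _ = find_peaks(filtered, height=4.0)
        print(peaks)                          # expect a detection near sample 900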

  15. Terminology and global standardization of endoscopic information: Minimal Standard Terminology (MST).

    PubMed

    Fujino, Masayuki A; Bito, Shigeru; Takei, Kazuko; Mizuno, Shigeto; Yokoi, Hideto

    2006-01-01

    Since 1994, building on the pioneering efforts of the European Society for Gastrointestinal Endoscopy, the Organisation Mondiale d'Endoscopie Digestive (OMED) has compiled the minimal set of terms required for computer generation of digestive endoscopy reports, known as the Minimal Standard Terminology (MST). Although it has some shortcomings and was developed only for digestive endoscopy, MST has been the only globally standardized terminology available in medicine. A unified, structured terminology usable in multiple languages allows data stored in different languages to be treated as a common database. For this purpose, a standing organization that manages, maintains and, when required, expands the terminology at a global level is essential. Unfortunately, the body responsible for version control of MST (the OMED terminology, standardization and data processing committee) has currently suspended its activity. Medical practice worldwide demands ever greater specialization, with a corresponding need for information exchange among specialties. Because cooperation between endoscopy and pathology has become the most pressing issue for the Endoscopy Working Group of Integrating the Healthcare Enterprise-Japan (IHE-J), cooperation among different specialties is essential. DICOM and HL7 provide standard protocols for the storage and exchange (communication) of data, but no organization yet manages the terminology itself across different specialties. We therefore propose to establish, within IEEE for example, a system that promotes standardization of terminology able to describe a patient across domains and to coordinate the different societies and groups concerned with terminology.

  16. Recommendation of short tandem repeat profiling for authenticating human cell lines, stem cells, and tissues

    PubMed Central

    Barallon, Rita; Bauer, Steven R.; Butler, John; Capes-Davis, Amanda; Dirks, Wilhelm G.; Furtado, Manohar; Kline, Margaret C.; Kohara, Arihiro; Los, Georgyi V.; MacLeod, Roderick A. F.; Masters, John R. W.; Nardone, Mark; Nardone, Roland M.; Nims, Raymond W.; Price, Paul J.; Reid, Yvonne A.; Shewale, Jaiprakash; Sykes, Gregory; Steuer, Anton F.; Storts, Douglas R.; Thomson, Jim; Taraporewala, Zenobia; Alston-Roberts, Christine; Kerrigan, Liz

    2010-01-01

    Cell misidentification and cross-contamination have plagued biomedical research for as long as cells have been employed as research tools. Examples of misidentified cell lines continue to surface to this day. Efforts to eradicate the problem by raising awareness of the issue and by asking scientists voluntarily to take appropriate actions have not been successful. Unambiguous cell authentication is an essential step in the scientific process and should be an inherent consideration during peer review of papers submitted for publication or during review of grants submitted for funding. In order to facilitate proper identity testing, accurate, reliable, inexpensive, and standardized methods for authentication of cells and cell lines must be made available. To this end, an international team of scientists is, at this time, preparing a consensus standard on the authentication of human cells using short tandem repeat (STR) profiling. This standard, which will be submitted for review and approval as an American National Standard by the American National Standards Institute, will provide investigators guidance on the use of STR profiling for authenticating human cell lines. Such guidance will include methodological detail on the preparation of the DNA sample, the appropriate numbers and types of loci to be evaluated, and the interpretation and quality control of the results. Associated with the standard itself will be the establishment and maintenance of a public STR profile database under the auspices of the National Center for Biotechnology Information. The consensus standard is anticipated to be adopted by granting agencies and scientific journals as appropriate methodology for authenticating human cell lines, stem cells, and tissues. PMID:20614197

  17. Recommendation of short tandem repeat profiling for authenticating human cell lines, stem cells, and tissues.

    PubMed

    Barallon, Rita; Bauer, Steven R; Butler, John; Capes-Davis, Amanda; Dirks, Wilhelm G; Elmore, Eugene; Furtado, Manohar; Kline, Margaret C; Kohara, Arihiro; Los, Georgyi V; MacLeod, Roderick A F; Masters, John R W; Nardone, Mark; Nardone, Roland M; Nims, Raymond W; Price, Paul J; Reid, Yvonne A; Shewale, Jaiprakash; Sykes, Gregory; Steuer, Anton F; Storts, Douglas R; Thomson, Jim; Taraporewala, Zenobia; Alston-Roberts, Christine; Kerrigan, Liz

    2010-10-01

    Cell misidentification and cross-contamination have plagued biomedical research for as long as cells have been employed as research tools. Examples of misidentified cell lines continue to surface to this day. Efforts to eradicate the problem by raising awareness of the issue and by asking scientists voluntarily to take appropriate actions have not been successful. Unambiguous cell authentication is an essential step in the scientific process and should be an inherent consideration during peer review of papers submitted for publication or during review of grants submitted for funding. In order to facilitate proper identity testing, accurate, reliable, inexpensive, and standardized methods for authentication of cells and cell lines must be made available. To this end, an international team of scientists is, at this time, preparing a consensus standard on the authentication of human cells using short tandem repeat (STR) profiling. This standard, which will be submitted for review and approval as an American National Standard by the American National Standards Institute, will provide investigators guidance on the use of STR profiling for authenticating human cell lines. Such guidance will include methodological detail on the preparation of the DNA sample, the appropriate numbers and types of loci to be evaluated, and the interpretation and quality control of the results. Associated with the standard itself will be the establishment and maintenance of a public STR profile database under the auspices of the National Center for Biotechnology Information. The consensus standard is anticipated to be adopted by granting agencies and scientific journals as appropriate methodology for authenticating human cell lines, stem cells, and tissues.
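
    A common way to interpret STR comparisons in cell-line authentication, although not spelled out in the abstract, is a percent-match score over shared alleles (the Tanabe-style calculation); the sketch below uses invented allele calls purely to illustrate that idea.

        # Hypothetical STR profiles: locus -> set of allele calls.
        query = {"D5S818": {11, 12}, "D13S317": {8, 11}, "D7S820": {10},
                 "TH01": {6, 9.3}, "TPOX": {8, 11}, "vWA": {16, 18}}
        reference = {"D5S818": {11, 12}, "D13S317": {8, 11}, "D7S820": {10, 11},
                     "TH01": {6, 9.3}, "TPOX": {8}, "vWA": {16, 18}}

        def percent_match(a, b):
            """Tanabe-style score: 2 x shared alleles / (alleles in a + alleles in b)."""
            common = [locus for locus in a if locus in b]
            shared = sum(len(a[locus] & b[locus]) for locus in common)
            total = sum(len(a[locus]) + len(b[locus]) for locus in common)
            return 200.0 * shared / total

        # Scores of roughly 80% or more are often read as indicating the same line.
        print(f"{percent_match(query, reference):.1f}% match")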

  18. Problems Inherent in Attempting Standardization of Libraries.

    ERIC Educational Resources Information Center

    Port, Idelle

    In setting standards for a large and geographically dispersed library system, one must reconcile the many varying practices that affect what is being measured or discussed. The California State University and Colleges (CSUC) consists of 19 very distinct campuses. The problems and solutions of one type of CSUC library are not likely to be those of…

  19. Traumatic brain injury and adverse life events: Group differences in young adults injured as children.

    PubMed

    Taylor, Olivia; Barrett, Robert D; McLellan, Tracey; McKinlay, Audrey

    2015-01-01

    To investigate whether individuals with a history of traumatic brain injury (TBI) experience a greater number of adverse life events (ALE) compared to controls, to identify significant predictors of experiencing ALE and whether the severity of childhood TBI negatively influences adult life outcomes. A total of 167 individuals, injured prior to age 18, 5 or more years post-injury and 18 or more years of age, were recruited in the Canterbury region of New Zealand, with 124 having sustained childhood TBI (62 mild, 62 moderate/severe) and 43 orthopaedic injury controls. Participants were asked about ALE they had experienced and other adult life outcomes. Individuals with a history of TBI experienced more ALE compared to controls. The number of ALE experienced by an individual was associated with more visits to the doctor, lower education level and lower satisfaction with material standard of living. Childhood TBI is associated with an increased number of ALE and adult negative life outcomes. Understanding factors that contribute to negative outcomes following childhood TBI will provide an avenue for rehabilitation and support to reduce any problems in adulthood.

  20. Design and bidding of UV disinfection equipment -- Case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyurek, M.

    1998-07-01

    Ultraviolet (UV) disinfection systems are being widely considered for application to treated wastewaters, in lieu of conventional chlorination facilities. The number of UV systems operating in the US was approximately 50 in 1984. In 1990 there were over 500 systems, a ten-fold increase. The use of UV disinfection has increased since 1990 and will likely continue to increase in the future. It is anticipated that as chlorine disinfection facilities reach the end of their useful life, most of them will be replaced with UV disinfection systems. Several manufacturers offer different UV disinfection equipment, and each offers something different for the designer. There are also different approaches used in estimating the number of lamps needed for the disinfection system. The lack of standardization in determining the number of lamps for a UV system poses problems for the designer. Such was the case during the design of the disinfection system for the Watertown, SD Wastewater Treatment Plant (WWTP). The purpose of this paper is to present a case study for the design and bidding of UV disinfection equipment.
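
    To show the kind of calculation on which such approaches differ, here is a deliberately simplified lamp-count estimate from a target UV dose, peak flow, and an assumed per-lamp treatment capacity; every number and the sizing rule itself are illustrative assumptions, not the method used in the Watertown design.

        import math

        # Illustrative design inputs (assumed values, not from the case study).
        peak_flow_m3_per_h = 1500.0        # peak wastewater flow
        target_dose_mj_per_cm2 = 30.0      # disinfection dose target
        uv_transmittance = 0.65            # effluent UV transmittance at 254 nm

        # Assumed per-lamp capacity: flow one lamp can treat at the target dose and
        # transmittance above (in practice this figure is vendor- and model-specific).
        lamp_capacity_m3_per_h = 40.0 * (uv_transmittance / 0.65) * (30.0 / target_dose_mj_per_cm2)

        lamps_needed = math.ceil(peak_flow_m3_per_h / lamp_capacity_m3_per_h)
        lamps_with_redundancy = math.ceil(1.2 * lamps_needed)   # assumed 20% spare capacity
        print(lamps_needed, lamps_with_redundancy)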
