An electromagnetism-like metaheuristic for open-shop problems with no buffer
NASA Astrophysics Data System (ADS)
Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi
2012-12-01
This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, in the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem by a mixed integer linear program. The problem can be optimally solved by the model. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve the large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances by which the mathematical model and general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic for its performance to solve some large-sized instances. The results show that the model and algorithm are effective to deal with the problem.
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities which minimizes the ordering and holding cost of product mix. when multiple items at multiple levels with all capacity restrictions are considered, the lot sizing problem become NP hard. Many heuristics were developed in the past have inevitably failed due to size, computational complexity and time. However the authors were successful in the development of PSO based technique namely iterative improvement binary particles swarm technique to address very large capacitated multi-item multi level lot sizing (CMIMLLS) problem. First binary particle Swarm Optimization algorithm is used to find a solution in a reasonable time and iterative improvement local search mechanism is employed to improvise the solution obtained by BPSO algorithm. This hybrid mechanism of using local search on the global solution is found to improve the quality of solutions with respect to time thus IIBPSO method is found best and show excellent results.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with number of variables -and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Large-for-size liver transplant: a single-center experience.
Akdur, Aydincan; Kirnap, Mahir; Ozcay, Figen; Sezgin, Atilla; Ayvazoglu Soy, Hatice Ebru; Karakayali Yarbug, Feza; Yildirim, Sedat; Moray, Gokhan; Arslan, Gulnaz; Haberal, Mehmet
2015-04-01
The ideal ratio between liver transplant graft mass and recipient body weight is unknown, but the graft probably must weigh 0.8% to 2.0% recipient weight. When this ratio > 4%, there may be problems due to large-for-size transplant, especially in recipients < 10 kg. This condition is caused by discrepancy between the small abdominal cavity and large graft and is characterized by decreased blood supply to the liver graft and graft dysfunction. We evaluated our experience with large-for-size grafts. We retrospectively evaluated 377 orthotopic liver transplants that were performed from 2001-2014 in our center. We included 188 pediatric transplants in our study. There were 58 patients < 10 kg who had living-donor living transplant with graft-to-bodyweight ratio > 4%. In 2 patients, the abdomen was closed with a Bogota bag. In 5 patients, reoperation was performed due to vascular problems and abdominal hypertension, and the abdomen was closed with a Bogota bag. All Bogota bags were closed in 2 weeks. After closing the fascia, 10 patients had vascular problems that were diagnosed in the operating room by Doppler ultrasonography, and only the skin was closed without fascia closure. No graft loss occurred due to large-for-size transplant. There were 8 patients who died early after transplant (sepsis, 6 patients; brain death, 2 patients). There was no major donor morbidity or donor mortality. Large-for-size graft may cause abdominal compartment syndrome due to the small size of the recipient abdominal cavity, size discrepancies in vascular caliber, insufficient portal circulation, and disturbance of tissue oxygenation. Abdominal closure with a Bogota bag in these patients is safe and effective to avoid abdominal compartment syndrome. Early diagnosis by ultrasonography in the operating room after fascia closure and repeated ultrasonography at the clinic may help avoid graft loss.
The nonequilibrium quantum many-body problem as a paradigm for extreme data science
NASA Astrophysics Data System (ADS)
Freericks, J. K.; Nikolić, B. K.; Frieder, O.
2014-12-01
Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
ERIC Educational Resources Information Center
Ayeni, Olapade Grace; Olowe, Modupe Oluwatoyin
2016-01-01
Large class size is one of the problems in the educational sector that developing nations have been grappling with. Nigeria as a developing nation is no exception. The purpose of this study is to provide views of both lecturers and students on large class size and how it affects teaching and learning in tertiary institutions in Ekiti State of…
Family size and effective population size in a hatchery stock of coho salmon (Oncorhynchus kisutch)
Simon, R.C.; McIntyre, J.D.; Hemmingsen, A.R.
1986-01-01
Means and variances of family size measured in five year-classes of wire-tagged coho salmon (Oncorhynchus kisutch) were linearly related. Population effective size was calculated by using estimated means and variances of family size in a 25-yr data set. Although numbers of age 3 adults returning to the hatchery appeared to be large enough to avoid inbreeding problems (the 25-yr mean exceeded 4500), the numbers actually contributing to the hatchery production may be too low. Several strategies are proposed to correct the problem perceived. Argument is given to support the contention that the problem of effective size is fairly general and is not confined to the present study population.
The neural bases of the multiplication problem-size effect across countries
Prado, Jérôme; Lu, Jiayan; Liu, Li; Dong, Qi; Zhou, Xinlin; Booth, James R.
2013-01-01
Multiplication problems involving large numbers (e.g., 9 × 8) are more difficult to solve than problems involving small numbers (e.g., 2 × 3). Behavioral research indicates that this problem-size effect might be due to different factors across countries and educational systems. However, there is no neuroimaging evidence supporting this hypothesis. Here, we compared the neural correlates of the multiplication problem-size effect in adults educated in China and the United States. We found a greater neural problem-size effect in Chinese than American participants in bilateral superior temporal regions associated with phonological processing. However, we found a greater neural problem-size effect in American than Chinese participants in right intra-parietal sulcus (IPS) associated with calculation procedures. Therefore, while the multiplication problem-size effect might be a verbal retrieval effect in Chinese as compared to American participants, it may instead stem from the use of calculation procedures in American as compared to Chinese participants. Our results indicate that differences in educational practices might affect the neural bases of symbolic arithmetic. PMID:23717274
Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Conway, Bruce A.
2005-01-01
Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems are characterized by two distinct timescales in their governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk
2014-01-01
In this paper, we analyze a real-world OVRP problem for a production company. Considering real-world constrains, we classify our problem as multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproduct (MCHF/OVRP/SDMP) which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems in different size (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions of small size (10 customers) problems, when the number of customers increases, the problem gets harder to solve, and thus MIP could not find optimal solutions for problems that contain more than 10 customers. Moreover, MIP fails to find any feasible solution of large-scale problems (50–90 customers) within time limits (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA based approach reaches successful solutions with 9.66% gap in 392.8 s on average instead of 7200 s for the problems that contain 10–50 customers. For large-scale problems (50–90 customers), GA reaches feasible solutions of problems within time limits. In conclusion, for the real-world applications, GA is preferable rather than MIP to reach feasible solutions in short time periods. PMID:25045735
Free lipid and computerized determination of adipocyte size.
Svensson, Henrik; Olausson, Daniel; Holmäng, Agneta; Jennische, Eva; Edén, Staffan; Lönn, Malin
2018-06-21
The size distribution of adipocytes in a suspension, after collagenase digestion of adipose tissue, can be determined by computerized image analysis. Free lipid, forming droplets, in such suspensions implicates a bias since droplets present in the images may be identified as adipocytes. This problem is not always adjusted for and some reports state that distinguishing droplets and cells is a considerable problem. In addition, if the droplets originate mainly from rupture of large adipocytes, as often described, this will also bias size analysis. We here confirm that our ordinary manual means of distinguishing droplets and adipocytes in the images ensure correct and rapid identification before exclusion of the droplets. Further, in our suspensions, prepared with focus on gentle handling of tissue and cells, we find no association between the amount of free lipid and mean adipocyte size or proportion of large adipocytes.
Divergent estimation error in portfolio optimization and in linear regression
NASA Astrophysics Data System (ADS)
Kondor, I.; Varga-Haszonits, I.
2008-08-01
The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
NASA Astrophysics Data System (ADS)
Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min
2014-09-01
In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
Stocking levels and underlying assumptions for uneven-aged Ponderosa Pine stands.
P.H. Cochran
1992-01-01
Potential Problems With Q-Values Many ponderosa pine stands have a limited number of size classes, and it may be desirable to carry very large trees through several cutting cycles. Large numbers of trees below commercial size are not needed to provide adequate numbers of future replacement trees. Under these conditions, application of stand density index (SDI) can have...
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than an explicit access to the forward matrix and its transpose, has been paid much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem extremely increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
Planning applications of remote sensing in Arizona
NASA Technical Reports Server (NTRS)
Clark, R. B.; Mouat, D. A.
1976-01-01
Planners in Arizona have been experiencing the inevitable problems which occur when large areas of rural and remote lands are converted to urban-recreational uses over a relatively short period of time. Among the planning problems in the state are unplanned and illegal subdivisions, surburban sprawl, surface hydrologic problems related to ephemeral stream overflow, rapidly changing land use patterns, large size of administrative units, and lack of land use inventory data upon which to base planning decisions.
The choice of sample size: a mixed Bayesian / frequentist approach.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John
2009-04-01
Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
ERIC Educational Resources Information Center
Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.
2018-01-01
Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations plays a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's Jaguar (OLCF) platform, obtaining better performance and scalability than the native MPI implementation. This paper discuss Cheetah's design and implementation, and optimizations to the framework for Cray XT 5 platforms. Our results show that the Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data,more » the Cheetah's Broadcast outperforms the native MPI implementation by 93% for 49,152 processes problem size. For small and large data, it out performs the native MPI implementation by 10% and 9%, respectively, at 24,576 processes problem size. The Cheetah's Barrier performs 10% better than the native MPI implementation for 12,288 processes problem size.« less
NASA Astrophysics Data System (ADS)
Pei, Jun; Liu, Xinbao; Pardalos, Panos M.; Fan, Wenjuan; Wang, Ling; Yang, Shanlin
2016-03-01
Motivated by applications in manufacturing industry, we consider a supply chain scheduling problem, where each job is characterised by non-identical sizes, different release times and unequal processing times. The objective is to minimise the makespan by making batching and sequencing decisions. The problem is formalised as a mixed integer programming model and proved to be strongly NP-hard. Some structural properties are presented for both the general case and a special case. Based on these properties, a lower bound is derived, and a novel two-phase heuristic (TP-H) is developed to solve the problem, which guarantees to obtain a worst case performance ratio of ?. Computational experiments with a set of different sizes of random instances are conducted to evaluate the proposed approach TP-H, which is superior to another two heuristics proposed in the literature. Furthermore, the experimental results indicate that TP-H can effectively and efficiently solve large-size problems in a reasonable time.
Identifying Communication Barriers to Learning in Large Group Accounting Instruction.
ERIC Educational Resources Information Center
Doran, Martha S.; Golen, Steven
1998-01-01
Classroom communication barriers were identified by 291 financial accounting and 372 managerial accounting students. Both groups thought the greatest problems in large group instruction were too much information given in lectures, large class size, and lack of interest in the subject matter. (SK)
Influence of the large-small split effect on strategy choice in complex subtraction.
Xiang, Yan Hui; Wu, Hao; Shang, Rui Hong; Chao, Xiaomei; Ren, Ting Ting; Zheng, Li Ling; Mo, Lei
2018-04-01
Two main theories have been used to explain the arithmetic split effect: decision-making process theory and strategy choice theory. Using the inequality paradigm, previous studies have confirmed that individuals tend to adopt a plausibility-checking strategy and a whole-calculation strategy to solve large and small split problems in complex addition arithmetic, respectively. This supports strategy choice theory, but it is unknown whether this theory also explains performance in solving different split problems in complex subtraction arithmetic. This study used small, intermediate and large split sizes, with each split condition being further divided into problems requiring and not requiring borrowing. The reaction times (RTs) for large and intermediate splits were significantly shorter than those for small splits, while accuracy was significantly higher for large and middle splits than for small splits, reflecting no speed-accuracy trade-off. Further, RTs and accuracy differed significantly between the borrow and no-borrow conditions only for small splits. This study indicates that strategy choice theory is suitable to explain the split effect in complex subtraction arithmetic. That is, individuals tend to choose the plausibility-checking strategy or the whole-calculation strategy according to the split size. © 2016 International Union of Psychological Science.
Coordinated interaction of two hydraulic cylinders when moving large-sized objects
NASA Astrophysics Data System (ADS)
Kreinin, G. V.; Misyurin, S. Yu; Lunev, A. V.
2017-12-01
The problem of the choice of parameters and the control scheme of the dynamics system for the coordinated displacement of a large mass object by two hydraulic piston type engines is considered. As a first stage, the problem is solved with respect to a system in which a heavy load of relatively large geometric dimensions is lifted or lowered in the progressive motion by two unidirectional hydraulic cylinders while maintaining the plane of the lifted object in a strictly horizontal position.
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S.
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems asmore » well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self- consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if Cl methods are also used, the matrix size can be between 10{sup 4} and 10{sup 9} where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has lead to the development of« less
De Visscher, Alice; Vogel, Stephan E; Reishofer, Gernot; Hassler, Eva; Koschutnig, Karl; De Smedt, Bert; Grabner, Roland H
2018-05-15
In the development of math ability, a large variability of performance in solving simple arithmetic problems is observed and has not found a compelling explanation yet. One robust effect in simple multiplication facts is the problem size effect, indicating better performance for small problems compared to large ones. Recently, behavioral studies brought to light another effect in multiplication facts, the interference effect. That is, high interfering problems (receiving more proactive interference from previously learned problems) are more difficult to retrieve than low interfering problems (in terms of physical feature overlap, namely the digits, De Visscher and Noël, 2014). At the behavioral level, the sensitivity to the interference effect is shown to explain individual differences in the performance of solving multiplications in children as well as in adults. The aim of the present study was to investigate the individual differences in multiplication ability in relation to the neural interference effect and the neural problem size effect. To that end, we used a paradigm developed by De Visscher, Berens, et al. (2015) that contrasts the interference effect and the problem size effect in a multiplication verification task, during functional magnetic resonance imaging (fMRI) acquisition. Forty-two healthy adults, who showed high variability in an arithmetic fluency test, participated in our fMRI study. In order to control for the general reasoning level, the IQ was taken into account in the individual differences analyses. Our findings revealed a neural interference effect linked to individual differences in multiplication in the left inferior frontal gyrus, while controlling for the IQ. This interference effect in the left inferior frontal gyrus showed a negative relation with individual differences in arithmetic fluency, indicating a higher interference effect for low performers compared to high performers. This region is suggested in the literature to be involved in resolution of proactive interference. Besides, no correlation between the neural problem size effect and multiplication performance was found. This study supports the idea that the interference due to similarities/overlap of physical traits (the digits) is crucial in memorizing arithmetic facts and in determining individual differences in arithmetic. Copyright © 2018 Elsevier Inc. All rights reserved.
Analytical sizing methods for behind-the-meter battery storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael; Yang, Tao
In behind-the-meter application, battery storage system (BSS) is utilized to reduce a commercial or industrial customer’s payment for electricity use, including energy charge and demand charge. The potential value of BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involve a large number of constraints and decision variables. Such a large scale programming problem is then solved by optimization solvers to obtain numerical solutions.more » Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative, and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using practical building load profile and utility rate. The obtained results are compared with the ones using mathematical programming based methods for validation.« less
Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids
NASA Astrophysics Data System (ADS)
Abbà, Antonella; Campaniello, Dario; Nini, Michele
2017-06-01
The definition of the characteristic filter size to be used for subgrid scales models in large eddy simulation using irregular grids is still an unclosed problem. We investigate some different approaches to the definition of the filter length for anisotropic subgrid scale models and we propose a tensorial formulation based on the inertial ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropicity of the grid is explicitly taken into account with the tensorial filter size.
Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem
NASA Astrophysics Data System (ADS)
Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang
2015-09-01
A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
Solving lot-sizing problem with quantity discount and transportation cost
NASA Astrophysics Data System (ADS)
Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei
2013-04-01
Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. The incorporation of heuristics methods has become a new trend to tackle the complex problem in the past decade. This article considers a lot-sizing problem, and the objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases in a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of the changes in parameters to the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining the replenishment for touch panel manufacturing for multi-periods with quantity discount and batch transportation. The contributions of this article are to construct an MIP model to obtain an optimal solution when the problem is not too complicated itself and to present a GA model to find a near-optimal solution efficiently when the problem is complicated.
Horesh, Yair; Wexler, Ydo; Lebenthal, Ilana; Ziv-Ukelson, Michal; Unger, Ron
2009-03-04
Scanning large genomes with a sliding window in search of locally stable RNA structures is a well motivated problem in bioinformatics. Given a predefined window size L and an RNA sequence S of size N (L < N), the consecutive windows folding problem is to compute the minimal free energy (MFE) for the folding of each of the L-sized substrings of S. The consecutive windows folding problem can be naively solved in O(NL3) by applying any of the classical cubic-time RNA folding algorithms to each of the N-L windows of size L. Recently an O(NL2) solution for this problem has been described. Here, we describe and implement an O(NLpsi(L)) engine for the consecutive windows folding problem, where psi(L) is shown to converge to O(1) under the assumption of a standard probabilistic polymer folding model, yielding an O(L) speedup which is experimentally confirmed. Using this tool, we note an intriguing directionality (5'-3' vs. 3'-5') folding bias, i.e. that the minimal free energy (MFE) of folding is higher in the native direction of the DNA than in the reverse direction of various genomic regions in several organisms including regions of the genomes that do not encode proteins or ncRNA. This bias largely emerges from the genomic dinucleotide bias which affects the MFE, however we see some variations in the folding bias in the different genomic regions when normalized to the dinucleotide bias. We also present results from calculating the MFE landscape of a mouse chromosome 1, characterizing the MFE of the long ncRNA molecules that reside in this chromosome. The efficient consecutive windows folding engine described in this paper allows for genome wide scans for ncRNA molecules as well as large-scale statistics. This is implemented here as a software tool, called RNAslider, and applied to the scanning of long chromosomes, leading to the observation of features that are visible only on a large scale.
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem and the complexity of multi-target tracking solutions usually do not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Reasoning by analogy as an aid to heuristic theorem proving.
NASA Technical Reports Server (NTRS)
Kling, R. E.
1972-01-01
When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.
Klügl, Ines; Hiller, Karl-Anton; Landthaler, Michael; Bäumler, Wolfgang
2010-08-01
Millions of people are tattooed. However, the frequency of health problems is unknown. We performed an Internet survey in German-speaking countries. The provenance of tattooed participants (n = 3,411) was evenly distributed in Germany. The participants had many (28%; >4) and large tattoos (36%; >or=900 cm(2)). After tattooing, the people described skin problems (67.5%) or systemic reactions (6.6%). Four weeks after tattooing, 9% still had health problems. Six percent reported persistent health problems due to the tattoo, of which females (7.3%) were more frequently concerned than males (4.2%). Colored tattoos provoked more short-term skin (p = 0.003) or systemic (p = 0.0001) reactions than black tattoos. Also the size of tattoos and the age at the time of tattooing play a significant role in many health problems. Our results show that millions of people in the Western world supposedly have transient or persisting health problems after tattooing. Owing to the large number and size of the tattoos, tattooists inject several grams of tattoo colorants into the skin, which partly spread in the human body and stay for a lifetime. The latter might cause additional health problems in the long term. Copyright 2010 S. Karger AG, Basel.
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problems sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right hand sides that run in mere days on PC level hardware. This EM solution is accomplished by utilizing the numerical low rank nature of spatially blocked unknowns using the Adaptive Cross Approximation for compressing the rank deficient blocks of the system Z matrix, the L and U factors, the right hand side forcing function and the final current solution. This compressed matrix solution is applied to a frequency domain EM solution of Maxwell's equations using standard Method of Moments approach. Compressed matrix storage and operations count leads to orders of magnitude reduction in memory and run time.
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform tomore » solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.« less
NASA Astrophysics Data System (ADS)
Gao, X.-L.; Ma, H. M.
2010-05-01
A solution for Eshelby's inclusion problem of a finite homogeneous isotropic elastic body containing an inclusion prescribed with a uniform eigenstrain and a uniform eigenstrain gradient is derived in a general form using a simplified strain gradient elasticity theory (SSGET). An extended Betti's reciprocal theorem and an extended Somigliana's identity based on the SSGET are proposed and utilized to solve the finite-domain inclusion problem. The solution for the disturbed displacement field is expressed in terms of the Green's function for an infinite three-dimensional elastic body in the SSGET. It contains a volume integral term and a surface integral term. The former is the same as that for the infinite-domain inclusion problem based on the SSGET, while the latter represents the boundary effect. The solution reduces to that of the infinite-domain inclusion problem when the boundary effect is not considered. The problem of a spherical inclusion embedded concentrically in a finite spherical elastic body is analytically solved by applying the general solution, with the Eshelby tensor and its volume average obtained in closed forms. This Eshelby tensor depends on the position, inclusion size, matrix size, and material length scale parameter, and, as a result, can capture the inclusion size and boundary effects, unlike existing Eshelby tensors. It reduces to the classical Eshelby tensor for the spherical inclusion in an infinite matrix if both the strain gradient and boundary effects are suppressed. Numerical results quantitatively show that the inclusion size effect can be quite large when the inclusion is very small and that the boundary effect can dominate when the inclusion volume fraction is very high. However, the inclusion size effect is diminishing as the inclusion becomes large enough, and the boundary effect is vanishing as the inclusion volume fraction gets sufficiently low.
Monte-Carlo simulation of a stochastic differential equation
NASA Astrophysics Data System (ADS)
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as finite element method or finite difference method. The inhomogeneity of diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, methods with higher orders have been kept forward to allow MC codes with large step size. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis has been applied to a general stochastic problem. Subsequently, our formulization is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charge particles in the toroidal devices.
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
Private Education as a Policy Tool in Turkey
ERIC Educational Resources Information Center
Cinoglu, Mustafa
2006-01-01
This paper discusses privatization as policy tool to solve educational problems in Turkey. Turkey, as a developing country, is faced with many problems in education. Large class size, low enrollment rate, girl's education, high illiteracy rate, religious education, textbooks, curriculum and multicultural education are some of the important…
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
NASA Astrophysics Data System (ADS)
Li, Yuzhong
Using GA solve the winner determination problem (WDP) with large bids and items, run under different distribution, because the search space is large, constraint complex and it may easy to produce infeasible solution, would affect the efficiency and quality of algorithm. This paper present improved MKGA, including three operator: preprocessing, insert bid and exchange recombination, and use Monkey-king elite preservation strategy. Experimental results show that improved MKGA is better than SGA in population size and computation. The problem that traditional branch and bound algorithm hard to solve, improved MKGA can solve and achieve better effect.
Two-Dimensional Crystallography Introduced by the Sprinkler Watering Problem
ERIC Educational Resources Information Center
De Toro, Jose A.; Calvo, Gabriel F.; Muniz, Pablo
2012-01-01
The problem of optimizing the number of circular sprinklers watering large fields is used to introduce, from a purely elementary geometrical perspective, some basic concepts in crystallography and comment on a few size effects in condensed matter physics. We examine square and hexagonal lattices to build a function describing the, so-called, dry…
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
Structural performance analysis and redesign
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1978-01-01
Program performs stress buckling and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through use of sparse matrix solution techniques, and other computational and data management procedures designed for problems of very large size.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems, in which, several parts are processed through machining workstations and later assembled to form final products, is common. Though scheduling of such problems are solved using heuristics, available solution approaches can provide solution for only moderate sized problems due to large computation time required. In this work, scheduling approach is developed for such flow-shop manufacturing system having machining workstations followed by assembly workstations. The initial schedule is generated using Disjunctive method and genetic algorithm (GA) is applied further for generating schedule for large sized problems. GA is found to give near optimal solution based on the deviation of makespan from lower bound. The lower bound of makespan of such problem is estimated and percent deviation of makespan from lower bounds is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using fractional factorial orthogonal array, varying the number of parts per product, number of products, and number of workstations (ranging upto 1,520 number of operations). A statistical analysis indicated the significance of all the three factors considered. It is concluded that GA method can obtain optimal makespan.
Statistical theory and methodology for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Odell, P. L.
1974-01-01
A model is developed for the evaluation of acreages (proportions) of different crop-types over a geographical area using a classification approach and methods for estimating the crop acreages are given. In estimating the acreages of a specific croptype such as wheat, it is suggested to treat the problem as a two-crop problem: wheat vs. nonwheat, since this simplifies the estimation problem considerably. The error analysis and the sample size problem is investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large area crop acreages inventory a sampling scheme is suggested for acquiring sample data and the problem of crop acreage estimation and the error analysis is discussed.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
Study on the temperature field of large-sized sapphire single crystal furnace
NASA Astrophysics Data System (ADS)
Zhai, J. P.; Jiang, J. W.; Liu, K. G.; Peng, X. B.; Jian, D. L.; Li, I. L.
2018-01-01
In this paper, the temperature field of large-sized (120kg, 200kg and 300kg grade) sapphire single crystal furnace was simulated. By keeping the crucible diameter ratio and the insulation system unchanged, the power consumption, axial and radial temperature gradient, solid-liquid surface shape, stress distribution and melt flow were studied. The simulation results showed that with the increase of the single crystal furnace size, the power consumption increased, the temperature field insulation effect became worse, the growth stress value increased and the stress concentration phenomenon occurred. To solve these problems, the middle and bottom insulation system should be enhanced during designing the large-sized sapphire single crystal furnace. The appropriate radial and axial temperature gradient was favorable to reduce the crystal stress and prevent the occurrence of cracking. Expanding the interface between the seed and crystal was propitious to avoid the stress accumulation phenomenon.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range ofmore » matrix sizes that can be manipulated efficiently was expended by more than one order or magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.« less
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
Routing and Addressing Problems in Large Metropolitan-Scale Internetworks. ISI Research Report.
ERIC Educational Resources Information Center
Finn, Gregory G.
This report discusses some of the problems and limitations in existing internetwork design for the connection of packet-switching networks of different technologies and presents an algorithm that has been shown to be suitable for internetworks of unbounded size. Using a new form of address and a flat routing mechanism called Cartesian routing,…
Some practical aspects of designing a large inventory
Kim Iles
2000-01-01
Designing a large multiresource inventory is horribly difficult, it is not work for faint hearts, and the 250,000,000-acre size of British Columbia is not a trivial problem. Completion of the details (as opposed to stopping the work) is a process prone to collapse. After that point, implementation is also likely to fail.
Large-Scale Constraint-Based Pattern Mining
ERIC Educational Resources Information Center
Zhu, Feida
2009-01-01
We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data formats are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…
Teaching Large Classes in Higher Education. How To Maintain Quality with Reduced Resources.
ERIC Educational Resources Information Center
Gibbs, Graham, Ed.; Jenkins, Alan, Ed.
This publication seeks to give practical assistance to teachers and administrators responsible for teaching large classes at collges and universities in the United Kingdom. Areas covered include class size, problems related to learning and teaching, teaching strategies in specific disciplines, field study experience and other subjects. The 12…
Correlation between Academic and Skills-Based Tests in Computer Networks
ERIC Educational Resources Information Center
Buchanan, William
2006-01-01
Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…
A comparison of quality and utilization problems in large and small group practices.
Gleason, S C; Richards, M J; Quinnell, J E
1995-12-01
Physicians practicing in large, multispecialty medical groups share an organizational culture that differs from that of physicians in small or independent practices. Since 1980, there has been a sharp increase in the size of multispecialty group practice organizations, in part because of increased efficiencies of large group practices. The greater number of physicians and support personnel in a large group practice also requires a relatively more sophisticated management structure. The efficiencies, conveniences, and management structure of a large group practice provide an optimal environment to practice medicine. However, a search of the literature found no data linking a large group practice environment to practice outcomes. The purpose of the study reported in this article was to determine if physicians in large practices have fewer quality and utilization problems than physicians in small or independent practices.
NASA Technical Reports Server (NTRS)
Bynum, B. G.; Gause, R. L.; Spier, R. A.
1971-01-01
System overcomes previous ergometer design and calibration problems including inaccurate measurements, large weight, size, and input power requirements, poor heat dissipation, high flammability, and inaccurate calibration. Device consists of lightweight, accurately controlled ergometer, restraint system, and calibration system.
A learning approach to the bandwidth multicolouring problem
NASA Astrophysics Data System (ADS)
Akbari Torkestani, Javad
2016-05-01
In this article, a generalisation of the vertex colouring problem known as bandwidth multicolouring problem (BMCP), in which a set of colours is assigned to each vertex such that the difference between the colours, assigned to each vertex and its neighbours, is by no means less than a predefined threshold, is considered. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of algorithm and the colour set size (colouring optimality) can be made, by a proper choice of the learning rate also. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms and the results show the efficiency of the proposed algorithm in terms of the colour set size and running time of algorithm.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attentions in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the nonconvex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
Heitmuller, Franklin T.; Asquith, William H.
2008-01-01
The Texas Department of Transportation spends considerable money for maintenance and replacement of low-water crossings of streams in the Edwards Plateau in Central Texas as a result of damages caused in part by the transport of cobble- and gravel-sized bed material. An investigation of the problem at low-water crossings was made by the U.S. Geological Survey in cooperation with the Texas Department of Transportation, and in collaboration with Texas Tech University, Lamar University, and the University of Houston. The bed-material entrainment problem for low-water crossings occurs at two spatial scales - watershed scale and channel-reach scale. First, the relative abundance and activity of cobble- and gravel-sized bed material along a given channel reach becomes greater with increasingly steeper watershed slopes. Second, the stresses required to mobilize bed material at a location can be attributed to reach-scale hydraulic factors, including channel geometry and particle size. The frequency of entrainment generally increases with downstream distance, as a result of decreasing particle size and increased flood magnitudes. An average of 1 year occurs between flows that initially entrain bed material as large as the median particle size, and an average of 1.5 years occurs between flows that completely entrain bed material as large as the median particle size. The Froude numbers associated with initial and complete entrainment of bed material up to the median particle size approximately are 0.40 and 0.45, respectively.
Variation in ejecta size with ejection velocity
NASA Technical Reports Server (NTRS)
Vickery, Ann M.
1987-01-01
The sizes and ranges of over 25,000 secondary craters around twelve large primaries on three different planets were measured and used to infer the size-velocity distribution of that portion of the primary crater ejecta that produced the secondaries. The ballistic equation for spherical bodies was used to convert the ranges to velocities, and the velocities and crater sizes were used in the appropriate Schmidt-Holsapple scaling relation of estimate ejecta sizes, and the velocity exponent was determined. The latter are generally between -1 and -13, with an average value of about -1.9. Problems with data collection made it impossible to determine a simple, unique relation between size and velocity.
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have been proposed recently to accelerate convergence for the identification of sparse impulse response. When the excitation signal is colored, especially the speech, the convergence performance of proportionate NLMS algorithms demonstrate slow convergence speed. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA by canceling the a posteriori estimation error. This can result in high convergence speed using a large step size when the identification error is large, and can then considerably decrease the steady-state misalignment using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
Johnson, K E; McMorris, B J; Raynor, L A; Monsen, K A
2013-01-01
The Omaha System is a standardized interface terminology that is used extensively by public health nurses in community settings to document interventions and client outcomes. Researchers using Omaha System data to analyze the effectiveness of interventions have typically calculated p-values to determine whether significant client changes occurred between admission and discharge. However, p-values are highly dependent on sample size, making it difficult to distinguish statistically significant changes from clinically meaningful changes. Effect sizes can help identify practical differences but have not yet been applied to Omaha System data. We compared p-values and effect sizes (Cohen's d) for mean differences between admission and discharge for 13 client problems documented in the electronic health records of 1,016 young low-income parents. Client problems were documented anywhere from 6 (Health Care Supervision) to 906 (Caretaking/parenting) times. On a scale from 1 to 5, the mean change needed to yield a large effect size (Cohen's d ≥ 0.80) was approximately 0.60 (range = 0.50 - 1.03) regardless of p-value or sample size (i.e., the number of times a client problem was documented in the electronic health record). Researchers using the Omaha System should report effect sizes to help readers determine which differences are practical and meaningful. Such disclosures will allow for increased recognition of effective interventions.
Small-size pedestrian detection in large scene based on fast R-CNN
NASA Astrophysics Data System (ADS)
Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu
2018-04-01
Pedestrian detection is a canonical sub-problem of object detection with high demand during recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for small size pedestrian detection in large-view scene. We study that the insufficient resolution of feature maps lead to the unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by the observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy, and training a fast R-CNN style network to jointly optimize small size pedestrian detection with skip connection concatenating feature from different layers to solving coarseness of feature maps. And the accuracy is improved in our research for small size pedestrian detection in the real large scene.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lichtner, Peter C.; Hammond, Glenn E.; Lu, Chuan
PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 218 processor cores with problem sizes up to 2 billion degrees of freedom. Writtenmore » in object oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGC compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel IO to 32 bit integers. Noting that 2 32 = 4; 294; 967; 296, this gives an estimate of the maximum problem size that can be currently run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.« less
Link, W.A.
2003-01-01
Heterogeneity in detection probabilities has long been recognized as problematic in mark-recapture studies, and numerous models developed to accommodate its effects. Individual heterogeneity is especially problematic, in that reasonable alternative models may predict essentially identical observations from populations of substantially different sizes. Thus even with very large samples, the analyst will not be able to distinguish among reasonable models of heterogeneity, even though these yield quite distinct inferences about population size. The problem is illustrated with models for closed and open populations.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for efficient and cost effective production system. However, there exist setup times between the groups, which require to decrease it by sequencing groups in an efficient way. Current research is focused on a sequence dependent group scheduling problem with an aim to minimize the makespan in addition to minimize the total weighted tardiness simultaneously. In most of the production scheduling problems, the processing time of jobs is assumed as fixed. However, the actual processing time of jobs may be reduced due to "learning effect". The integration of sequence dependent group scheduling problem with learning effects has been rarely considered in literature. Therefore, current research considers a single machine group scheduling problem with sequence dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) with some steps of genetic algorithm is proposed for current problem to get Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using proposed HPABC. Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three famous multi objective optimization algorithms, improved strength Pareto evolutionary algorithm (SPEA2), non-dominated sorting genetic algorithm II (NSGAII) and particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems.
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to large-size format light guide plate (LGP) design optimization, such as photo-bioreactors, the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, an original dot pattern is splitted into multiple small sections, inside which the dot size variation is less than the ink dots printing typical resolution. Then, these sections are replaced by equivalent cells with continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of TIS distribution. It allows significantly reduce the total time needed to dot pattern optimization.
A hybrid metaheuristic for closest string problem.
Mousavi, Sayyed Rasoul
2011-01-01
The Closest String Problem (CSP) is an optimisation problem, which is to obtain a string with the minimum distance from a number of given strings. In this paper, a new metaheuristic algorithm is investigated for the problem, whose main feature is relatively high speed in obtaining good solutions, which is essential when the input size is large. The proposed algorithm is compared with four recent algorithms suggested for the problem, outperforming them in more than 98% of the cases. It is also remarkably faster than all of them, running within 1 s in most of the experimental cases.
REDUCING REFRIGERANT EMISSIONS FROM SUPERMARKET SYSTEMS
Large refrigeration systems are found in several applications including supermarkets, cold storage warehouses, and industrial processes. The sizes of these systems are a contributing factor to their problems of high refrigerant leak rates because of the thousands of connections, ...
Bio-inspired group modeling and analysis for intruder detection in mobile sensor/robotic networks.
Fu, Bo; Xiao, Yang; Liang, Xiannuan; Philip Chen, C L
2015-01-01
Although previous bio-inspired models have concentrated on invertebrates (such as ants), mammals such as primates with higher cognitive function are valuable for modeling the increasingly complex problems in engineering. Understanding primates' social and communication systems, and applying what is learned from them to engineering domains is likely to inspire solutions to a number of problems. This paper presents a novel bio-inspired approach to determine group size by researching and simulating primate society. Group size does matter for both primate society and digital entities. It is difficult to determine how to group mobile sensors/robots that patrol in a large area when many factors are considered such as patrol efficiency, wireless interference, coverage, inter/intragroup communications, etc. This paper presents a simulation-based theoretical study on patrolling strategies for robot groups with the comparison of large and small groups through simulations and theoretical results.
Approaches to eliminate waste and reduce cost for recycling glass.
Chao, Chien-Wen; Liao, Ching-Jong
2011-12-01
In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop with time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of integrated scheduling solution and the results also show that proposed GA-based heuristics are efficient for the integrated problem.
NASA Astrophysics Data System (ADS)
Aishah Syed Ali, Sharifah
2017-09-01
This paper considers economic lot sizing problem in remanufacturing with separate setup (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, which leads to computationally inefficient and low-quality of solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
Schaerf, T M; Makinson, J C; Myerscough, M R; Beekman, M
2013-10-06
Reproductive swarms of honeybees are faced with the problem of finding a good site to establish a new colony. We examined the potential effects of swarm size on the quality of nest-site choice through a combination of modelling and field experiments. We used an individual-based model to examine the effects of swarm size on decision accuracy under the assumption that the number of bees actively involved in the decision-making process (scouts) is an increasing function of swarm size. We found that the ability of a swarm to choose the best of two nest sites decreases as swarm size increases when there is some time-lag between discovering the sites, consistent with Janson & Beekman (Janson & Beekman 2007 Proceedings of European Conference on Complex Systems, pp. 204-211.). However, when simulated swarms were faced with a realistic problem of choosing between many nest sites discoverable at all times, larger swarms were more accurate in their decisions than smaller swarms owing to their ability to discover nest sites more rapidly. Our experimental fieldwork showed that large swarms invest a larger number of scouts into the decision-making process than smaller swarms. Preliminary analysis of waggle dances from experimental swarms also suggested that large swarms could indeed discover and advertise nest sites at a faster rate than small swarms.
Schaerf, T. M.; Makinson, J. C.; Myerscough, M. R.; Beekman, M.
2013-01-01
Reproductive swarms of honeybees are faced with the problem of finding a good site to establish a new colony. We examined the potential effects of swarm size on the quality of nest-site choice through a combination of modelling and field experiments. We used an individual-based model to examine the effects of swarm size on decision accuracy under the assumption that the number of bees actively involved in the decision-making process (scouts) is an increasing function of swarm size. We found that the ability of a swarm to choose the best of two nest sites decreases as swarm size increases when there is some time-lag between discovering the sites, consistent with Janson & Beekman (Janson & Beekman 2007 Proceedings of European Conference on Complex Systems, pp. 204–211.). However, when simulated swarms were faced with a realistic problem of choosing between many nest sites discoverable at all times, larger swarms were more accurate in their decisions than smaller swarms owing to their ability to discover nest sites more rapidly. Our experimental fieldwork showed that large swarms invest a larger number of scouts into the decision-making process than smaller swarms. Preliminary analysis of waggle dances from experimental swarms also suggested that large swarms could indeed discover and advertise nest sites at a faster rate than small swarms. PMID:23904590
Large scale systems : a study of computer organizations for air traffic control applications.
DOT National Transportation Integrated Search
1971-06-01
Based on current sizing estimates and tracking algorithms, some computer organizations applicable to future air traffic control computing systems are described and assessed. Hardware and software problem areas are defined and solutions are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanford, M.
1997-12-31
Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach becomes prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today`s multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it does not ever assemblemore » a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.« less
Nonlinear stability of the 1D Boltzmann equation in a periodic box
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2018-05-01
We study the nonlinear stability of the Boltzmann equation in the 1D periodic box with size , where is the Knudsen number. The convergence rate is for small time region and exponential for large time region. Moreover, the exponential rate depends on the size of the domain (Knudsen number). This problem is highly nonlinear and hence we need more careful analysis to control the nonlinear term.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
NASA Astrophysics Data System (ADS)
Baniamerian, Ali; Bashiri, Mahdi; Zabihi, Fahime
2018-03-01
Cross-docking is a new warehousing policy in logistics which is widely used all over the world and attracts many researchers attention to study about in last decade. In the literature, economic aspects has been often studied, while one of the most significant factors for being successful in the competitive global market is improving quality of customer servicing and focusing on customer satisfaction. In this paper, we introduce a vehicle routing and scheduling problem with cross-docking and time windows in a three-echelon supply chain that considers customer satisfaction. A set of homogeneous vehicles collect products from suppliers and after consolidation process in the cross-dock, immediately deliver them to customers. A mixed integer linear programming model is presented for this problem to minimize transportation cost and early/tardy deliveries with scheduling of inbound and outbound vehicles to increase customer satisfaction. A two phase genetic algorithm (GA) is developed for the problem. For investigating the performance of the algorithm, it was compared with exact and lower bound solutions in small and large-size instances, respectively. Results show that there are at least 86.6% customer satisfaction by the proposed method, whereas customer satisfaction in the classical model is at most 33.3%. Numerical examples results show that the proposed two phase algorithm could achieve optimal solutions in small-size instances. Also in large-size instances, the proposed two phase algorithm could achieve better solutions with less gap from the lower bound in less computational time in comparison with the classic GA.
Future orientation, school contexts, and problem behaviors: a multilevel study.
Chen, Pan; Vazsonyi, Alexander T
2013-01-01
The association between future orientation and problem behaviors has received extensive empirical attention; however, previous work has not considered school contextual influences on this link. Using a sample of N = 9,163 9th to 12th graders (51.0 % females) from N = 85 high schools of the National Longitudinal Study of Adolescent Health, the present study examined the independent and interactive effects of adolescent future orientation and school contexts (school size, school location, school SES, school future orientation climate) on problem behaviors. Results provided evidence that adolescent future orientation was associated independently and negatively with problem behaviors. In addition, adolescents from large-size schools reported higher levels of problem behaviors than their age mates from small-size schools, controlling for individual-level covariates. Furthermore, an interaction effect between adolescent future orientation and school future orientation climate was found, suggesting influences of school future orientation climate on the link between adolescent future orientation and problem behaviors as well as variations in effects of school future orientation climate across different levels of adolescent future orientation. Specifically, the negative association between adolescent future orientation and problem behaviors was stronger at schools with a more positive climate of future orientation, whereas school future orientation climate had a significant and unexpectedly positive relationship with problem behaviors for adolescents with low levels of future orientation. Findings implicate the importance of comparing how the future orientation-problem behaviors link varies across different ecological contexts and the need to understand influences of school climate on problem behaviors in light of differences in psychological processes among adolescents.
NASA Astrophysics Data System (ADS)
Dean, David S.; Majumdar, Satya N.
2002-08-01
We study a fragmentation problem where an initial object of size x is broken into m random pieces provided x > x0 where x0 is an atomic cut-off. Subsequently, the fragmentation process continues for each of those daughter pieces whose sizes are bigger than x0. The process stops when all the fragments have sizes smaller than x0. We show that the fluctuation of the total number of splitting events, characterized by the variance, generically undergoes a nontrivial phase transition as one tunes the branching number m through a critical value m = mc. For m < mc, the fluctuations are Gaussian where as for m > mc they are anomalously large and non-Gaussian. We apply this general result to analyse two different search algorithms in computer science.
Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems
Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.
2014-01-01
Products such as cars, trucks, and heavy machinery are assembled by two-sided assembly line. Assembly line balancing has significant impacts on the performance and productivity of flow line manufacturing systems and is an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which the tasks are to be assigned at L side or R side or any one side (addressed as E). Two objectives, minimum number of workstations and minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solve multiobjective optimization problem: first approach combines all the objectives into a single composite function or moves all but one objective to the constraint set; second approach determines the Pareto optimal solution set. This paper proposes two heuristics to evolve optimal Pareto front for the TALBP under consideration: Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size and Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems and their performances are compared with a set of test problems. PMID:24790568
Microcomputers in the Anesthesia Library.
ERIC Educational Resources Information Center
Wright, A. J.
The combination of computer technology and library operation is helping to alleviate such library problems as escalating costs, increasing collection size, deteriorating materials, unwieldy arrangement schemes, poor subject control, and the acquisition and processing of large numbers of rarely used documents. Small special libraries such as…
Active space debris removal by using laser propulsion
NASA Astrophysics Data System (ADS)
Rezunkov, Yu. A.
2013-03-01
At present, a few projects on the space debris removal by using highpower lasers are developed. One of the established projects is the ORION proposed by Claude Phipps from Photonics Associates Company and supported by NASA (USA) [1]. But the technical feasibility of the concept is limited by sizes of the debris objects (from 1 to 10 cm) because of a small thrust impulse generated at the laser ablation of the debris materials. At the same time, the removal of rocket upper stages and satellites, which have reached the end of their lives, has been carried out only in a very small number of cases and most of them remain on the Low Earth Orbits (LEO). To reduce the amount of these large-size objects, designing of space systems allowing deorbiting upper rocket stages and removing large-size satellite remnants from economically and scientifically useful orbits to disposal ones is considered. The suggested system is based on high-power laser propulsion. Laser-Orbital Transfer Vehicle (LOTV) with the developed aerospace laser propulsion engine is considered as applied to the problem of mitigation of man-made large-size space debris in LEO.
A Novel Coarsening Method for Scalable and Efficient Mesh Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, A; Hysom, D; Gunney, B
2010-12-02
In this paper, we propose a novel mesh coarsening method called brick coarsening method. The proposed method can be used in conjunction with any graph partitioners and scales to very large meshes. This method reduces problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing simple RCB partitioner to produce higher-quality partitions with significantly less edge cuts. Our results further indicatemore » that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem size while still maintaining good partitioning performance with relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in different iterative methods to solve a sparse system of linear equation. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that many of matrix-vector multiplication can be performed locally on each processor and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures the equal amount of computation performed on each processor. Graph partitioning is a well known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some forms of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that the ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method on such an environment should be fast and scale to very large meshes, while producing high quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communications and balance the load imposed on the processors. Our goals in this work are two-fold: (1) To develop a new scalable graph partitioning method with good load balancing and communication reduction capability. (2) To study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller size mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed size blocks called bricks. These brick are then laid in a way similar to conventional brick laying technique, which reduces the number of neighboring blocks each block needs to communicate. Contributions of this research are as follows: (1) We have developed a novel method that scales to a really large problem size while producing high quality mesh partitions; (2) We measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones using our method. To the best of our knowledge, this is the largest complex mesh that a partitioning method is successfully applied to; and (3) We have shown that proposed method can reduce the number of edge cuts by as much as 65%.« less
Ding, Yongxia; Zhang, Peili
2018-06-12
Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We subsequently assessed the satisfaction of the experimental group in relation to the web-based PBL teaching mode. This assessment was performed following comparison of teaching activity outcomes pertaining to exams and self-learning capacity between the two groups. When compared with the control group, the examination scores and self-learning capabilities were significantly higher in the experimental group (P < 0.01) compared with the control group. In addition, 92.6% of students in the experimental group expressed satisfaction with the new web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears to be more optimal than traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning. Copyright © 2018. Published by Elsevier Ltd.
1994-04-07
detector mated to wide- angle optics to continuously view a large conical volume of space in the vicinity of the orbiting spacecraft . When a debris... large uncertainties. This lack of reliable data for debris particles in the millimeter/centimeter size range presents a problem to spacecraft designers...by smaller particles (<I mm) can be negated by the use of meteor bumpers covering the critical parts of a spacecraft , without incurring too large a
Lower Sensitivity to Happy and Angry Facial Emotions in Young Adults with Psychiatric Problems
Vrijen, Charlotte; Hartman, Catharina A.; Lodder, Gerine M. A.; Verhagen, Maaike; de Jonge, Peter; Oldehinkel, Albertine J.
2016-01-01
Many psychiatric problem domains have been associated with emotion-specific biases or general deficiencies in facial emotion identification. However, both within and between psychiatric problem domains, large variability exists in the types of emotion identification problems that were reported. Moreover, since the domain-specificity of the findings was often not addressed, it remains unclear whether patterns found for specific problem domains can be better explained by co-occurrence of other psychiatric problems or by more generic characteristics of psychopathology, for example, problem severity. In this study, we aimed to investigate associations between emotion identification biases and five psychiatric problem domains, and to determine the domain-specificity of these biases. Data were collected as part of the ‘No Fun No Glory’ study and involved 2,577 young adults. The study participants completed a dynamic facial emotion identification task involving happy, sad, angry, and fearful faces, and filled in the Adult Self-Report Questionnaire, of which we used the scales depressive problems, anxiety problems, avoidance problems, Attention-Deficit Hyperactivity Disorder (ADHD) problems and antisocial problems. Our results suggest that participants with antisocial problems were significantly less sensitive to happy facial emotions, participants with ADHD problems were less sensitive to angry emotions, and participants with avoidance problems were less sensitive to both angry and happy emotions. These effects could not be fully explained by co-occurring psychiatric problems. Whereas this seems to indicate domain-specificity, inspection of the overall pattern of effect sizes regardless of statistical significance reveals generic patterns as well, in that for all psychiatric problem domains the effect sizes for happy and angry emotions were larger than the effect sizes for sad and fearful emotions. As happy and angry emotions are strongly associated with approach and avoidance mechanisms in social interaction, these mechanisms may hold the key to understanding the associations between facial emotion identification and a wide range of psychiatric problems. PMID:27920735
Extending ALE3D, an Arbitrarily Connected hexahedral 3D Code, to Very Large Problem Size (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, A L
2010-12-15
As the number of compute units increases on the ASC computers, the prospect of running previously unimaginably large problems is becoming a reality. In an arbitrarily connected 3D finite element code, like ALE3D, one must provide a unique identification number for every node, element, face, and edge. This is required for a number of reasons, including defining the global connectivity array required for domain decomposition, identifying appropriate communication patterns after domain decomposition, and determining the appropriate load locations for implicit solvers, for example. In most codes, the unique identification number is defined as a 32-bit integer. Thus the maximum valuemore » available is 231, or roughly 2.1 billion. For a 3D geometry consisting of arbitrarily connected hexahedral elements, there are approximately 3 faces for every element, and 3 edges for every node. Since the nodes and faces need id numbers, using 32-bit integers puts a hard limit on the number of elements in a problem at roughly 700 million. The first solution to this problem would be to replace 32-bit signed integers with 32-bit unsigned integers. This would increase the maximum size of a problem by a factor of 2. This provides some head room, but almost certainly not one that will last long. Another solution would be to replace all 32-bit int declarations with 64-bit long long declarations. (long is either a 32-bit or a 64-bit integer, depending on the OS). The problem with this approach is that there are only a few arrays that actually need to extended size, and thus this would increase the size of the problem unnecessarily. In a future computing environment where CPUs are abundant but memory relatively scarce, this is probably the wrong approach. Based on these considerations, we have chosen to replace only the global identifiers with the appropriate 64-bit integer. The problem with this approach is finding all the places where data that is specified as a 32-bit integer needs to be replaced with the 64-bit integer. that need to be replaced. In the rest of this paper we describe the techniques used to facilitate this transformation, issues raised, and issues still to be addressed. This poster will describe the reasons, methods, issues associated with extending the ALE3D code to run problems larger than 700 million elements.« less
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in applications' fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is a solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Vega, F F; Cantu-Paz, E; Lopez, J I
The population size of genetic algorithms (GAs) affects the quality of the solutions and the time required to find them. While progress has been made in estimating the population sizes required to reach a desired solution quality for certain problems, in practice the sizing of populations is still usually performed by trial and error. These trials might lead to find a population that is large enough to reach a satisfactory solution, but there may still be opportunities to optimize the computational cost by reducing the size of the population. This paper presents a technique called plague that periodically removes amore » number of individuals from the population as the GA executes. Recently, the usefulness of the plague has been demonstrated for genetic programming. The objective of this paper is to extend the study of plagues to genetic algorithms. We experiment with deceptive trap functions, a tunable difficult problem for GAs, and the experiments show that plagues can save computational time while maintaining solution quality and reliability.« less
Using Mobile Phone Technology in EFL Classes
ERIC Educational Resources Information Center
Sad, Süleyman Nihat
2008-01-01
Teachers of English as a foreign language (EFL) who want to develop successful lessons face numerous challenges, including large class sizes and inadequate instructional materials and technological support. Another problem is unmotivated students who refuse to participate in class activities. According to Harmer (2007), uncooperative and…
Publication Bias in Special Education Meta-Analyses
ERIC Educational Resources Information Center
Gage, Nicholas A.; Cook, Bryan G.; Reichow, Brian
2017-01-01
Publication bias involves the disproportionate representation of studies with large and significant effects in the published research. Among other problems, publication bias results in inflated omnibus effect sizes in meta-analyses, giving the impression that interventions have stronger effects than they actually do. Although evidence suggests…
ERIC Educational Resources Information Center
Cole, Stephanie
2010-01-01
Teaching an introductory survey course in a typical lecture hall presents a series of related obstacles. The large number of students, the size of the room, and the fixed nature of the seating tend to maximize the distance between instructor and students. That distance then grants enrolled students enough anonymity to skip class too frequently and…
Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture
NASA Astrophysics Data System (ADS)
Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo
2018-05-01
The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), causing severe wafer bending and cracking, is demonstrated to be elastically quenched by substrate patterning in finite arrays of Si micro-pillars, sufficiently large in aspect ratio to allow for lateral pillar tilting, both by simulations and by preliminary experiments. In suspended SiC patches, the mechanical problem is addressed by finite element method: both the strain relaxation and the wafer curvature are calculated at different pillar height, array size, and film thickness. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius, as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of calculation process and increase of calculation time caused by increasing size of continuous optimization problem remain the major issues to be solved to apply the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on interior point method mainly for improvement of calculation stability. The proposed method has dynamic estimation mechanism of active constraints on variables, which fixes the variables getting closer to the upper/lower limit on them and afterwards releases the fixed ones as needed during the optimization process. It is considered as algorithm-level integration of the solution strategy of active-set method into the interior point method framework. We describe some numerical results on commonly-used bench-mark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, the test results on large-sized ELD problem (Economic Load Dispatching problems in electric power supply scheduling) are also described as a practical industrial application.
A firm size and safety performance profile of the U.S. motor carrier industry : [executive summary].
DOT National Transportation Integrated Search
2015-11-01
Motor carrier crashes continue to present a societal and public policy : problem. Large commercial truck crashes are a topic of serious concern : in Iowa. Statistics illustrate the need to make further progress on the : safety performance of motor ca...
Ecologists are often faced with problem of small sample size, correlated and large number of predictors, and high noise-to-signal relationships. This necessitates excluding important variables from the model when applying standard multiple or multivariate regression analyses. In ...
Visualizing Internet routing changes.
Lad, Mohit; Massey, Dan; Zhang, Lixia
2006-01-01
Today's Internet provides a global data delivery service to millions of end users and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract out the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
The traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like energy consumption of production have not been completely taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It can adjust the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution due to the fact that the problem is strongly NP-hard. Finally, the effectiveness of the approach is performed smalland large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances. In addition, the average maximum energy saving ratio can reach to 13%. And it can save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting the near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting point to explore an energy-aware schedule optimization for a traditional production planning and scheduling problem.
McFarquhar, Tara; Luyten, Patrick; Fonagy, Peter
2018-01-15
Interpersonal problems are commonly reported by depressed patients, but the effect of psychotherapeutic treatment on them remains unclear. This paper reviews the effectiveness of psychotherapeutic interventions for depression on interpersonal problems as measured by the Inventory of Interpersonal Problems (IIP). An electronic database search identified articles reporting IIP outcome scores for individual adult psychotherapy for depression. A systematic review and, where possible, meta-analysis was conducted. Twenty-eight studies met inclusion criteria, 10 of which could be included in a meta-analysis investigating changes in the IIP after brief psychotherapy. Reasons for exclusion from the meta-analysis were too few participants with a diagnosis of depression (n=13), IIP means and SDs unobtainable (n=3) and long-term therapy (n=2). A large effect size (g=0.74, 95% CI=0.56-0.93) was found for improvement in IIP scores after brief treatment. Paucity of IIP reporting and treatment type variability mean results are preliminary. Heterogeneity for improvement in IIP after brief psychotherapy was high (I 2 =75%). Despite being central to theories of depression, interpersonal problems are infrequently included in outcome studies. Brief psychotherapy was associated with moderate to large effect sizes in reduction in interpersonal problems. Of the dimensions underlying interpersonal behaviour, the dominance dimension may be more amenable to change than the affiliation dimension. Yet, high pre-treatment affiliation appeared to be associated with better outcomes than low affiliation, supporting the theory that more affiliative patients may develop a better therapeutic relationship with the therapist and consequently respond more positively than more hostile patients. Copyright © 2017 Elsevier B.V. All rights reserved.
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with lambda(exp 3) (model scale) and thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever increasing size of aircraft a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
Estoup, Arnaud; Jarne, Philippe; Cornuet, Jean-Marie
2002-09-01
Homoplasy has recently attracted the attention of population geneticists, as a consequence of the popularity of highly variable stepwise mutating markers such as microsatellites. Microsatellite alleles generally refer to DNA fragments of different size (electromorphs). Electromorphs are identical in state (i.e. have identical size), but are not necessarily identical by descent due to convergent mutation(s). Homoplasy occurring at microsatellites is thus referred to as size homoplasy. Using new analytical developments and computer simulations, we first evaluate the effect of the mutation rate, the mutation model, the effective population size and the time of divergence between populations on size homoplasy at the within and between population levels. We then review the few experimental studies that used various molecular techniques to detect size homoplasious events at some microsatellite loci. The relationship between this molecularly accessible size homoplasy size and the actual amount of size homoplasy is not trivial, the former being considerably influenced by the molecular structure of microsatellite core sequences. In a third section, we show that homoplasy at microsatellite electromorphs does not represent a significant problem for many types of population genetics analyses realized by molecular ecologists, the large amount of variability at microsatellite loci often compensating for their homoplasious evolution. The situations where size homoplasy may be more problematic involve high mutation rates and large population sizes together with strong allele size constraints.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
NASA Astrophysics Data System (ADS)
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2017-07-01
Nowadays, organizations have to compete with different competitors in regional, national and international levels, so they have to improve their competition capabilities to survive against competitors. Undertaking activities on a global scale requires a proper distribution system which could take advantages of different transportation modes. Accordingly, the present paper addresses a location-routing problem on multimodal transportation network. The introduced problem follows four objectives simultaneously which form main contribution of the paper; determining multimodal routes between supplier and distribution centers, locating mode changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming is presented for the problem, and a genetic algorithm with a new chromosome structure proposed to solve the problem. Proposed chromosome structure consists of two different parts for multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases with different sizes generated and solved. Also, different cost scenarios designed to better analyze model and algorithm performance. Results show that algorithm can effectively solve large-size problems within a reasonable time which GAMS software failed to reach an optimal solution even within much longer times.
NASA Astrophysics Data System (ADS)
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2018-07-01
Nowadays, organizations have to compete with different competitors in regional, national and international levels, so they have to improve their competition capabilities to survive against competitors. Undertaking activities on a global scale requires a proper distribution system which could take advantages of different transportation modes. Accordingly, the present paper addresses a location-routing problem on multimodal transportation network. The introduced problem follows four objectives simultaneously which form main contribution of the paper; determining multimodal routes between supplier and distribution centers, locating mode changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming is presented for the problem, and a genetic algorithm with a new chromosome structure proposed to solve the problem. Proposed chromosome structure consists of two different parts for multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases with different sizes generated and solved. Also, different cost scenarios designed to better analyze model and algorithm performance. Results show that algorithm can effectively solve large-size problems within a reasonable time which GAMS software failed to reach an optimal solution even within much longer times.
Effects of Stormwater Pipe Size and Rainfall on Sediment and Nutrients Delivered to a Coastal Bayou
Pollutants discharged from stormwater pipes can cause water quality and ecosystem problems in coastal bayous. A study was conducted to characterize sediment and nutrients discharged by small and large (, 20 cm and .20 cm in internal diameters, respectively) pipes under different ...
Estimating the Local Size and Coverage of Interaction Network Regions
ERIC Educational Resources Information Center
Eagle, Michael; Barnes, Tiffany
2015-01-01
Interactive problem solving environments, such as intelligent tutoring systems and educational video games, produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled the student-tutor interactions using complex network…
49 CFR Appendix D to Part 178 - Thermal Resistance Test
Code of Federal Regulations, 2013 CFR
2013-10-01
... large enough in size to fully house the test outer package without clearance problems. The test oven....3Instrumentation. A calibrated recording device or a computerized data acquisition system with an appropriate range... Configuration. Each outer package material type and design must be tested, including any features such as...
49 CFR Appendix D to Part 178 - Thermal Resistance Test
Code of Federal Regulations, 2014 CFR
2014-10-01
... large enough in size to fully house the test outer package without clearance problems. The test oven....3Instrumentation. A calibrated recording device or a computerized data acquisition system with an appropriate range... Configuration. Each outer package material type and design must be tested, including any features such as...
49 CFR Appendix D to Part 178 - Thermal Resistance Test
Code of Federal Regulations, 2011 CFR
2011-10-01
... large enough in size to fully house the test outer package without clearance problems. The test oven....3Instrumentation. A calibrated recording device or a computerized data acquisition system with an appropriate range... Configuration. Each outer package material type and design must be tested, including any features such as...
49 CFR Appendix D to Part 178 - Thermal Resistance Test
Code of Federal Regulations, 2012 CFR
2012-10-01
... large enough in size to fully house the test outer package without clearance problems. The test oven....3Instrumentation. A calibrated recording device or a computerized data acquisition system with an appropriate range... Configuration. Each outer package material type and design must be tested, including any features such as...
2017-01-01
Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently. PMID:28255297
Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X
2017-01-01
Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Space construction base control system
NASA Technical Reports Server (NTRS)
1978-01-01
Aspects of an attitude control system were studied and developed for a large space base that is structurally flexible and whose mass properties change rather dramatically during its orbital lifetime. Topics of discussion include the following: (1) space base orbital pointing and maneuvering; (2) angular momentum sizing of actuators; (3) momentum desaturation selection and sizing; (4) multilevel control technique applied to configuration one; (5) one-dimensional model simulation; (6) N-body discrete coordinate simulation; (7) structural analysis math model formulation; and (8) discussion of control problems and control methods.
When is bigger better? The effects of group size on the evolution of helping behaviours.
Powers, Simon T; Lehmann, Laurent
2017-05-01
Understanding the evolution of sociality in humans and other species requires understanding how selection on social behaviour varies with group size. However, the effects of group size are frequently obscured in the theoretical literature, which often makes assumptions that are at odds with empirical findings. In particular, mechanisms are suggested as supporting large-scale cooperation when they would in fact rapidly become ineffective with increasing group size. Here we review the literature on the evolution of helping behaviours (cooperation and altruism), and frame it using a simple synthetic model that allows us to delineate how the three main components of the selection pressure on helping must vary with increasing group size. The first component is the marginal benefit of helping to group members, which determines both direct fitness benefits to the actor and indirect fitness benefits to recipients. While this is often assumed to be independent of group size, marginal benefits are in practice likely to be maximal at intermediate group sizes for many types of collective action problems, and will eventually become very small in large groups due to the law of decreasing marginal returns. The second component is the response of social partners on the past play of an actor, which underlies conditional behaviour under repeated social interactions. We argue that under realistic conditions on the transmission of information in a population, this response on past play decreases rapidly with increasing group size so that reciprocity alone (whether direct, indirect, or generalised) cannot sustain cooperation in very large groups. The final component is the relatedness between actor and recipient, which, according to the rules of inheritance, again decreases rapidly with increasing group size. These results explain why helping behaviours in very large social groups are limited to cases where the number of reproducing individuals is small, as in social insects, or where there are social institutions that can promote (possibly through sanctioning) large-scale cooperation, as in human societies. Finally, we discuss how individually devised institutions can foster the transition from small-scale to large-scale cooperative groups in human evolution. © 2016 Cambridge Philosophical Society.
Does group size have an impact on welfare indicators in fattening pigs?
Meyer-Hamme, S E K; Lambertz, C; Gauly, M
2016-01-01
Production systems for fattening pigs have been characterized over the last 2 decades by rising farm sizes coupled with increasing group sizes. These developments resulted in a serious public discussion regarding animal welfare and health in these intensive production systems. Even though large farm and group sizes came under severe criticism, it is still unknown whether these factors indeed negatively affect animal welfare. Therefore, the aim of this study was to assess the effect of group size (30 pigs/pen) on various animal-based measures of the Welfare Quality(®) protocol for growing pigs under conventional fattening conditions. A total of 60 conventional pig fattening farms with different group sizes in Germany were included. Moderate bursitis (35%) was found as the most prevalent indicator of welfare-related problems, while its prevalence increased with age during the fattening period. However, differences between group sizes were not detected (P>0.05). The prevalence of moderately soiled bodies increased from 9.7% at the start to 14.2% at the end of the fattening period, whereas large pens showed a higher prevalence (15.8%) than small pens (10.4%; P<0.05). With increasing group size, the incidence of moderate wounds with 8.5% and 11.3% in small- and medium-sized pens, respectively, was lower (P<0.05) than in large-sized ones (16.3%). Contrary to bursitis and dirtiness, its prevalence decreased during the fattening period. Moderate manure was less often found in pigs fed by a dry feeder than in those fed by a liquid feeding system (P<0.05). The human-animal relationship was improved in large in comparison to small groups. On the contrary, negative social behaviour was found more often in large groups. Exploration of enrichment material decreased with increasing live weight. Given that all animals were tail-docked, tail biting was observed at a very low rate of 1.9%. In conclusion, the results indicate that BW and feeding system are determining factors for the welfare status, while group size was not proved to affect the welfare level under the studied conditions of pig fattening.
Weighted mining of massive collections of [Formula: see text]-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of [Formula: see text]-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp , a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the [Formula: see text]-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as similar to many search methods using backward chaining, a large number of redundant computation may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to rid all redundant computations in running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.
A networked voting rule for democratic representation
NASA Astrophysics Data System (ADS)
Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir
2018-03-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process.
Problematic Peer Functioning in Girls with ADHD: A Systematic Literature Review.
Kok, Francien M; Groen, Yvonne; Fuermaier, Anselm B M; Tucha, Oliver
2016-01-01
Children with attention deficit hyperactivity disorder (ADHD) experience many peer interaction problems and are at risk of peer rejection and victimisation. Although many studies have investigated problematic peer functioning in children with ADHD, this research has predominantly focused on boys and studies investigating girls are scant. Those studies that did examine girls, often used a male comparison sample, disregarding the inherent gender differences between girls and boys. Previous studies have highlighted this limitation and recommended the need for comparisons between ADHD females and typical females, in order to elucidate the picture of female ADHD with regards to problematic peer functioning. The aim of this literature review was to gain insight into peer functioning difficulties in school-aged girls with ADHD. PsychINFO, PubMed, and Web of Knowledge were searched for relevant literature comparing school-aged girls with ADHD to typically developing girls (TDs) in relation to peer functioning. The peer relationship domains were grouped into 'friendship', 'peer status', 'social skills/competence', and 'peer victimisation and bullying'. In total, thirteen studies were included in the review. All of the thirteen studies included reported that girls with ADHD, compared to TD girls, demonstrated increased difficulties in the domains of friendship, peer interaction, social skills and functioning, peer victimization and externalising behaviour. Studies consistently showed small to medium effects for lower rates of friendship participation and stability in girls with ADHD relative to TD girls. Higher levels of peer rejection with small to large effect sizes were reported in all studies, which were predicted by girls' conduct problems. Peer rejection in turn predicted poor social adjustment and a host of problem behaviours. Very high levels of peer victimisation were present in girls with ADHD with large effect sizes. Further, very high levels of social impairment and social skills deficits, with large effect sizes, were found across all studies. Levels of pro-social behaviour varied across studies, but were mostly lower in girls with ADHD, with small to large effect sizes. Overall, social disability was significantly higher among girls with ADHD than among TD girls. Congruous evidence was found for peer functioning difficulties in the peer relationship domains of friendship, peer status, social skills/competence, and peer victimisation and bullying in girls with ADHD.
Problematic Peer Functioning in Girls with ADHD: A Systematic Literature Review
Groen, Yvonne; Fuermaier, Anselm B. M.; Tucha, Oliver
2016-01-01
Objective Children with attention deficit hyperactivity disorder (ADHD) experience many peer interaction problems and are at risk of peer rejection and victimisation. Although many studies have investigated problematic peer functioning in children with ADHD, this research has predominantly focused on boys and studies investigating girls are scant. Those studies that did examine girls, often used a male comparison sample, disregarding the inherent gender differences between girls and boys. Previous studies have highlighted this limitation and recommended the need for comparisons between ADHD females and typical females, in order to elucidate the picture of female ADHD with regards to problematic peer functioning. The aim of this literature review was to gain insight into peer functioning difficulties in school-aged girls with ADHD. Methods PsychINFO, PubMed, and Web of Knowledge were searched for relevant literature comparing school-aged girls with ADHD to typically developing girls (TDs) in relation to peer functioning. The peer relationship domains were grouped into ‘friendship’, ‘peer status’, ‘social skills/competence’, and ‘peer victimisation and bullying’. In total, thirteen studies were included in the review. Results All of the thirteen studies included reported that girls with ADHD, compared to TD girls, demonstrated increased difficulties in the domains of friendship, peer interaction, social skills and functioning, peer victimization and externalising behaviour. Studies consistently showed small to medium effects for lower rates of friendship participation and stability in girls with ADHD relative to TD girls. Higher levels of peer rejection with small to large effect sizes were reported in all studies, which were predicted by girls’ conduct problems. Peer rejection in turn predicted poor social adjustment and a host of problem behaviours. Very high levels of peer victimisation were present in girls with ADHD with large effect sizes. Further, very high levels of social impairment and social skills deficits, with large effect sizes, were found across all studies. Levels of pro-social behaviour varied across studies, but were mostly lower in girls with ADHD, with small to large effect sizes. Overall, social disability was significantly higher among girls with ADHD than among TD girls. Conclusion Congruous evidence was found for peer functioning difficulties in the peer relationship domains of friendship, peer status, social skills/competence, and peer victimisation and bullying in girls with ADHD. PMID:27870862
Applications of large-scale density functional theory in biology
NASA Astrophysics Data System (ADS)
Cole, Daniel J.; Hine, Nicholas D. M.
2016-10-01
Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the bidimensional image sensor to record the digital hologram, plays a key role on the performance of this imaging technique; the larger the size of the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to be explored. The DSLR cameras are a wide, commercial, and available option that in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in the DSLR cameras, with their RGB pixel distribution, the sampling of information is different to the sampling in monochrome cameras usually employed in DH. This fact has implications in their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem reported by different authors of object replication. Simulations of DH using monochromatic and DSLR cameras are presented and a theoretical deduction for the replication problem using the Fourier theory is also shown. Experimental results of DH implementation using a DSLR camera show the replication problem.
Radiative corrections to quantum sticking on graphene
NASA Astrophysics Data System (ADS)
Sengupta, Sanghita; Clougherty, Dennis P.
2017-07-01
We study the sticking rate of atomic hydrogen to suspended graphene using four different methods that include contributions from processes with multiphonon emission. We compare the numerical results of the sticking rate obtained by: (i) the loop expansion of the atom self-energy; (ii) the noncrossing approximation (NCA); (iii) the independent boson model approximation (IBMA); and (iv) a leading-order soft-phonon resummation method (SPR). The loop expansion reveals an infrared problem, analogous to the infamous infrared problem in QED. The two-loop contribution to the sticking rate gives a result that tends to diverge for large membranes. The latter three methods remedy this infrared problem and give results that are finite in the limit of an infinite membrane. We find that for micromembranes (sizes ranging 100 nm to 10 μ m ), the latter three methods give results that are in good agreement with each other and yield sticking rates that are mildly suppressed relative to the lowest-order golden rule rate. Lastly, we find that the SPR sticking rate decreases slowly to zero with increasing membrane size, while both the NCA and IBMA rates tend to a nonzero constant in this limit. Thus, approximations to the sticking rate can be sensitive to the effects of soft-phonon emission for large membranes.
NASA Astrophysics Data System (ADS)
Mashood, K. K.; Singh, Vijay A.
2013-09-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in highly competitive problem-solving examinations was studied using a massive database. The sample sizes ranged from hundreds to a few hundred thousand. Encouraged by the presence of significant correlations, we interviewed 20 students to explore the pedagogic potential of physics in imparting transferable problem-solving skills. We report strategies and practices relevant to physics employed by these students which foster transfer.
Sundström, Christopher; Kraepelien, Martin; Eék, Niels; Fahlke, Claudia; Kaldo, Viktor; Berman, Anne H
2017-05-26
A large proportion of individuals with alcohol problems do not seek psychological treatment, but access to such treatment could potentially be increased by delivering it over the Internet. Cognitive behavior therapy (CBT) is widely recognized as one of the psychological treatments for alcohol problems for which evidence is most robust. This study evaluated a new, therapist-guided internet-based CBT program (entitled ePlus) for individuals with alcohol use disorders. Participants in the study (n = 13) were recruited through an alcohol self-help web site ( www.alkoholhjalpen.se ) and, after initial internet screening, were diagnostically assessed by telephone. Eligible participants were offered access to the therapist-guided 12-week program. The main outcomes were treatment usage data (module completion, treatment satisfaction) as well as glasses of alcohol consumed the preceding week, measured with the self-rated Timeline Followback (TLFB). Participant data were collected at screening (T0), immediately pre-treatment (T1), post-treatment (T2) and 3 months post-treatment (T3). Most participants were active throughout the treatment and found it highly acceptable. Significant reductions in alcohol consumption with a large within-group effect size were found at the three-month follow-up. Secondary outcome measures of craving and self-efficacy, as well as depression and quality of life, also showed significant improvements with moderate to large within-group effect sizes. Therapist-guided internet-based CBT may be a feasible and effective alternative for people with alcohol use disorders. In view of the high acceptability and the large within-group effect sizes found in this small pilot, a randomized controlled trial investigating treatment efficacy is warranted. ClinicalTrials.gov ( NCT02384278 , February 26, 2015).
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual responses times for different processor sizes, find an assignment of processor to tasks. Two objectives are of interest: minimal response given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series parallel precedence graph with n constituent tasks, an O(np2) algorithm is provided that finds the optimal assignment for the response time optimization problem; it was found that the assignment optimizing the constrained throughput in O(np2log p) time. Special cases of linear, independent, and tree graphs are also considered.
Effect of drop size on the impact thermodynamics for supercooled large droplet in aircraft icing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chen; Liu, Hong, E-mail: hongliu@sjtu.edu.cn
Supercooled large droplet (SLD), which can cause abnormal icing, is a well-known issue in aerospace engineering. Although efforts have been exerted to understand large droplet impact dynamics and the supercooled feature in the film/substrate interface, respectively, the thermodynamic effect during the SLD impact process has not received sufficient attention. This work conducts experimental studies to determine the effects of drop size on the thermodynamics for supercooled large droplet impingement. Through phenomenological reproduction, the rapid-freezing characteristics are observed in diameters of 400, 800, and 1300 μm. The experimental analysis provides information on the maximum spreading rate and the shrinkage rate ofmore » the drop, the supercooled diffusive rate, and the freezing time. A physical explanation of this unsteady heat transfer process is proposed theoretically, which indicates that the drop size is a critical factor influencing the supercooled heat exchange and effective heat transfer duration between the film/substrate interface. On the basis of the present experimental data and theoretical analysis, an impinging heating model is developed and applied to typical SLD cases. The model behaves as anticipated, which underlines the wide applicability to SLD icing problems in related fields.« less
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations such as hospitals have adopted Cloud Web services in applying their network services to avoid investing heavily computing infrastructure. SOAP (Simple Object Access Protocol) is the basic communication protocol of Cloud Web services that is XML based protocol. Generally,Web services often suffer congestions and bottlenecks as a result of the high network traffic that is caused by the large XML overhead size. At the same time, the massive load on Cloud Web services in terms of the large demand of client requests has resulted in the same problem. In this paper, two XML-aware aggregation techniques that are based on exploiting the compression concepts are proposed in order to aggregate the medical Web messages and achieve higher message size reduction.
Effect of H-wave polarization on laser radar detection of partially convex targets in random media.
El-Ocla, Hosam
2010-07-01
A study on the performance of laser radar cross section (LRCS) of conducting targets with large sizes is investigated numerically in free space and random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. Considered are those elements that contribute to the LRCS problem including random medium strength, target configuration, and beam width. The effect of the creeping waves, stimulated by H-polarization, on the LRCS behavior is manifested. Targets taking large sizes of up to five wavelengths are sufficiently larger than the beam width and are sufficient for considering fairly complex targets. Scatterers are assumed to have analytical partially convex contours with inflection points.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
Waves of Hope: The U.S. Navy’s Response to the Tsunami in Northern Indonesia
2007-02-01
mountain of rice, instant noodles , and crackers sat waiting on the airfield, their delivery hampered by the small size of the airport and limited...Miscommunication and rumor were still rampant. One incident that exemplifies this problem involved a large box of dried noodles that accidentally fell
Easy Implementation of Internet-Based Whiteboard Physics Tutorials
ERIC Educational Resources Information Center
Robinson, Andrew
2008-01-01
The requirement for a method of capturing problem solving on a whiteboard for later replay stems from my teaching load, which includes two classes of first-year university general physics, each with relatively large class sizes of approximately 80-100 students. Most university-level teachers value one-to-one interaction with the students and find…
ERIC Educational Resources Information Center
Magoon, Michael A.; Critchfield, Thomas S.
2008-01-01
Considerable evidence from outside of operant psychology suggests that aversive events exert greater influence over behavior than equal-sized positive-reinforcement events. Operant theory is largely moot on this point, and most operant research is uninformative because of a scaling problem that prevents aversive events and those based on positive…
49 CFR Appendix D to Part 178 - Thermal Resistance Test
Code of Federal Regulations, 2010 CFR
2010-10-01
... must be large enough in size to fully house the test outer package without clearance problems. The test....3Instrumentation. A calibrated recording device or a computerized data acquisition system with an appropriate range... Configuration. Each outer package material type and design must be tested, including any features such as...
Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu
2006-04-17
TOUGH2{_}MP, the parallel version TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved through reducing the amount of small-size messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to nonblocking communication. Through the improvement of code structuring and bug fixing, the new version code is now more stable, while demonstrating similar or even better nonlinear iteration converging speed than the original TOUGH2 code. As a result, the new versionmore » of TOUGH2{_}MP is improved significantly in its efficiency. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need formore » matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10{sup 2} innermost eigenpairs of a topological insulator matrix with dimension 10{sup 9} derived from quantum physics applications.« less
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Blackmore, Lars; Wolf, Michael; Fathpour, Nanaz; Newman, Claire; Elfes, Alberto
2009-01-01
Hot air (Montgolfiere) balloons represent a promising vehicle system for possible future exploration of planets and moons with thick atmospheres such as Venus and Titan. To go to a desired location, this vehicle can primarily use the horizontal wind that varies with altitude, with a small help of its own actuation. A main challenge is how to plan such trajectory in a highly nonlinear and time-varying wind field. This paper poses this trajectory planning as a graph search on the space-time grid and addresses its computational aspects. When capturing various time scales involved in the wind field over the duration of long exploration mission, the size of the graph becomes excessively large. We show that the adjacency matrix of the graph is block-triangular, and by exploiting this structure, we decompose the large planning problem into several smaller subproblems, whose memory requirement stays almost constant as the problem size grows. The approach is demonstrated on a global reachability analysis of a possible Titan mission scenario.
Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.
Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan
2011-11-01
Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
Opera: Reconstructing Optimal Genomic Scaffolds with High-Throughput Paired-End Sequences
Gao, Song; Sung, Wing-Kin
2011-01-01
Abstract Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/). PMID:21929371
Confronting Practical Problems for Initiation of On-line Hemodiafiltration Therapy.
Kim, Yang Wook; Park, Sihyung
2016-06-01
Conventional hemodialysis, which is based on the diffusive transport of solutes, is the most widely used renal replacement therapy. It effectively removes small solutes such as urea and corrects fluid, electrolyte and acid-base imbalance. However, solute diffusion coefficients decreased rapidly as molecular size increased. Because of this, middle and large molecules are not removed effectively and clinical problem such as dialysis amyloidosis might occur. Online hemodiafiltration which is combined by diffusive and convective therapies can overcome such problems by removing effectively middle and large solutes. Online hemodiafiltration is safe, very effective, economically affordable, improving session tolerance and may improve the mortality superior to high flux hemodialysis. However, there might be some potential limitations for setting up online hemodiafiltaration. In this article, we review the uremic toxins associated with dialysis, definition of hemodiafiltration, indication and prescription of hemodiafiltration and the limitations of setting up hemodiafiltration.
Matching by linear programming and successive convexification.
Jiang, Hao; Drew, Mark S; Li, Ze-Nian
2007-06-01
We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
An unbalanced spectra classification method based on entropy
NASA Astrophysics Data System (ADS)
Liu, Zhong-bao; Zhao, Wen-juan
2017-05-01
How to solve the problem of distinguishing the minority spectra from the majority of the spectra is quite important in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performances of the traditional classifiers on distinguishing the minority spectra as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential with the training size, and therefore, it can only deal with the problem of small- and medium-scale classification. How to solve the large-scale classification problem is quite important to USCM. It can be easily obtained by mathematical computation that the dual form of USCM is equivalent to the minimum enclosing ball (MEB), and core vector machine (CVM) is introduced, USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from Sloan Digital Sky Survey (SDSS) verify USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining respectively on the small- and medium-scale datasets and the large-scale datasets.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N1/2) or O(N∗1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Psychosocial correlates of the perceived stigma of problem drinking in the workplace.
Reynolds, G Shawn; Lehman, Wayne E K; Bennett, Joel B
2008-07-01
The purpose of this study was to evaluate a questionnaire assessment of the perceived stigma of problem drinking that was designed for use in workplace substance abuse prevention research. Municipal employees from a mid-sized city (n = 315) and a large-sized city (n = 535) completed questionnaire measures of perceived coworker stigmatization of problem drinking, drinking levels, substance-use policy attitudes, workgroup stress and interdependence, alcohol-tolerance norms, and demographic variables. Inter-item correlation coefficients showed that the measure of the stigma of problem drinking had good internal consistency reliability (.76) in both samples. Hierarchical regression analyses showed that higher education, abstinence from alcohol, stress, and perceived temperance norms were all uniquely correlated with perceived stigma. Women and men perceived the same level of stigma from coworkers. Editors' Strategic Implications: This brief, validated measure provides organizations with a way to assess the level of stigma attached to alcohol abuse in their workplace culture, thereby enabling the organization to target and promote effective strategies to decrease the stigma attached to seeking help with the goal of reducing alcohol abuse.
Statistical challenges in a regulatory review of cardiovascular and CNS clinical trials.
Hung, H M James; Wang, Sue-Jane; Yang, Peiling; Jin, Kun; Lawrence, John; Kordzakhia, George; Massie, Tristan
2016-01-01
There are several challenging statistical problems identified in the regulatory review of large cardiovascular (CV) clinical outcome trials and central nervous system (CNS) trials. The problems can be common or distinct due to disease characteristics and the differences in trial design elements such as endpoints, trial duration, and trial size. In schizophrenia trials, heavy missing data is a big problem. In Alzheimer trials, the endpoints for assessing symptoms and the endpoints for assessing disease progression are essentially the same; it is difficult to construct a good trial design to evaluate a test drug for its ability to slow the disease progression. In CV trials, reliance on a composite endpoint with low event rate makes the trial size so large that it is infeasible to study multiple doses necessary to find the right dose for study patients. These are just a few typical problems. In the past decade, adaptive designs were increasingly used in these disease areas and some challenges occur with respect to that use. Based on our review experiences, group sequential designs (GSDs) have borne many successful stories in CV trials and are also increasingly used for developing treatments targeting CNS diseases. There is also a growing trend of using more advanced unblinded adaptive designs for producing efficacy evidence. Many statistical challenges with these kinds of adaptive designs have been identified through our experiences with the review of regulatory applications and are shared in this article.
Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard
2013-04-01
The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
Simulating ground water-lake interactions: Approaches and insights
Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.
2003-01-01
Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of MODFLOW LAK package used. When the grid size was reduced, a finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is being selected and applied.
Systems engineering for very large systems
NASA Technical Reports Server (NTRS)
Lewkowicz, Paul E.
1993-01-01
Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profit ability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.
Systems engineering for very large systems
NASA Astrophysics Data System (ADS)
Lewkowicz, Paul E.
Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profit ability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.
Deployment of Large-Size Shell Constructions by Internal Pressure
NASA Astrophysics Data System (ADS)
Pestrenin, V. M.; Pestrenina, I. V.; Rusakov, S. V.; Kondyurin, A. V.
2015-11-01
A numerical study on the deployment pressure (the minimum internal pressure bringing a construction from the packed state to the operational one) of large laminated CFRP shell structures is performed using the ANSYS engineering package. The shell resists both membrane and bending deformations. Structures composed of shell elements whose median surface has an involute are considered. In the packed (natural) states of constituent elements, the median surfaces coincide with their involutes. Criteria for the termination of stepwise solution of the geometrically nonlinear problem on determination of the deployment pressure are formulated, and the deployment of cylindrical, conical (full and truncated cones), and large-size composite shells is studied. The results obtained are shown by graphs illustrating the deployment pressure in relation to the geometric and material parameters of the structure. These studies show that large pneumatic composite shells can be used as space and building structures, because the deployment pressure in them only slightly differs from the excess pressure in pneumatic articles made from films and soft materials.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics in phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables increase in three-dimensional simulation system size up to a 5123 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high resolution parallel simulations are greatly improved over those obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows a good agreement with actual run time from numerical tests.
NASA Astrophysics Data System (ADS)
Zroichikov, N. A.; Lyskov, M. G.; Prokhorov, V. B.; Morozova, E. A.
2007-06-01
We describe a proposed small-size cavitator having an adjustable flow cross-section, which allows the dispersion of water-fuel oil emulsion to be varied during the operation of large-capacity boilers. It is shown that the operating conditions of the boiler must be synchronized with those of the cavitator if we wish the problem of reducing the amount of harmful substances emitted into the atmosphere during the combustion of fuel oil to be solved in a comprehensive manner.
Scale-Up: Improving Large Enrollment Physics Courses
NASA Astrophysics Data System (ADS)
Beichner, Robert
1999-11-01
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.
Chefs' opinions of restaurant portion sizes.
Condrasky, Marge; Ledikwe, Jenny H; Flood, Julie E; Rolls, Barbara J
2007-08-01
The objectives were to determine who establishes restaurant portion sizes and factors that influence these decisions, and to examine chefs' opinions regarding portion size, nutrition information, and weight management. A survey was distributed to chefs to obtain information about who is responsible for determining restaurant portion sizes, factors influencing restaurant portion sizes, what food portion sizes are being served in restaurants, and chefs' opinions regarding nutrition information, health, and body weight. The final sample consisted of 300 chefs attending various culinary meetings. Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the U.S government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food. Portion size is a key determinant of energy intake, and the results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes. Strategies are needed to encourage chefs to provide and promote portions that are appropriate for customers' energy requirements.
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
Moore, R. Davis; Drollette, Eric S.; Scudder, Mark R.; Bharij, Aashiv; Hillman, Charles H.
2014-01-01
The current study investigated the influence of cardiorespiratory fitness on arithmetic cognition in forty 9–10 year old children. Measures included a standardized mathematics achievement test to assess conceptual and computational knowledge, self-reported strategy selection, and an experimental arithmetic verification task (including small and large addition problems), which afforded the measurement of event-related brain potentials (ERPs). No differences in math achievement were observed as a function of fitness level, but all children performed better on math concepts relative to math computation. Higher fit children reported using retrieval more often to solve large arithmetic problems, relative to lower fit children. During the arithmetic verification task, higher fit children exhibited superior performance for large problems, as evidenced by greater d' scores, while all children exhibited decreased accuracy and longer reaction time for large relative to small problems, and incorrect relative to correct solutions. On the electrophysiological level, modulations of early (P1, N170) and late ERP components (P3, N400) were observed as a function of problem size and solution correctness. Higher fit children exhibited selective modulations for N170, P3, and N400 amplitude relative to lower fit children, suggesting that fitness influences symbolic encoding, attentional resource allocation and semantic processing during arithmetic tasks. The current study contributes to the fitness-cognition literature by demonstrating that the benefits of cardiorespiratory fitness extend to arithmetic cognition, which has important implications for the educational environment and the context of learning. PMID:24829556
Aras, N; Altinel, I K; Oommen, J
2003-01-01
In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
Tether Impact Rate Simulation and Prediction with Orbiting Satellites
NASA Technical Reports Server (NTRS)
Harrison, Jim
2002-01-01
Space elevators and other large space structures have been studied and proposed as worthwhile by futuristic space planners for at least a couple of decades. In June 1999 the Marshall Space Flight Center sponsored a Space Elevator workshop in Huntsville, Alabama, to bring together technical experts and advanced planners to discuss the current status and to define the magnitude of the technical and programmatic problems connected with the development of these massive space systems. One obvious problem that was identified, although not for the first time, were the collision probabilities between space elevators and orbital debris. Debate and uncertainty presently exist about the extent of the threat to these large structures, one in this study as large in size as a space elevator. We have tentatively concluded that orbital debris although a major concern not sufficient justification to curtail the study and development of futuristic new millennium concepts like the space elevators.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques is an approach for model correlation and damage location was initiated.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is being an increasingly important and fundamental issue in the fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in the networks. As a result, there has been recent interest in solving Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known a NP-hard. The efficient set of paths may be very large, possibly exponential in size. Thus the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach used a priority-based chromosome for solving the bicriteria network optimization problem including maximum flow (MXF) model and minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give possible maximum flow with minimum cost. This paper also combines Adaptive Weight Approach (AWA) that utilizes some useful information from the current population to readjust weights for obtaining a search pressure toward a positive ideal point. Computer simulations show the several numerical experiments by using some difficult-to-solve network design problems, and show the effectiveness of the proposed method.
Efficient bulk-loading of gridfiles
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Nicol, David M.
1994-01-01
This paper considers the problem of bulk-loading large data sets for the gridfile multiattribute indexing technique. We propose a rectilinear partitioning algorithm that heuristically seeks to minimize the size of the gridfile needed to ensure no bucket overflows. Empirical studies on both synthetic data sets and on data sets drawn from computational fluid dynamics applications demonstrate that our algorithm is very efficient, and is able to handle large data sets. In addition, we present an algorithm for bulk-loading data sets too large to fit in main memory. Utilizing a sort of the entire data set it creates a gridfile without incurring any overflows.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors-let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners where the number of dual variables easily exceeds 100 million. In this work, we numerically study the usage of SPDHG-a stochastic extension of PDHG-but is still guaranteed to converge to a solution of the deterministic optimization problem with similar rates as PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, similar results as the deterministic algorithm can be achieved using only around 10 % of operator evaluations. Thus, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
NASA Astrophysics Data System (ADS)
Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine
2017-06-01
The use of the Discrete Element Method (DEM) for industrial civil engineering industrial applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU) with its highly parallelized hardware architecture shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the pratical utility of a validated GPU-enabled DEM modeling environment to simulate industrial scale granular problems. As illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial scale problems on a single portable computer within a day for 30 seconds of process time.
2014-04-01
randomization design, after all patients are treated with dermal matrix, patients will be randomized to Arm 1 (control group; standard skin grafting with... grafts are often “meshed” or flattened and spread out to increase the size of the skin graft to better cover a large wound. Standard “meshing” increases...the size of the donor graft by 1.5 times (1:1.5). Problems with healing and skin irritation remain with such skin grafts when the injured areas are
Particle-size segregation and diffusive remixing in shallow granular avalanches
NASA Astrophysics Data System (ADS)
Gray, J. M. N. T.; Chugunov, V. A.
2006-12-01
Segregation and mixing of dissimilar grains is a problem in many industrial and pharmaceutical processes, as well as in hazardous geophysical flows, where the size-distribution can have a major impact on the local rheology and the overall run-out. In this paper, a simple binary mixture theory is used to formulate a model for particle-size segregation and diffusive remixing of large and small particles in shallow gravity-driven free-surface flows. This builds on a recent theory for the process of kinetic sieving, which is the dominant mechanism for segregation in granular avalanches provided the density-ratio and the size-ratio of the particles are not too large. The resulting nonlinear parabolic segregation remixing equation reduces to a quasi-linear hyperbolic equation in the no-remixing limit. It assumes that the bulk velocity is incompressible and that the bulk pressure is lithostatic, making it compatible with most theories used to compute the motion of shallow granular free-surface flows. In steady-state, the segregation remixing equation reduces to a logistic type equation and the ‘S’-shaped solutions are in very good agreement with existing particle dynamics simulations for both size and density segregation. Laterally uniform time-dependent solutions are constructed by mapping the segregation remixing equation to Burgers equation and using the Cole Hopf transformation to linearize the problem. It is then shown how solutions for arbitrary initial conditions can be constructed using standard methods. Three examples are investigated in which the initial concentration is (i) homogeneous, (ii) reverse graded with the coarse grains above the fines, and, (iii) normally graded with the fines above the coarse grains. Time-dependent two-dimensional solutions are also constructed for plug-flow in a semi-infinite chute.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.
Classification of brain MRI with big data and deep 3D convolutional neural networks
NASA Astrophysics Data System (ADS)
Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim
2018-02-01
Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimers disease (AD). We use more than 20.000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, only with standard preprocessing, but without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets, and the independent CADDementia benchmark.
Solution of a large hydrodynamic problem using the STAR-100 computer
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Howser, L. M.
1976-01-01
A representative hydrodynamics problem, the shock initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speed up of the STAR-100 over the CDC 6600 program is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR 100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR 100.
Magnetic suspension and balance systems (MSBSs)
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Kilgore, Robert A.
1987-01-01
The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.
Problem analysis of geotechnical well drilling in complex environment
NASA Astrophysics Data System (ADS)
Kasenov, A. K.; Biletskiy, M. T.; Ratov, B. T.; Korotchenko, T. V.
2015-02-01
The article examines primary causes of problems occurring during the drilling of geotechnical wells (injection, production and monitoring wells) for in-situ leaching to extract uranium in South Kazakhstan. Such a drilling problem as hole caving which is basically caused by various chemical and physical factors (hydraulic, mechanical, etc.) has been thoroughly investigated. The analysis of packing causes has revealed that this problem usually occurs because of insufficient amount of drilling mud being associated with small cross section downward flow and relatively large cross section upward flow. This is explained by the fact that when spear bores are used to drill clay rocks, cutting size is usually rather big and there is a risk for clay particles to coagulate.
A Simulation Study of Paced TCP
NASA Technical Reports Server (NTRS)
Kulik, Joanna; Coulter, Robert; Rockwell, Dennis; Partridge, Craig
2000-01-01
In this paper, we study the performance of paced TCP, a modified version of TCP designed especially for high delay- bandwidth networks. In typical networks, TCP optimizes its send-rate by transmitting increasingly large bursts, or windows, of packets, one burst per round-trip time, until it reaches a maximum window-size, which corresponds to the full capacity of the network. In a network with a high delay-bandwidth product, however, Transmission Control Protocol's (TCPs) maximum window-size may be larger than the queue size of the intermediate routers, and routers will begin to drop packets as soon as the windows become too large for the router queues. The TCP sender then concludes that the bottleneck capacity of the network has been reached, and it limits its send-rate accordingly. Partridge proposed paced TCP as a means of solving the problem of queueing bottlenecks. A sender using paced TCP would release packets in multiple, small bursts during a round-trip time in which ordinary TCP would release a single, large burst of packets. This approach allows the sender to increase its send-rate to the maximum window size without encountering queueing bottlenecks. This paper describes the performance of paced TCP in a simulated network and discusses implementation details that can affect the performance of paced TCP.
Moisture adsorption in optical coatings
NASA Technical Reports Server (NTRS)
Macleod, H. Angus
1988-01-01
The thin film filter is a very large aperture component which is exceedingly useful because of its small size, flexibility and ease of mounting. Thin film components, however, do have defects of performance and especially of stability which can cause problems in systems, particularly where long-term measurements are being made. Of all of the problems, those associated with moisture absorption are the most serious. Moisture absorption occurs in the pore-shaped voids inherent in the columnar structure of the layers. Ion-assisted deposition is a promising technique for substantially reducing moisture adsorption effects in thin film structures.
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
Engaging Secondary Students in Collaborative Action-Oriented Inquiry: Challenges and Opportunities
ERIC Educational Resources Information Center
Clark, J. Spencer
2017-01-01
In this article, the author describes a collaborative problem-based inquiry project with eighty-three secondary students. The students attended a large high school situated in a medium size town, surrounded by farmland and smaller rural towns. Demographically, nearly half of the students identified as Latina/o, while the slight majority of the…
Strategies for Sustaining Quality in PBL Facilitation for Large Student Cohorts
ERIC Educational Resources Information Center
Young, Louise; Papinczak, Tracey
2013-01-01
Problem-based learning (PBL) has been used to scaffold and support student learning in many Australian medical programs, with the role of the facilitator in the process considered crucial to the overall educational experience of students. With the increasing size of student cohorts and in an environment of financial constraint, it is important to…
ERIC Educational Resources Information Center
Camfield, Eileen Kogl; McFall, Eileen Eckert; Land, Kirkwood M.
2016-01-01
Introductory biology courses are supposed to serve as gateways for many majors, but too often they serve instead as gatekeepers. Reliance on lectures, large classes, and multiple-choice tests results in high drop and failure rates. Critiques of undergraduate science education are clear about the problems with conventional introductory science…
Khan, Wasim S; Rayan, Faizal; Dhinsa, Baljinder S; Marsh, David
2012-01-01
The management of large bone defects due to trauma, degenerative disease, congenital deformities, and tumor resection remains a complex issue for the orthopaedic reconstructive surgeons. The requirement is for an ideal bone replacement which is osteoconductive, osteoinductive, and osteogenic. Autologous bone grafts are still considered the gold standard for reconstruction of bone defects, but donor site morbidity and size limitations are major concern. The use of bioartificial bone tissues may help to overcome these problems. The reconstruction of large volume defects remains a challenge despite the success of reconstruction of small-to-moderate-sized bone defects using engineered bone tissues. The aim of this paper is to understand the principles of tissue engineering of bone and its clinical applications in reconstructive surgery.
Khan, Wasim S.; Rayan, Faizal; Dhinsa, Baljinder S.; Marsh, David
2012-01-01
The management of large bone defects due to trauma, degenerative disease, congenital deformities, and tumor resection remains a complex issue for the orthopaedic reconstructive surgeons. The requirement is for an ideal bone replacement which is osteoconductive, osteoinductive, and osteogenic. Autologous bone grafts are still considered the gold standard for reconstruction of bone defects, but donor site morbidity and size limitations are major concern. The use of bioartificial bone tissues may help to overcome these problems. The reconstruction of large volume defects remains a challenge despite the success of reconstruction of small-to-moderate-sized bone defects using engineered bone tissues. The aim of this paper is to understand the principles of tissue engineering of bone and its clinical applications in reconstructive surgery. PMID:25098363
Why might they be giants? Towards an understanding of polar gigantism.
Moran, Amy L; Woods, H Arthur
2012-06-15
Beginning with the earliest expeditions to the poles, over 100 years ago, scientists have compiled an impressive list of polar taxa whose body sizes are unusually large. This phenomenon has become known as 'polar gigantism'. In the intervening years, biologists have proposed a multitude of hypotheses to explain polar gigantism. These hypotheses run the gamut from invoking release from physical and physiological constraints, to systematic changes in developmental trajectories, to community-level outcomes of broader ecological and evolutionary processes. Here we review polar gigantism and emphasize two main problems. The first is to determine the true strength and generality of this pattern: how prevalent is polar gigantism across taxonomic units? Despite many published descriptions of polar giants, we still have a poor grasp of whether these species are unusual outliers or represent more systematic shifts in distributions of body size. Indeed, current data indicate that some groups show gigantism at the poles whereas others show nanism. The second problem is to identify underlying mechanisms or processes that could drive taxa, or even just allow them, to evolve especially large body size. The contenders are diverse and no clear winner has yet emerged. Distinguishing among the contenders will require better sampling of taxa in both temperate and polar waters and sustained efforts by comparative physiologists and evolutionary ecologists in a strongly comparative framework.
Predicting protein structures with a multiplayer online game.
Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit
2010-08-05
People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.
Teaching Medium-Sized ERP Systems - A Problem-Based Learning Approach
NASA Astrophysics Data System (ADS)
Winkelmann, Axel; Matzner, Martin
In order to increase the diversity in IS education, we discuss an approach for teaching medium-sized ERP systems in master courses. Many of today's IS curricula are biased toward large ERP packages. Nevertheless, these ERP systems are only a part of the ERP market. Hence, this chapter describes a course outline for a course on medium-sized ERP systems. Students had to study, analyze, and compare five different ERP systems during a semester. The chapter introduces a procedure model and scenario for setting up similar courses at other universities. Furthermore, it describes some of the students' outcomes and evaluates the contribution of the course with regard to a practical but also academic IS education.
NASA Astrophysics Data System (ADS)
Shu, Feng; Liu, Xingwen; Li, Min
2018-05-01
Memory is an important factor on the evolution of cooperation in spatial structure. For evolutionary biologists, the problem is often how cooperation acts can emerge in an evolving system. In the case of snowdrift game, it is found that memory can boost cooperation level for large cost-to-benefit ratio r, while inhibit cooperation for small r. Thus, how to enlarge the range of r for the purpose of enhancing cooperation becomes a hot issue recently. This paper addresses a new memory-based approach and its core lies in: Each agent applies the given rule to compare its own historical payoffs in a certain memory size, and take the obtained maximal one as virtual payoff. In order to get the optimal strategy, each agent randomly selects one of its neighbours to compare their virtual payoffs, which can lead to the optimal strategy. Both constant-size memory and size-varying memory are investigated by means of a scenario of asynchronous updating algorithm on regular lattices with different sizes. Simulation results show that this approach effectively enhances cooperation level in spatial structure and makes the high cooperation level simultaneously emerge for both small and large r. Moreover, it is discovered that population sizes have a significant influence on the effects of cooperation.
Thünken, Timo; Meuthen, Denis; Bakker, Theo C M; Baldauf, Sebastian A
2012-08-07
Mating preferences for genetic compatibility strictly depend on the interplay of the genotypes of potential partners and are therein fundamentally different from directional preferences for ornamental secondary sexual traits. Thus, the most compatible partner is on average not the one with most pronounced ornaments and vice versa. Hence, mating preferences may often conflict. Here, we present a solution to this problem while investigating the interplay of mating preferences for relatedness (a compatibility criterion) and large body size (an ornamental or quality trait). In previous experiments, both sexes of Pelvicachromis taeniatus, a cichlid fish with mutual mate choice, showed preferences for kin and large partners when these criteria were tested separately. In the present study, test fish were given a conflicting choice between two potential mating partners differing in relatedness as well as in body size in such a way that preferences for both criteria could not simultaneously be satisfied. We show that a sex-specific trade-off occurs between mating preferences for body size and relatedness. For females, relatedness gained greater importance than body size, whereas the opposite was true for males. We discuss the potential role of the interplay between mating preferences for relatedness and body size for the evolution of inbreeding preference.
Thünken, Timo; Meuthen, Denis; Bakker, Theo C. M.; Baldauf, Sebastian A.
2012-01-01
Mating preferences for genetic compatibility strictly depend on the interplay of the genotypes of potential partners and are therein fundamentally different from directional preferences for ornamental secondary sexual traits. Thus, the most compatible partner is on average not the one with most pronounced ornaments and vice versa. Hence, mating preferences may often conflict. Here, we present a solution to this problem while investigating the interplay of mating preferences for relatedness (a compatibility criterion) and large body size (an ornamental or quality trait). In previous experiments, both sexes of Pelvicachromis taeniatus, a cichlid fish with mutual mate choice, showed preferences for kin and large partners when these criteria were tested separately. In the present study, test fish were given a conflicting choice between two potential mating partners differing in relatedness as well as in body size in such a way that preferences for both criteria could not simultaneously be satisfied. We show that a sex-specific trade-off occurs between mating preferences for body size and relatedness. For females, relatedness gained greater importance than body size, whereas the opposite was true for males. We discuss the potential role of the interplay between mating preferences for relatedness and body size for the evolution of inbreeding preference. PMID:22513859
Utoyo, Dharmayati Bambang; Lubis, Dharmayati Utoyo; Jaya, Edo Sebastian; Arjadi, Retha; Hanum, Lathifah; Astri, Kresna; Putri, Maha Decha Dwi
2013-01-01
This research aims to develop evidence based affordable psychological therapy for Indonesian older adults. An affordable psychological therapy is important as there is virtually no managed care or health insurance that covers psychological therapy in Indonesia. Multicomponent group cognitive behavior therapy (GCBGT) was chosen as a starting point due to its extensive evidence, short sessions, and success for a wide range of psychological problems. The group format was chosen to address both the economic and the cultural context of Indonesia. Then, the developed treatment is tested to common psychological problems in older adults' population (anxiety, chronic pain, depression, and insomnia). The treatment consists of 8 sessions with twice a week meetings for 2.5 hours. There are similarities and differences among the techniques used in the treatment for the different psychological problems. The final participants are 38 older adults that are divided into the treatment groups; 8 participants joined the anxiety treatment, 10 participants for the chronic pain treatment, 10 participants for depression treatment, and lastly, 10 participants joined the insomnia treatment. The research design is pre-test post-test with within group analysis. We used principal outcome measure that is specific for each treatment group, as well as additional outcome measures. Overall, the result shows statistical significance change with large effect size for the principal outcome measure. In addition, the result for the additional measures varies from slight improvement with small effect size to statistically significant improvement with large effect size. The result indicates that short multicomponent GCBT is effective in alleviating various common psychological problems in Indonesian older adults. Therefore, multicomponent GCBT may be a good starting point to develop an effective and affordable psychological therapy for Indonesian older adults. Lastly, this result adds to the accumulating body of evidence on the effectiveness of multicomponent GCBT outside western context.
Lubis, Dharmayati Utoyo; Jaya, Edo Sebastian; Arjadi, Retha; Hanum, Lathifah; Astri, Kresna; Putri, Maha Decha Dwi
2013-01-01
This research aims to develop evidence based affordable psychological therapy for Indonesian older adults. An affordable psychological therapy is important as there is virtually no managed care or health insurance that covers psychological therapy in Indonesia. Multicomponent group cognitive behavior therapy (GCBGT) was chosen as a starting point due to its extensive evidence, short sessions, and success for a wide range of psychological problems. The group format was chosen to address both the economic and the cultural context of Indonesia. Then, the developed treatment is tested to common psychological problems in older adults' population (anxiety, chronic pain, depression, and insomnia). The treatment consists of 8 sessions with twice a week meetings for 2.5 hours. There are similarities and differences among the techniques used in the treatment for the different psychological problems. The final participants are 38 older adults that are divided into the treatment groups; 8 participants joined the anxiety treatment, 10 participants for the chronic pain treatment, 10 participants for depression treatment, and lastly, 10 participants joined the insomnia treatment. The research design is pre-test post-test with within group analysis. We used principal outcome measure that is specific for each treatment group, as well as additional outcome measures. Overall, the result shows statistical significance change with large effect size for the principal outcome measure. In addition, the result for the additional measures varies from slight improvement with small effect size to statistically significant improvement with large effect size. The result indicates that short multicomponent GCBT is effective in alleviating various common psychological problems in Indonesian older adults. Therefore, multicomponent GCBT may be a good starting point to develop an effective and affordable psychological therapy for Indonesian older adults. Lastly, this result adds to the accumulating body of evidence on the effectiveness of multicomponent GCBT outside western context. PMID:23437339
Visual Analytics for Power Grid Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Huang, Zhenyu; Chen, Yousu
2014-01-20
Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure tomore » do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.« less
Quantum algorithm for energy matching in hard optimization problems
NASA Astrophysics Data System (ADS)
Baldwin, C. L.; Laumann, C. R.
2018-06-01
We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p -spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
Design and Optimization of Ultrasonic Vibration Mechanism using PZT for Precision Laser Machining
NASA Astrophysics Data System (ADS)
Kim, Woo-Jin; Lu, Fei; Cho, Sung-Hak; Park, Jong-Kweon; Lee, Moon G.
As the aged population grows around the world, many medical instruments and devices have been developed recently. Among the devices, a drug delivery stent is a medical device which requires precision machining. Conventional drug delivery stent has problems of residual polymer and decoating because the drug is coated on the surface of stent with the polymer. If the drug is impregnated in the micro sized holes on the surface, the problems can be overcome because there is no need to use the polymer anymore. Micro sized holes are generally fabricated by laser machining; however, the fabricated holes do not have a high aspect ratio or a good surface finish. To overcome these problems, we propose a vibration-assisted machining mechanism with PZT (Piezoelectric Transducers) for the fabrication of micro sized holes. If the mechanism vibrates the eyepiece of the laser machining head, the laser spot on the workpiece will vibrate vertically because objective lens in the eyepiece shakes by the mechanism's vibration. According to the former researches, the vibrating frequency over 20 kHz and amplitude over 500 nm are preferable. The vibration mechanism has cylindrical guide, hollowed PZT and supports. In the cylinder, the eyepiece is mounted. The cylindrical guide has upper and low plates and side wall. The shape of plates and side wall are designed to have high resonating frequency and large amplitude of motion. The PZT is also selected to have high actuating force and high speed of motion. The support has symmetrical and rigid configuration. The mechanism secures linear motion of the eyepiece. This research includes sensitivity analysis and design of ultrasonic vibration mechanism. As a result of design, the requirements of high frequency and large amplitude are achieved.
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
The Nature and Origin of UCDs in the Coma Cluster
NASA Astrophysics Data System (ADS)
Chiboucas, Kristin; Tully, R. Brent; Madrid, Juan; Phillipps, Steven; Carter, David; Peng, Eric
2018-01-01
UCDs are super massive star clusters found largely in dense regions but have also been found around individual galaxies and in smaller groups. Their origin is still under debate but currently favored scenarios include formation as giant star clusters, either as the brightest globular clusters or through mergers of super star clusters, themselves formed during major galaxy mergers, or as remnant nuclei from tidal stripping of nucleated dwarf ellipticals. Establishing the nature of these enigmatic objects has important implications for our understanding of star formation, star cluster formation, the missing satellite problem, and galaxy evolution. We are attempting to disentangle these competing formation scenarios with a large survey of UCDs in the Coma cluster. Using ACS two-passband imaging from the HST/ACS Coma Cluster Treasury Survey, we are using colors and sizes to identify the UCD cluster members. With a large size limited sample of the UCD population within the core region of the Coma cluster, we are investigating the population size, properties, and spatial distribution, and comparing that with the Coma globular cluster and nuclear star cluster populations to discriminate between the threshing and globular cluster scenarios. In previous work, we had found a possible correlation of UCD colors with host galaxy and a possible excess of UCDs around a non-central giant galaxy with an unusually large globular cluster population, both suggestive of a globular cluster origin. With a larger sample size and additional imaging fields that encompass the regions around these giant galaxies, we have found that the color correlation with host persists and the giant galaxy with unusually large globular cluster population does appear to host a large UCD population as well. We present the current status of the survey.
A networked voting rule for democratic representation
Brigatti, Edgardo; Moreno, Yamir
2018-01-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process. PMID:29657817
Monodisperse Latex Reactor (MLR): A materials processing space shuttle mid-deck payload
NASA Technical Reports Server (NTRS)
Kornfeld, D. M.
1985-01-01
The monodisperse latex reactor experiment has flown five times on the space shuttle, with three more flights currently planned. The objectives of this project is to manufacture, in the microgravity environment of space, large particle-size monodisperse polystyrene latexes in particle sizes larger and more uniform than can be manufactured on Earth. Historically it has been extremely difficult, if not impossible to manufacture in quantity very high quality monodisperse latexes on Earth in particle sizes much above several micrometers in diameter due to buoyancy and sedimentation problems during the polymerization reaction. However the MLR project has succeeded in manufacturing in microgravity monodisperse latex particles as large as 30 micrometers in diameter with a standard deviation of 1.4 percent. It is expected that 100 micrometer particles will have been produced by the completion of the the three remaining flights. These tiny, highly uniform latex microspheres have become the first material to be commercially marketed that was manufactured in space.
Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving
Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.
2016-01-01
We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844
A fast least-squares algorithm for population inference
2013-01-01
Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408
A fast least-squares algorithm for population inference.
Parry, R Mitchell; Wang, May D
2013-01-23
Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resources allocation, etc. have been available in literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically not manageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller size subnets. The complexity involved in the analysis of each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined together, it may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem of the manufacturing work cell.
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensionalmore » problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.« less
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solution reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large N-P complete problems.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui
2018-01-01
Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for the large size HR satellite image registration, which is based on coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method scale restrict (SR) SIFT is implemented at low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method can overcome the memory problem. In geometric SIFT, with area constraints, it is beneficial for validating the candidate matches and decreasing searching complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate of reference image via Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
Process for preparation of large-particle-size monodisperse latexes
NASA Technical Reports Server (NTRS)
Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)
1981-01-01
Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarge particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding gravity related problems of creaming and settling, and flocculation induced by mechanical shear that have precluded their preparation in a normal gravity environment.
Opportunities for making wood products from small diameter trees in Colorado
Dennis L. Lynch; Kurt H. Mackes
2002-01-01
Colorado's forests are at risk to forest health problems and catastrophic fire. Forest areas at high risk to catastrophic fire, commonly referred to as Red Zones, contain 2.4 million acres in the Colorado Front Range and 6.3 million acres Statewide. The increasing frequency, size, and intensity of recent forest fires have prompted large appropriations of Federal...
Technical Training Requirements of Middle Management in the Greek Textile and Clothing Industries.
ERIC Educational Resources Information Center
Fotinopoulou, K.; Manolopoulos, N.
A case study of 16 companies in the Greek textile and clothing industry elicited the training needs of the industry's middle managers. The study concentrated on large and medium-sized work units, using a lengthy questionnaire. The study found that middle managers increasingly need to solve problems and ensure the reliability of new equipment and…
ERIC Educational Resources Information Center
Keirle, Philip A.; Morgan, Ruth A.
2011-01-01
In this paper we provide a template for transitioning from tutorial to larger-class teaching environments in the discipline of history. We commence by recognising a number of recent trends in tertiary education in Australian universities that have made this transition to larger class sizes an imperative for many academics: increased student…
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.
What Is the Problem? The Challenge of Providing Effective Teachers for All Children
ERIC Educational Resources Information Center
Murnane, Richard J.; Steele, Jennifer L.
2007-01-01
Richard Murnane and Jennifer Steele argue that if the United States is to equip its young people with the skills essential in the new economy, high-quality teachers are more important than ever. In recent years, the demand for effective teachers has increased as enrollments have risen, class sizes have fallen, and a large share of the teacher…
Simulation shows that HLA-matched stem cell donors can remain unidentified in donor searches
Sauter, Jürgen; Solloch, Ute V.; Giani, Anette S.; Hofmann, Jan A.; Schmidt, Alexander H.
2016-01-01
The heterogeneous nature of HLA information in real-life stem cell donor registries may hamper unrelated donor searches. It is even possible that fully HLA-matched donors with incomplete HLA information are not identified. In our simulation study, we estimated the probability of these unnecessarily failed donor searches. For that purpose, we carried out donor searches in several virtual donor registries. The registries differed by size, composition with respect to HLA typing levels, and genetic diversity. When up to three virtual HLA typing requests were allowed within donor searches, the share of unnecessarily failed donor searches ranged from 1.19% to 4.13%, thus indicating that non-identification of completely HLA-matched stem cell donors is a problem of practical relevance. The following donor registry characteristics were positively correlated with the share of unnecessarily failed donor searches: large registry size, high genetic diversity, and, most strongly correlated, large fraction of registered donors with incomplete HLA typing. Increasing the number of virtual HLA typing requests within donor searches up to ten had a smaller effect. It follows that the problem of donor non-identification can be substantially reduced by complete high-resolution HLA typing of potential donors. PMID:26876789
Fixation and chemical analysis of single fog and rain droplets
NASA Astrophysics Data System (ADS)
Kasahara, M.; Akashi, S.; Ma, C.-J.; Tohno, S.
Last decade, the importance of global environmental problems has been recognized worldwide. Acid rain is one of the most important global environmental problems as well as the global warming. The grasp of physical and chemical properties of fog and rain droplets is essential to make clear the physical and chemical processes of acid rain and also their effects on forests, materials and ecosystems. We examined the physical and chemical properties of single fog and raindrops by applying fixation technique. The sampling method and treatment procedure to fix the liquid droplets as a solid particle were investigated. Small liquid particles like fog droplet could be easily fixed within few minutes by exposure to cyanoacrylate vapor. The large liquid particles like raindrops were also fixed successively, but some of them were not perfect. Freezing method was applied to fix the large raindrops. Frozen liquid particles existed stably by exposure to cyanoacrylate vapor after freezing. The particle size measurement and the elemental analysis of the fixed particle were performed in individual base using microscope, and SEX-EDX, particle-induced X-ray emission (PIXE) and micro-PIXE analyses, respectively. The concentration in raindrops was dependent upon the droplet size and the elapsed time from the beginning of rainfall.
Simulation shows that HLA-matched stem cell donors can remain unidentified in donor searches
NASA Astrophysics Data System (ADS)
Sauter, Jürgen; Solloch, Ute V.; Giani, Anette S.; Hofmann, Jan A.; Schmidt, Alexander H.
2016-02-01
The heterogeneous nature of HLA information in real-life stem cell donor registries may hamper unrelated donor searches. It is even possible that fully HLA-matched donors with incomplete HLA information are not identified. In our simulation study, we estimated the probability of these unnecessarily failed donor searches. For that purpose, we carried out donor searches in several virtual donor registries. The registries differed by size, composition with respect to HLA typing levels, and genetic diversity. When up to three virtual HLA typing requests were allowed within donor searches, the share of unnecessarily failed donor searches ranged from 1.19% to 4.13%, thus indicating that non-identification of completely HLA-matched stem cell donors is a problem of practical relevance. The following donor registry characteristics were positively correlated with the share of unnecessarily failed donor searches: large registry size, high genetic diversity, and, most strongly correlated, large fraction of registered donors with incomplete HLA typing. Increasing the number of virtual HLA typing requests within donor searches up to ten had a smaller effect. It follows that the problem of donor non-identification can be substantially reduced by complete high-resolution HLA typing of potential donors.
A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.
Guerrero, Federico N; Spinelli, Enrique M
2017-10-01
Sigma Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used as in current 128-256 electrodes biopotential acquisition systems, that usually opt for an optic fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented to decrease sample data size without losses, while preserving the full DC-coupled signal. The method achieved a 2.3 average compression ratio evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C language implementation is presented that resulted in an 110 clock cycles average execution on an 8-bit microcontroller.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is amore » particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrates the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.« less
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programing and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
A comparison of SuperLU solvers on the intel MIC architecture
NASA Astrophysics Data System (ADS)
Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.
2016-10-01
In many science and engineering applications, problems may result in solving a sparse linear system AX=B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices for 2D problems and hepta-diagonal matrices for 3D problems, coming from the incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of SuperLU solvers (see [2]) are examined on the Intel Xeon Phi coprocessors using offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and randomly located matrices. This architecture can benefit from high parallelism and large vectors. We find that the sequential SuperLU benefited up to 45 % performance improvement from the offload programming depending on the sparse matrix type and the size of transferred and processed data.
Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment modelmore » with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.« less
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.; McKee, P.; Corby, R.
2009-12-01
One of the large challenges in data assimilation (DA) into distributed hydrologic models is to reduce the large degrees of freedom involved in the inverse problem to avoid overfitting. To assess the sensitivity of the performance of DA to the dimensionality of the inverse problem, we design and carry out real-world experiments in which the control vector in variational DA (VAR) is solved at different scales in space and time, e.g., lumped, semi-distributed, and fully-distributed in space, and hourly, 6 hourly, etc., in time. The size of the control vector is related to the degrees of freedom in the inverse problem. For the assessment, we use the prototype 4-dimenational variational data assimilator (4DVAR) that assimilates streamflow, precipitation and potential evaporation data into the NWS Hydrology Laboratory’s Research Distributed Hydrologic Model (HL-RDHM). In this talk, we present the initial results for a number of basins in Oklahoma and Texas.
Quantum Heterogeneous Computing for Satellite Positioning Optimization
NASA Astrophysics Data System (ADS)
Bass, G.; Kumar, V.; Dulny, J., III
2016-12-01
Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korolev, A; Shashkov, A; Barker, H
This report documents the history of attempts to directly measure cloud extinction, the current measurement device known as the Cloud Extinction Probe (CEP), specific problems with direct measurement of extinction coefficient, and the attempts made here to address these problems. Extinction coefficient is one of the fundamental microphysical parameters characterizing bulk properties of clouds. Knowledge of extinction coefficient is of crucial importance for radiative transfer calculations in weather prediction and climate models given that Earth's radiation budget (ERB) is modulated much by clouds. In order for a large-scale model to properly account for ERB and perturbations to it, it mustmore » ultimately be able to simulate cloud extinction coefficient well. In turn this requires adequate and simultaneous simulation of profiles of cloud water content and particle habit and size. Similarly, remote inference of cloud properties requires assumptions to be made about cloud phase and associated single-scattering properties, of which extinction coefficient is crucial. Hence, extinction coefficient plays an important role in both application and validation of methods for remote inference of cloud properties from data obtained from both satellite and surface sensors (e.g., Barker et al. 2008). While estimation of extinction coefficient within large-scale models is relatively straightforward for pure water droplets, thanks to Mie theory, mixed-phase and ice clouds still present problems. This is because of the myriad forms and sizes that crystals can achieve, each having their own unique extinction properties. For the foreseeable future, large-scale models will have to be content with diagnostic parametrization of crystal size and type. However, before they are able to provide satisfactory values needed for calculation of radiative transfer, they require the intermediate step of assigning single-scattering properties to particles. The most basic of these is extinction coefficient, yet it is rarely measured directly, and therefore verification of parametrizations is difficult. The obvious solution is to be able to measure microphysical properties and extinction at the same time and for the same volume. This is best done by in situ sampling by instruments mounted on either balloon or aircraft. The latter is the usual route and the one employed here. Yet the problem of actually measuring extinction coefficient directly for arbitrarily complicated particles still remains unsolved.« less
Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*
Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.
2011-01-01
The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing reduction in sensitivity when library size is increased. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot, when using probabilistic and rank based scoring methods. The proteome-wide coverage of the simulated library leads to 11% increase in unique peptide assignments, compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% increased MS/MS assignments compared with Mascot searches. Our study demonstrates the advantages of using probabilistic and rank based scores to improve performance of spectrum-to-spectrum search strategies. PMID:21532008
Concurrent credit portfolio losses
Sicking, Joachim; Schäfer, Rudi
2018-01-01
We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector. JEL codes: C32, F34, G21, G32, H81. PMID:29425246
Concurrent credit portfolio losses.
Sicking, Joachim; Guhr, Thomas; Schäfer, Rudi
2018-01-01
We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector. JEL codes: C32, F34, G21, G32, H81.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance
Wang, Weichen
2017-01-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726
Computerized adaptive testing: the capitalization on chance problem.
Olea, Julio; Barrada, Juan Ramón; Abad, Francisco J; Ponsoda, Vicente; Cuevas, Lara
2012-03-01
This paper describes several simulation studies that examine the effects of capitalization on chance in the selection of items and the ability estimation in CAT, employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000 and 2000 subjects) as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small sample calibration conditions. For broad ranges of theta, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE (theta). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow to include prior knowledge in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large value of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. Then, we applied our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sourander, Andre; McGrath, Patrick J; Ristkari, Terja; Cunningham, Charles; Huttunen, Jukka; Lingley-Pottie, Patricia; Hinkka-Yli-Salomäki, Susanna; Kinnunen, Malin; Vuorio, Jenni; Sinokki, Atte; Fossum, Sturla; Unruh, Anita
2016-04-01
There is a large gap worldwide in the provision of evidence-based early treatment of children with disruptive behavioral problems. To determine whether an Internet-assisted intervention using whole-population screening that targets the most symptomatic 4-year-old children is effective at 6 and 12 months after the start of treatment. This 2-parallel-group randomized clinical trial was performed from October 1, 2011, through November 30, 2013, at a primary health care clinic in Southwest Finland. Data analysis was performed from August 6, 2015, to December 11, 2015. Of a screened population of 4656 children, 730 met the screening criteria indicating a high level of disruptive behavioral problems. A total of 464 parents of 4-year-old children were randomized into the Strongest Families Smart Website (SFSW) intervention group (n = 232) or an education control (EC) group (n = 232). The SFSW intervention, an 11-session Internet-assisted parent training program that included weekly telephone coaching. Child Behavior Checklist version for preschool children (CBCL/1.5-5) externalizing scale (primary outcome), other CBCL/1.5-5 scales and subscores, Parenting Scale, Inventory of Callous-Unemotional Traits, and the 21-item Depression, Anxiety, and Stress Scale. All data were analyzed by intention to treat and per protocol. The assessments were made before randomization and 6 and 12 months after randomization. Of the children randomized, 287 (61.9%) were male and 79 (17.1%) lived in other than a family with 2 biological parents. At 12-month follow-up, improvement in the SFSW intervention group was significantly greater compared with the control group on the following measures: CBCL/1.5-5 externalizing scale (effect size, 0.34; P < .001), internalizing scale (effect size, 0.35; P < .001), and total scores (effect size, 0.37; P < .001); 5 of 7 syndrome scales, including aggression (effect size, 0.36; P < .001), sleep (effect size, 0.24; P = .002), withdrawal (effect size, 0.25; P = .005), anxiety (effect size, 0.26; P = .003), and emotional problems (effect size, 0.31; P = .001); Inventory of Callous-Unemotional Traits callousness scores (effect size, 0.19; P = .03); and self-reported parenting skills (effect size, 0.53; P < .001). The study reveals the effectiveness and feasibility of an Internet-assisted parent training intervention offered for parents of preschool children with disruptive behavioral problems screened from the whole population. The strategy of population-based screening of children at an early age to offering parent training using digital technology and telephone coaching is a promising public health strategy for providing early intervention for a variety of child mental health problems. clinicaltrials.gov Identifier: NCT01750996.
Optimal sensor placement for leak location in water distribution networks using genetic algorithms.
Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert
2013-11-04
This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms
Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert
2013-01-01
This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099
NASA Astrophysics Data System (ADS)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio
2015-06-01
Lattice spin-fermion models are important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real space variant of the ED + MC method that allows to solve spin-fermion problems on lattice sizes with up to 103 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 104 lattice sites and with some effort on 105 lattice sites, representing the record lattice sizes studied for this family of models.
NASA Astrophysics Data System (ADS)
Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin
2017-09-01
This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decision of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval request order lists by solving the model. The small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which the first come, first served and nearest neighbour are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended under different real-life scenarios.
Meta-analysis of the predictive factors of postpartum fatigue.
Badr, Hanan A; Zauszniewski, Jaclene A
2017-08-01
Nearly 64% of new mothers are affected by fatigue during the postpartum period, making it the most common problem that a woman faces as she adapts to motherhood. Postpartum fatigue can lead to serious negative effects on the mother's health and the newborn's development and interfere with mother-infant interaction. The aim of this meta-analysis was to identify predictive factors of postpartum fatigue and to document the magnitude of their effects using effect sizes. We used two search engines, PubMed and Google Scholar, to identify studies that met three inclusion criteria: (a) the article was written in English, (b) the article studied the predictive factors of postpartum fatigue, and (c) the article included information about the validity and reliability of the instruments used in the research. Nine articles met these inclusion criteria. The direction and strength of correlation coefficients between predictive factors and postpartum fatigue were examined across the studies to determine their effect sizes. Measurement of predictor variables occurred from 3days to 6months postpartum. Correlations reported between predictive factors and postpartum fatigue were as follows: small effect size (r range =0.10 to 0.29) for education level, age, postpartum hemorrhage, infection, and child care difficulties; medium effect size (r range =0.30 to 0.49) for physiological illness, low ferritin level, low hemoglobin level, sleeping problems, stress and anxiety, and breastfeeding problems; and large effect size (r range =0.50+) for depression. Postpartum fatigue is a common condition that can lead to serious health problems for a new mother and her newborn. Therefore, increased knowledge concerning factors that influence the onset of postpartum fatigue is needed for early identification of new mothers who may be at risk. Appropriate treatments, interventions, information, and support can then be initiated to prevent or minimize the postpartum fatigue. Copyright © 2017 Elsevier Inc. All rights reserved.
Development, primacy, and systems of cities.
El-shakhs, S
1972-10-01
The relationship between the evolutionary changes in the city size distribution of nationally defined urban systems and the process of socioeconomic development is examined. Attention is directed to the problems of defining and measuring changes in city size distributions, using the results to test empirically the relationship of such changes to the development process. Existing theoretical structures and empirical generalizations which have tried to explain or to describe, respectively, the hierarchical relationships of cities are represented by central place theory and rank size relationships. The problem is not that deviations exist but that an adequate definition is lacking of urban systems on the 1 hand, and a universal measure of city size distribution, which could be applied to any system irrespective of its level of development, on the other. The problem of measuring changes in city size distributions is further compounded by the lack of sufficient reliable information about different systems of cities for the purposes of empirical comparative analysis. Changes in city size distributions have thus far been viewed largely within the framework of classic equilibrium theory. A more differentiated continuum of the development process should replace the bioplar continuum of underdeveloped developed countries in relating changes in city size distribution with development. Implicit in this distinction is the view that processes which influence spatial organization during the early formative stages of development are inherently different from those operating during the more advanced stages. 2 approaches were used to examine the relationship between national levels of development and primacy: a comparative analysis of a large number of countries at a given point in time; and a historical analysis of a limited sample of 2 advanced countries, the US and Great Britain. The 75 countries included in this study cover a wide range of characteristics. The study found a significant association between the degree of primacy of distributions of cities and their socioeconomic level of development; and the form of the primacy curve (or its evolution with development) seemed to follow a consistent pattern in which the peak of primacy obtained during the stages of socioeconomic transition with countries being less primate in either direction from that peak. This pattern is the result of 2 reverse influences of the development process on the spatial structure of countries--centralization and concentration beginning with the rise of cities and a decentralization and spread effect accompanying the increasing influence and importance of the periphery and structural changes in the pattern of authority.
Massively Scalable Near Duplicate Detection in Streams of Documents using MDSH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, Paul Logasa; Symons, Christopher T; McKenzie, Amber T
2013-01-01
In a world where large-scale text collections are not only becoming ubiquitous but also are growing at increasing rates, near duplicate documents are becoming a growing concern that has the potential to hinder many different information filtering tasks. While others have tried to address this problem, prior techniques have only been used on limited collection sizes and static cases. We will briefly describe the problem in the context of Open Source Intelligence (OSINT) along with our additional constraints for performance. In this work we propose two variations on Multi-dimensional Spectral Hash (MDSH) tailored for working on extremely large, growing setsmore » of text documents. We analyze the memory and runtime characteristics of our techniques and provide an informal analysis of the quality of the near-duplicate clusters produced by our techniques.« less
NGL Viewer: Web-based molecular graphics for large complexes.
Rose, Alexander S; Bradley, Anthony R; Valasatava, Yana; Duarte, Jose M; Prlic, Andreas; Rose, Peter W
2018-05-29
The interactive visualization of very large macromolecular complexes on the web is becoming a challenging problem as experimental techniques advance at an unprecedented rate and deliver structures of increasing size. We have tackled this problem by developing highly memory-efficient and scalable extensions for the NGL WebGL-based molecular viewer and by using MMTF, a binary and compressed Macromolecular Transmission Format. These enable NGL to download and render molecular complexes with millions of atoms interactively on desktop computers and smartphones alike, making it a tool of choice for web-based molecular visualization in research and education. The source code is freely available under the MIT license at github.com/arose/ngl and distributed on NPM (npmjs.com/package/ngl). MMTF-JavaScript encoders and decoders are available at github.com/rcsb/mmtf-javascript. asr.moin@gmail.com.
Gyrodampers for large space structures
NASA Technical Reports Server (NTRS)
Aubrun, J. N.; Margulies, G.
1979-01-01
The problem of controlling the vibrations of a large space structures by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper which consists of a set of single gimbal control moment gyros which are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results for one gyrodamped structure to another is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model, is commented on.
A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are .XI important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on two variants of a realistically-sized model of the EOS scheduling problem. These are implemented as permutation-based methods; methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best and random mutation operators outperform our squeaky (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
NASA Astrophysics Data System (ADS)
Kim, Byung Soo; Lee, Woon-Seek; Koh, Shiegheun
2012-07-01
This article considers an inbound ordering and outbound dispatching problem for a single product in a third-party warehouse, where the demands are dynamic over a discrete and finite time horizon, and moreover, each demand has a time window in which it must be satisfied. Replenishing orders are shipped in containers and the freight cost is proportional to the number of containers used. The problem is classified into two cases, i.e. non-split demand case and split demand case, and a mathematical model for each case is presented. An in-depth analysis of the models shows that they are very complicated and difficult to find optimal solutions as the problem size becomes large. Therefore, genetic algorithm (GA) based heuristic approaches are designed to solve the problems in a reasonable time. To validate and evaluate the algorithms, finally, some computational experiments are conducted.
NASA Astrophysics Data System (ADS)
Stork, David G.; Furuichi, Yasuo
2011-03-01
David Hockney has argued that the right hand of the disciple, thrust to the rear in Caravaggio's Supper at Emmaus (1606), is anomalously large as a result of the artist refocusing a putative secret lens-based optical projector and tracing the image it projected onto his canvas. We show through rigorous optical analysis that to achieve such an anomalously large hand image, Caravaggio would have needed to make extremely large, conspicuous and implausible alterations to his studio setup, moving both his purported lens and his canvas nearly two meters between "exposing" the disciple's left hand and then his right hand. Such major disruptions to his studio would have impeded -not aided- Caravaggio in his work. Our optical analysis quantifies these problems and our computer graphics reconstruction of Caravaggio's studio illustrates these problems. In this way we conclude that Caravaggio did not use optical projections in the way claimed by Hockney, but instead most likely set the sizes of these hands "by eye" for artistic reasons.
Optimal city size and population density for the 21st century.
Speare A; White, M J
1990-10-01
The thesis that large scale urban areas result in greater efficiency, reduced costs, and a better quality of life is reexamined. The environmental and social costs are measured for different scales of settlement. The desirability and perceived problems of a particular place are examined in relation to size of place. The consequences of population decline are considered. New York city is described as providing both opportunities in employment, shopping, and cultural activities as well as a high cost of living, crime, and pollution. The historical development of large cities in the US is described. Immigration has contributed to a greater concentration of population than would have otherwise have occurred. The spatial proximity of goods and services argument (agglomeration economies) has changed with advancements in technology such as roads, trucking, and electronic communication. There is no optimal city size. The overall effect of agglomeration can be assessed by determining whether the markets for goods and labor are adequate to maximize well-being and balance the negative and positive aspects of urbanization. The environmental costs of cities increase with size when air quality, water quality, sewage treatment, and hazardous waste disposal is considered. Smaller scale and lower density cities have the advantages of a lower concentration of pollutants. Also, mobilization for program support is easier with homogenous population. Lower population growth in large cities would contribute to a higher quality of life, since large metropolitan areas have a concentration of immigrants, younger age distributions, and minority groups with higher than average birth rates. The negative consequences of decline can be avoided if reduction of population in large cities takes place gradually. For example, poorer quality housing can be removed for open space. Cities should, however, still attract all classes of people with opportunities equally available.
Geometric k-nearest neighbor estimation of entropy and mutual information
NASA Astrophysics Data System (ADS)
Lord, Warren M.; Sun, Jie; Bollt, Erik M.
2018-03-01
Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
Fuchs, Lynn S; Seethaler, Pamela M; Powell, Sarah R; Fuchs, Douglas; Hamlett, Carol L; Fletcher, Jack M
2008-01-01
This study assessed the effects of preventative tutoring on the math problem solving of third-grade students with math and reading difficulties. Students (n = 35) were assigned randomly to continue in their general education math program or to receive secondary preventative tutoring 3 times per week, 30 min per session, for 12 weeks. Schema-broadening tutoring taught students to (a) focus on the mathematical structure of 3 problem types; (b) recognize problems as belonging to those 3 problem-type schemas; (c) solve the 3 word-problem types; and (d) transfer solution methods to problems that include irrelevant information, 2-digit operands, missing information in the first or second positions in the algebraic equation, or relevant information in charts, graphs, and pictures. Also, students were taught to perform the calculation and algebraic skills foundational for problem solving. Analyses of variance revealed statistically significant effects on a wide range of word problems, with large effect sizes. Findings support the efficacy of the tutoring protocol for preventing word-problem deficits among third-grade students with math and reading deficits.
Fuchs, Lynn S.; Seethaler, Pamela M.; Powell, Sarah R.; Fuchs, Douglas; Hamlett, Carol L.; Fletcher, Jack M.
2009-01-01
This study assessed the effects of preventative tutoring on the math problem solving of third-grade students with math and reading difficulties. Students (n = 35) were assigned randomly to continue in their general education math program or to receive secondary preventative tutoring 3 times per week, 30 min per session, for 12 weeks. Schema-broadening tutoring taught students to (a) focus on the mathematical structure of 3 problem types; (b) recognize problems as belonging to those 3 problem-type schemas; (c) solve the 3 word-problem types; and (d) transfer solution methods to problems that include irrelevant information, 2-digit operands, missing information in the first or second positions in the algebraic equation, or relevant information in charts, graphs, and pictures. Also, students were taught to perform the calculation and algebraic skills foundational for problem solving. Analyses of variance revealed statistically significant effects on a wide range of word problems, with large effect sizes. Findings support the efficacy of the tutoring protocol for preventing word-problem deficits among third-grade students with math and reading deficits. PMID:20209074
Methodological Issues Related to the Use of P Less than 0.05 in Health Behavior Research
ERIC Educational Resources Information Center
Duryea, Elias; Graner, Stephen P.; Becker, Jeremy
2009-01-01
This paper reviews methodological issues related to the use of P less than 0.05 in health behavior research and suggests how application and presentation of statistical significance may be improved. Assessment of sample size and P less than 0.05, the file drawer problem, the Law of Large Numbers and the statistical significance arguments in…
ERIC Educational Resources Information Center
Coyle, Shawn; Jones, Thea; Pickle, Shirley Kirk
2009-01-01
This article presents a sample of online learning programs serving very different populations: a small district spread over a vast area, a large inner school district, and a statewide program serving numerous districts. It describes how these districts successfully implemented e-learning programs in their schools and discusses the positive impact…
Folding and unfolding of large-size shell construction for application in Earth orbit
NASA Astrophysics Data System (ADS)
Kondyurin, Alexey; Pestrenina, Irena; Pestrenin, Valery; Rusakov, Sergey
2016-07-01
A future exploration of space requires a technology of large module for biological, technological, logistic and other applications in Earth orbits [1-3]. This report describes the possibility of using large-sized shell structures deployable in space. Structure is delivered to the orbit in the spaceship container. The shell is folded for the transportation. The shell material is either rigid plastic or multilayer prepreg comprising rigid reinforcements (such as reinforcing fibers). The unfolding process (bringing a construction to the unfolded state by loading the internal pressure) needs be considered at the presence of both stretching and bending deformations. An analysis of the deployment conditions (the minimum internal pressure bringing a construction from the folded state to the unfolded state) of large laminated CFRP shell structures is formulated in this report. Solution of this mechanics of deformable solids (MDS) problem of the shell structure is based on the following assumptions: the shell is made of components whose median surface has a reamer; in the separate structural element relaxed state (not stressed and not deformed) its median surface coincides with its reamer (this assumption allows choose the relaxed state of the structure correctly); structural elements are joined (sewn together) by a seam that does not resist rotation around the tangent to the seam line. The ways of large shell structures folding, whose median surface has a reamer, are suggested. Unfolding of cylindrical, conical (full and truncated cones), and large-size composite shells (cylinder-cones, cones-cones) is considered. These results show that the unfolding pressure of such large-size structures (0.01-0.2 atm.) is comparable to the deploying pressure of pneumatic parts (0.001-0.1 atm.) [3]. It would be possible to extend this approach to investigate the unfolding process of large-sized shells with ruled median surface or for non-developable surfaces. This research was financially supported by the Russian Fund for Basic Research (grants No. 15-01-07946_a and 14-08-96011_r_ural_a). 1. Briskman V., A.Kondyurin, K.Kostarev, V.Leontyev, M.Levkovich, A.Mashinsky, G.Nechitailo, T.Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin Italy. 2. Kondyurin A., Pestrenina I., Pestrenin V., Kashin N., Naymushin A. Large-size deployable construction heated by solar irradiation free space, 40th COSPAR Scientific Assembly 2014. 3. V. M. Pestrenin, I. V. Pestrenina, S. V. Rusakov, and A. V. Kondyurin Deployment of large-size shell constructions by internal pressure, Mechanics of Composite Materials, 2015, Vol. 51, No 5, p. 629-636.
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as good as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit under-estimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
Ion figuring of large prototype mirror segments for the E-ELT
NASA Astrophysics Data System (ADS)
Ghigo, M.; Vecchi, G.; Basso, S.; Citterio, O.; Civitani, M.; Mattaini, E.; Pareschi, G.; Sironi, G.
2014-07-01
At INAF-Astronomical Observatory of Brera a study is under way to explore the problems related to the ion beam figuring of full scale Zerodur hexagonal mirrors of M1 for the European Extremely Large Telescope (E-ELT), having size of 1.4 m corner to corner. During this study it is initially foreseen the figuring of a scaled down version mirror of the same material having size of 1 m corner to corner to assess the relevant figuring problems and issues. This specific mirror has a radius of curvature of 3 m, which allows for easy interferometric measurement. A mechanical support was designed to minimize its deformations due to the gravity. The Ion Beam Figuring Facility used for this study has been recently completed in the Brera Observatory and has a figuring area of 140 cm x 170 cm. It employs a Kaufman ion source having 50 mm grids mounted on three axis. This system has been designed and developed to be autonomous and self-monitoring during the figuring process. The software and the mathematical tools used to compute the dwell time solution have been developed at INAF-OAB as well. Aim of this study is the estimation and optimization of the time requested to correct the surface adopting strategies to mitigate the well-known thermal problems related to the Zerodur material. In this paper, the results obtained figuring the 1 m corner-to-corner test segment are reported.
Performance comparison analysis library communication cluster system using merge sort
NASA Astrophysics Data System (ADS)
Wulandari, D. A. R.; Ramadhan, M. E.
2018-04-01
Begins by using a single processor, to increase the speed of computing time, the use of multi-processor was introduced. The second paradigm is known as parallel computing, example cluster. The cluster must have the communication potocol for processing, one of it is message passing Interface (MPI). MPI have many library, both of them OPENMPI and MPICH2. Performance of the cluster machine depend on suitable between performance characters of library communication and characters of the problem so this study aims to analyze the comparative performances libraries in handling parallel computing process. The case study in this research are MPICH2 and OpenMPI. This case research execute sorting’s problem to know the performance of cluster system. The sorting problem use mergesort method. The research method is by implementing OpenMPI and MPICH2 on a Linux-based cluster by using five computer virtual then analyze the performance of the system by different scenario tests and three parameters for to know the performance of MPICH2 and OpenMPI. These performances are execution time, speedup and efficiency. The results of this study showed that the addition of each data size makes OpenMPI and MPICH2 have an average speed-up and efficiency tend to increase but at a large data size decreases. increased data size doesn’t necessarily increased speed up and efficiency but only execution time example in 100000 data size. OpenMPI has a execution time greater than MPICH2 example in 1000 data size average execution time with MPICH2 is 0,009721 and OpenMPI is 0,003895 OpenMPI can customize communication needs.
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N–1). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulationmore » that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.« less
Melby-Lervåg, Monica; Lervåg, Arne
2014-03-01
We report a systematic meta-analytic review of studies comparing reading comprehension and its underlying components (language comprehension, decoding, and phonological awareness) in first- and second-language learners. The review included 82 studies, and 576 effect sizes were calculated for reading comprehension and underlying components. Key findings were that, compared to first-language learners, second-language learners display a medium-sized deficit in reading comprehension (pooled effect size d = -0.62), a large deficit in language comprehension (pooled effect size d = -1.12), but only small differences in phonological awareness (pooled effect size d = -0.08) and decoding (pooled effect size d = -0.12). A moderator analysis showed that characteristics related to the type of reading comprehension test reliably explained the variation in the differences in reading comprehension between first- and second-language learners. For language comprehension, studies of samples from low socioeconomic backgrounds and samples where only the first language was used at home generated the largest group differences in favor of first-language learners. Test characteristics and study origin reliably contributed to the variations between the studies of language comprehension. For decoding, Canadian studies showed group differences in favor of second-language learners, whereas the opposite was the case for U.S. studies. Regarding implications, unless specific decoding problems are detected, interventions that aim to ameliorate reading comprehension problems among second-language learners should focus on language comprehension skills.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on a vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithms parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO in large-sized problems.
O'Farrell, Timothy J; Schumm, Jeremiah A; Murphy, Marie M; Muchowski, Patrice M
2017-04-01
Behavioral couples therapy (BCT) is more efficacious than individually-based therapy (IBT) for substance and relationship outcomes among substance use disorder patients. This study compared BCT with IBT for drug-abusing women. Sixty-one women, mostly White, late 30s, with primary substance use disorder other than alcohol (74% opioid), and male partners were randomized to 26 sessions over 13 weeks of BCT plus 12-step-oriented IBT (i.e., BCT + IBT) or IBT. Substance-related outcomes were percentage days abstinent (PDA), percentage days drug use (PDDU), Inventory of Drug Use Consequences. Relationship outcomes were Dyadic Adjustment Scale (DAS), days separated. Data were collected at baseline, posttreatment, and quarterly for 1-year follow-up. On PDA, PDDU, and substance-related problems, both BCT + IBT and IBT patients showed significant (p < .01) large effect size improvements throughout 1-year follow-up (d > .8 for most time periods). BCT + IBT showed a significant (p < .001) large effect size (d = -.85) advantage versus IBT on fewer substance-related problems, while BCT + IBT and IBT did not differ on PDA or PDDU (ps > .47). On relationship outcomes, compared to IBT, BCT + IBT had significantly higher male-reported Dyadic Adjustment Scale (p < .001, d = .57) and fewer days separated (p = .01, d = -.47) throughout 1-year follow-up. BCT + IBT for drug-abusing women was more efficacious than IBT in improving relationship satisfaction and preventing relationship breakup. On substance use and substance-related problems, women receiving both treatments substantially improved, and women receiving BCT + IBT had fewer substance-related problems than IBT. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
O’Farrell, Timothy J.; Schumm, Jeremiah A.; Murphy, Marie M.; Muchowski, Patrice M.
2017-01-01
Objective Behavioral couples therapy (BCT) is more efficacious than individually-based therapy (IBT) for substance and relationship outcomes among patients with substance use disorder (SUD). This study compared BCT with IBT for drug-abusing women. Method Sixty-one women, mostly White, late thirties, with primary SUD other than alcohol (74% opioid diagnosis), and male partners were randomized to 26 sessions over 13-weeks of BCT plus 12-step-oriented IBT (i.e. BCT+IBT) or IBT. Substance-related outcomes: percentage days abstinent (PDA), percentage days drug use (PDDU), Inventory of Drug Use Consequences. Relationship outcomes: Dyadic Adjustment Scale (DAS) and days separated. Data were collected at baseline, post-treatment, and quarterly for 1-yr follow-up. Results On PDA, PDDU, and substance-related problems, both BCT+IBT and IBT patients showed significant (p < .01) large effect size improvements throughout the 1-yr follow-up (d > .8 for most time periods). BCT+IBT showed a significant (p < .001) large effect size (d = −.85) advantage versus IBT on fewer substance-related problems, while BCT+IBT and IBT did not differ on PDA or PDDU (p’s > .47). On relationship outcomes, compared to IBT, BCT+IBT had significantly higher male-reported DAS (p < .001, d = .57) and fewer days separated (p = .01, d = −.47) throughout the 1-yr follow-up. Conclusion BCT+IBT for drug-abusing women was more efficacious than IBT in improving relationship satisfaction and preventing relationship break-up. On substance use and substance-related problems, women receiving both treatments substantially improved, and women receiving BCT+IBT had fewer substance-related problems than IBT. PMID:28333533
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.
Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal
2010-11-15
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories - based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ(nlog(n/B)Blog(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster--both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on Eulerian approach. Our algorithms for constructing Bi-directed de Bruijn graphs are efficient in parallel and out of core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problem inherent to large scale systems such as power network, communication network and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation on decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used, even though the papers surveyed are somehow different from the ones reviewed in other papers. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.
NASA Astrophysics Data System (ADS)
Cho, Yi Je; Lee, Wook Jin; Park, Yong Ho
2014-11-01
Aspects of numerical results from computational experiments on representative volume element (RVE) problems using finite element analyses are discussed. Two different boundary conditions (BCs) are examined and compared numerically for volume elements with different sizes, where tests have been performed on the uniaxial tensile deformation of random particle reinforced composites. Structural heterogeneities near model boundaries such as the free-edges of particle/matrix interfaces significantly influenced the overall numerical solutions, producing force and displacement fluctuations along the boundaries. Interestingly, this effect was shown to be limited to surface regions within a certain distance of the boundaries, while the interior of the model showed almost identical strain fields regardless of the applied BCs. Also, the thickness of the BC-affected regions remained constant with varying volume element sizes in the models. When the volume element size was large enough compared to the thickness of the BC-affected regions, the structural response of most of the model was found to be almost independent of the applied BC such that the apparent properties converged to the effective properties. Finally, the mechanism that leads a RVE model for random heterogeneous materials to be representative is discussed in terms of the size of the volume element and the thickness of the BC-affected region.
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
Highwood, Eleanor J; Kinnersley, Robert P
2006-05-01
With both climate change and air quality on political and social agendas from local to global scale, the links between these hitherto separate fields are becoming more apparent. Black carbon, largely from combustion processes, scatters and absorbs incoming solar radiation, contributes to poor air quality and induces respiratory and cardiovascular problems. Uncertainties in the amount, location, size and shape of atmospheric black carbon cause large uncertainty in both climate change estimates and toxicology studies alike. Increased research has led to new effects and areas of uncertainty being uncovered. Here we draw together recent results and explore the increasing opportunities for synergistic research that will lead to improved confidence in the impact of black carbon on climate change, air quality and human health. Topics of mutual interest include better information on spatial distribution, size, mixing state and measuring and monitoring.
Innovative Double Bypass Engine for Increased Performance
NASA Astrophysics Data System (ADS)
Manoharan, Sanjivan
Engines continue to grow in size to meet the current thrust requirements of the civil aerospace industry. Large engines pose significant transportation problems and require them to be split in order to be shipped. Thus, large amounts of time have been spent in researching methods to increase thrust capabilities while maintaining a reasonable engine size. Unfortunately, much of this research has been focused on increasing the performance and efficiencies of individual components while limited research has been done on innovative engine configurations. This thesis focuses on an innovative engine configuration, the High Double Bypass Engine, aimed at increasing fuel efficiency and thrust while maintaining a competitive fan diameter and engine length. The 1-D analysis was done in Excel and then compared to the results from Numerical Propulsion Simulation System (NPSS) software and were found to be within 4% error. Flow performance characteristics were also determined and validated against their criteria.
Investigation of a hydrostatic azimuth thrust bearing for a large steerable antenna
NASA Technical Reports Server (NTRS)
Rumbarger, J.; Castelli, V.; Rippel, H.
1972-01-01
The problems inherent in the design and construction of a hydrostatic azimuth thrust bearing for a tracking antenna of very large size were studied. For a load of 48,000,000 lbs., it is concluded that the hydrostatic bearing concept is feasible, provided that a particular multiple pad arrangement, high oil viscosity, and a particular load spreading arrangement are used. Presently available computer programs and techniques are deemed to be adequate for a good portion of the design job but new integrated programs will have to be developed in the area of the computation of the deflections of the supporting bearing structure. Experimental studies might also be indicated to ascertain the life characteristics of grouting under cyclic loading, and the optimization of hydraulic circuits and pipe sizes to insure the long life operation of pumps with high viscosity oil while avoiding cavitation.
Ioannidis, Vassilios; van Nimwegen, Erik; Stockinger, Heinz
2016-01-01
ISMARA ( ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer for performing ISMARA analysis in a completely automated manner, including uploading of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client. PMID:28232860
Efficient Planning of Wind-Optimal Routes in North Atlantic Oceanic Airspace
NASA Technical Reports Server (NTRS)
Rodionova, Olga; Sridhar, Banavar
2017-01-01
The North Atlantic oceanic airspace (NAT) is crossed daily by more than a thousand flights, which are greatly affected by strong jet stream air currents. Several studies devoted to generating wind-optimal (WO) aircraft trajectories in the NAT demonstrated great efficiency of such an approach for individual flights. However, because of the large separation norms imposed in the NAT, previously proposed WO trajectories induce a large number of potential conflicts. Much work has been done on strategic conflict detection and resolution (CDR) in the NAT. The work presented here extends previous methods and attempts to take advantage of the NAT traffic structure to simplify the problem and improve the results of CDR. Four approaches are studied in this work: 1) subdividing the existing CDR problem into sub-problems of smaller sizes, which are easier to handle; 2) more efficient data reorganization within the considered time period; 3) problem localization, i.e. concentrating the resolution effort in the most conflicted regions; 4) applying CDR to the pre-tactical decision horizon (a couple of hours in advance). Obtained results show that these methods efficiently resolve potential conflicts at the strategic and pre-tactical levels by keeping the resulting trajectories close to the initial WO ones.
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the Polynomial Chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so called "cure of dimensionality". When the system nonlinearity is strong and number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problem. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technology is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested by unsaturated flow numerical cases. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
NASA Astrophysics Data System (ADS)
Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.
2017-11-01
This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery, and recycling centers. In this supply chain, centers are multiple levels, a price increase factor is considered for operational costs at centers, inventory and shortage (including lost sales and backlog) are allowed at production centers, arrival time of vehicles of each plant to its dedicated distribution centers and also departure from them are considered, in such a way that the sum of system costs and the sum of maximum time at each level should be minimized. The aforementioned problem is formulated in the form of a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely, non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used in large sizes. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely, the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
Using JWST Heritage to Enable a Future Large Ultra-Violet Optical Infrared Telescope
NASA Technical Reports Server (NTRS)
Feinberg, Lee
2016-01-01
To the extent it makes sense, leverage JWST knowledge, designs, architectures, GSE. Develop a scalable design reference mission (9.2 meter). Do just enough work to understand launch break points in aperture size. Demonstrate 10 pm stability is achievable on a design reference mission. Make design compatible with starshades. While segmented coronagraphs with high throughput and large bandpasses are important, make the system serviceable so you can evolve the instruments. Keep it room temperature to minimize the costs associated with cryo. Focus resources on the contrast problem. Start with the architecture and connect it to the technology needs.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 107 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
NASA Astrophysics Data System (ADS)
Andrawis, Alfred S.
1994-10-01
The problem addressed by this report is the large size and heavy weight of the cable bundle, used for controlling a Multidegree-Of-Freedom Serpentine Truss Manipulator arm, which imposes limitations on the manipulator arm maneuverability. This report covers a design of an optical fiber network to replace the existing copper wire network of the Serpentine Truss Manipulator. This report proposes a fiber network design which significantly reduces the bundle size into two phases. The first phase does not require any modifications for the manipulator architecture, while the other requires major modifications. Design philosophy, hardware details and schematic diagrams are presented.
NASA Technical Reports Server (NTRS)
Andrawis, Alfred S.
1994-01-01
The problem addressed by this report is the large size and heavy weight of the cable bundle, used for controlling a Multidegree-Of-Freedom Serpentine Truss Manipulator arm, which imposes limitations on the manipulator arm maneuverability. This report covers a design of an optical fiber network to replace the existing copper wire network of the Serpentine Truss Manipulator. This report proposes a fiber network design which significantly reduces the bundle size into two phases. The first phase does not require any modifications for the manipulator architecture, while the other requires major modifications. Design philosophy, hardware details and schematic diagrams are presented.
Density of transneptunian object 229762 2007 UK126
NASA Astrophysics Data System (ADS)
Grundy, Will
2017-08-01
Densities provide unique information about bulk composition and interior structure and are key to going beyond the skin-deep view offered by remote-sensing techniques based on photometry, spectroscopy, and polarimetry. They are known for a handful of the relict planetesimals that populate our Solar System's Kuiper belt, revealing intriguing differences between small and large bodies. More and better quality data are needed to address fundamental questions about how planetesimals form from nebular solids, and how distinct materials are distributed through the nebula. Masses from binary orbits are generally quite precise, but a problem afflicting many of the known densities is that they depend on size estimates from thermal emission observations, with large model-dependent uncertainties that dominate the error bars on density estimates. Stellar occultations can provide much more accurate sizes and thus densities, but they depend on fortuitous geometry and thus can only be done for a few particularly valuable binaries. We propose observations of a system where an accurate density can be determined: 229762 2007 UK126. An accurate size is already available from multiple stellar occultation chords. This proposal will determine the mass, and thus the density.
Dangerous Near-Earth Asteroids and Meteorites
NASA Astrophysics Data System (ADS)
Mickaelian, A. M.; Grigoryan, A. E.
2015-07-01
The problem of Near-Earth Objects (NEOs; Astreoids and Meteorites) is discussed. To have an understanding on the probablity of encounters with such objects, one may use two different approaches: 1) historical, based on the statistics of existing large meteorite craters on the Earth, estimation of the source meteorites size and the age of these craters to derive the frequency of encounters with a given size of meteorites and 2) astronomical, based on the study and cataloging of all medium-size and large bodies in the Earth's neighbourhood and their orbits to estimate the probability, angles and other parameters of encounters. Therefore, we discuss both aspects and give our present knowledge on both phenomena. Though dangerous NEOs are one of the main source for cosmic catastrophes, we also focus on other possible dangers, such as even slight changes of Solar irradiance or Earth's orbit, change of Moon's impact on Earth, Solar flares or other manifestations of Solar activity, transit of comets (with impact on Earth's atmosphere), global climate change, dilution of Earth's atmosphere, damage of ozone layer, explosion of nearby Supernovae, and even an attack by extraterrestrial intelligence.
Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving.
Semeniuk, Yulia Yuriyivna; Brown, Roger L; Riesch, Susan K
2016-07-01
We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. © The Author(s) 2016.
An overview of the genetic dissection of complex traits.
Rao, D C
2008-01-01
Thanks to the recent revolutionary genomic advances such as the International HapMap consortium, resolution of the genetic architecture of common complex traits is beginning to look hopeful. While demonstrating the feasibility of genome-wide association (GWA) studies, the pathbreaking Wellcome Trust Case Control Consortium (WTCCC) study also serves to underscore the critical importance of very large sample sizes and draws attention to potential problems, which need to be addressed as part of the study design. Even the large WTCCC study had vastly inadequate power for several of the associations reported (and confirmed) and, therefore, most of the regions harboring relevant associations may not be identified anytime soon. This chapter provides an overview of some of the key developments in the methodological approaches to genetic dissection of common complex traits. Constrained Bayesian networks are suggested as especially useful for analysis of pathway-based SNPs. Likewise, composite likelihood is suggested as a promising method for modeling complex systems. It discusses the key steps in a study design, with an emphasis on GWA studies. Potential limitations highlighted by the WTCCC GWA study are discussed, including problems associated with massive genotype imputation, analysis of pooled national samples, shared controls, and the critical role of interactions. GWA studies clearly need massive sample sizes that are only possible through genuine collaborations. After all, for common complex traits, the question is not whether we can find some pieces of the puzzle, but how large and what kind of a sample we need to (nearly) solve the genetic puzzle.
Constraining ejecta particle size distributions with light scattering
NASA Astrophysics Data System (ADS)
Schauer, Martin; Buttler, William; Frayer, Daniel; Grover, Michael; Lalone, Brandon; Monfared, Shabnam; Sorenson, Daniel; Stevens, Gerald; Turley, William
2017-06-01
The angular distribution of the intensity of light scattered from a particle is strongly dependent on the particle size and can be calculated using the Mie solution to Maxwell's equations. For a collection of particles with a range of sizes, the angular intensity distribution will be the sum of the contributions from each particle size weighted by the number of particles in that size bin. The set of equations describing this pattern is not uniquely invertible, i.e. a number of different distributions can lead to the same scattering pattern, but with reasonable assumptions about the distribution it is possible to constrain the problem and extract estimates of the particle sizes from a measured scattering pattern. We report here on experiments using particles ejected by shockwaves incident on strips of triangular perturbations machined into the surface of tin targets. These measurements indicate a bimodal distribution of ejected particle sizes with relatively large particles (median radius 2-4 μm) evolved from the edges of the perturbation strip and smaller particles (median radius 200-600 nm) from the perturbations. We will briefly discuss the implications of these results and outline future plans.
Adhapure, N N; Dhakephalkar, P K; Dhakephalkar, A P; Tembhurkar, V R; Rajgure, A V; Deshmukh, A M
2014-01-01
Very recently bioleaching has been used for removing metals from electronic waste. Most of the research has been targeted to using pulverized PCBs for bioleaching where precipitate formed during bioleaching contaminates the pulverized PCB sample and making the overall metal recovery process more complicated. In addition to that, such mixing of pulverized sample with precipitate also creates problems for the final separation of non metallic fraction of PCB sample. In the present investigation we attempted the use of large pieces of printed circuit boards instead of pulverized sample for removal of metals. Use of large pieces of PCBs for bioleaching was restricted due to the chemical coating present on PCBs, the problem has been solved by chemical treatment of PCBs prior to bioleaching. In short,•Large pieces of PCB can be used for bioleaching instead of pulverized PCB sample.•Metallic portion on PCBs can be made accessible to bacteria with prior chemical treatment of PCBs.•Complete metal removal obtained on PCB pieces of size 4 cm × 2.5 cm with the exception of solder traces. The final metal free PCBs (non metallic) can be easily recycled and in this way the overall recycling process (metallic and non metallic part) of PCBs becomes simple.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T 1 , T 2 , and off-resonance maps are in good agreement with that of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potentials in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Impact of ageing on problem size and proactive interference in arithmetic facts solving.
Archambeau, Kim; De Visscher, Alice; Noël, Marie-Pascale; Gevers, Wim
2018-02-01
Arithmetic facts (AFs) are required when solving problems such as "3 × 4" and refer to calculations for which the correct answer is retrieved from memory. Currently, two important effects that modulate the performance in AFs have been highlighted: the problem size effect and the proactive interference effect. The aim of this study is to investigate possible age-related changes of the problem size effect and the proactive interference effect in AF solving. To this end, the performance of young and older adults was compared in a multiplication production task. Furthermore, an independent measure of proactive interference was assessed to further define the architecture underlying this effect in multiplication solving. The results indicate that both young and older adults were sensitive to the effects of interference and of the problem size. That is, both interference and problem size affected performance negatively: the time needed to solve a multiplication problem increases as the level of interference and the size of the problem increase. Regarding the effect of ageing, the problem size effect remains constant with age, indicating a preserved AF network in older adults. Interestingly, sensitivity to proactive interference in multiplication solving was less pronounced in older than in younger adults suggesting that part of the proactive interference has been overcome with age.
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
NASA Astrophysics Data System (ADS)
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2016-01-01
Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semiinfinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while II model leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.
Gas-liquid nucleation at large metastability: unusual features and a new formalism
NASA Astrophysics Data System (ADS)
Santra, Mantu; Singh, Rakesh S.; Bagchi, Biman
2011-03-01
Nucleation at large metastability is still largely an unsolved problem, even though it is a problem of tremendous current interest, with wide-ranging practical value, from atmospheric research to materials science. It is now well accepted that the classical nucleation theory (CNT) fails to provide a qualitative picture and gives incorrect quantitative values for such quantities as activation-free energy barrier and supersaturation dependence of nucleation rate, especially at large metastability. In this paper, we present an alternative formalism to treat nucleation at large supersaturation by introducing an extended set of order parameters in terms of the kth largest liquid-like clusters, where k = 1 is the largest cluster in the system, k = 2 is the second largest cluster and so on. At low supersaturation, the size of the largest liquid-like cluster acts as a suitable order parameter. At large supersaturation, the free energy barrier for the largest liquid-like cluster disappears. We identify this supersaturation as the one at the onset of kinetic spinodal. The kinetic spinodal is system-size-dependent. Beyond kinetic spinodal many clusters grow simultaneously and competitively and hence the nucleation and growth become collective. In order to describe collective growth, we need to consider the full set of order parameters. We derive an analytic expression for the free energy of formation of the kth largest cluster. The expression predicts that, at large metastability (beyond kinetic spinodal), the barrier of growth for several largest liquid-like clusters disappears, and all these clusters grow simultaneously. The approach to the critical size occurs by barrierless diffusion in the cluster size space. The expression for the rate of barrier crossing predicts weaker supersaturation dependence than what is predicted by CNT at large metastability. Such a crossover behavior has indeed been observed in recent experiments (but eluded an explanation till now). In order to understand the large numerical discrepancy between simulation predictions and experimental results, we carried out a study of the dependence on the range of intermolecular interactions of both the surface tension of an equilibrium planar gas-liquid interface and the free energy barrier of nucleation. Both are found to depend significantly on the range of interaction for the Lennard-Jones potential, both in two and three dimensions. The value of surface tension and also the free energy difference between the gas and the liquid phase increase significantly and converge only when the range of interaction is extended beyond 6-7 molecular diameters. We find, with the full range of interaction potential, that the surface tension shows only a weak dependence on supersaturation, so the reason for the breakdown of CNT (with simulated values of surface tension and free energy gap) cannot be attributed to the supersaturation dependence of surface tension. This remains an unsettled issue at present because of the use of the value of surface tension obtained at coexistence.
Spacecraft Dynamics and Control Program at AFRPL
NASA Technical Reports Server (NTRS)
Das, A.; Slimak, L. K. S.; Schloegel, W. T.
1986-01-01
A number of future DOD and NASA spacecraft such as the space based radar will be not only an order of magnitude larger in dimension than the current spacecraft, but will exhibit extreme structural flexibility with very low structural vibration frequencies. Another class of spacecraft (such as the space defense platforms) will combine large physical size with extremely precise pointing requirement. Such problems require a total departure from the traditional methods of modeling and control system design of spacecraft where structural flexibility is treated as a secondary effect. With these problems in mind, the Air Force Rocket Propulsion Laboratory (AFRPL) initiated research to develop dynamics and control technology so as to enable the future large space structures (LSS). AFRPL's effort in this area can be subdivided into the following three overlapping areas: (1) ground experiments, (2) spacecraft modeling and control, and (3) sensors and actuators. Both the in-house and contractual efforts of the AFRPL in LSS are summarized.
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
Abdulmalik, Jibril; Ani, Cornelius; Ajuwon, Ademola J; Omigbodun, Olayinka
2016-01-01
Aggressive patterns of behavior often start early in childhood, and tend to remain stable into adulthood. The negative consequences include poor academic performance, disciplinary problems and encounters with the juvenile justice system. Early school intervention programs can alter this trajectory for aggressive children. However, there are no studies evaluating the feasibility of such interventions in Africa. This study therefore, assessed the effect of group-based problem-solving interventions on aggressive behaviors among primary school pupils in Ibadan, Nigeria. This was an intervention study with treatment and wait-list control groups. Two public primary schools in Ibadan Nigeria were randomly allocated to an intervention group and a waiting list control group. Teachers rated male Primary five pupils in the two schools on aggressive behaviors and the top 20 highest scorers in each school were selected. Pupils in the intervention school received 6 twice-weekly sessions of group-based intervention, which included problem-solving skills, calming techniques and attribution retraining. Outcome measures were; teacher rated aggressive behaviour (TRAB), self-rated aggression scale (SRAS), strengths and difficulties questionnaire (SDQ), attitude towards aggression questionnaire (ATAQ), and social cognition and attribution scale (SCAS). The participants were aged 12 years (SD = 1.2, range 9-14 years). Both groups had similar socio-demographic backgrounds and baseline measures of aggressive behaviors. Controlling for baseline scores, the intervention group had significantly lower scores on TRAB and SRAS 1-week post intervention with large Cohen's effect sizes of 1.2 and 0.9 respectively. The other outcome measures were not significantly different between the groups post-intervention. Group-based problem solving intervention for aggressive behaviors among primary school students showed significant reductions in both teachers' and students' rated aggressive behaviours with large effect sizes. However, this was a small exploratory trial whose findings may not be generalizable, but it demonstrates that psychological interventions for children with high levels of aggressive behaviour are feasible and potentially effective in Nigeria.
Concerted control of Escherichia coli cell division
Osella, Matteo; Nugent, Eileen; Cosentino Lagomarsino, Marco
2014-01-01
The coordination of cell growth and division is a long-standing problem in biology. Focusing on Escherichia coli in steady growth, we quantify cell division control using a stochastic model, by inferring the division rate as a function of the observable parameters from large empirical datasets of dividing cells. We find that (i) cells have mechanisms to control their size, (ii) size control is effected by changes in the doubling time, rather than in the single-cell elongation rate, (iii) the division rate increases steeply with cell size for small cells, and saturates for larger cells. Importantly, (iv) the current size is not the only variable controlling cell division, but the time spent in the cell cycle appears to play a role, and (v) common tests of cell size control may fail when such concerted control is in place. Our analysis illustrates the mechanisms of cell division control in E. coli. The phenomenological framework presented is sufficiently general to be widely applicable and opens the way for rigorous tests of molecular cell-cycle models. PMID:24550446
How number line estimation skills relate to neural activations in single digit subtraction problems
Berteletti, I.; Man, G.; Booth, J.R.
2014-01-01
The Number Line (NL) task requires judging the relative numerical magnitude of a number and estimating its value spatially on a continuous line. Children's skill on this task has been shown to correlate with and predict future mathematical competence. Neurofunctionally, this task has been shown to rely on brain regions involved in numerical processing. However, there is no direct evidence that performance on the NL task is related to brain areas recruited during arithmetical processing and that these areas are domain-specific to numerical processing. In this study, we test whether 8- to 14-year-old's behavioral performance on the NL task is related to fMRI activation during small and large single-digit subtraction problems. Domain-specific areas for numerical processing were independently localized through a numerosity judgment task. Results show a direct relation between NL estimation performance and the amount of the activation in key areas for arithmetical processing. Better NL estimators showed a larger problem size effect than poorer NL estimators in numerical magnitude (i.e., intraparietal sulcus) and visuospatial areas (i.e., posterior superior parietal lobules), marked by less activation for small problems. In addition, the direction of the activation with problem size within the IPS was associated to differences in accuracies for small subtraction problems. This study is the first to show that performance in the NL task, i.e. estimating the spatial position of a number on an interval, correlates with brain activity observed during single-digit subtraction problem in regions thought to be involved numerical magnitude and spatial processes. PMID:25497398
GPU accelerated fuzzy connected image segmentation by using CUDA.
Zhuge, Ying; Cao, Yong; Miller, Robert W
2009-01-01
Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets with small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 7.2x, 7.3x, and 14.4x, correspondingly, for the three data sets over the sequential implementation of fuzzy connected image segmentation algorithm on CPU.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive in their occurrence with applications ranging from system biology, environment monitoring, sensor networks, and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamic/interactions that require more and more computational demanding methods for their analysis and control design, as well as the network size and node system/interaction complexity increase. Therefore, it is a challenging problem to find scalable computational method for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the toolbox of MATLAB. The stabilisability of each node dynamic is a sufficient assumption to design a global stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MAS by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirement in the case of weakly heterogeneous MASs, which is a common scenario in real application where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is to allow to move from a centralised towards a distributed computing architecture so that the expensive computation workload spent to solve LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach than the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with the existing approaches.
Large fully retractable telescope enclosures still closable in strong wind
NASA Astrophysics Data System (ADS)
Bettonvil, Felix C. M.; Hammerschlag, Robert H.; Jägers, Aswin P. L.; Sliepen, Guus
2008-07-01
Two prototypes of fully retractable enclosures with diameters of 7 and 9 m have been built for the high-resolution solar telescopes DOT (Dutch Open Telescope) and GREGOR, both located at the Canary Islands. These enclosures protect the instruments for bad weather and are fully open when the telescopes are in operation. The telescopes and enclosures also operate in hard wind. The prototypes are based on tensioned membrane between movable but stiff bows, which fold together to a ring when opened. The height of the ring is small. The prototypes already survived several storms, with often snow and ice, without any damage, including hurricane Delta with wind speeds up to 68 m/s. The enclosures can still be closed and opened with wind speeds of 20 m/s without any problems or restrictions. The DOT successfully demonstrated the open, wind-flushing concept for astronomical telescopes. It is now widely recognized that also large future telescopes benefit from wind-flushing and retractable enclosures. These telescopes require enclosures with diameters of 30 m until roughly 100 m, the largest sizes for the ELTs (Extreme Large Telescopes), which will be built in the near future. We discuss developments and required technology for the realization of these large sizes.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases atmore » least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.« less
NASA Astrophysics Data System (ADS)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l 1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPowermore » Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.« less
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have proven their efficiency on simple objects for a long time. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows fast and sufficiently precise domain RCS calculations to meet industry requirements in the domain of stealth.
Unequal-area, fixed-shape facility layout problems using the firefly algorithm
NASA Astrophysics Data System (ADS)
Ingole, Supriya; Singh, Dinesh
2017-07-01
In manufacturing industries, the facility layout design is a very important task, as it is concerned with the overall manufacturing cost and profit of the industry. The facility layout problem (FLP) is solved by arranging the departments or facilities of known dimensions on the available floor space. The objective of this article is to implement the firefly algorithm (FA) for solving unequal-area, fixed-shape FLPs and optimizing the costs of total material handling and transportation between the facilities. The FA is a nature-inspired algorithm and can be used for combinatorial optimization problems. Benchmark problems from the previous literature are solved using the FA. To check its effectiveness, it is implemented to solve large-sized FLPs. Computational results obtained using the FA show that the algorithm is less time consuming and the total layout costs for FLPs are better than the best results achieved so far.
Fault tolerance of artificial neural networks with applications in critical systems
NASA Technical Reports Server (NTRS)
Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.
1992-01-01
This paper investigates the fault tolerance characteristics of time continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operations and performance of these networks are first illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults is demonstrated and the cause for the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN under the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault tolerant network are discussed.
NASA Astrophysics Data System (ADS)
Wang, Wenlong; Mandrà, Salvatore; Katzgraber, Helmut
We propose a patch planting heuristic that allows us to create arbitrarily-large Ising spin-glass instances on any topology and with any type of disorder, and where the exact ground-state energy of the problem is known by construction. By breaking up the problem into patches that can be treated either with exact or heuristic solvers, we can reconstruct the optimum of the original, considerably larger, problem. The scaling of the computational complexity of these instances with various patch numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and quantum annealing on the D-Wave 2X quantum annealer. The method can be useful for benchmarking of novel computing technologies and algorithms. NSF-DMR-1208046 and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No. FA8721-05-C-0002.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast- sparse null-space pursuit (SNP)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J
2011-06-01
Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure, which affects this trait and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes and samples sizes (using two-stage least squares regression on 10,000 data sets-Stage 1 is a regression of exposure on the variant. Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases, but alleviates weak IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-upsmore » that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-upsmore » that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.« less
Experiments in structural dynamics and control using a grid
NASA Technical Reports Server (NTRS)
Montgomery, R. C.
1985-01-01
Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large scale structures are not testable in ground based facilities, the decision on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at solving problems of inadequate modelling knowledge prior to operation required to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a method of promise that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work. They are: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? Also, how well must the mathematical model used in on-board analytic redundancy be known and what are the reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed component environment?
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong
2016-06-01
Training feedforward neural networks (FNNs) is one of the most critical issues in FNNs studies. However, most FNNs training methods cannot be directly applied for very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in hidden feature space of FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using L2-norm penalty-based ε-insensitive function is formulated and the parameters in the hidden nodes are generated randomly independent of the training sets. Moreover, the learning of parameters in its output layer is proved equivalent to a special CCMEB problem in FNN hidden feature space. As most CCMEB approximation based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: The maximal training time of the HFSR-GCVM training is linear with the size of training datasets and the maximal space consumption is independent of the size of training datasets. The experiments on regression tasks confirm the above conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...
2015-06-08
Lattice spin-fermion models are quite important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the “spins,” are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The “traveling cluster approximation” (TCA)more » is a real space variant of the ED + MC method that allows to solve spin-fermion problems on lattice sizes with up to 10 3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. Finally, this allows us to solve generic spin-fermion models easily on 10 4 lattice sites and with some effort on 10 5 lattice sites, representing the record lattice sizes studied for this family of models.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris
Lattice spin-fermion models are quite important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the “spins,” are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The “traveling cluster approximation” (TCA)more » is a real space variant of the ED + MC method that allows to solve spin-fermion problems on lattice sizes with up to 10 3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. Finally, this allows us to solve generic spin-fermion models easily on 10 4 lattice sites and with some effort on 10 5 lattice sites, representing the record lattice sizes studied for this family of models.« less
Carr, Alan; Hartnett, Dan; Brosnan, Eileen; Sharry, John
2017-09-01
Parents Plus (PP) programs are systemic, solution-focused, group-based interventions. They are designed for delivery in clinical and community settings as treatment programs for families with child-focused problems, such as behavioral difficulties, disruptive behavior disorders, and emotional disorders in young people with and without developmental disabilities. PP programs have been developed for families of preschoolers, preadolescent children, and teenagers, as well as for separated or divorced families. Seventeen evaluation studies involving over 1,000 families have shown that PP programs have a significant impact on child behavior problems, goal attainment, and parental satisfaction and stress. The effect size of 0.57 (p < .001) from a meta-analysis of 10 controlled studies for child behavior problems compares favorably with those of meta-analyses of other well-established parent training programs with large evidence bases. In controlled studies, PP programs yielded significant (p < .001) effect sizes for goal attainment (d = 1.51), parental satisfaction (d = 0.78), and parental stress reduction (d = 0.54). PP programs may be facilitated by trained front-line mental health and educational professionals. © 2016 Family Process Institute.
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem.
2014-01-01
Background A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. Results This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. Conclusions The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem. PMID:24965213
Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.
Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres
2016-01-01
We present the results of a gamified mobile device arithmetic application which allowed us to collect vast amount of data in simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted by the fact that it is easier to perform fewer additions of the largest number (i.e. 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (i.e. "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bares an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.
Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications
Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A.; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres
2016-01-01
We present the results of a gamified mobile device arithmetic application which allowed us to collect vast amount of data in simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted by the fact that it is easier to perform fewer additions of the largest number (i.e. 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (i.e. "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bares an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format. PMID:28033357
Ground-water flow in low permeability environments
Neuzil, Christopher E.
1986-01-01
Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well developed secondary permeability have groundwater flow sytems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of pertroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed. Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning. Because these transient flow analyses have largely been based on flow in experimental scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments generally have neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra long lived transient conditions. The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.
Meta-heuristic algorithm to solve two-sided assembly line balancing problems
NASA Astrophysics Data System (ADS)
Wirawan, A. D.; Maruf, A.
2016-02-01
Two-sided assembly line is a set of sequential workstations where task operations can be performed at two sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper propose a Decoding Algorithm with Teaching-Learning Based Optimization (TLBO), a recently developed nature-inspired search method to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated-workstations for the given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated in an Indonesia's multinational manufacturing company is considered as the object of this paper. The result of the proposed algorithm shows reduction of workstations and indicates that there is negative correlation between the emergence point of objective function value and the size of population used.
Compression-RSA technique: A more efficient encryption-decryption procedure
NASA Astrophysics Data System (ADS)
Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah
2014-06-01
The efficiency of encryption-decryption procedures has become a major problem in asymmetric cryptography. Compression-RSA technique is developed to overcome the efficiency problem by compressing the numbers of kplaintext, where k∈Z+ and k > 2, becoming only 2 plaintext. That means, no matter how large the numbers of plaintext, they will be compressed to only 2 plaintext. The encryption-decryption procedures are expected to be more efficient since these procedures only receive 2 inputs to be processed instead of kinputs. However, it is observed that as the numbers of original plaintext are increasing, the size of the new plaintext becomes bigger. As a consequence, it will probably affect the efficiency of encryption-decryption procedures, especially for RSA cryptosystem since both of its encryption-decryption procedures involve exponential operations. In this paper, we evaluated the relationship between the numbers of original plaintext and the size of the new plaintext. In addition, we conducted several experiments to show that the RSA cryptosystem with embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. The 3D EIT system with a large number of measurement data can produce a large size of Jacobian matrix; this could cause difficulties in computer storage and the inversion process. One of challenges in 3D EIT is to decrease the reconstruction time and memory usage, at the same time retaining the image quality. Firstly, a sparse matrix reduction technique is proposed using thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the element with zeros would be eliminated, which results in a saving of memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. Sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction in reconstruction results.
NASA Astrophysics Data System (ADS)
Ngamroo, Issarachai
2010-12-01
It is well known that the superconducting magnetic energy storage (SMES) is able to quickly exchange active and reactive power with the power system. The SMES is expected to be the smart storage device for power system stabilization. Although the stabilizing effect of SMES is significant, the SMES is quite costly. Particularly, the superconducting magnetic coil size which is the essence of the SMES, must be carefully selected. On the other hand, various generation and load changes, unpredictable network structure, etc., cause system uncertainties. The power controller of SMES which is designed without considering such uncertainties, may not tolerate and loses stabilizing effect. To overcome these problems, this paper proposes the new design of robust SMES controller taking coil size and system uncertainties into account. The structure of the active and reactive power controllers is the 1st-order lead-lag compensator. No need for the exact mathematical representation, system uncertainties are modeled by the inverse input multiplicative perturbation. Without the difficulty of the trade-off of damping performance and robustness, the optimization problem of control parameters is formulated. The particle swarm optimization is used for solving the optimal parameters at each coil size automatically. Based on the normalized integral square error index and the consideration of coil current constraint, the robust SMES with the smallest coil size which still provides the satisfactory stabilizing effect, can be achieved. Simulation studies in the two-area four-machine interconnected power system show the superior robustness of the proposed robust SMES with the smallest coil size under various operating conditions over the non-robust SMES with large coil size.
1991-11-19
grew 253 percent, net assets grew 87 vigorous debates among economists a few years ago, has percent, fixed assets grew 155 percent, and average been...although enterprises. they only account for 2.7 percent of all industrial enter- prises, they possess two-thirds of all fixed assess, account If we are to...large- ther fiscal problems are handled on an ad-hoc basis. A and medium-sized enterprises do not appear strong fixed base number in contracts sets taxes
Unmanned Aerial Vehicle Operational Test and Evaluation Lessons Learned
2003-12-01
prevented during the test design phase. Test designers should ensure that the appropriate data can be collected in sample sizes large enough to support...encountered during previous tests in an attempt to prevent them from occurring in future tests. The focus of this paper is on UAVs acquired to perform...CHAPTER III TEST DESIGN III. TEST DESIGN Many of the problems encountered during UAV OT could have been prevented during the test
ERIC Educational Resources Information Center
Tatner, Mary; Tierney, Anne
2016-01-01
The development and evaluation of a two-week laboratory class, based on the diagnosis of human infectious diseases, is described. It can be easily scaled up or down, to suit class sizes from 50 to 600 and completed in a shorter time scale, and to different audiences as desired. Students employ a range of techniques to solve a real-life and…
A multi-product green supply chain under government supervision with price and demand uncertainty
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Zamani, Soma
2018-05-01
In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on green supply chain. This problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels. The problem is also regard uncertainties in market demand and sale price of raw materials and products. The model is further transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions. Genetic algorithm is applied as a solution methodology to solve nonlinear programming model. Finally, to investigate the validity of the proposed method, the computational results obtained through genetic algorithm are compared with global optimal solution attained by enumerative method. Analytical results indicate that the proposed GA offers better solutions in large size problems. Also, we conclude that financial intervention by government consists of green taxation and subsidization is an effective method to stabilize green supply chain members' performance.
Gini, Gianluca; Card, Noel A; Pozzoli, Tiziana
2018-03-01
This meta-analysis examined the associations between cyber-victimization and internalizing problems controlling for the occurrence of traditional victimization. Twenty independent samples with a total of 90,877 participants were included. Results confirmed the significant intercorrelation between traditional and cyber-victimization (r = .43). They both have medium-to-large bivariate correlations with internalizing problems. Traditional victimization (sr = .22) and cyber-victimization (sr = .12) were also uniquely related to internalizing problems. The difference in the relations between each type of victimization and internalizing problems was small (differential d = .06) and not statistically significant (p = .053). Moderation of these effect sizes by sample characteristics (e.g., age and proportion of girls) and study features (e.g., whether a definition of bullying was provided to participants and the time frame used as reference) was investigated. Results are discussed within the extant literature on cyber-aggression and cyber-victimization and future directions are proposed. © 2017 Wiley Periodicals, Inc.
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen andmore » Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. In conclusion, the first approximation algorithm for DamkS has an approximation ratio of n-1/k-1, where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O (n δ), for some δ < 1/3.« less
Bhaskar, Anand; Song, Yun S
2014-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the "folded" SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes' rule of signs for polynomials to the Laplace transform of piecewise continuous functions.
Bhaskar, Anand; Song, Yun S.
2016-01-01
The sample frequency spectrum (SFS) is a widely-used summary statistic of genomic variation in a sample of homologous DNA sequences. It provides a highly efficient dimensional reduction of large-scale population genomic data and its mathematical dependence on the underlying population demography is well understood, thus enabling the development of efficient inference algorithms. However, it has been recently shown that very different population demographies can actually generate the same SFS for arbitrarily large sample sizes. Although in principle this nonidentifiability issue poses a thorny challenge to statistical inference, the population size functions involved in the counterexamples are arguably not so biologically realistic. Here, we revisit this problem and examine the identifiability of demographic models under the restriction that the population sizes are piecewise-defined where each piece belongs to some family of biologically-motivated functions. Under this assumption, we prove that the expected SFS of a sample uniquely determines the underlying demographic model, provided that the sample is sufficiently large. We obtain a general bound on the sample size sufficient for identifiability; the bound depends on the number of pieces in the demographic model and also on the type of population size function in each piece. In the cases of piecewise-constant, piecewise-exponential and piecewise-generalized-exponential models, which are often assumed in population genomic inferences, we provide explicit formulas for the bounds as simple functions of the number of pieces. Lastly, we obtain analogous results for the “folded” SFS, which is often used when there is ambiguity as to which allelic type is ancestral. Our results are proved using a generalization of Descartes’ rule of signs for polynomials to the Laplace transform of piecewise continuous functions. PMID:28018011
Occupancy in continuous habitat
Efford, Murray G.; Dawson, Deanna K.
2012-01-01
The probability that a site has at least one individual of a species ('occupancy') has come to be widely used as a state variable for animal population monitoring. The available statistical theory for estimation when detection is imperfect applies particularly to habitat patches or islands, although it is also used for arbitrary plots in continuous habitat. The probability that such a plot is occupied depends on plot size and home-range characteristics (size, shape and dispersion) as well as population density. Plot size is critical to the definition of occupancy as a state variable, but clear advice on plot size is missing from the literature on the design of occupancy studies. We describe models for the effects of varying plot size and home-range size on expected occupancy. Temporal, spatial, and species variation in average home-range size is to be expected, but information on home ranges is difficult to retrieve from species presence/absence data collected in occupancy studies. The effect of variable home-range size is negligible when plots are very large (>100 x area of home range), but large plots pose practical problems. At the other extreme, sampling of 'point' plots with cameras or other passive detectors allows the true 'proportion of area occupied' to be estimated. However, this measure equally reflects home-range size and density, and is of doubtful value for population monitoring or cross-species comparisons. Plot size is ill-defined and variable in occupancy studies that detect animals at unknown distances, the commonest example being unlimited-radius point counts of song birds. We also find that plot size is ill-defined in recent treatments of "multi-scale" occupancy; the respective scales are better interpreted as temporal (instantaneous and asymptotic) rather than spatial. Occupancy is an inadequate metric for population monitoring when it is confounded with home-range size or detection distance.
View of Pakistan Atomic Energy Commission towards SMPR's in the light of KANUPP performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huseini, S.D.
1985-01-01
The developing countries in general do not have grid capacities adequate enough to incorporate standard size, economic but rather large nuclear power plants for maximum advantage. Therefore, small and medium size reactors (SMPR) have been and still are, of particular interest to the developing countries in spite of certain known problems with these reactors. Pakistan Atomic Energy Commission (PAEC) has been operating a CANDU type of a small PHWR plant since 1971 when it was connected to the local Karachi grid. This paper describes PAEC's view in the light of KANUPP performance with respect to such factors associated with SMPR'smore » as selection of suitable reactor size and type, its operation in a grid of small capacity, flexibility of operation and its role as a reliable source of electrical power.« less
NASA Astrophysics Data System (ADS)
Rasthofer, U.; Wall, W. A.; Gravemeier, V.
2018-04-01
A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach in conjunction with a sharp representation of the discontinuities at the interface between two bulk fluids promise high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width in this particular application.
The rise of agrarian capitalism and the decline of family farming in England.
Shaw-Taylor, Leigh
2012-01-01
Historians have documented rising farm sizes throughout the period 1450–1850. Existing studies have revealed much about the mechanisms underlying the development of agrarian capitalism. However, we currently lack any consensus as to when the critical developments occurred. This is largely due to the absence of sufficiently large and geographically wide-ranging datasets but is also attributable to conceptual weaknesses in much of the literature. This article develops a new approach to the problem and argues that agrarian capitalism was dominant in southern and eastern England by 1700 but that in northern England the critical developments came later.
Tips for safety in endoscopic submucosal dissection for colorectal tumors
Naito, Yuji; Murakami, Takaaki; Hirose, Ryohei; Ogiso, Kiyoshi; Inada, Yutaka; Abdul Rani, Rafiz; Kishimoto, Mitsuo; Nakanishi, Masayoshi; Itoh, Yoshito
2017-01-01
In Japan, endoscopic submucosal dissection (ESD) becomes one of standard therapies for large colorectal tumors. Recently, the efficacy of ESD has been reported all over the world. However, it is still difficult even for Japanese experts in some situations. Right-sided location, large tumor size, high degree of fibrosis, difficult manipulation is related with the difficulty. However, improvements on ESD devices, suitable strategies, and increase of operators’ experiences enable us to solve these problems. In this chapter, we introduce recent topics about various difficult factors of colorectal ESD and the tips such as strategy, devices, injection, and traction method [Pocket-creation method (PCM) etc.]. PMID:28616400
CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres
NASA Astrophysics Data System (ADS)
Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli
2017-09-01
CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques allows to efficiently address large electrodynamic problems (>104 scatterers) on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 105 particles.
Richter, Jörg
2015-04-01
Methods to assess intervention progress and outcome for frequent use are needed. To provide preliminary information about psychometric properties for the Norwegian version of the Brief Problems Monitor. Cronbach's alpha scores and intra-class correlation coefficients as indicators for internal consistency (reliability) and Pearson correlation coefficients between corresponding subscales of the long and short ASEBA form versions as well as multiple regression coefficients to explore the predictive power of the reduced item-set related to the corresponding scale-scores of the long version were calculated in large, representative data sets of Norwegian children and adolescents. Cronbach's alpha scores of the Norwegian version of the BPM subscales varied between 0.67 (attention BPM-youth) and 0.88 (attention BPM-teacher) and between 0.90 (BPM-youth) and 0.96 (BPM-teacher) for its total problem score. Corresponding subscales from the long versions and the BPM as well as the total problems scores were closely correlated with coefficients of high effect size (all r > 0.80). The variance of the items of the BPM explained about three-quarters or more of the variance in the corresponding subscales of the long version. The Norwegian BPM has good psychometric properties in terms of 1) being acceptable to good internal consistency and in terms of 2) regression coefficients of high effect size from the BPM items to the problem-scale scores of the long versions as validity indicators. Its use in clinical practice and research can be recommended.
Managing Small Spacecraft Projects: Less is Not Easier
NASA Technical Reports Server (NTRS)
Barley, Bryan; Newhouse, Marilyn
2012-01-01
Managing small, low cost missions (class C or D) is not necessarily easier than managing a full flagship mission. Yet, small missions are typically considered easier to manage and used as a training ground for developing the next generation of project managers. While limited resources can be a problem for small missions, in reality most of the issues inherent in managing small projects are not the direct result of limited resources. Instead, problems encountered by managers of small spacecraft missions often derive from 1) the perception that managing small projects is easier if something is easier it needs less rigor and formality in execution, 2) the perception that limited resources necessitate or validate omitting standard management practices, 3) less stringent or unclear guidelines or policies for small projects, and 4) stakeholder expectations that are not consistent with the size and nature of the project. For example, the size of a project is sometimes used to justify not building a full, detailed integrated master schedule. However, while a small schedule slip may not be a problem for a large mission, it can indicate a serious problem for a small mission with a short development phase, highlighting the importance of the schedule for early identification of potential issues. Likewise, stakeholders may accept a higher risk posture early in the definition of a low-cost mission, but as launch approaches this acceptance may change. This presentation discusses these common misconceptions about managing small, low cost missions, the problems that can result, and possible solutions.
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-12-01
During the last decade, the stringent pressures from environmental and social requirements have spurred an interest in designing a reverse logistics (RL) network. The success of a logistics system may depend on the decisions of the facilities locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time window (LRPTW) and homogeneous fleet type and designing a multi-echelon, and capacitated reverse logistics network, are considered which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) for LRPTW in reverse logistic. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. Also, the present work is an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint.
16 CFR 1120.3 - Products deemed to be substantial product hazards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... equivalent to sizes 2T to 16: (i) Garments in girls' size Large (L) and boys' size Large (L) are equivalent to girls' or boys' size 12, respectively. Garments in girls' and boys' sizes smaller than Large (L... range of 2T to 12. (ii) Garments in girls' size Extra-Large (XL) and boys' size Extra-Large (XL) are...
16 CFR § 1120.3 - Products deemed to be substantial product hazards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... equivalent to sizes 2T to 16: (i) Garments in girls' size Large (L) and boys' size Large (L) are equivalent to girls' or boys' size 12, respectively. Garments in girls' and boys' sizes smaller than Large (L... range of 2T to 12. (ii) Garments in girls' size Extra-Large (XL) and boys' size Extra-Large (XL) are...
16 CFR 1120.3 - Products deemed to be substantial product hazards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... equivalent to sizes 2T to 16: (i) Garments in girls' size Large (L) and boys' size Large (L) are equivalent to girls' or boys' size 12, respectively. Garments in girls' and boys' sizes smaller than Large (L... range of 2T to 12. (ii) Garments in girls' size Extra-Large (XL) and boys' size Extra-Large (XL) are...
A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.
Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon
2012-10-03
We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.
A scalable method for identifying frequent subtrees in sets of large phylogenetic trees
2012-01-01
Background We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. Results We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Conclusions Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses. PMID:23033843
Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernán A.
2015-08-01
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
Influence maximization in complex networks through optimal percolation.
Morone, Flaviano; Makse, Hernán A
2015-08-06
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
Up to today, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver such as incomplete LU factorizations are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) are difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotula, Paul Gabriel; Brozik, Susan Marie; Achyuthan, Komandoor E.
Engineered nanomaterials (ENMs) are increasingly being used in commercial products, particularly in the biomedical, cosmetic, and clothing industries. For example, pants and shirts are routinely manufactured with silver nanoparticles to render them 'wrinkle-free.' Despite the growing applications, the associated environmental health and safety (EHS) impacts are completely unknown. The significance of this problem became pervasive within the general public when Prince Charles authored an article in 2004 warning of the potential social, ethical, health, and environmental issues connected to nanotechnology. The EHS concerns, however, continued to receive relatively little consideration from federal agencies as compared with large investments in basicmore » nanoscience R&D. The mounting literature regarding the toxicology of ENMs (e.g., the ability of inhaled nanoparticles to cross the blood-brain barrier; Kwon et al., 2008, J. Occup. Health 50, 1) has spurred a recent realization within the NNI and other federal agencies that the EHS impacts related to nanotechnology must be addressed now. In our study we proposed to address critical aspects of this problem by developing primary correlations between nanoparticle properties and their effects on cell health and toxicity. A critical challenge embodied within this problem arises from the ability to synthesize nanoparticles with a wide array of physical properties (e.g., size, shape, composition, surface chemistry, etc.), which in turn creates an immense, multidimensional problem in assessing toxicological effects. In this work we first investigated varying sizes of quantum dots (Qdots) and their ability to cross cell membranes based on their aspect ratio utilizing hyperspectral confocal fluorescence microscopy. We then studied toxicity of epithelial cell lines that were exposed to different sized gold and silver nanoparticles using advanced imaging techniques, biochemical analyses, and optical and mass spectrometry methods. Finally we evaluated a new assay to measure transglutaminase (TG) activity; a potential marker for cell toxicity.« less
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predicative capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestions with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computational expensive to solve since each subproblem is an smaller integer programming problem that pursues integer solutions. Solving an integer programing problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
Alem, Meseret; Enawgaw, Bamlaku
2014-01-01
Background. Anaemia is a global public health problem which has an eminence impact on pregnant mother. The aim of this study was to assess the prevalence and predictors of maternal anemia. Method. A cross-sectional study was conducted from March 1 to April 30, 2012, on 302 pregnant women who attended antenatal care at Gondar University Hospital. Interview-based questionnaire, clinical history, and laboratory tests were used to obtain data. Bivariate and multivariate logistic regression was used to identify predictors. Result. The prevalence of anemia was 16.6%. Majority were mild type (64%) and morphologically normocytic normochromic (76%) anemia. Anemia was high at third trimester (18.9%). Low family income (AOR [95% CI] = 3.1 [1.19, 8.33]), large family size (AOR [95% CI] = 4.14 [4.13, 10.52]), hookworm infection (AOR [95% CI] = 2.72 [1.04, 7.25]), and HIV infection (AOR [95% CI] = 5.75 [2.40, 13.69]) were independent predictors of anemia. Conclusion. The prevalence of anemia was high; mild type and normocytic normochromic anemia was dominant. Low income, large family size, hookworm infection, and HIV infection were associated with anemia. Hence, efforts should be made for early diagnosis and management of HIV and hookworm infection with special emphasis on those having low income and large family size. PMID:24669317
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between delectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
Comparing NetCDF and SciDB on managing and querying 5D hydrologic dataset
NASA Astrophysics Data System (ADS)
Liu, Haicheng; Xiao, Xiao
2016-11-01
Efficiently extracting information from high dimensional hydro-meteorological modelling datasets requires smart solutions. Traditional methods are mostly based on files, which can be edited and accessed handily. But they have problems of efficiency due to contiguous storage structure. Others propose databases as an alternative for advantages such as native functionalities for manipulating multidimensional (MD) arrays, smart caching strategy and scalability. In this research, NetCDF file based solutions and the multidimensional array database management system (DBMS) SciDB applying chunked storage structure are benchmarked to determine the best solution for storing and querying 5D large hydrologic modelling dataset. The effect of data storage configurations including chunk size, dimension order and compression on query performance is explored. Results indicate that dimension order to organize storage of 5D data has significant influence on query performance if chunk size is very large. But the effect becomes insignificant when chunk size is properly set. Compression of SciDB mostly has negative influence on query performance. Caching is an advantage but may be influenced by execution of different query processes. On the whole, NetCDF solution without compression is in general more efficient than the SciDB DBMS.
Control of large space structures
NASA Technical Reports Server (NTRS)
Gran, R.; Rossi, M.; Moyer, H. G.; Austin, F.
1979-01-01
The control of large space structures was studied to determine what, if any, limitations are imposed on the size of spacecraft which may be controlled using current control system design technology. Using a typical structure in the 35 to 70 meter size category, a control system design that used actuators that are currently available was designed. The amount of control power required to maintain the vehicle in a stabilized gravity gradient pointing orientation that also damped various structural motions was determined. The moment of inertia and mass properties of this structure were varied to verify that stability and performance were maintained. The study concludes that the structure's size is required to change by at least a factor of two before any stability problems arise. The stability margin that is lost is due to the scaling of the gravity gradient torques (the rigid body control) and as such can easily be corrected by changing the control gains associated with the rigid body control. A secondary conclusion from the study is that the control design that accommodates the structural motions (to damp them) is a little more sensitive than the design that works on attitude control of the rigid body only.
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
2009-05-01
debris removal without restoration is deployed. Conduct a controlled field study of restoration activity, for example, along the Wabash ...hazardous metals including chromium, cadmium , lead and mercury (MDEQ 2008; Thibodeau 2002). 3. Batteries in electronics and computers may contain lead...mercury, nickel and cadmium . Appliances Appliances are a problem mainly due to their large size, creating issues with loading, hauling, and
Fourier analysis of human soft tissue facial shape: sex differences in normal adults.
Ferrario, V F; Sforza, C; Schmitz, J H; Miani, A; Taroni, G
1995-01-01
Sexual dimorphism in human facial form involves both size and shape variations of the soft tissue structures. These variations are conventionally appreciated using linear and angular measurements, as well as ratios, taken from photographs or radiographs. Unfortunately this metric approach provides adequate quantitative information about size only, eluding the problems of shape definition. Mathematical methods such as the Fourier series allow a correct quantitative analysis of shape and of its changes. A method for the reconstruction of outlines starting from selected landmarks and for their Fourier analysis has been developed, and applied to analyse sex differences in shape of the soft tissue facial contour in a group of healthy young adults. When standardised for size, no sex differences were found between both cosine and sine coefficients of the Fourier series expansion. This shape similarity was largely overwhelmed by the very evident size differences and it could be measured only using the proper mathematical methods. PMID:8586558
Divided and Sliding Superficial Temporal Artery Flap for Primary Donor-site Closure
Sugio, Yuta; Seike, Shien; Hosokawa, Ko
2016-01-01
Summary: Superficial temporal artery (STA) flaps are often used for reconstruction of hair-bearing areas. However, primary closure of the donor site is not easy when the size of the necessary skin island is relatively large. In such cases, skin grafts are needed at the donor site, resulting in baldness. We have solved this issue by applying the divided and sliding flap technique, which was first reported for primary donor-site closure of a latissimus dorsi musculocutaneous flap. We applied this technique to the hair-bearing STA flap, where primary donor-site closure is extremely beneficial for preventing baldness consequent to skin grafting. The STA flap was divided into 3, and creation of large flap was possible. Therefore, we concluded that the divided and sliding STA flap could at least partially solve the donor-site problem. Although further investigation is necessary to validate the maximum possible flap size, this technique may be applicable to at least small defects that are common after skin cancer ablation or trauma. PMID:27975020
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strongmore » element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.« less
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small'number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continues surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require 0(n(exp 3)) time and 0(n(exp 2)) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method, for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that need to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi
2018-01-01
Demonstrability-the extent to which group members can recognize a correct solution to a problem-has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors-the difficulty of solving a problem and the difficulty of verifying the correctness of a solution-on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.
Best Bang for the Buck: Part 1 – The Size of Experiments Relative to Design Performance
Anderson-Cook, Christine Michaela; Lu, Lu
2016-10-01
There are many choices to make, when designing an experiment for a study, such as: what design factors to consider, which levels of the factors to use and which model to focus on. One aspect of design, however, is often left unquestioned: the size of the experiment. When learning about design of experiments, problems are often posed as "select a design for a particular objective with N runs." It’s tempting to consider the design size as a given constraint in the design-selection process. If you think of learning through designed experiments as a sequential process, however, strategically planning for themore » use of resources at different stages of data collection can be beneficial: Saving experimental runs for later is advantageous if you can efficiently learn with less in the early stages. Alternatively, if you’re too frugal in the early stages, you might not learn enough to proceed confidently with the next stages. Therefore, choosing the right-sized experiment is important—not too large or too small, but with a thoughtful balance to maximize the knowledge gained given the available resources. It can be a great advantage to think about the design size as flexible and include it as an aspect for comparisons. Sometimes you’re asked to provide a small design that is too ambitious for the goals of the study. Finally, if you can show quantitatively how the suggested design size might be inadequate or lead to problems during analysis—and also offer a formal comparison to some alternatives of different (likely larger) sizes—you may have a better chance to ask for additional resources to deliver statistically sound and satisfying results« less
Electrophoresis demonstration on Apollo 16
NASA Technical Reports Server (NTRS)
Snyder, R. S.
1972-01-01
Free fluid electrophoresis, a process used to separate particulate species according to surface charge, size, or shape was suggested as a promising technique to utilize the near zero gravity condition of space. Fluid electrophoresis on earth is disturbed by gravity-induced thermal convection and sedimentation. An apparatus was developed to demonstrate the principle and possible problems of electrophoresis on Apollo 14 and the separation boundary between red and blue dye was photographed in space. The basic operating elements of the Apollo 14 unit were used for a second flight demonstration on Apollo 16. Polystyrene latex particles of two different sizes were used to simulate the electrophoresis of large biological particles. The particle bands in space were extremely stable compared to ground operation because convection in the fluid was negligible. Electrophoresis of the polystyrene latex particle groups according to size was accomplished although electro-osmosis in the flight apparatus prevented the clear separation of two particle bands.
Strategy alternatives for homeland air and cruise missile defense.
Murphy, Eric M; Payne, Michael D; Vanderwoude, Glenn W
2010-10-01
Air and cruise missile defense of the U.S. homeland is characterized by a requirement to protect a large number of critical assets nonuniformly dispersed over a vast area with relatively few defensive systems. In this article, we explore strategy alternatives to make the best use of existing defense resources and suggest this approach as a means of reducing risk while mitigating the cost of developing and acquiring new systems. We frame the issue as an attacker-defender problem with simultaneous moves. First, we outline and examine the relatively simple problem of defending comparatively few locations with two surveillance systems. Second, we present our analysis and findings for a more realistic scenario that includes a representative list of U.S. critical assets. Third, we investigate sensitivity to defensive strategic choices in the more realistic scenario. As part of this investigation, we describe two complementary computational methods that, under certain circumstances, allow one to reduce large computational problems to a more manageable size. Finally, we demonstrate that strategic choices can be an important supplement to material solutions and can, in some cases, be a more cost-effective alternative. © 2010 Society for Risk Analysis.
Application of PDSLin to the magnetic reconnection problem
NASA Astrophysics Data System (ADS)
Yuan, Xuefei; Li, Xiaoye S.; Yamazaki, Ichitaro; Jardin, Stephen C.; Koniges, Alice E.; Keyes, David E.
2013-01-01
Magnetic reconnection is a fundamental process in a magnetized plasma at both low and high magnetic Lundquist numbers (the ratio of the resistive diffusion time to the Alfvén wave transit time), which occurs in a wide variety of laboratory and space plasmas, e.g. magnetic fusion experiments, the solar corona and the Earth's magnetotail. An implicit time advance for the two-fluid magnetic reconnection problem is known to be difficult because of the large condition number of the associated matrix. This is especially troublesome when the collisionless ion skin depth is large so that the Whistler waves, which cause the fast reconnection, dominate the physics (Yuan et al 2012 J. Comput. Phys. 231 5822-53). For small system sizes, a direct solver such as SuperLU can be employed to obtain an accurate solution as long as the condition number is bounded by the reciprocal of the floating-point machine precision. However, SuperLU scales effectively only to hundreds of processors or less. For larger system sizes, it has been shown that physics-based (Chacón and Knoll 2003 J. Comput. Phys. 188 573-92) or other preconditioners can be applied to provide adequate solver performance. In recent years, we have been developing a new algebraic hybrid linear solver, PDSLin (Parallel Domain decomposition Schur complement-based Linear solver) (Yamazaki and Li 2010 Proc. VECPAR pp 421-34 and Yamazaki et al 2011 Technical Report). In this work, we compare numerical results from a direct solver and the proposed hybrid solver for the magnetic reconnection problem and demonstrate that the new hybrid solver is scalable to thousands of processors while maintaining the same robustness as a direct solver.
Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernan; CUNY Collaboration; CUNY Collaboration
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524,65-68 (2015)
Cross-Identification of Astronomical Catalogs on Multiple GPUs
NASA Astrophysics Data System (ADS)
Lee, M. A.; Budavári, T.
2013-10-01
One of the most fundamental problems in observational astronomy is the cross-identification of sources. Observations are made in different wavelengths, at different times, and from different locations and instruments, resulting in a large set of independent observations. The scientific outcome is often limited by our ability to quickly perform meaningful associations between detections. The matching, however, is difficult scientifically, statistically, as well as computationally. The former two require detailed physical modeling and advanced probabilistic concepts; the latter is due to the large volumes of data and the problem's combinatorial nature. In order to tackle the computational challenge and to prepare for future surveys, whose measurements will be exponentially increasing in size past the scale of feasible CPU-based solutions, we developed a new implementation which addresses the issue by performing the associations on multiple Graphics Processing Units (GPUs). Our implementation utilizes up to 6 GPUs in combination with the Thrust library to achieve an over 40x speed up verses the previous best implementation running on a multi-CPU SQL Server.
An Energy-Efficient Mobile Sink-Based Unequal Clustering Mechanism for WSNs.
Gharaei, Niayesh; Abu Bakar, Kamalrulnizam; Mohd Hashim, Siti Zaiton; Hosseingholi Pourasl, Ali; Siraj, Mohammad; Darwish, Tasneem
2017-08-11
Network lifetime and energy efficiency are crucial performance metrics used to evaluate wireless sensor networks (WSNs). Decreasing and balancing the energy consumption of nodes can be employed to increase network lifetime. In cluster-based WSNs, one objective of applying clustering is to decrease the energy consumption of the network. In fact, the clustering technique will be considered effective if the energy consumed by sensor nodes decreases after applying clustering, however, this aim will not be achieved if the cluster size is not properly chosen. Therefore, in this paper, the energy consumption of nodes, before clustering, is considered to determine the optimal cluster size. A two-stage Genetic Algorithm (GA) is employed to determine the optimal interval of cluster size and derive the exact value from the interval. Furthermore, the energy hole is an inherent problem which leads to a remarkable decrease in the network's lifespan. This problem stems from the asynchronous energy depletion of nodes located in different layers of the network. For this reason, we propose Circular Motion of Mobile-Sink with Varied Velocity Algorithm (CM2SV2) to balance the energy consumption ratio of cluster heads (CH). According to the results, these strategies could largely increase the network's lifetime by decreasing the energy consumption of sensors and balancing the energy consumption among CHs.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems, using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG + code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations are distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000 . We also tested the code with the Brio-Wu shock tube simulations with model size of 800 employing up to 10 GPUs. Based on the test results, we observed speed ups and slow downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size could be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
A case study of middle size floating airports for shallower and deeper waters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Koichiro; Suzuki, Hideyuki; Nishigaki, Makoto
1996-12-31
Demands for large and middle size airports are expanding in Japan with continuous increase of air transportation. However these demands will not be satisfied without effective ocean space utilization. Most of the wide and shallower waters suitable for reclamation have already been reclaimed. Furthermore those shallower waters are generally close to the residential area, and noise and environmental problems will be caused if they were used for airports. Deeper waters, which are relatively distant from the shore, are suitable for airport but reclamation of these waters are extremely difficult. This paper presents a structural planning of an open sea typemore » middle size floating airport to promote local economy and also improve transportation infrastructure of isolated islands. The airports of this plan are a semisubmersible type floating structure with a relatively thin deck, a number of slender columns and large size lower hulls. The floating structure is moored by inclined tension legs to restrain the motion. The diameter of the leg becomes much larger compared with the legs of existing tension leg platforms. Parameters related to the configuration of the floating structure and the mooring system are determined by comparing analyses results with the proper design criteria. Several kinds of static and dynamic computer programs are used in the planning. The proposed structural plan and the mooring system are considered as a typical floating airport appropriate for the open sea.« less
Anisotropic modulus stabilisation: strings at LHC scales with micron-sized extra dimensions
NASA Astrophysics Data System (ADS)
Cicoli, M.; Burgess, C. P.; Quevedo, F.
2011-10-01
We construct flux-stabilised Type IIB string compactifications whose extra dimensions have very different sizes, and use these to describe several types of vacua with a TeV string scale. Because we can access regimes where two dimensions are hierarchically larger than the other four, we find examples where two dimensions are micron-sized while the other four are at the weak scale in addition to more standard examples with all six extra dimensions equally large. Besides providing ultraviolet completeness, the phenomenology of these models is richer than vanilla large-dimensional models in several generic ways: ( i) they are supersymmetric, with supersymmetry broken at sub-eV scales in the bulk but only nonlinearly realised in the Standard Model sector, leading to no MSSM superpartners for ordinary particles and many more bulk missing-energy channels, as in supersymmetric large extra dimensions (SLED); ( ii) small cycles in the more complicated extra-dimensional geometry allow some KK states to reside at TeV scales even if all six extra dimensions are nominally much larger; ( iii) a rich spectrum of string and KK states at TeV scales; and ( iv) an equally rich spectrum of very light moduli exist having unusually small (but technically natural) masses, with potentially interesting implications for cosmology and astrophysics that nonetheless evade new-force constraints. The hierarchy problem is solved in these models because the extra-dimensional volume is naturally stabilised at exponentially large values: the extra dimensions are Calabi-Yau geometries with a 4D K3 or T 4-fibration over a 2D base, with moduli stabilised within the well-established LARGE-Volume scenario. The new technical step is the use of poly-instanton corrections to the superpotential (which, unlike for simpler models, are likely to be present on K3 or T 4-fibered Calabi-Yau compactifications) to obtain a large hierarchy between the sizes of different dimensions. For several scenarios we identify the low-energy spectrum and briefly discuss some of their astrophysical, cosmological and phenomenological implications.
Carkovic, Athena B; Pastén, Pablo A; Bonilla, Carlos A
2015-04-15
Water erosion is a leading cause of soil degradation and a major nonpoint source pollution problem. Many efforts have been undertaken to estimate the amount and size distribution of the sediment leaving the field. Multi-size class water erosion models subdivide eroded soil into different sizes and estimate the aggregate's composition based on empirical equations derived from agricultural soils. The objective of this study was to evaluate these equations on soil samples collected from natural landscapes (uncultivated) and fire-affected soils. Chemical, physical, and soil fractions and aggregate composition analyses were performed on samples collected in the Chilean Patagonia and later compared with the equations' estimates. The results showed that the empirical equations were not suitable for predicting the sediment fractions. Fine particles, including primary clay, primary silt, and small aggregates (<53 μm) were over-estimated, and large aggregates (>53 μm) and primary sand were under-estimated. The uncultivated and fire-affected soils showed a reduced fraction of fine particles in the sediment, as clay and silt were mostly in the form of large aggregates. Thus, a new set of equations was developed for these soils, where small aggregates were defined as particles with sizes between 53 μm and 250 μm and large aggregates as particles>250 μm. With r(2) values between 0.47 and 0.98, the new equations provided better estimates for primary sand and large aggregates. The aggregate's composition was also well predicted, especially the silt and clay fractions in the large aggregates from uncultivated soils (r(2)=0.63 and 0.83, respectively) and the fractions of silt in the small aggregates (r(2)=0.84) and clay in the large aggregates (r(2)=0.78) from fire-affected soils. Overall, these new equations proved to be better predictors for the sediment and aggregate's composition in uncultivated and fire-affected soils, and they reduce the error when estimating soil loss in natural landscapes. Copyright © 2015 Elsevier B.V. All rights reserved.
Parallel computation with molecular-motor-propelled agents in nanofabricated networks.
Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V
2016-03-08
The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
A CCD experimental platform for large telescope in Antarctica based on FPGA
NASA Astrophysics Data System (ADS)
Zhu, Yuhua; Qi, Yongjun
2014-07-01
The CCD , as a detector , is one of the important components of astronomical telescopes. For a large telescope in Antarctica, a set of CCD detector system with large size, high sensitivity and low noise is indispensable. Because of the extremely low temperatures and unattended, system maintenance and software and hardware upgrade become hard problems. This paper introduces a general CCD controller experiment platform, using Field programmable gate array FPGA, which is, in fact, a large-scale field reconfigurable array. Taking the advantage of convenience to modify the system, construction of driving circuit, digital signal processing module, network communication interface, control algorithm validation, and remote reconfigurable module may realize. With the concept of integrated hardware and software, the paper discusses the key technology of building scientific CCD system suitable for the special work environment in Antarctica, focusing on the method of remote reconfiguration for controller via network and then offering a feasible hardware and software solution.
Anhydrite EOS and Phase Diagram in Relation to Shock Decomposition
NASA Technical Reports Server (NTRS)
Ivanov, B. A.; Langenhorst, F.; Deutsch, A.; Hornemann, U.
2004-01-01
In the context of the Chicxulub impact, it became recently obvious that experimental and theoretical research on the shock behavior of sulfates is essential for an assessment of the role of shock-released gases in the K/T mass extinction. The Chicxulub crater is the most important large impact structure where the bolide penetrated a sedimentary layer with large amounts of interbedded anhydrite (Haughton has also significant anhydrite in the target). The sulfuric gas production by shock compression/decompression of anhydrite is an important issue, even if the size of Chicxulub crater is only half of the so far assumed size. The comparison of experimental data for anhydrite, shocked with different techniques at various laboratories, reveals large differences in the threshold pressures for melting and decomposition. To gain insight into this issue, we have made a theoretical investigation of the thermodynamic properties of anhydrite. The project includes the review of data published in the last 40 years - reasons to study anhydrite cover a wide field of interests: from industrial problems of cement and ceramic production to the analysis of nuclear underground explosions in salt domes, conducted in the USA and USSR in the 1970th.
Jackin, Boaz Jessie; Watanabe, Shinpei; Ootsu, Kanemitsu; Ohkawa, Takeshi; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu
2018-04-20
A parallel computation method for large-size Fresnel computer-generated hologram (CGH) is reported. The method was introduced by us in an earlier report as a technique for calculating Fourier CGH from 2D object data. In this paper we extend the method to compute Fresnel CGH from 3D object data. The scale of the computation problem is also expanded to 2 gigapixels, making it closer to real application requirements. The significant feature of the reported method is its ability to avoid communication overhead and thereby fully utilize the computing power of parallel devices. The method exhibits three layers of parallelism that favor small to large scale parallel computing machines. Simulation and optical experiments were conducted to demonstrate the workability and to evaluate the efficiency of the proposed technique. A two-times improvement in computation speed has been achieved compared to the conventional method, on a 16-node cluster (one GPU per node) utilizing only one layer of parallelism. A 20-times improvement in computation speed has been estimated utilizing two layers of parallelism on a very large-scale parallel machine with 16 nodes, where each node has 16 GPUs.
Water insoluble and soluble lipids for gene delivery.
Mahato, Ram I
2005-04-05
Among various synthetic gene carriers currently in use, liposomes composed of cationic lipids and co-lipids remain the most efficient transfection reagents. Physicochemical properties of lipid/plasmid complexes, such as cationic lipid structure, cationic lipid to co-lipid ratio, charge ratio, particle size and zeta potential have significant influence on gene expression and biodistribution. However, most cationic lipids are toxic and cationic liposomes/plasmid complexes do not disperse well inside the target tissues because of their large particle size. To overcome the problems associated with cationic lipids, we designed water soluble lipopolymers for gene delivery to various cells and tissues. This review provides a critical discussion on how the components of water insoluble and soluble lipids affect their transfection efficiency and biodistribution of lipid/plasmid complexes.
Electronic shift register memory based on molecular electron-transfer reactions
NASA Technical Reports Server (NTRS)
Hopfield, J. J.; Onuchic, Jose Nelson; Beratan, David N.
1989-01-01
The design of a shift register memory at the molecular level is described in detail. The memory elements are based on a chain of electron-transfer molecules incorporated on a very large scale integrated (VLSI) substrate, and the information is shifted by photoinduced electron-transfer reactions. The design requirements for such a system are discussed, and several realistic strategies for synthesizing these systems are presented. The immediate advantage of such a hybrid molecular/VLSI device would arise from the possible information storage density. The prospect of considerable savings of energy per bit processed also exists. This molecular shift register memory element design solves the conceptual problems associated with integrating molecular size components with larger (micron) size features on a chip.
MBE Growth of HgCdTe on Large-Area Si and CdZnTe Wafers for SWIR, MWIR and LWIR Detection
NASA Astrophysics Data System (ADS)
Reddy, M.; Peterson, J. M.; Lofgreen, D. D.; Franklin, J. A.; Vang, T.; Smith, E. P. G.; Wehner, J. G. A.; Kasai, I.; Bangs, J. W.; Johnson, S. M.
2008-09-01
Molecular beam epitaxy (MBE) growth of HgCdTe on large-size Si (211) and CdZnTe (211)B substrates is critical to meet the demands of extremely uniform and highly functional third-generation infrared (IR) focal-panel arrays (FPAs). We have described here the importance of wafer maps of HgCdTe thickness, composition, and the macrodefects across the wafer not only to qualify material properties against design specifications but also to diagnose and classify the MBE-growth-related issues on large-area wafers. The paper presents HgCdTe growth with exceptionally uniform composition and thickness and record low macrodefect density on large Si wafers up to 6-in in diameter for the detection of short-wave (SW), mid-wave (MW), and long-wave (LW) IR radiation. We have also proposed a cost-effective approach to use the growth of HgCdTe on low-cost Si substrates to isolate the growth- and substrate-related problems that one occasionally comes across with the CdZnTe substrates and tune the growth parameters such as growth rate, cutoff wavelength ( λ cutoff) and doping parameters before proceeding with the growth on costly large-area CdZnTe substrates. In this way, we demonstrated HgCdTe growth on large CdZnTe substrates of size 7 cm × 7 cm with excellent uniformity and low macrodefect density.
2010-01-01
Background The vast sequence divergence among different virus groups has presented a great challenge to alignment-based analysis of virus phylogeny. Due to the problems caused by the uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment could not be directly applied to the whole-genome comparison and phylogenomic studies of viruses. There has been a growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has successfully been applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results In this paper, the DL method is used to analyze the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with large difference in genome size. The trees from our analyses are in good agreement to the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions The present method provides a new way for recovering the phylogeny of large dsDNA viruses and parvoviruses, and also some insights on the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used for recovering the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses with a much smaller genome size. PMID:20565983
Simultaneous optimization of loading pattern and burnable poison placement for PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alim, F.; Ivanov, K.; Yilmaz, S.
2006-07-01
To solve in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable for all types and geometry of PWR core structures with unlimited number of fuel assembly (FA) types in the inventory. For this reason an innovative genetic algorithm is developed with modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve themore » combined problem due to its large size. Separating the problem into two parts provides a practical way to solve the problem. However, the result of this method does not reflect the real optimal solution. GARCO-PSU achieves to solve LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)« less
Compton imaging tomography technique for NDE of large nonuniform structures
NASA Astrophysics Data System (ADS)
Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz
2011-09-01
In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Geophysical Anomalies and Earthquake Prediction
NASA Astrophysics Data System (ADS)
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.
Simultaneous personnel and vehicle shift scheduling in the waste management sector.
Ghiani, Gianpaolo; Guerriero, Emanuela; Manni, Andrea; Manni, Emanuele; Potenza, Agostino
2013-07-01
Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources, and having a major environmental impact. The design of a waste management system consists in various activities, and one of these is related to the definition of shift schedules for both personnel and vehicles. This activity has a great incidence on the tactical and operational cost for companies. In this paper, we propose an integer programming model to find an optimal solution to the integrated problem. The aim is to determine optimal schedules at minimum cost. Moreover, we design a fast and effective heuristic to face large-size problems. Both approaches are tested on data from a real-world case in Southern Italy and compared to the current practice utilized by the company managing the service, showing that simultaneously solving these problems can lead to significant monetary savings. Copyright © 2013 Elsevier Ltd. All rights reserved.
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
Layout optimization with algebraic multigrid methods
NASA Technical Reports Server (NTRS)
Regler, Hans; Ruede, Ulrich
1993-01-01
Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
NASA Astrophysics Data System (ADS)
Wang, Jun; Wang, Yang; Zeng, Hui
2016-01-01
A key issue to address in synthesizing spatial data with variable-support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions of a target support and prediction uncertainties, based on one or more measurements, while honoring measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can lead to computational challenge due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate these issues of spatial nonstationarity and alleviate computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated and aggregated to multiple supports and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable-support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the local-window geostatistical inverse modeling approach suggested offers a practical way to solve the well-known change-of-support problem and variable-support data fusion problem in spatial analysis and modeling.
Deighton, Jessica; Humphrey, Neil; Belsky, Jay; Boehnke, Jan; Vostanis, Panos; Patalay, Praveetha
2018-03-01
There is a growing appreciation that child functioning in different domains, levels, or systems are interrelated over time. Here, we investigate links between internalizing symptoms, externalizing problems, and academic attainment during middle childhood and early adolescence, drawing on two large data sets (child: mean age 8.7 at enrolment, n = 5,878; adolescent: mean age 11.7, n = 6,388). Using a 2-year cross-lag design, we test three hypotheses - adjustment erosion, academic incompetence, and shared risk - while also examining the moderating influence of gender. Multilevel structural equation models provided consistent evidence of the deleterious effect of externalizing problems on later academic achievement in both cohorts, supporting the adjustment-erosion hypothesis. Evidence supporting the academic-incompetence hypothesis was restricted to the middle childhood cohort, revealing links between early academic failure and later internalizing symptoms. In both cohorts, inclusion of shared-risk variables improved model fit and rendered some previously established cross-lag pathways non-significant. Implications of these findings are discussed, and study strengths and limitations noted. Statement of contribution What is already known on this subject? Longitudinal research and in particular developmental cascades literature make the case for weaker associations between internalizing symptoms and academic performance than between externalizing problems and academic performance. Findings vary in terms of the magnitude and inferred direction of effects. Inconsistencies may be explained by different age ranges, prevalence of small-to-modest sample sizes, and large time lags between measurement points. Gender differences remain underexamined. What does this study add? The present study used cross-lagged models to examine longitudinal associations in age groups (middle child and adolescence) in a large-scale British sample. The large sample size not only allows for improvements on previous measurement models (e.g., allowing the analysis to account for nesting, and estimation of latent variables) but also allows for examination of gender differences. The findings clarify the role of shared-risk factors in accounting for associations between internalizing, externalizing, and academic performance, by demonstrating that shared-risk factors do not fully account for relationships between internalizing, externalizing, and academic achievement. Specifically, some pathways between mental health and academic attainment consistently remain, even after shared-risk variables have been accounted for. Findings also present consistent support for the potential impact of behavioural problems on children's academic attainment. The negative relationship between low academic attainment and subsequent internalizing symptoms for younger children is also noteworthy. © 2017 The British Psychological Society.
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the problem is initially validated by the GAMS software in small-sized problems, and then the model is solved by two well-known meta-heuristic methods including non-dominated sorting genetic algorithm and multi-objective particle swarm optimization in large-scaled problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
On anthropic solutions of the cosmological constant problem
NASA Astrophysics Data System (ADS)
Banks, Tom; Dine, Michael; Motl, Lubos
2001-01-01
Motivated by recent work of Bousso and Polchinski (BP), we study theories which explain the small value of the cosmological constant using the anthropic principle. We argue that simultaneous solution of the gauge hierarchy problem is a strong constraint on any such theory. We exhibit three classes of models which satisfy these constraints. The first is a version of the BP model with precisely two large dimensions. The second involves 6-branes and antibranes wrapped on supersymmetric 3-cycles of Calabi-Yau manifolds, and the third is a version of the irrational axion model. All of them have possible problems in explaining the size of microwave background fluctuations. We also find that most models of this type predict that all constants in the low energy lagrangian, as well as the gauge groups and representation content, are chosen from an ensemble and cannot be uniquely determined from the fundamental theory. In our opinion, this significantly reduces the appeal of this kind of solution of the cosmological constant problem. On the other hand, we argue that the vacuum selection problem of string theory might plausibly have an anthropic, cosmological solution.
Fair and efficient network congestion control based on minority game
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang
2011-12-01
Low link utility, RTT unfairness and unfairness of Multi-Bottleneck network are the existing problems in the present network congestion control algorithms at large. Through the analogy of network congestion control with the "El Farol Bar" problem, we establish a congestion control model based on minority game(MG), and then present a novel network congestion control algorithm based on the model. The result of simulations indicates that the proposed algorithm can make the achievements of link utility closing to 100%, zero packet lose rate, and small of queue size. Besides, the RTT unfairness and the unfairness of Multi-Bottleneck network can be solved, to achieve the max-min fairness in Multi-Bottleneck network, while efficiently weaken the "ping-pong" oscillation caused by the overall synchronization.
Integrating deliberative planning in a robot architecture
NASA Technical Reports Server (NTRS)
Elsaesser, Chris; Slack, Marc G.
1994-01-01
The role of planning and reactive control in an architecture for autonomous agents is discussed. The postulated architecture seperates the general robot intelligence problem into three interacting pieces: (1) robot reactive skills, i.e., grasping, object tracking, etc.; (2) a sequencing capability to differentially ativate the reactive skills; and (3) a delibrative planning capability to reason in depth about goals, preconditions, resources, and timing constraints. Within the sequencing module, caching techniques are used for handling routine activities. The planning system then builds on these cached solutions to routine tasks to build larger grain sized primitives. This eliminates large numbers of essentially linear planning problems. The architecture will be used in the future to incorporate in robots cognitive capabilites normally associated with intelligent behavior.
NASA Astrophysics Data System (ADS)
Kutzleb, C. D.
1997-02-01
The high incidence of recidivism (repeat offenders) in the criminal population makes the use of the IAFIS III/FBI criminal database an important tool in law enforcement. The problems and solutions employed by IAFIS III/FBI criminal subject searches are discussed for the following topics: (1) subject search selectivity and reliability; (2) the difficulty and limitations of identifying subjects whose anonymity may be a prime objective; (3) database size, search workload, and search response time; (4) techniques and advantages of normalizing the variability in an individual's name and identifying features into identifiable and discrete categories; and (5) the use of database demographics to estimate the likelihood of a match between a search subject and database subjects.
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper deals with a problem of insufficient productivity of existing computer means for large image processing, which do not meet modern requirements posed by resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. Development of a theory of parallel-hierarchic transformation allowed to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on the GPU-oriented architecture using GPGPU technologies. The analyzed performance of suggested computerized tools for processing and classification of laser beam profile images allows to perform real-time processing of dynamic images of various sizes.
Mathematical models for exploring different aspects of genotoxicity and carcinogenicity databases.
Benigni, R; Giuliani, A
1991-12-01
One great obstacle to understanding and using the information contained in the genotoxicity and carcinogenicity databases is the very size of such databases. Their vastness makes them difficult to read; this leads to inadequate exploitation of the information, which becomes costly in terms of time, labor, and money. In its search for adequate approaches to the problem, the scientific community has, curiously, almost entirely neglected an existent series of very powerful methods of data analysis: the multivariate data analysis techniques. These methods were specifically designed for exploring large data sets. This paper presents the multivariate techniques and reports a number of applications to genotoxicity problems. These studies show how biology and mathematical modeling can be combined and how successful this combination is.
Analytic Theory and Control of the Motion of Spinning Rigid Bodies
NASA Technical Reports Server (NTRS)
Tsiotras, Panagiotis
1993-01-01
Numerical simulations are often resorted to, in order to understand the attitude response and control characteristics of a rigid body. However, this approach in performing sensitivity and/or error analyses may be prohibitively expensive and time consuming, especially when a large number of problem parameters are involved. Thus, there is an important role for analytical models in obtaining an understanding of the complex dynamical behavior. In this dissertation, new analytic solutions are derived for the complete attitude motion of spinning rigid bodies, under minimal assumptions. Hence, we obtain the most general solutions reported in the literature so far. Specifically, large external torques and large asymmetries are included in the problem statement. Moreover, problems involving large angular excursions are treated in detail. A new tractable formulation of the kinematics is introduced which proves to be extremely helpful in the search for analytic solutions of the attitude history of such kinds of problems. The main utility of the new formulation becomes apparent however, when searching for feedback control laws for stabilization and/or reorientation of spinning spacecraft. This is an inherently nonlinear problem, where standard linear control techniques fail. We derive a class of control laws for spin axis stabilization of symmetric spacecraft using only two pairs of gas jet actuators. Practically, this could correspond to a spacecraft operating in failure mode, for example. Theoretically, it is also an important control problem which, because of its difficulty, has received little, if any, attention in the literature. The proposed control laws are especially simple and elegant. A feedback control law that achieves arbitrary reorientation of the spacecraft is also derived, using ideas from invariant manifold theory. The significance of this research is twofold. First, it provides a deeper understanding of the fundamental behavior of rigid bodies subject to body-fixed torques. Assessment of the analytic solutions reveals that they are very accurate; for symmetric bodies the solutions of Euler's equations of motion are, in fact, exact. Second, the results of this research have a fundamental impact on practical scientific and mechanical applications in terms of the analysis and control of all finite-sized rigid bodies ranging from nanomachines to very large bodies, both man made and natural. After all, Euler's equations of motion apply to all physical bodies, barring only the extreme limits of quantum mechanics and relativity.
A progress report on seismic model studies
Healy, J.H.; Mangan, G.B.
1963-01-01
The value of seismic-model studies as an aid to understanding wave propagation in the Earth's crust was recognized by early investigators (Tatel and Tuve, 1955). Preliminary model results were very promising, but progress in model seismology has been restricted by two problems: (1) difficulties in the development of models with continuously variable velocity-depth functions, and (2) difficulties in the construction of models of adequate size to provide a meaningful wave-length to layer-thickness ratio. The problem of a continuously variable velocity-depth function has been partly solved by a technique using two-dimensional plate models constructed by laminating plastic to aluminum, so that the ratio of plastic to aluminum controls the velocity-depth function (Healy and Press, 1960). These techniques provide a continuously variable velocity-depth function, but it is not possible to construct such models large enough to study short-period wave propagation in the crust. This report describes improvements in our ability to machine large models. Two types of models are being used: one is a cylindrical aluminum tube machined on a lathe, and the other is a large plate machined on a precision planer. Both of these modeling techniques give promising results and are a significant improvement over earlier efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt
Significant problems facing all experimental andcomputationalsciences arise from growing data size and complexity. Commonto allthese problems is the need to perform efficient data I/O ondiversecomputer architectures. In our scientific application, thelargestparallel particle simulations generate vast quantitiesofsix-dimensional data. Such a simulation run produces data foranaggregate data size up to several TB per run. Motived by the needtoaddress data I/O and access challenges, we have implemented H5Part,anopen source data I/O API that simplifies the use of the HierarchicalDataFormat v5 library (HDF5). HDF5 is an industry standard forhighperformance, cross-platform data storage and retrieval that runsonall contemporary architectures from large parallel supercomputerstolaptops. H5Part, whichmore » is oriented to the needs of the particlephysicsand cosmology communities, provides support for parallelstorage andretrieval of particles, structured and in the future unstructuredmeshes.In this paper, we describe recent work focusing on I/O supportforparticles and structured meshes and provide data showing performance onmodernsupercomputer architectures like the IBM POWER 5.« less
Solution to the problem of the poor cyclic fatigue resistance of bulk metallic glasses
Launey, Maximilien E.; Hofmann, Douglas C.; Johnson, William L.; Ritchie, Robert O.
2009-01-01
The recent development of metallic glass-matrix composites represents a particular milestone in engineering materials for structural applications owing to their remarkable combination of strength and toughness. However, metallic glasses are highly susceptible to cyclic fatigue damage, and previous attempts to solve this problem have been largely disappointing. Here, we propose and demonstrate a microstructural design strategy to overcome this limitation by matching the microstructural length scales (of the second phase) to mechanical crack-length scales. Specifically, semisolid processing is used to optimize the volume fraction, morphology, and size of second-phase dendrites to confine any initial deformation (shear banding) to the glassy regions separating dendrite arms having length scales of ≈2 μm, i.e., to less than the critical crack size for failure. Confinement of the damage to such interdendritic regions results in enhancement of fatigue lifetimes and increases the fatigue limit by an order of magnitude, making these “designed” composites as resistant to fatigue damage as high-strength steels and aluminum alloys. These design strategies can be universally applied to any other metallic glass systems. PMID:19289820
On a Heat Exchange Problem under Sharply Changing External Conditions
NASA Astrophysics Data System (ADS)
Khishchenko, K. V.; Charakhch'yan, A. A.; Shurshalov, L. V.
2018-02-01
The heat exchange problem between carbon particles and an external environment (water) is stated and investigated based on the equations of heat conducting compressible fluid. The environment parameters are supposed to undergo large and fast variations. In the time of about 100 μs, the temperature of the environment first increases from the normal one to 2400 K, is preserved at this level for about 60 μs, and then decreases to 300 K during approximately 50 μs. At the same periods of time, the pressure of the external environment increases from the normal one to 67 GPa, is preserved at this level, and then decreases to zero. Under such external conditions, the heating of graphite particles of various sizes, their phase transition to the diamond phase, and the subsequent unloading and cooling almost to the initial values of the pressure and temperature without the reverse transition from the diamond to the graphite phase are investigated. Conclusions about the maximal size of diamond particles that can be obtained in experiments on the shock compression of the mixture of graphite with water are drawn.
Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.
1999-10-14
Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact fines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-staticmore » solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work among a SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel coquting environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed ups for fixed problem size, a class of problems of immediate practical importance.« less
Requirement analysis to promote small-sized E-waste collection from consumers.
Mishima, Kuniko; Nishimura, Hidekazu
2016-02-01
The collection and recycling of small-sized waste electrical and electronic equipment is an emerging problem, since these products contain certain amounts of critical metals and rare earths. Even if the amount is not large, having a few supply routes for such recycled resources could be a good strategy to be competitive in a world of finite resources. The small-sized e-waste sometimes contains personal information, therefore, consumers are often reluctant to put them into recycling bins. In order to promote the recycling of E-waste, collection of used products from the consumer becomes important. Effective methods involving incentives for consumers might be necessary. Without such methods, it will be difficult to achieve the critical amounts necessary for an efficient recycling system. This article focused on used mobile phones among information appliances as the first case study, since it contains relatively large amounts of valuable metals compared with other small-sized waste electrical and electronic equipment and there are a large number of products existing in the market. The article carried out surveys to determine what kind of recycled material collection services are preferred by consumers. The results clarify that incentive or reward money alone is not a driving force for recycling behaviour. The article discusses the types of effective services required to promote recycling behaviour. The article concludes that securing information, transferring data and providing proper information about resources and environment can be an effective tool to encourage a recycling behaviour strategy to promote recycling, plus the potential discount service on purchasing new products associated with the return of recycled mobile phones. © The Author(s) 2015.
Large-scale numerical simulations of polydisperse particle flow in a silo
NASA Astrophysics Data System (ADS)
Rubio-Largo, S. M.; Maza, D.; Hidalgo, R. C.
2017-10-01
Very recently, we have examined experimentally and numerically the micro-mechanical details of monodisperse particle flows through an orifice placed at the bottom of a silo (Rubio-Largo et al. in Phys Rev Lett 114:238002, 2015). Our findings disentangled the paradoxical ideas associated to the free-fall arch concept, which has historically served to justify the dependence of the flow rate on the outlet size. In this work, we generalize those findings examining large-scale polydisperse particle flows in silos. In the range of studied apertures, both velocity and density profiles at the aperture are self-similar, and the obtained scaling functions confirm that the relevant scale of the problem is the size of the aperture. Moreover, we find that the contact stress monotonically decreases when the particles approach the exit and vanish at the outlet. The behavior of this magnitude is practically independent of the size of the orifice. However, the total and partial kinetic stress profiles suggest that the outlet size controls the propagation of the velocity fluctuations inside the silo. Examining this magnitude, we conclusively argue that indeed there is a well-defined transition region where the particle flow changes its nature. The general trend of the partial kinetic pressure profiles and the location of the transition region results the same for all particle types. We find that the partial kinetic stress is larger for bigger particles. However, the small particles carry a higher fraction of kinetic stress respect to their concentration, which suggest that the small particles have larger velocity fluctuations than the large ones and showing lower strength of correlation with the global flow. Our outcomes explain why the free-fall arch picture has served to describe the polydisperse flow rate in the discharge of silos.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in fue "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, wifu implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics fue dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than fue number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address fuis gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where fue variable dimension is fixed and fue sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa cale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables fua t are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. we demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated by using direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem of thermophoresis simulation is computation time which is proportional to the collision frequency. Note that the time step interval becomes much small for the simulation considering the motion of large size particle. Thermophoretic forces calculated by DSMC method were reported, but the particle motion was not computed because of the small time step interval. In this paper, the molecule-particle collision model, which computes the collision between a particle and multi molecules in a collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor means the number of molecules colliding with a particle in a collision event. The large time step interval is adopted by considering the collision weight factor. Furthermore, the large time step interval is about million times longer than the conventional time step interval of the DSMC method when a particle size is 1 μm. Therefore, the computation time becomes about one-millionth. We simulate the graphite particle motion considering thermophoretic force by DSMC-Neutrals (Particle-PLUS neutral module) with above the collision weight factor, where DSMC-Neutrals is commercial software adopting DSMC method. The size and the shape of the particle are 1 μm and a sphere, respectively. The particle-particle collision is ignored. We compute the thermophoretic forces in Ar and H2 gases of a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for continuum limit is the same as Waldmann's result.
NASA Astrophysics Data System (ADS)
Steinberg, Marc
2011-06-01
This paper presents a selective survey of theoretical and experimental progress in the development of biologicallyinspired approaches for complex surveillance and reconnaissance problems with multiple, heterogeneous autonomous systems. The focus is on approaches that may address ISR problems that can quickly become mathematically intractable or otherwise impractical to implement using traditional optimization techniques as the size and complexity of the problem is increased. These problems require dealing with complex spatiotemporal objectives and constraints at a variety of levels from motion planning to task allocation. There is also a need to ensure solutions are reliable and robust to uncertainty and communications limitations. First, the paper will provide a short introduction to the current state of relevant biological research as relates to collective animal behavior. Second, the paper will describe research on largely decentralized, reactive, or swarm approaches that have been inspired by biological phenomena such as schools of fish, flocks of birds, ant colonies, and insect swarms. Next, the paper will discuss approaches towards more complex organizational and cooperative mechanisms in team and coalition behaviors in order to provide mission coverage of large, complex areas. Relevant team behavior may be derived from recent advances in understanding of the social and cooperative behaviors used for collaboration by tens of animals with higher-level cognitive abilities such as mammals and birds. Finally, the paper will briefly discuss challenges involved in user interaction with these types of systems.
Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun
2011-07-01
Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and getting the ground truth. The frugal selection of the unlabeled data for labeling to quickly reach high classification performance with minimal labeling efforts is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to increasingly improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes the margin-based uncertainty to the multiclass case and which is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on FG-NET and Morph databases together with a large unlabeled data set for age categorization problems show that the proposed approach can achieve results comparable or even outperform a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It also can achieve comparable results with active SVM but is much faster than active SVM in terms of training because kernel methods are not needed. The results on the face recognition database and palmprint/palm vein database showed that our approach can handle problems with large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic on-line or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
NASA Technical Reports Server (NTRS)
Joshi, R. P.; Deshpande, M. D. (Technical Monitor)
2003-01-01
A study into the problem of determining electromagnetic solutions at high frequencies for problems involving complex geometries, large sizes and multiple sources (e.g. antennas) has been initiated. Typical applications include the behavior of antennas (and radiators) installed on complex conducting structures (e.g. ships, aircrafts, etc..) with strong interactions between antennas, the radiation patterns, and electromagnetic signals is of great interest for electromagnetic compatibility control. This includes the overall performance evaluation and control of all on-board radiating systems, electromagnetic interference, and personnel radiation hazards. Electromagnetic computational capability exists at NASA LaRC, and many of the codes developed are based on the Moment Method (MM). However, the MM is computationally intensive, and this places a limit on the size of objects and structures that can be modeled. Here, two approaches are proposed: (i) a current-based hybrid scheme that combines the MM with Physical optics, and (ii) an Alternating Direction Implicit-Finite Difference Time Domain (ADI-FDTD) method. The essence of a hybrid technique is to split the overall scattering surface(s) into two regions: (a) a MM zone (MMZ) which can be used over any part of the given geometry, but is most essential over irregular and "non-smooth" geometries, and (b) a PO sub-region (POSR). Currents induced on the scattering and reflecting surfaces can then be computed in two ways depending on whether the region belonged to the MMZ or was part of the POSR. For the MMZ, the current calculations proceed in terms of basis functions with undetermined coefficients (as in the usual MM method), and the answer obtained by solving a system of linear equations. Over the POSR, conduction is obtained as a superposition of two contributions: (i) currents due to the incident magnetic field, and (ii) currents produced by the mutual induction from conduction within the MMZ. This effectively leads to a reduction in the size of linear equations from N to N - Npo with N being the total number of segments for the entire surface and Npo the number of segments over the POSR. The scheme would be appropriate for relatively large, flat surfaces, and at high frequencies. The ADI-FDTD scheme provides for both transient and steady state analyses. The restrictive Courant-Friedrich-Levy (CFL) condition on the time-step is removed, and so large time steps can be chosen even though the spatial grids are small. This report includes the problem definition, a detailed discussion of both the numerical techniques, and numerical implementations for simple surface geometries. Numerical solutions have been derived for a few simple situations.
Unveiling adaptation using high-resolution lineage tracking
NASA Astrophysics Data System (ADS)
Blundell, Jamie; Levy, Sasha; Fisher, Daniel; Petrov, Dmitri; Sherlock, Gavin
2013-03-01
Human diseases such as cancer and microbial infections are adaptive processes inside the human body with enormous population sizes: between 106 -1012 cells. In spite of this our understanding of adaptation in large populations is limited. The key problem is the difficulty in identifying anything more than a handful of rare, large-effect beneficial mutations. The development and use of molecular barcodes allows us to uniquely tag hundreds of thousands of cells and enable us to track tens of thousands of adaptive mutations in large yeast populations. We use this system to test some of the key theories on which our understanding of adaptation in large populations is based. We (i) measure the fitness distribution in an evolving population at different times, (ii) identify when an appreciable fraction of clones in the population have at most a single adaptive mutation and isolate a large number of clones with independent single adaptive mutations, and (iii) use this clone collection to determine the distribution of fitness effects of single beneficial mutations.
NASA Astrophysics Data System (ADS)
Johnson, D. J.; Needham, J.; Xu, C.; Davies, S. J.; Bunyavejchewin, S.; Giardina, C. P.; Condit, R.; Cordell, S.; Litton, C. M.; Hubbell, S.; Kassim, A. R. B.; Shawn, L. K. Y.; Nasardin, M. B.; Ong, P.; Ostertag, R.; Sack, L.; Tan, S. K. S.; Yap, S.; McDowell, N. G.; McMahon, S.
2016-12-01
Terrestrial carbon cycling is a function of the growth and survival of trees. Current model representations of tree growth and survival at a global scale rely on coarse plant functional traits that are parameterized very generally. In view of the large biodiversity in the tropical forests, it is important that we account for the functional diversity in order to better predict tropical forest responses to future climate changes. Several next generation Earth System Models are moving towards a size-structured, trait-based approach to modelling vegetation globally, but the challenge of which and how many traits are necessary to capture forest complexity remains. Additionally, the challenge of collecting sufficient trait data to describe the vast species richness of tropical forests is enormous. We propose a more fundamental approach to these problems by characterizing forests by their patterns of survival. We expect our approach to distill real-world tree survival into a reasonable number of functional types. Using 10 large-area tropical forest plots that span geographic, edaphic and climatic gradients, we model tree survival as a function of tree size for hundreds of species. We found surprisingly few categories of size-survival functions emerge. This indicates some fundamental strategies at play across diverse forests to constrain the range of possible size-survival functions. Initial cluster analysis indicates that four to eight functional forms are necessary to describe variation in size-survival relations. Temporal variation in size-survival functions can be related to local environmental variation, allowing us to parameterize how demographically similar groups of species respond to perturbations in the ecosystem. We believe this methodology will yield a synthetic approach to classifying forest systems that will greatly reduce uncertainty and complexity in global vegetation models.
Characterisation of RPLC columns packed with porous sub-2 microm particles.
Petersson, Patrik; Euerby, Melvin R
2007-08-01
Eight commercially available sub-2 microm octadecyl silane columns (C18 columns) have been characterised by the Tanaka protocol. The columns can be grouped into two groups that display large differences in selectivity and peak shape due to differences in hydrophobicity, degree of surface coverage and silanol activity. Measurements of particle size distributions were made using automated microscopy and electrical sensing zone measurements. Only a weak correlation could be found between efficiency and particle size. Large differences in column backpressure were observed. These differences are not related to particle size distribution. A more likely explanation is differences in packing density. In order to take full advantage of 100-150 mm columns packed with sub-2 microm particles, it is often necessary to employ not only an elevated pressure but also an elevated temperature. A comparison between columns packed with sub-2, 3 and 5 microm versions of the same packing indicates potential method transferability problems for several of the columns due to selectivity differences. Currently, the best alternative for fast high-resolution LC is the use of sub-2 microm particles in combination with elevated pressure and temperature. However, as shown in this study additional efforts are needed to improve transferability as well as column performance.
Testing Collisional Scaling Laws: Comparing with Observables
NASA Astrophysics Data System (ADS)
Davis, D. R.; Marzari, F.; Farinella, P.
1999-09-01
How large bodies break up in response to energetic collisions is a problem that has attracted considerable attention in recent years. Ever more sophisticated computation methods have also been developed; prominent among these are hydrocode simulations of collisional disruption by Benz and Asphaug (1999, Icarus, in press), Love and Ahrens (1996, LPSC XXVII, 777-778), and Melosh and Ryan (1997, Icarus 129, 562-564). Durda et al. (1998, Icarus 135, 431-440) used the observed asteroid size distribution to infer a scaling algorithm. The present situation is that there are several proposed scaling laws that differ by as much as two orders of magnitude at particular sizes. We have expanded upon the work of Davis et al. (1994, Goutelas Proceedings) and tested the suite of proposed scaling algorithms against observations of the main-belt asteroids. The effects of collisions among the asteroids produce the following observables: (a) the size distribution has been significantly shaped by collisions, (b) collisions have produced about 25 well recognized asteroid families, and (c) the basaltic crust of Vesta has been largely preserved in the face of about 4.5 Byr of impacts. We will present results from a numerical simulation of asteroid collisional evolution over the age of the solar system using proposed scaling laws and a range of hypothetical initial populations.
The Therapeutic Efficacy of Domestic Violence Victim Interventions.
Hackett, Shannon; McWhirter, Paula T; Lesher, Susan
2016-04-01
A meta-analysis on domestic violence interventions was conducted to determine overall effectiveness of mental health programs involving women and children in joint treatment. These interventions were further analyzed to determine whether outcomes are differentially affected based on the outcome measure employed. To date, no meta-analyses have been published on domestic violence victim intervention efficacy. The 17 investigations that met study criteria yielded findings indicating that domestic violence interventions have a large effect size (d = .812), which decreases to a medium effect size when compared to control groups (d = .518). Effect sizes were assessed to determine whether treatment differed according to the focus of the outcome measure employed: (a) external stress (behavioral problems, aggression, or alcohol use); (b) psychological adjustment (depression, anxiety, or happiness); (c) self-concept (self-esteem, perceived competence, or internal locus of control); (d) social adjustment (popularity, loneliness, or cooperativeness); (e) family relations (mother-child relations, affection, or quality of interaction); and (f) maltreatment events (reoccurrence of violence, return to partner). Results reveal that domestic violence interventions across all outcome categories yield effects in the medium to large range for both internalized and externalized symptomatology. Implications for greater awareness and support for domestic violence treatment and programming are discussed. © The Author(s) 2015.
Yeh, Wan-Yu; Yeh, Ching-Ying; Chen, Chiou-Jong
2018-05-15
Distinct differences exist between public-private sector organizations with respect to the market environment and operational objectives; furthermore, among private sector businesses, organizational structures and work conditions often vary between large- and small-sized companies. Despite these obvious structural distinctions, however, sectoral differences in employees' psychosocial risks and burnout status in national level have rarely been systematically investigated. Based on 2013 national employee survey data, 15,000 full-time employees were studied. Sector types were classified into "public," "private enterprise-large (LE)," and "private enterprise-small and medium (SME);" based on the definition of SMEs by Taiwan Ministry of Economic Affairs, and the associations of sector types with self-reported burnout status (measured by the Chinese version of Copenhagen Burnout Inventory) were examined, taking into account other work characteristics and job instability indicators. Significantly longer working hours and higher perceived job insecurity were found among private sector employees than their public sector counterparts. With further consideration of company size, greater dissatisfaction of job control and career prospect were found among SME employees than the other two sector type workers. This study explores the pattern of public-private differences in work conditions and employees' stress-related problems to have policy implications for supporting mechanism for disadvantaged workers in private sectors.
Ferguson, Robert J; Sigmon, Sandra T; Pritchard, Andrew J; LaBrie, Sharon L; Goetze, Rachel E; Fink, Christine M; Garrett, A Merrill
2016-06-01
Long-term chemotherapy-related cognitive dysfunction (CRCD) affects a large number of cancer survivors. To the authors' knowledge, to date there is no established treatment for this survivorship problem. The authors herein report results of a small randomized controlled trial of a cognitive behavioral therapy (CBT), Memory and Attention Adaptation Training (MAAT), compared with an attention control condition. Both treatments were delivered over a videoconference device. A total of 47 survivors of female breast cancer who reported CRCD were randomized to MAAT or supportive therapy and were assessed at baseline, after treatment, and at 2 months of follow-up. Participants completed self-report measures of cognitive symptoms and quality of life and a brief telephone-based neuropsychological assessment. MAAT participants made gains in perceived (self-reported) cognitive impairments (P = .02), and neuropsychological processing speed (P = .03) compared with supportive therapy controls. A large MAAT effect size was observed at the 2-month follow-up with regard to anxiety concerning cognitive problems (Cohen's d for standard differences in effect sizes, 0.90) with medium effects noted in general function, fatigue, and anxiety. Survivors rated MAAT and videoconference delivery with high satisfaction. MAAT may be an efficacious psychological treatment of CRCD that can be delivered through videoconference technology. This research is important because it helps to identify a treatment option for survivors that also may improve access to survivorship services. Cancer 2016;122:1782-91. © 2016 American Cancer Society. © 2016 American Cancer Society.
Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese
2014-01-01
Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Cook, Christine Michaela; Lu, Lu
There are many choices to make, when designing an experiment for a study, such as: what design factors to consider, which levels of the factors to use and which model to focus on. One aspect of design, however, is often left unquestioned: the size of the experiment. When learning about design of experiments, problems are often posed as "select a design for a particular objective with N runs." It’s tempting to consider the design size as a given constraint in the design-selection process. If you think of learning through designed experiments as a sequential process, however, strategically planning for themore » use of resources at different stages of data collection can be beneficial: Saving experimental runs for later is advantageous if you can efficiently learn with less in the early stages. Alternatively, if you’re too frugal in the early stages, you might not learn enough to proceed confidently with the next stages. Therefore, choosing the right-sized experiment is important—not too large or too small, but with a thoughtful balance to maximize the knowledge gained given the available resources. It can be a great advantage to think about the design size as flexible and include it as an aspect for comparisons. Sometimes you’re asked to provide a small design that is too ambitious for the goals of the study. Finally, if you can show quantitatively how the suggested design size might be inadequate or lead to problems during analysis—and also offer a formal comparison to some alternatives of different (likely larger) sizes—you may have a better chance to ask for additional resources to deliver statistically sound and satisfying results« less
NASA Astrophysics Data System (ADS)
Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang
2017-12-01
Computation of scattering of shaped beams by large nonspherical particles is a challenge in both optics and electromagnetics domains since it concerns many research fields. In this paper, we report our new progress in the numerical computation of the scattering diagrams. Our algorithm permits to calculate the scattering of a particle of size as large as 110 wavelengths or 700 in size parameter. The particle can be transparent or absorbing of arbitrary shape, smooth or with a sharp surface, such as the Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal particles and Chebyshev particles are taken as examples. Some special phenomena have been revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on the distributed memory computer platform using the hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effect of aspect ratios, as well as half-cone angle of the incident zero-order Bessel beam and the off-axis distance on scattered intensity, is studied. Scattering by asymmetry Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrary shaped particles.
Efficient methods and readily customizable libraries for managing complexity of large networks.
Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can
2018-01-01
One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Often times, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing complexity of large networks is the use of hierarchical clustering and nesting, and applying expand-collapse operations on demand during analysis. Another such method is hiding currently unnecessary details, to later gradually reveal on demand. Major challenges when applying complexity reduction operations on large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web based graph visualization library named Cytsocape.js to implement these methods as complexity management operations. Through efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks, making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open source, customizable software libraries, and by introducing some heuristics which can be applied upon such complexity management techniques to ensure preserving mental map of users.
Stress relaxation of grouted entirely large diameter B-GFRP soil nail
NASA Astrophysics Data System (ADS)
Li, Guo-wei; Ni, Chun; Pei, Hua-fu; Ge, Wan-ming; Ng, Charles Wang Wai
2013-08-01
One of the potential solutions to steel-corrosion-related problems is the usage of fiber reinforced polymer (FRP) as a replacement of steel bars. In the past few decades, researchers have conducted a large number of experimental and theoretical studies on the behavior of small size glass fiber reinforce polymer (GFRP) bars (diameter smaller than 20 mm). However, the behavior of large size GFRP bar is still not well understood. Particularly, few studies were conducted on the stress relaxation of grouted entirely large diameter GFRP soil nail. This paper investigates the effect of stress levels on the relaxation behavior of GFRP soil nail under sustained deformation ranging from 30% to 60% of its ultimate strain. In order to study the behavior of stress relaxation, two B-GFRP soil nail element specimens were developed and instrumented with fiber Bragg grating (FBG) strain sensors which were used to measure strains along the B-GFRP bars. The test results reveal that the behavior of stress relaxation of B-GFRP soil nail element subjected to pre-stress is significantly related to the elapsed time and the initial stress of relaxation procedure. The newly proposed model for evaluating stress relaxation ratio can substantially reflect the influences of the nature of B-GFRP bar and the property of grip body. The strain on the nail body can be redistributed automatically. Modulus reduction is not the single reason for the stress degradation.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
Visual analytics of anomaly detection in large data streams
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay
2009-01-01
Most data streams usually are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in the large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute with the attributes of the nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (i.e., tooltip) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.
NASA Astrophysics Data System (ADS)
Yang, Qingsong; Cong, Wenxiang; Wang, Ge
2016-10-01
X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle before this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation which is a new type of the interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in that the implementation of the system matrix requires the differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well defined representation which GA's use to solve the problem. We present a novel approach for solving two related problems-lot sizing and sequencing-concurrently using GAs. The essence of our approach lies in the concept of using a unified representation for the information about both the lot sizes and the sequence and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of applying the above approach to flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pair wise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.
End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change
NASA Astrophysics Data System (ADS)
Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro
This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
Rodrigues, Nils; Weiskopf, Daniel
2018-01-01
Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
The Anatomy of AP1000 Mono-Block Low Pressure Rotor Forging
NASA Astrophysics Data System (ADS)
Jin, Jia-yu; Rui, Shou-tai; Wang, Qun
AP1000 mono-block low pressure (LP) rotor forgings for nuclear power station have maximum ingot weight, maximum diameter and the highest technical requirements. It confronts many technical problems during manufacturing process such as composition segregation and control of inclusion in the large ingot, core compaction during forging, control of grain size and mechanical performance. The rotor forging were anatomized to evaluate the manufacturing level of CFHI. This article introduces the anatomical results of this forging. The contents include chemical composition, mechanical properties, inclusions and grain size and other aspects from the full-length and full cross-section of this forging. The fluctuation of mechanical properties, uniformity of microstructure and purity of chemical composition were emphasized. The results show that the overall performance of this rotor forging is particularly satisfying.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter
2016-07-26
This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy ofmore » the approach. The use of reactive power control to regulate the voltage regulation is not always an optimal solution as in distribution systems R/X is large. In this paper the minimum size and the best place of battery storage is achieved by optimizing the amount of both active and reactive power exchanged by battery storage and its gridtie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.« less
Complex Population Dynamics and the Coalescent Under Neutrality
Volz, Erik M.
2012-01-01
Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576
A half-baked solution: drivers of water crises in Mexico
NASA Astrophysics Data System (ADS)
Godinez Madrigal, Jonatan; van der Zaag, Pieter; van Cauwenbergh, Nora
2018-02-01
Mexico is considered a regional economic and political powerhouse because of the size of its economy, and a large population in constant growth. However, this same growth accompanied by management and governance failures are causing several water crises across the country. The paper aims at identifying and analyzing the drivers of water crises. Water authorities seem to focus solely on large infrastructural schemes to counter the looming water crises, but fail to structure a set of policies for the improvement of management and governance institutions. The paper concludes with the implications of a business-as-usual policy based on infrastructure for solving water problems, which include a non-compliance to the human right to water and sanitation, ecosystem collapses and water conflicts.
Isolation of organic acids from large volumes of water by adsorption on macroporous resins
Aiken, George R.; Suffet, I.H.; Malaiyandi, Murugan
1987-01-01
Adsorption on synthetic macroporous resins, such as the Amberlite XAD series and Duolite A-7, is routinely used to isolate and concentrate organic acids from forge volumes of water. Samples as large as 24,500 L have been processed on site by using these resins. Two established extraction schemes using XAD-8 and Duolite A-7 resins are described. The choice of the appropriate resin and extraction scheme is dependent on the organic solutes of interest. The factors that affect resin performance, selectivity, and capacity for a particular solute are solution pH, resin surface area and pore size, and resin composition. The logistical problems of sample handling, filtration, and preservation are also discussed.
Pill Properties that Cause Dysphagia and Treatment Failure
Fields, Jeremy; Go, Jorge T.; Schulze, Konrad S.
2015-01-01
Background Pills (tablets and capsules) are widely used to administer prescription drugs or to take supplements such as vitamins. Unfortunately, little is known about how much effort it takes Americans to swallow these various pills. More specifically, it is not known to what extent hard-to-swallow pills might affect treatment outcomes (eg, interfering with adherence to prescribed medications or causing clinical complications). It is also unclear which properties (eg, size, shape, or surface texture) Americans prefer or reject for their pills. To learn more about these issues, we interviewed a small group of individuals. Methods We invited individuals in waiting rooms of our tertiary health care center to participate in structured interviews about their pill-taking habits and any problems they have swallowing pills. We inquired which pill properties they believed caused swallowing problems. Participants scored capsules and pills of representative size, shape, and texture for swallowing effort and reported their personal preferences. Results Of 100 successive individuals, 99 participants completed the interview (65% women, mean age = 41 years, range = 23-77 years). Eighty-three percent took pills daily (mean 4 pills/d; 56% of those pills were prescribed by providers). Fifty-four percent of participants replied yes to the question, "Did you ever have to swallow a solid medication that was too difficult?" Four percent recounted serious complications: 1% pill esophagitis, 1% pill impaction, and 2% stopped treatments (antibiotic and prenatal supplement) because they could not swallow the prescribed pills. Half of all participants routinely resorted to special techniques (eg, plenty of liquids or repeated or forceful swallows). Sixty-one percent of those having difficulties cited specific pill properties: 27% blamed size (20% of problems were caused by pills that were too large whereas 7% complained about pills that were too small to sense); 12% faulted rough surface texture; others cited sharp edges, odd shapes, or bad taste/smell. Extra-large pills were widely loathed, with 4 out of 5 participants preferring to take 3 or more medium-sized pills instead of a single jumbo pill. Conclusions Our survey results suggest that 4 out of 5 adult Americans take several pills daily, and do so without undue effort. It also suggests that half of today’s Americans encounter pills that are hard to swallow. Up to 4% of our participants gave up on treatments because they could not swallow the prescribed pills. Up to 7% categorically rejected taking pills that are hard to swallow. Specific material properties are widely blamed for making pills hard to swallow; extra-large capsules and tablets are universally feared, whereas medium-sized pills with a smooth coating are widely preferred. Our findings suggest that health care providers could minimize treatment failures and complications by prescribing and dispensing pills that are easy to swallow. Industry and regulatory bodies may facilitate this by making swallowability an essential criterion in the design and licensing of oral medications. Such policies could lessen the burden of pill taking for Americans and improve the adherence with prescribed treatments. PMID:26543509
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Venusian tectonics: Convective coupling to the lithosphere?
NASA Technical Reports Server (NTRS)
Phillips, R. J.
1987-01-01
The relationship between the dominant global heat loss mechanism and planetary size has motivated the search for tectonic style on Venus. Prior to the American and Soviet mapping missions of the past eight years, it was thought that terrestrial style plate tectonics was operative on Venus because this planet is approximately the size of the Earth and is conjectured to have about the same heat source content per unit mass. However, surface topography mapped by the altimeter of the Pioneer Venus spacecraft did not show any physiographic expression of terrestrial style spreading ridges, trenches, volcanic arcs or transform faults, although the horizontal resolution was questionable for detection of at least some of these features. The Venera 15 and 16 radar missions mapped the northern latitudes of Venus at 1 to 2 km resolution and showed that there are significant geographic areas of deformation seemingly created by large horizontal stresses. These same high resolution images show no evidence for plate tectonic features. Thus a fundamental problem for venusian tectonics is the origin of large horizontal stresses near the surface in the apparent absence of plate tectonics.
Size-dependent reactivity of diamond nanoparticles.
Williams, Oliver A; Hees, Jakob; Dieker, Christel; Jäger, Wolfgang; Kirste, Lutz; Nebel, Christoph E
2010-08-24
Photonic active diamond nanoparticles attract increasing attention from a wide community for applications in drug delivery and monitoring experiments as they do not bleach or blink over extended periods of time. To be utilized, the size of these diamond nanoparticles needs to be around 4 nm. Cluster formation is therefore the major problem. In this paper we introduce a new technique to modify the surface of particles with hydrogen, which prevents cluster formation in buffer solution and which is a perfect starting condition for chemical surface modifications. By annealing aggregated nanodiamond powder in hydrogen gas, the large (>100 nm) aggregates are broken down into their core ( approximately 4 nm) particles. Dispersion of these particles into water via high power ultrasound and high speed centrifugation, results in a monodisperse nanodiamond colloid, with exceptional long time stability in a wide range of pH, and with high positive zeta potential (>60 mV). The large change in zeta potential resulting from this gas treatment demonstrates that nanodiamond particle surfaces are able to react with molecular hydrogen at relatively low temperatures, a phenomenon not witnessed with larger (20 nm) diamond particles or bulk diamond surfaces.
Hurl, Kylee; Wightman, Jade; Haynes, Stephen N; Virues-Ortega, Javier
2016-07-01
This study examined the relative effectiveness of interventions based on a pre-intervention functional behavioral assessment (FBA), compared to interventions not based on a pre-intervention FBA. We examined 19 studies that included a direct comparison between the effects of FBA- and non-FBA-based interventions with the same participants. A random effects meta-analysis of effect sizes indicated that FBA-based interventions were associated with large reductions in problem behaviors when using non-FBA-based interventions as a reference intervention (Effect size=0.85, 95% CI [0.42, 1.27], p<0.001). In addition, non-FBA based interventions had no effect on problem behavior when compared to no intervention (0.06, 95% CI [-0.21, 0.33], p=0.664). Interestingly, both FBA-based and non-FBA-based interventions had significant effects on appropriate behavior relative to no intervention, albeit the overall effect size was much larger for FBA-based interventions (FBA-based: 1.27, 95% CI [0.89, 1.66], p<0.001 vs. non-FBA-based: 0.35, 95% CI [0.14, 0.56], p=0.001). In spite of the evidence in favor of FBA-based interventions, the limited number of comparative studies with high methodological standards underlines the need for further comparisons of FBA-based versus non-FBA-based interventions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Filtering analysis of a direct numerical simulation of the turbulent Rayleigh-Benard problem
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Hussaini, M. Y.; Zang, T. A.
1990-01-01
A filtering analysis of a turbulent flow was developed which provides details of the path of the kinetic energy of the flow from its creation via thermal production to its dissipation. A low-pass spatial filter is used to split the velocity and the temperature field into a filtered component (composed mainly of scales larger than a specific size, nominally the filter width) and a fluctuation component (scales smaller than a specific size). Variables derived from these fields can fall into one of the above two ranges or be composed of a mixture of scales dominated by scales near the specific size. The filter is used to split the kinetic energy equation into three equations corresponding to the three scale ranges described above. The data from a direct simulation of the Rayleigh-Benard problem for conditions where the flow is turbulent are used to calculate the individual terms in the three kinetic energy equations. This is done for a range of filter widths. These results are used to study the spatial location and the scale range of the thermal energy production, the cascading of kinetic energy, the diffusion of kinetic energy, and the energy dissipation. These results are used to evaluate two subgrid models typically used in large-eddy simulations of turbulence. Subgrid models attempt to model the energy below the filter width that is removed by a low-pass filter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Li, Weixuan; Zeng, Lingzao
2016-06-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee the accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality". When the system nonlinearity is strong and number of parameters is large, PCKF could be even more computationally expensive than EnKF. Motivated by most recent developments in uncertainty quantification, we proposemore » a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technology is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF with the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.« less
Reliable Cellular Automata with Self-Organization
NASA Astrophysics Data System (ADS)
Gács, Peter
2001-04-01
In a probabilistic cellular automaton in which all local transitions have positive probability, the problem of keeping a bit of information indefinitely is nontrivial, even in an infinite automaton. Still, there is a solution in 2 dimensions, and this solution can be used to construct a simple 3-dimensional discrete-time universal fault-tolerant cellular automaton. This technique does not help much to solve the following problems: remembering a bit of information in 1 dimension; computing in dimensions lower than 3; computing in any dimension with non-synchronized transitions. Our more complex technique organizes the cells in blocks that perform a reliable simulation of a second (generalized) cellular automaton. The cells of the latter automaton are also organized in blocks, simulating even more reliably a third automaton, etc. Since all this (a possibly infinite hierarchy) is organized in "software," it must be under repair all the time from damage caused by errors. A large part of the problem is essentially self-stabilization recovering from a mess of arbitrary size and content. The present paper constructs an asynchronous one-dimensional fault-tolerant cellular automaton, with the further feature of "self-organization." The latter means that unless a large amount of input information must be given, the initial configuration can be chosen homogeneous.
Gumerov, Nail A; Duraiswami, Ramani
2009-01-01
The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equations in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximations of the boundary shapes using elements), and these errors must be balanced against approximations introduced by the FMM and the convergence criterion for iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, that the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 less, similarkD less, similar500 are presented and shown to perform close to theoretical expectations.
LAVH for large uteri by various strategies.
Chang, Wen-Chun; Huang, Su-Cheng; Sheu, Bor-Ching; Torng, Pao-Ling; Hsu, Wen-Chiung; Chen, Szu-Yu; Chang, Daw-Yuan
2008-01-01
To study if there are specific problems in laparoscopically assisted vaginal hysterectomy (LAVH) for a certain weight of bulky uteri and the strategies to overcome such problems. One hundred and eighty-one women with myoma or adenomyosis, weighing 350-1,590 g, underwent LAVH between August 2002 and December 2005. Key surgical strategies were special sites for trocar insertion, uterine artery or adnexal collateral pre-ligation, laparoscopic and transvaginal volume reduction technique. The basic clinical and operative parameters were recorded for analysis. Based on significant differences in the operative time and estimated blood loss, the patients were divided into medium uteri weighing 350-749 g, n=138 (76%), and large uteri weighing > or =750 g, n=43 (24%). There was no significant difference in terms of age, body mass index, preoperative diagnoses, complications and duration of hospital stay among groups. The operative time and estimated blood loss increased with larger uterine size (p<0.001). The operative time (196+/-53, 115-395 min), estimated blood loss (234+/-200, 50-1,000 ml) and frequency of excessive bleeding (14%) or transfusion (5%) were significantly greater, but in acceptable ranges, for those with large uteri. Conversion to laparotomy was required in a patient (2%) with a large uterus, and the overall conversion rate was 0.6%. There was no re-operation or surgical mortality. Using various combinations of special strategies, most experienced gynecologic surgeons can conduct LAVH for most large uteri with minimal rates of complications and conversion to laparotomy.
Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.
2014-12-01
Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offers a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: Emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials. Truncating a Taylor series around each RBF kernel. Reducing the Taylor polynomial to a quadratic using ancilla gadgets. Mapping the real-valued quadratic to a fixed-precision binary quadratic. Mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qbits (with 1024 and 2048 qbit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QAC's solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation, as the size of these computers grows in the coming years.
Adhapure, N.N.; Dhakephalkar, P.K.; Dhakephalkar, A.P.; Tembhurkar, V.R.; Rajgure, A.V.; Deshmukh, A.M.
2014-01-01
Very recently bioleaching has been used for removing metals from electronic waste. Most of the research has been targeted to using pulverized PCBs for bioleaching where precipitate formed during bioleaching contaminates the pulverized PCB sample and making the overall metal recovery process more complicated. In addition to that, such mixing of pulverized sample with precipitate also creates problems for the final separation of non metallic fraction of PCB sample. In the present investigation we attempted the use of large pieces of printed circuit boards instead of pulverized sample for removal of metals. Use of large pieces of PCBs for bioleaching was restricted due to the chemical coating present on PCBs, the problem has been solved by chemical treatment of PCBs prior to bioleaching. In short,•Large pieces of PCB can be used for bioleaching instead of pulverized PCB sample.•Metallic portion on PCBs can be made accessible to bacteria with prior chemical treatment of PCBs.•Complete metal removal obtained on PCB pieces of size 4 cm × 2.5 cm with the exception of solder traces. The final metal free PCBs (non metallic) can be easily recycled and in this way the overall recycling process (metallic and non metallic part) of PCBs becomes simple. PMID:26150951
The size-line width relation and the mass of molecular hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Issa, M.; Maclaren, I.; Wolfendale, A. W.
Some difficulties associated with the problem of cloud definition are considered, with particular regard to the crowded distribution of clouds and the difficulty of choosing an appropriate boundary in such circumstances. A number of tests carried out on the original data suggest that the delta(v) - S relation found by Solomon et al. (1987) is not a genuine reflection of the dynamical state of Giant Molecular Clouds. The Solomon et al. parameters, are insensitive to the actual cloud properties and are unable to distinguish true clouds from the consequences of sampling any crowded region of emission down to a lowmore » threshold temperature. The overall effect of such problems is to overestimate both the masses of Giant Molecular Clouds and the number of very large clouds. 24 refs.« less
Vectorized program architectures for supercomputer-aided circuit design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzoli, V.; Ferlito, M.; Neri, A.
1986-01-01
Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of a vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the ''semantic'' vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scaler computers (CDC 7600), with cost reductions of more than one order of magnitude. Thismore » could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.« less
Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper
2016-11-01
The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the length of stays of patients as well as the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem. This procedure utilizes a Simulated Annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions that are of comparable quality for small and medium sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000-4000 patients) the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case, a factor 12 longer on average than the time limit which is granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
Pollution source localization in an urban water supply network based on dynamic water demand.
Yan, Xuesong; Zhu, Zhixin; Li, Tian
2017-10-27
Urban water supply networks are susceptible to intentional, accidental chemical, and biological pollution, which pose a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. The real-time monitoring for water quality can be effectively implemented by placing sensors in the water supply network. However, locating the source of pollution through the data detection obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, large number of water supply network nodes, and dynamic user demand for water, which leads the pollution source localization problem to an uncertainty, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a fluctuating dynamic problem of consumer's water demand. In this paper, the water demand is considered to be stochastic in nature and can be described using Gaussian model or autoregressive model. On this basis, an optimization algorithm is proposed based on these two dynamic water demand change models to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that meet the minimum between the analogue and detection values of the sensor. Simulation experiments were conducted using two different sizes of urban water supply network data, and the experimental results were compared with those of the standard genetic algorithm.
The crisis of urbanization in Asia: finding alternatives to megalopolitan growth.
Rondinelli, D A
1985-01-01
The rapid expansion of large Asian cities generates serious social, economic, and physical problems, and has thereby forced these areas to create alternative expansion plans, such as the idea of building up secondary cities and towns. The result of the rapid expansion of large cities, combined with poor urban management, accentuates the mass poverty in many Asian cities. This large urban population is expected to double or triple in size between 1970 and 2000. Because substantial resources are required to manage these megalopolitan areas, it is reasonable to deduce that millions of these city dwellers will be living in absolute poverty by 2000. It is the prospect of continued rapid growth over the next 2 decades that presents the most serious problem for Asian countries. Most metropolises cannot provide enough jobs for the current work force. In addition, public facilities, housing, transportation, and health services are examples of other problems threatened by a heavy concentration of people. Attempts to control this growth have been unsuccessful, mainly due to the 1950s and 1960s emphasis on productive investment, which left rural regions underdeveloped and poor. Secondary cities and regional centers in Asia perform important functions in promoting widespread economic and social development: 1) they stimulate rural economies and therefore establish a pattern of step-wise migration, and 2) they absorb population and therefore, relieve some of the pressure put on the largest metropolitan areas. Studies of secondary cities and their attempts at controlling growth of large metropolitan centers suggest broad guidelines for strategies. Some of these are: 1) the existence of large metropolises has little effect on the growth of primate cities; 2) few controls on growth of large areas are likely to be effective unless there are viable alternative locations at which high threshold economic activities can operate; 3) secondary cities must be closely related to the agricultural economies of their rural hinterlands; and 4) attention must be given to improving transportation and other communication between large metropolitan centers, secondary cities, and smaller cities and towns. The continued concentration of people and economic activities in vast megalopolitan areas will continue to generate serious economic and social problems that may help stimulate the evolution of some of these strategies.
NASA Astrophysics Data System (ADS)
Huang, Tsung-Ming; Lin, Wen-Wei; Tian, Heng; Chen, Guan-Hua
2018-03-01
Full spectrum of a large sparse ⊤-palindromic quadratic eigenvalue problem (⊤-PQEP) is considered arguably for the first time in this article. Such a problem is posed by calculation of surface Green's functions (SGFs) of mesoscopic transistors with a tremendous non-periodic cross-section. For this problem, general purpose eigensolvers are not efficient, nor is advisable to resort to the decimation method etc. to obtain the Wiener-Hopf factorization. After reviewing some rigorous understanding of SGF calculation from the perspective of ⊤-PQEP and nonlinear matrix equation, we present our new approach to this problem. In a nutshell, the unit disk where the spectrum of interest lies is broken down adaptively into pieces small enough that they each can be locally tackled by the generalized ⊤-skew-Hamiltonian implicitly restarted shift-and-invert Arnoldi (G⊤SHIRA) algorithm with suitable shifts and other parameters, and the eigenvalues missed by this divide-and-conquer strategy can be recovered thanks to the accurate estimation provided by our newly developed scheme. Notably the novel non-equivalence deflation is proposed to avoid as much as possible duplication of nearby known eigenvalues when a new shift of G⊤SHIRA is determined. We demonstrate our new approach by calculating the SGF of a realistic nanowire whose unit cell is described by a matrix of size 4000 × 4000 at the density functional tight binding level, corresponding to a 8 × 8nm2 cross-section. We believe that quantum transport simulation of realistic nano-devices in the mesoscopic regime will greatly benefit from this work.
Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determinedmore » the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.« less
Can amorphization take place in nanoscale interconnects?
Kumar, S; Joshi, K L; van Duin, A C T; Haque, M A
2012-03-09
The trend of miniaturization has highlighted the problems of heat dissipation and electromigration in nanoelectronic device interconnects, but not amorphization. While amorphization is known to be a high pressure and/or temperature phenomenon, we argue that defect density is the key factor, while temperature and pressure are only the means. For nanoscale interconnects carrying modest current density, large vacancy concentrations may be generated without the necessity of high temperature or pressure due to the large fraction of grain boundaries and triple points. To investigate this hypothesis, we performed in situ transmission electron microscope (TEM) experiments on 200 nm thick (80 nm average grain size) aluminum specimens. Electron diffraction patterns indicate partial amorphization at modest current density of about 10(5) A cm(-2), which is too low to trigger electromigration. Since amorphization results in drastic decrease in mechanical ductility as well as electrical and thermal conductivity, further increase in current density to about 7 × 10(5) A cm(-2) resulted in brittle fracture failure. Our molecular dynamics (MD) simulations predict the formation of amorphous regions in response to large mechanical stresses (due to nanoscale grain size) and excess vacancies at the cathode side of the thin films. The findings of this study suggest that amorphization can precede electromigration and thereby play a vital role in the reliability of micro/nanoelectronic devices.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
Adhikari, Kiran; Otaki, Joji M
2016-02-01
It is often desirable but difficult to retrieve information on the mature phenotype of an immature tissue sample that has been subjected to gene expression analysis. This problem cannot be ignored when individual variation within a species is large. To circumvent this problem in the butterfly wing system, we developed a new surgical method for removing a single forewing from a pupa using Junonia orithya; the operated pupa was left to develop to an adult without eclosion. The removed right forewing was subjected to gene expression analysis, whereas the non-removed left forewing was examined for color patterns. As a test case, we focused on Distal-less (Dll), which likely plays an active role in inducing elemental patterns, including eyespots. The Dll expression level in forewings was paired with eyespot size data from the same individual. One third of the operated pupae survived and developed wing color patterns. Dll expression levels were significantly higher in males than in females, although male eyespots were smaller in size than female eyespots. Eyespot size data showed weak but significant correlations with the Dll expression level in females. These results demonstrate that a single-wing removal method was successfully applied to the butterfly wing system and suggest the weak and non-exclusive contribution of Dll to eyespot size determination in this butterfly. Our novel methodology for establishing correspondence between gene expression and phenotype can be applied to other candidate genes for color pattern development in butterflies. Conceptually similar methods may also be applicable in other developmental systems.
An efficient quantum scheme for Private Set Intersection
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
Private Set Intersection allows a client to privately compute set intersection with the collaboration of the server, which is one of the most fundamental and key problems within the multiparty collaborative computation of protecting the privacy of the parties. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in Cloud or large-scale client-server networks.
Validation of the Transient Structural Response of a Threaded Assembly: Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.
2004-04-01
This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.
High Accuracy, Two-Dimensional Read-Out in Multiwire Proportional Chambers
DOE R&D Accomplishments Database
Charpak, G.; Sauli, F.
1973-02-14
In most applications of proportional chambers, especially in high-energy physics, separate chambers are used for measuring different coordinates. In general one coordinate is obtained by recording the pulses from the anode wires around which avalanches have grown. Several methods have been imagined for obtaining the position of an avalanche along a wire. In this article a method is proposed which leads to the same range of accuracies and may be preferred in some cases. The problem of accurate measurements for large-size chamber is also discussed.
Y-MP floating point and Cholesky factorization
NASA Technical Reports Server (NTRS)
Carter, Russell
1991-01-01
The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for Cholesky factorization algorithm, and it is found that the source of the difference is the subtract magnitude operation of the Cray Y-MP. The results from numerical experiments for a range of problem sizes are presented, and an efficient method for improving the accuracy of the factorization obtained on the Y-MP is presented.
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to described an image compression transform coder based on artificial neural networks techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.
Atomic-scale epitaxial aluminum film on GaAs substrate
NASA Astrophysics Data System (ADS)
Fan, Yen-Ting; Lo, Ming-Cheng; Wu, Chu-Chun; Chen, Peng-Yu; Wu, Jenq-Shinn; Liang, Chi-Te; Lin, Sheng-Di
2017-07-01
Atomic-scale metal films exhibit intriguing size-dependent film stability, electrical conductivity, superconductivity, and chemical reactivity. With advancing methods for preparing ultra-thin and atomically smooth metal films, clear evidences of the quantum size effect have been experimentally collected in the past two decades. However, with the problems of small-area fabrication, film oxidation in air, and highly-sensitive interfaces between the metal, substrate, and capping layer, the uses of the quantized metallic films for further ex-situ investigations and applications have been seriously limited. To this end, we develop a large-area fabrication method for continuous atomic-scale aluminum film. The self-limited oxidation of aluminum protects and quantizes the metallic film and enables ex-situ characterizations and device processing in air. Structure analysis and electrical measurements on the prepared films imply the quantum size effect in the atomic-scale aluminum film. Our work opens the way for further physics studies and device applications using the quantized electronic states in metals.
Hemani, H; Warrier, M; Sakthivel, N; Chaturvedi, S
2014-05-01
Molecular dynamics (MD) simulations are used in the study of void nucleation and growth in crystals that are subjected to tensile deformation. These simulations are run for typically several hundred thousand time steps depending on the problem. We output the atom positions at a required frequency for post processing to determine the void nucleation, growth and coalescence due to tensile deformation. The simulation volume is broken up into voxels of size equal to the unit cell size of crystal. In this paper, we present the algorithm to identify the empty unit cells (voids), their connections (void size) and dynamic changes (growth and coalescence of voids) for MD simulations of large atomic systems (multi-million atoms). We discuss the parallel algorithms that were implemented and discuss their relative applicability in terms of their speedup and scalability. We also present the results on scalability of our algorithm when it is incorporated into MD software LAMMPS. Copyright © 2014 Elsevier Inc. All rights reserved.
Acoustic measurement of bubble size and position in a piezo driven inkjet printhead
NASA Astrophysics Data System (ADS)
van der Bos, Arjan; Jeurissen, Roger; de Jong, Jos; Stevens, Richard; Versluis, Michel; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; Lohse, Detlef
2008-11-01
A bubble can be entrained in the ink channel of a piezo-driven inkjet printhead, where it grows by rectified diffusion. If large enough, the bubble counteracts the pressure buildup at the nozzle, resulting in nozzle failure. Here an acoustic sizing method for the volume and position of the bubble is presented. The bubble response is detected by the piezo actuator itself, operating in a sensor mode. The method used to determine the volume and position of the bubble is based on a linear model in which the interaction between the bubble and the channel are included. This model predicts the acoustic signal for a given position and volume of the bubble. The inverse problem is to infer the position and volume of the bubble from the measured acoustic signal. By solving it, we can thus acoustically measure size and position of the bubble. The validity of the presented method is supported by time-resolved optical observations of the dynamics of the bubble within an optically accessible ink-jet channel.
Stability-to-instability transition in the structure of large-scale networks
NASA Astrophysics Data System (ADS)
Hu, Dandan; Ronhovde, Peter; Nussinov, Zohar
2012-12-01
We examine phase transitions between the “easy,” “hard,” and “unsolvable” phases when attempting to identify structure in large complex networks (“community detection”) in the presence of disorder induced by network “noise” (spurious links that obscure structure), heat bath temperature T, and system size N. The partition of a graph into q optimally disjoint subgraphs or “communities” inherently requires Potts-type variables. In earlier work [Philos. Mag.1478-643510.1080/14786435.2011.616547 92, 406 (2012)], when examining power law and other networks (and general associated Potts models), we illustrated that transitions in the computational complexity of the community detection problem typically correspond to spin-glass-type transitions (and transitions to chaotic dynamics in mechanical analogs) at both high and low temperatures and/or noise. The computationally “hard” phase exhibits spin-glass type behavior including memory effects. The region over which the hard phase extends in the noise and temperature phase diagram decreases as N increases while holding the average number of nodes per community fixed. This suggests that in the thermodynamic limit a direct sharp transition may occur between the easy and unsolvable phases. When present, transitions at low temperature or low noise correspond to entropy driven (or “order by disorder”) annealing effects, wherein stability may initially increase as temperature or noise is increased before becoming unsolvable at sufficiently high temperature or noise. Additional transitions between contending viable solutions (such as those at different natural scales) are also possible. Identifying community structure via a dynamical approach where “chaotic-type” transitions were found earlier. The correspondence between the spin-glass-type complexity transitions and transitions into chaos in dynamical analogs might extend to other hard computational problems. In this work, we examine large networks (with a power law distribution in cluster size) that have a large number of communities (q≫1). We infer that large systems at a constant ratio of q to the number of nodes N asymptotically tend towards insolvability in the limit of large N for any positive T. The asymptotic behavior of temperatures below which structure identification might be possible, T×=O[1/lnq], decreases slowly, so for practical system sizes, there remains an accessible, and generally easy, global solvable phase at low temperature. We further employ multivariate Tutte polynomials to show that increasing q emulates increasing T for a general Potts model, leading to a similar stability region at low T. Given the relation between Tutte and Jones polynomials, our results further suggest a link between the above complexity transitions and transitions associated with random knots.
Expert and novice categorization of introductory physics problems
NASA Astrophysics Data System (ADS)
Wolf, Steven Frederick
Since it was first published 30 years ago, Chi et al.'s seminal paper on expert and novice categorization of introductory problems led to a plethora of follow-up studies within and outside of the area of physics [Chi et al. Cognitive Science 5, 121 -- 152 (1981)]. These studies frequently encompass "card-sorting" exercises whereby the participants group problems. The study firmly established the paradigm that novices categorize physics problems by "surface features" (e.g. "incline," "pendulum," "projectile motion,"... ), while experts use "deep structure" (e.g. "energy conservation," "Newton 2,"... ). While this technique certainly allows insights into problem solving approaches, simple descriptive statistics more often than not fail to find significant differences between experts and novices. In most experiments, the clean-cut outcome of the original study cannot be reproduced. Given the widespread implications of the original study, the frequent failure to reproduce its findings warrants a closer look. We developed a less subjective statistical analysis method for the card sorting outcome and studied how the "successful" outcome of the experiment depends on the choice of the original card set. Thus, in a first step, we are moving beyond descriptive statistics, and develop a novel microscopic approach that takes into account the individual identity of the cards and uses graph theory and models to visualize, analyze, and interpret problem categorization experiments. These graphs are compared macroscopically, using standard graph theoretic statistics, and microscopically, using a distance metric that we have developed. This macroscopic sorting behavior is described using our Cognitive Categorization Model. The microscopic comparison allows us to visualize our sorters using Principal Components Analysis and compare the expert sorters to the novice sorters as a group. In the second step, we ask the question: Which properties of problems are most important in problem sets that discriminate experts from novices in a measurable way? We are describing a method to characterize problems along several dimensions, and then study the effectiveness of differently composed problem sets in differentiating experts from novices, using our analysis method. Both components of our study are based on an extensive experiment using a large problem set, which known physics experts and novices categorized according to the original experimental protocol. Both the size of the card set and the size of the sorter pool were larger than in comparable experiments. Based on our analysis method, we find that most of the variation in sorting outcome is not due to the sorter being an expert versus a novice, but rather due to an independent characteristic that we named "stacker" versus "spreader." The fact that the expert-novice distinction only accounts for a smaller amount of the variation may partly explain the frequent null-results when conducting these experiments. In order to study how the outcome depends on the original problem set, our problem set needed to be large so that we could determine how well experts and novices could be discriminated by considering both small subsets using a Monte Carlo approach and larger subsets using Simulated Annealing. This computationally intense study relied on our objective analysis method, as the large combinatorics did not allow for manual analysis of the outcomes from the subsets. We found that the number of questions required to accurately classify experts and novices could be surprisingly small so long as the problem set was carefully crafted to be composed of problems with particular pedagogical and contextual features. In order to discriminate experts from novices in a categorization task, it is important that the problem sets carefully consider three problem properties: The chapters that problems are in (the problems need to be from a wide spectrum of chapters to allow for the original "deep structure" categorization), the processes required to solve the problems (the problems must required different solving strategies), and the difficulty of the problems (the problems must be "easy"). In other words, for the experiment to be "successful," the card set needs to be carefully "rigged" across three property dimensions.
Study on Warm Forging Prosess of 45 Steel Asymmetric Gear
NASA Astrophysics Data System (ADS)
Qi, Yushi; Du, Zhiming; Sun, Hongsheng; Chen, Lihua; Wang, Changshun
2017-09-01
Asymmetric gear has complex structure, so using plastic forming technology to process the gear has problems of large forming load, short die life, bad tooth filling, and so on. To solve these problems, this paper presents a radial warm extrusion process of asymmetric gear to reduce the forming load and improve the filling in the toothed corner portion. Using the new mold and No. 45 steel to conducting forming experiments under the optimal forming parameters: billet temperature is 800°C, mold temperature is 250°C, the forming speed is 30mm/s, and the friction coefficient is 0.15, we can obtain the complete asymmetric gear with better surface and tooth filling. Asymmetric gears’ microstructure analysis and mechanical testing showed that the small grain evenly distributed in the region near the addendum circle with high strength; the area near the central portion of the gear had a coarse grain size, uneven distribution and low strength. Significant metal flow lines at the corner part of the gear indicated that a large number of late-forming metal flowed into the tooth cavity filling the corner portion.
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
Detection and localization of building insulation faults using optical-fiber DTS system
NASA Astrophysics Data System (ADS)
Papes, Martin; Liner, Andrej; Koudelka, Petr; Siska, Petr; Cubik, Jakub; Kepak, Stanislav; Jaros, Jakub; Vasinek, Vladimir
2013-05-01
Nowadays the trends in the construction industry are changing at an incredible speed. The new technologies are still emerging on the market. Sphere of building insulation is not an exception as well. One of the major problems in building insulation is usually its failure, whether caused by unwanted mechanical intervention or improper installation. The localization of these faults is quite difficult, often impossible without large intervention into the construction. As a proper solution for this problem might be utilization of Optical-Fiber DTS system based on stimulated Raman scattering. Used DTS system is primary designed for continuous measurement of the temperature along the optical fiber. This system is using standard optical fiber as a sensor, which brings several advantages in its application. First, the optical fiber is relatively inexpensive, which allows to cover a quite large area for a small cost. The other main advantages of the optical fiber are electromagnetic resistance, small size, safety operation in inflammable or explosive area, easy installation, etc. This article is dealing with the detection and localization of building insulation faults using mentioned system.
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, RAS log only contains limited diagnosis information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer,more » the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.« less
SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps
NASA Astrophysics Data System (ADS)
Xu, Xiwei; Zhang, Changhai
2013-12-01
Now some sequential patterns mining algorithms generate too many candidate sequences, and increase the processing cost of support counting. Therefore, we present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to solve the problem of mining the sequential patterns for large databases. Our method differs from previous related works of mining sequential patterns. The main difference is that the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented firstly. In this paper, First the algorithm generate candidate sequences by SE(Sequence Extension) and IE(Item Extension), and then obtain all frequent sequences by comparing the original bitmap and the extended item bitmap .This method could simplify the problem of mining the sequential patterns and avoid the high processing cost of support counting. Both theories and experiments indicate that the performance of SPMBR is predominant for large transaction databases, the required memory size for storing temporal data is much less during mining process, and all sequential patterns can be mined with feasibility.
Three-dimensional imaging of buried objects in very lossy earth by inversion of VETEM data
Cui, T.J.; Aydiner, A.A.; Chew, W.C.; Wright, D.L.; Smith, D.V.
2003-01-01
The very early time electromagnetic system (VETEM) is an efficient tool for the detection of buried objects in very lossy earth, which allows a deeper penetration depth compared to the ground-penetrating radar. In this paper, the inversion of VETEM data is investigated using three-dimensional (3-D) inverse scattering techniques, where multiple frequencies are applied in the frequency range from 0-5 MHz. For small and moderately sized problems, the Born approximation and/or the Born iterative method have been used with the aid of the singular value decomposition and/or the conjugate gradient method in solving the linearized integral equations. For large-scale problems, a localized 3-D inversion method based on the Born approximation has been proposed for the inversion of VETEM data over a large measurement domain. Ways to process and to calibrate the experimental VETEM data are discussed to capture the real physics of buried objects. Reconstruction examples using synthesized VETEM data and real-world VETEM data are given to test the validity and efficiency of the proposed approach.
Joint optimization of green vehicle scheduling and routing problem with time-varying speeds
Zhang, Dezhi; Wang, Xin; Ni, Nan; Zhang, Zhuo
2018-01-01
Based on an analysis of the congestion effect and changes in the speed of vehicle flow during morning and evening peaks in a large- or medium-sized city, the piecewise function is used to capture the rules of the time-varying speed of vehicles, which are very important in modelling their fuel consumption and CO2 emission. A joint optimization model of the green vehicle scheduling and routing problem with time-varying speeds is presented in this study. Extra wages during nonworking periods and soft time-window constraints are considered. A heuristic algorithm based on the adaptive large neighborhood search algorithm is also presented. Finally, a numerical simulation example is provided to illustrate the optimization model and its algorithm. Results show that, (1) the shortest route is not necessarily the route that consumes the least energy, (2) the departure time influences the vehicle fuel consumption and CO2 emissions and the optimal departure time saves on fuel consumption and reduces CO2 emissions by up to 5.4%, and (3) extra driver wages have significant effects on routing and departure time slot decisions. PMID:29466370
Joint optimization of green vehicle scheduling and routing problem with time-varying speeds.
Zhang, Dezhi; Wang, Xin; Li, Shuangyan; Ni, Nan; Zhang, Zhuo
2018-01-01
Based on an analysis of the congestion effect and changes in the speed of vehicle flow during morning and evening peaks in a large- or medium-sized city, the piecewise function is used to capture the rules of the time-varying speed of vehicles, which are very important in modelling their fuel consumption and CO2 emission. A joint optimization model of the green vehicle scheduling and routing problem with time-varying speeds is presented in this study. Extra wages during nonworking periods and soft time-window constraints are considered. A heuristic algorithm based on the adaptive large neighborhood search algorithm is also presented. Finally, a numerical simulation example is provided to illustrate the optimization model and its algorithm. Results show that, (1) the shortest route is not necessarily the route that consumes the least energy, (2) the departure time influences the vehicle fuel consumption and CO2 emissions and the optimal departure time saves on fuel consumption and reduces CO2 emissions by up to 5.4%, and (3) extra driver wages have significant effects on routing and departure time slot decisions.
Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H
2014-05-28
Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N(3)) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMPI implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
The Triggering of Large-Scale Waves by CME Initiation
NASA Astrophysics Data System (ADS)
Forbes, Terry
Studies of the large-scale waves generated at the onset of a coronal mass ejection (CME) can provide important information about the processes in the corona that trigger and drive CMEs. The size of the region where the waves originate can indicate the location of the magnetic forces that drive the CME outward, and the rate at which compressive waves steepen into shocks can provide a measure of how the driving forces develop in time. However, in practice it is difficult to separate the effects of wave formation from wave propagation. The problem is particularly acute for the corona because of the multiplicity of wave modes (e.g. slow versus fast MHD waves) and the highly nonuniform structure of the solar atmosphere. At the present time large-scale numerical simulations provide the best hope for deconvolving wave propagation and formation effects from one another.
Full-field Strain Methods for Investigating Failure Mechanisms in Triaxial Braided Composites
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Goldberg, Robert K.; Roberts, Gary D.
2008-01-01
Composite materials made with triaxial braid architecture and large tow size carbon fibers are beginning to be used in many applications, including composite aircraft and engine structures. Recent advancements in braiding technology have led to commercially viable manufacturing approaches for making large structures with complex shape. Although the large unit cell size of these materials is an advantage for manufacturing efficiency, the fiber architecture presents some challenges for materials characterization, design, and analysis. In some cases, the static load capability of structures made using these materials has been higher than expected based on material strength properties measured using standard coupon tests. A potential problem with using standard tests methods for these materials is that the unit cell size can be an unacceptably large fraction of the specimen dimensions. More detailed investigation of deformation and failure processes in large unit cell size triaxial braid composites is needed to evaluate the applicability of standard test methods for these materials and to develop alternative testing approaches. In recent years, commercial equipment has become available that enables digital image correlation to be used on a more routine basis for investigation of full field 3D deformation in materials and structures. In this paper, some new techniques that have been developed to investigate local deformation and failure using digital image correlation techniques are presented. The methods were used to measure both local and global strains during standard straight-sided coupon tensile tests on composite materials made with 12 and 24 k yarns and a 0/+60/-60 triaxial braid architecture. Local deformation and failure within fiber bundles was observed, and this local failure had a significant effect on global stiffness and strength. The matrix material had a large effect on local damage initiation for the two matrix materials used in this investigation. Premature failure in regions of the unit cell near the edge of the straight-sided specimens was observed for transverse tensile tests in which the braid axial fibers were perpendicular to the specimen axis and the bias fibers terminated on the cut edges in the specimen gage section. This edge effect is one factor that could contribute to a measured strength that is lower than the actual material strength in a structure without edge effects.
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we originally propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
Thermographic Imaging of Defects in Anisotropic Composites
NASA Technical Reports Server (NTRS)
Plotnikov, Y. A.; Winfree, W. P.
2000-01-01
Composite materials are of increasing interest to the aerospace industry as a result of their weight versus performance characteristics. One of the disadvantages of composites is the high cost of fabrication and post inspection with conventional ultrasonic scanning systems. The high cost of inspection is driven by the need for scanning systems which can follow large curve surfaces. Additionally, either large water tanks or water squirters are required to couple the ultrasonics into the part. Thermographic techniques offer significant advantages over conventional ultrasonics by not requiring physical coupling between the part and sensor. The thermographic system can easily inspect large curved surface without requiring a surface following scanner. However, implementation of Thermal Nondestructive Evaluations (TNDE) for flaw detection in composite materials and structures requires determining its limit. Advanced algorithms have been developed to enable locating and sizing defects in carbon fiber reinforced plastic (CFRP). Thermal Tomography is a very promising method for visualizing the size and location of defects in materials such as CFRP. However, further investigations are required to determine its capabilities for inspection of thick composites. In present work we have studied influence of the anisotropy on the reconstructed image of a defect generated by an inversion technique. The composite material is considered as homogeneous with macro properties: thermal conductivity K, specific heat c, and density rho. The simulation process involves two sequential steps: solving the three dimensional transient heat diffusion equation for a sample with a defect, then estimating the defect location and size from the surface spatial and temporal thermal distributions (inverse problem), calculated from the simulations.
Klein, Angela S; Skinner, Jeremy B; Hawley, Kristin M
2013-12-01
The current study examined two condensed adaptations of dialectical behavior therapy (DBT) for binge eating. Women with full- or sub-threshold variants of either binge eating disorder or bulimia nervosa were randomly assigned to individually supported self-monitoring using adapted DBT diary cards (DC) or group-based DBT, each 15 sessions over 16 weeks. DC sessions focused on problem-solving diary card completion issues, praising diary card completion, and supporting nonjudgmental awareness of eating-related habits and urges, but not formally teaching DBT skills. Group-based DBT included eating mindfulness, progressing through graded exposure; mindfulness, emotion regulation, and distress tolerance skills; and coaching calls between sessions. Both treatments evidenced large and significant improvements in binge eating, bulimic symptoms, and interoceptive awareness. For group-based DBT, ineffectiveness, drive for thinness, body dissatisfaction, and perfectionism also decreased significantly, with medium to large effect sizes. For DC, results were not significant but large in effect size for body dissatisfaction and medium in effect size for ineffectiveness and drive for thinness. Retention for both treatments was higher than recent trends for eating disorder treatment in fee-for-service practice and for similar clinic settings, but favored DC, with the greater attrition of group-based DBT primarily attributed to its more intensive and time-consuming nature, and dropout overall associated with less pretreatment impairment and greater interoceptive awareness. This preliminary investigation suggests that with both abbreviated DBT-based treatments, substantial improvement in core binge eating symptoms is possible, enhancing potential avenues for implementation beyond more time-intensive DBT.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap. g min, = O(n 2(exp -n/2), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to 'the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum exitation gap, gmin = O(n2(sup -n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work address the min-cut balanced circuit partitioning problem- dividing the graph that models the circuit into almost equal sized k sub-graphs while minimizing the number of edges cut i.e. minimizing the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Experimental studies in the literature have shown the problem to be NP-hard and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another guidance for future research.
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phrase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for the biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which widely exists as overfitting in using high dimensional data where the number of training samples is smaller than the dimensionality of the samples due to limitation of require time, cost, and human resources for field surveys. A common approach to addressing this problem is reducing the dimensionality of dataset. Also, acquired hyperspectral data usually have low signal-to-noise ratio due to a narrow bandwidth and local or global shifts of peaks due to instrumental instability or small differences in considering practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that select optimal bands for the biomass prediction model with encouraging sparsity and grouping, which solves the small-sample-size problem by the dimensionality reduction from the sparsity and the noise and peak shift problem by the grouping. The prediction model provided higher accuracy with root-mean-square error (RMSE) of 66.16 t/ha in the cross-validation than other methods; multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from texture index increased the prediction accuracy with RMSE of 62.62 t/ha. This analysis proves efficiency of fused lasso and image texture in biomass estimation of tropical forests.
Comparison of eigensolvers for symmetric band matrices.
Moldaschl, Michael; Gansterer, Wilfried N
2014-09-15
We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard that requires Video Object Planes (VOPs). Generation of VOPs for any kind of video sequence is still a challenging problem that largely remains unsolved. Nevertheless, if this problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices is one such domain where the opposite goals namely low power and high throughput are required to be met. Efficient memory management plays a major role in reducing the power consumption. Specifically, efficient memory management for VOPs is difficult because the lifetimes of these objects vary and these life times may be overlapping. Varying life times of the objects requires dynamic memory management where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem by following a combination of strategy, policy and mechanism. For MPEG4 based mobile devices that lack instruction processors, a hardware based memory management solution is necessary. In MPEG4 based mobile devices that have a RISC processor, using a Real time operating system (RTOS) for this memory management task is not expected to be efficient because the strategies and policies used by the ROTS is often tuned for handling memory segments of smaller sizes compared to object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered and an efficient combination is proposed for the case of VOP memory management along with a hardware architecture, which can handle the proposed combination.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.