On the minimum orbital intersection distance computation: a new effective method
NASA Astrophysics Data System (ADS)
Hedo, José M.; Ruíz, Manuel; Peláez, Jesús
2018-06-01
The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
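A minimal sketch of the minimum action idea, applied to an overdamped double-well Langevin toy model rather than the quasi-geostrophic system studied in the paper; the drift, endpoints, and discretization below are illustrative assumptions.

```python
# Minimum action sketch: discretize the Freidlin-Wentzell / Onsager-Machlup action
# for dX = b(X) dt + sqrt(2*eps) dW on a double-well potential and minimize it over
# paths with fixed endpoints. Toy stand-in for the paper's geophysical models.
import numpy as np
from scipy.optimize import minimize

def drift(x):
    return x - x**3                      # b(x) = -V'(x) for V(x) = x**4/4 - x**2/2

T, n = 10.0, 200                         # horizon and number of time steps
dt = T / n
x_a, x_b = -1.0, 1.0                     # the two attractors (fixed path endpoints)

def action(interior):
    path = np.concatenate(([x_a], interior, [x_b]))
    vel = np.diff(path) / dt
    mid = 0.5 * (path[:-1] + path[1:])
    return 0.5 * np.sum((vel - drift(mid))**2) * dt   # S[x] = 1/2 * integral |x' - b(x)|^2 dt

x0 = np.linspace(x_a, x_b, n + 1)[1:-1]  # straight-line initial guess
res = minimize(action, x0, method="L-BFGS-B")
instanton = np.concatenate(([x_a], res.x, [x_b]))
print("minimum action:", res.fun)        # approaches 2*Delta V = 0.5 for this potential as T grows
```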
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al. (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
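For context, a hedged sketch of the baseline the paper seeks to avoid: computing the asymptotic Kalman gain by a dense discrete algebraic Riccati solve. The matrices A, C, Q, and R are small placeholders, not an actual AO system model.

```python
# Baseline (non-approximated) Kalman gain for a linear time-invariant model
# x_{k+1} = A x_k + w_k, y_k = C x_k + v_k. This dense Riccati solve is the step
# whose cost motivates the paper's approximation; A, C, Q, R are placeholders.
import numpy as np
from scipy.linalg import solve_discrete_are

n_state, n_meas = 50, 20
rng = np.random.default_rng(0)
A = 0.99 * np.eye(n_state)                    # near-integrator phase dynamics (placeholder)
C = rng.standard_normal((n_meas, n_state))    # measurement (wavefront sensor) matrix
Q = np.eye(n_state) * 1e-2                    # process noise covariance
R = np.eye(n_meas) * 1e-1                     # measurement noise covariance

# Steady-state prediction covariance P from the filter-form discrete Riccati equation.
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # asymptotic Kalman gain
print("Kalman gain shape:", K.shape)
```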
HR 7578 - A K dwarf double-lined spectroscopic binary with peculiar abundances
NASA Technical Reports Server (NTRS)
Fekel, F. C., Jr.; Beavers, W. I.
1983-01-01
The number of double-lined K and M dwarf binaries currently known is quite small, only a dozen or fewer of each type. The HR 7578 system was classified as dK5 on the Mount Wilson system and as K2 V on the MK system. A summary of radial-velocity measurements including the observatory and weight of each observation is given in a table. The star with the stronger lines has been called component A. The final orbital element solution with all observations appropriately weighted was computed with a differential corrections computer program described by Barker et al. (1967). The program had been modified for the double-lined case. Of particular interest are the very large eccentricity of the system and the large minimum masses for each component. These large minimum masses suggest that eclipses may be detectable despite the relatively long period and small radii of the stars.
NASA Technical Reports Server (NTRS)
Martin, M. W.; Kubiak, E. T.
1982-01-01
A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1962-01-01
The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by relying on the UESA (Unimodal Error Surface Assumption), whereby the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. However, these BMAs are easily trapped in local minima, especially for large motions, and result in poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but improves PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motions, at half the computational load.
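A minimal sketch of the conventional diamond search that the proposed FADS builds on; the adaptive search origin described in the abstract is represented only by the start argument, and the block size and frame handling are assumptions.

```python
# Minimal diamond search (DS) block matching sketch. The 'start' argument mimics
# moving the search origin to a predicted motion vector (e.g. from neighbor MVs),
# as the abstract describes; block size and bounds handling are illustrative.
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, bx, by, dx, dy, B):
    h, w = ref.shape
    if not (0 <= by + dy <= h - B and 0 <= bx + dx <= w - B):
        return np.inf                                   # candidate falls outside the frame
    block = cur[by:by + B, bx:bx + B]
    cand = ref[by + dy:by + dy + B, bx + dx:bx + dx + B]
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def diamond_search(cur, ref, bx, by, B=16, start=(0, 0)):
    """Return the motion vector (dx, dy) for the block at (bx, by)."""
    cx, cy = start                                      # search origin, e.g. from neighbor MVs
    while True:
        costs = [sad(cur, ref, bx, by, cx + dx, cy + dy, B) for dx, dy in LDSP]
        best = int(np.argmin(costs))
        if best == 0:                                   # minimum at the center: refine with SDSP
            costs = [sad(cur, ref, bx, by, cx + dx, cy + dy, B) for dx, dy in SDSP]
            dx, dy = SDSP[int(np.argmin(costs))]
            return cx + dx, cy + dy
        cx, cy = cx + LDSP[best][0], cy + LDSP[best][1]  # re-center the large diamond
```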
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
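As an illustration of one of the heuristics compared, a short sketch of Min-min scheduling under an assumed expected-time-to-compute matrix; the data and tie-breaking details are placeholders.

```python
# Sketch of the Min-min heuristic named in the abstract: repeatedly pick the
# (task, machine) pair with the smallest completion time among the remaining tasks.
# The expected-time-to-compute matrix 'etc' is a random placeholder.
import numpy as np

def min_min(etc):
    """etc[i, j] = expected execution time of task i on machine j."""
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)             # machine ready (availability) times
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        best_task, best_machine, best_ct = None, None, np.inf
        for i in unscheduled:
            ct = ready + etc[i]               # completion times of task i on every machine
            j = int(np.argmin(ct))
            if ct[j] < best_ct:
                best_task, best_machine, best_ct = i, j, ct[j]
        schedule[best_task] = best_machine
        ready[best_machine] = best_ct
        unscheduled.remove(best_task)
    return schedule, ready.max()              # assignment and makespan

etc = np.random.default_rng(1).uniform(1, 10, size=(20, 4))
assignment, makespan = min_min(etc)
print("makespan:", makespan)
```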
The Solar System Large Planets influence on a new Maunder Minimum
NASA Astrophysics Data System (ADS)
Yndestad, Harald; Solheim, Jan-Erik
2016-04-01
In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later reconstructions of solar activity confirm the grand minima Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with reduced irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods over a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. An expected new Maunder-type period is based on the properties of solar variability. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on a wavelet spectrum analysis to identify stationary periods, coincidence periods, and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.
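A generic Morlet wavelet power spectrum, sketched as a stand-in for the kind of wavelet spectrum analysis described above; the synthetic series and scale grid are illustrative, not the authors' TSI or sunspot data.

```python
# Generic Morlet wavelet power spectrum used to locate dominant periods in a
# yearly series. The synthetic "sunspot-like" series is a placeholder.
import numpy as np

def morlet_power(signal, dt, periods, omega0=6.0):
    """Return |CWT|^2 for a real signal at the requested Fourier periods (same units as dt)."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    power = np.empty((len(periods), n))
    for k, p in enumerate(periods):
        s = p * (omega0 + np.sqrt(2 + omega0**2)) / (4 * np.pi)   # scale for a given Fourier period
        psi = np.pi**-0.25 * np.exp(1j * omega0 * t / s) * np.exp(-0.5 * (t / s)**2) / np.sqrt(s)
        coef = np.convolve(signal - signal.mean(), np.conj(psi[::-1]), mode="same") * dt
        power[k] = np.abs(coef)**2
    return power

years = np.arange(1000, 2018)
series = np.sin(2 * np.pi * (years - 1000) / 11.0) \
         + 0.3 * np.random.default_rng(2).standard_normal(len(years))
periods = np.arange(5, 120)                      # candidate periods in years
spec = morlet_power(series, dt=1.0, periods=periods)
print("dominant period:", periods[spec.mean(axis=1).argmax()], "years")
```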
20 CFR 404.261 - Computing your special minimum primary insurance amount.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your special minimum primary..., SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary Insurance Amounts § 404.261 Computing your special minimum primary insurance amount. (a) Years of coverage...
Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M
2014-01-01
Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
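A minimal sketch of why delay and discounting inflate no-net-loss multipliers, under the simplifying assumptions of a unit annual loss in perpetuity and an equal, delayed annual gain per offset unit; this is in the spirit of the paper's argument, not its exact model.

```python
# Minimal illustration of delay- and discount-driven offset multipliers. Assumes the
# damaged site would have delivered one unit of biodiversity value per year forever,
# while each unit of offset area delivers the same annual value only after a
# restoration delay, with a given probability of success.
def minimum_multiplier(discount_rate, delay_years, success_prob=1.0):
    pv_loss = (1 + discount_rate) / discount_rate                   # perpetuity starting now
    pv_gain = pv_loss * (1 + discount_rate) ** (-delay_years)       # same perpetuity, delayed
    return pv_loss / (pv_gain * success_prob)

for r, d, q in [(0.03, 20, 1.0), (0.03, 50, 0.8), (0.05, 50, 0.5)]:
    print(f"r={r}, delay={d} y, success={q}: multiplier >= {minimum_multiplier(r, d, q):.1f}")
```

Even this stripped-down calculation produces multipliers in the dozens once delays of several decades and modest success probabilities are combined, consistent with the abstract's conclusion.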
NASA Technical Reports Server (NTRS)
Stack, S. H.
1981-01-01
A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.
Large space structure damping design
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Haviland, J. K.
1983-01-01
Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
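A hedged sketch of one ingredient mentioned above, computing complex eigenvalues of a damped system from a first-order state-space form; the two-degree-of-freedom matrices are placeholders, not a space-structure model.

```python
# Complex eigenvalues of a damped structural system M x'' + C x' + K x = 0 via the
# first-order state-space form. The 2-DOF matrices below are illustrative only.
import numpy as np

M = np.diag([2.0, 1.0])                         # mass matrix
K = np.array([[6.0, -2.0], [-2.0, 4.0]])        # stiffness matrix
C = 0.05 * M + 0.01 * K                         # proportional (Rayleigh) damping

n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
eigvals, eigvecs = np.linalg.eig(A)             # complex conjugate pairs
for s in eigvals[np.argsort(np.abs(eigvals.imag))][::2]:
    wn = abs(s)
    print(f"natural frequency {wn:.3f} rad/s, damping ratio {-s.real / wn:.4f}")
```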
Computational and experimental studies of LEBUs at high device Reynolds numbers
NASA Technical Reports Server (NTRS)
Bertelrud, Arild; Watson, R. D.
1988-01-01
The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.
Prinz, P; Ronacher, B
2002-08-01
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation to the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large than the minimum integration times reported here.
NASA Astrophysics Data System (ADS)
Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine
2018-05-01
The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
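A short sketch of the margin-of-error criterion: the minimum number of realizations at a given RVE size follows from the desired confidence-interval half-width on the mean property. The standard deviation, mean, and tolerance are assumed values.

```python
# Minimum number of RVE realizations n such that the confidence-interval half-width
# (margin of error) on the mean property falls below a desired precision.
import math
from scipy.stats import norm

def min_realizations(sample_std, relative_tolerance, mean_value, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)                # two-sided critical value
    allowed_error = relative_tolerance * mean_value   # absolute margin of error
    return math.ceil((z * sample_std / allowed_error) ** 2)

# Example: modulus estimates from pilot RVE runs with mean 3.2 GPa and std 0.15 GPa,
# requiring the mean to be known within 1 % at 95 % confidence.
print(min_realizations(sample_std=0.15, relative_tolerance=0.01, mean_value=3.2))
```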
20 CFR 704.103 - Removal of certain minimums when computing or paying compensation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Removal of certain minimums when computing or... PROVISIONS FOR LHWCA EXTENSIONS Defense Base Act § 704.103 Removal of certain minimums when computing or... benefits are to be computed under section 9 of the LHWCA, 33 U.S.C. 909, shall not apply in computing...
Effect of local minima on adiabatic quantum optimization.
Amin, M H S
2008-04-04
We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.
Piro, M. H. A.; Simunovic, S.
2016-03-17
Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum and that this is achieved with satisfactory computational performance becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N^3) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
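For orientation, a plain constrained-minimization sketch of the underlying problem (ideal-gas Gibbs energy with element balances) using a generic NLP solver; this is not the Branch-and-Bound or structured-Hessian method reviewed in the paper, and the species data are placeholders.

```python
# Minimize the dimensionless Gibbs energy of an ideal-gas mixture subject to element
# balances. Species, standard potentials, and feed are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

# Species: H2, O2, H2O ; elements: H, O
mu0 = np.array([0.0, 0.0, -40.0])          # standard chemical potentials mu0/(RT), illustrative
A = np.array([[2, 0, 2],                   # H atoms per molecule of each species
              [0, 2, 1]])                  # O atoms per molecule of each species
b = np.array([4.0, 2.0])                   # element totals for a 2 H2 + 1 O2 feed

def gibbs(n):
    n = np.clip(n, 1e-12, None)            # keep the logarithms finite
    return np.sum(n * (mu0 + np.log(n / n.sum())))

cons = [{"type": "eq", "fun": lambda n: A @ n - b}]
res = minimize(gibbs, x0=np.array([1.0, 0.5, 1.0]), bounds=[(1e-12, None)] * 3,
               constraints=cons, method="SLSQP")
print("equilibrium moles (H2, O2, H2O):", res.x)
```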
Code of Federal Regulations, 2010 CFR
2010-04-01
... RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE When... annuity rate under the overall minimum. A spouse's inclusion in the computation of the overall minimum...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
20 CFR 225.15 - Overall Minimum PIA.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Security Act based on combined railroad and social security earnings. The Overall Minimum PIA is used in computing the social security overall minimum guaranty amount. The overall minimum guaranty rate annuity... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Employee, Spouse and Divorced Spouse Annuities § 225...
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
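A minimal sketch of the decision rule described above: remap when the running estimate of W(n) stops decreasing. The degradation model and remapping cost are synthetic assumptions.

```python
# Track W(n) = (accumulated degradation since the last remap + remapping cost) / n
# and trigger a remap when W(n) passes its minimum. Degradation and cost are synthetic.
import numpy as np

def run_with_remapping(step_degradation, remap_cost, total_steps=10_000):
    """step_degradation(k) = extra cost of the k-th step since the last remap."""
    remap_points, acc, best_w, n = [], 0.0, np.inf, 0
    for step in range(1, total_steps):
        n += 1
        acc += step_degradation(n)
        w = (acc + remap_cost) / n
        if w > best_w:                    # W(n) has passed its minimum: remap now
            remap_points.append(step)
            acc, best_w, n = 0.0, np.inf, 0
        else:
            best_w = w
    return remap_points

# Example: degradation grows linearly after each remap; remapping costs 50 steps' work.
points = run_with_remapping(lambda n: 0.1 * n, remap_cost=50.0)
print("first remap steps:", points[:5], "-> interval of about", points[1] - points[0], "steps")
```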
Thruput Analysis of AFLC CYBER 73 Computers.
1981-12-01
Ref 2:14). This decision permitted a fast conversion effort with minimum programmer/analyst experience (Ref 34). Recently, as the conversion effort ... converted (Ref 1:2). Moreover, many of the large data-file and machine-time-consuming systems were not included in the earlier ... by LMT personnel revealed that during certain periods, i.e., 0000-0800, the machine is normally reserved for the large resource-consuming programs
Dynamics of flexible bodies in tree topology - A computer oriented approach
NASA Technical Reports Server (NTRS)
Singh, R. P.; Vandervoort, R. J.; Likins, P. W.
1984-01-01
An approach suited for automatic generation of the equations of motion for large mechanical systems (i.e., large space structures, mechanisms, robots, etc.) is presented. The system topology is restricted to a tree configuration. The tree is defined as an arbitrary set of rigid and flexible bodies connected by hinges characterizing relative translations and rotations of two adjoining bodies. The equations of motion are derived via Kane's method. The resulting equation set is of minimum dimension. Dynamical equations are imbedded in a computer program called TREETOPS. Extensive control simulation capability is built in the TREETOPS program. The simulation is driven by an interactive set-up program resulting in an easy to use analysis tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loef, P.A.; Smed, T.; Andersson, G.
The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
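A generic sketch of one way to obtain the minimum singular value and singular vectors of a sparse Jacobian with a single reused LU factorization, in the spirit of the abstract but not necessarily the authors' exact algorithm; the Jacobian is a random placeholder.

```python
# Inverse power iteration on (J^T J)^{-1} using one sparse LU factorization of J,
# reused every iteration, to estimate the minimum singular value and vectors.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def min_singular(J, iters=50, seed=0):
    lu = splu(J.tocsc())                       # factorize once, reuse for all solves
    v = np.random.default_rng(seed).standard_normal(J.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        y = lu.solve(v, trans="T")             # solve J^T y = v
        x = lu.solve(y)                        # solve J x = y, so x = (J^T J)^{-1} v
        lam = np.linalg.norm(x)                # power iteration on (J^T J)^{-1}
        v = x / lam
    sigma = 1.0 / np.sqrt(lam)                 # smallest singular value
    u = J @ v / sigma                          # corresponding left singular vector
    return sigma, u, v

J = sp.random(1000, 1000, density=0.005, random_state=1) + sp.eye(1000)  # placeholder Jacobian
sigma_min, u, v = min_singular(J.tocsc())
print("estimated minimum singular value:", sigma_min)
```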
20 CFR 404.260 - Special minimum primary insurance amounts.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... 404.260 Section 404.260 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary... compute your primary insurance amount, if the special minimum primary insurance amount described in § 404...
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
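A short sketch of the MacWilliams identity used here: the dual code's weight distribution computed from the code's weight distribution via Krawtchouk polynomials, checked on the [7,4] Hamming code rather than the RS codes of the abstract.

```python
# MacWilliams transform for binary codes: B_j = (1/|C|) * sum_i A_i * K_j(i),
# where K_j is the Krawtchouk polynomial of the length-n binary Hamming scheme.
from math import comb

def krawtchouk(n, j, i):
    """K_j(i) for the binary Hamming scheme of length n."""
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def dual_weight_distribution(A, k):
    """A[i] = number of codewords of weight i in an [n, k] binary code."""
    n = len(A) - 1
    size = 2 ** k
    return [sum(A[i] * krawtchouk(n, j, i) for i in range(n + 1)) // size
            for j in range(n + 1)]

# [7,4] Hamming code: weights 0,3,4,7 with counts 1,7,7,1; its dual is the [7,3] simplex code.
A = [1, 0, 0, 7, 7, 0, 0, 1]
print(dual_weight_distribution(A, k=4))   # expect [1, 0, 0, 0, 7, 0, 0, 0]
```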
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazante, Alexandre P., E-mail: abazante@chem.ufl.edu; Bartlett, Rodney J.; Davidson, E. R.
The benzene radical anion is studied with ab initio coupled-cluster theory in large basis sets. Unlike the usual assumption, we find that, at the level of theory investigated, the minimum energy geometry is non-planar with tetrahedral distortion at two opposite carbon atoms. The anion is well known for its instability to auto-ionization, which poses computational challenges to determining its properties. Despite the importance of the benzene radical anion, the considerable attention it has received in the literature so far has failed to address the details of its structure and shape-resonance character at a high level of theory. Here, we examine the dynamic Jahn-Teller effect and its impact on the anion potential energy surface. We find that a minimum energy geometry of C2 symmetry is located below one D2h stationary point on a C2h pseudo-rotation surface. The applicability of standard wave function methods to an unbound anion is assessed with the stabilization method. The isotropic hyperfine splitting constants (Aiso) are computed and compared to data obtained from experimental electron spin resonance experiments. Satisfactory agreement with experiment is obtained with coupled-cluster theory and large basis sets such as cc-pCVQZ.
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
Code of Federal Regulations, 2010 CFR
2010-04-01
... annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate... second month after the month the child's disability ends, if the child is 18 years old or older, and not...
Computing smallest intervention strategies for multiple metabolic networks in a boolean model.
Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya
2015-02-01
This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
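A minimal matrix-free sketch in the element-by-element spirit: GMRES driven by a LinearOperator whose matvec sums element contributions without assembling the global matrix. The 1-D elements and boundary treatment are placeholder assumptions, not the paper's flow formulation.

```python
# Matrix-free GMRES: the global matrix is never assembled; only an element-by-element
# matrix-vector product is exposed through a LinearOperator. 1-D Poisson placeholders.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n_elem = 40
n_dof = n_elem + 1
ke = np.array([[1.0, -1.0], [-1.0, 1.0]])        # 1-D linear element "stiffness" matrix

def matvec(u):
    """Element-by-element product A @ u without assembling A."""
    out = np.zeros_like(u)
    for e in range(n_elem):
        dofs = [e, e + 1]
        out[dofs] += ke @ u[dofs]
    out[0] = u[0]                                 # Dirichlet condition at the left end
    return out

A = LinearOperator((n_dof, n_dof), matvec=matvec, dtype=float)
b = np.full(n_dof, 1.0 / n_elem)
b[0] = 0.0
x, info = gmres(A, b, restart=50, maxiter=500)
print("GMRES converged:", info == 0)
```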
Relationship between fluid bed aerosol generator operation and the aerosol produced
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, R.L.; Yerkes, K.
1980-12-01
The relationships between bed operation in a fluid bed aerosol generator and aerosol output were studied. A two-inch diameter fluid bed aerosol generator (FBG) was constructed using stainless steel powder as a fluidizing medium. Fly ash from coal combustion was aerosolized and the influence of FBG operating parameters on aerosol mass median aerodynamic diameter (MMAD), geometric standard deviation (σg) and concentration was examined. In an effort to extend observations on large fluid beds to small beds using fine bed particles, minimum fluidizing velocities and elutriation constants were computed. Although the FBG minimum fluidizing velocity agreed well with calculations, the FBG elutriation constant did not. The results of this study show that the properties of aerosols produced by a FBG depend on fluid bed height and air flow through the bed after the minimum fluidizing velocity is exceeded.
Mohajerani, Pouyan; Ntziachristos, Vasilis
2013-07-01
The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.
Design of transonic airfoil sections using a similarity theory
NASA Technical Reports Server (NTRS)
Nixon, D.
1978-01-01
A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
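A toy sketch of the MILP idea on a four-reaction network, using binary on/off indicators and a required target flux; the network, bounds, and solver defaults (PuLP/CBC) are assumptions, and the paper's formulation is richer.

```python
# Minimize the number of active reactions (binary indicators) subject to steady-state
# mass balance S v = 0 and a required target flux. Toy network, not a genome-scale model.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

reactions = ["R1_uptake_A", "R2_A_to_B", "R3_A_to_B_alt", "R4_export_B"]
S = {  # stoichiometric matrix: metabolite -> {reaction: coefficient}
    "A": {"R1_uptake_A": 1, "R2_A_to_B": -1, "R3_A_to_B_alt": -1},
    "B": {"R2_A_to_B": 1, "R3_A_to_B_alt": 1, "R4_export_B": -1},
}
UB = 10.0

prob = LpProblem("minimum_subnetwork", LpMinimize)
v = {r: LpVariable(f"v_{r}", lowBound=0, upBound=UB) for r in reactions}   # fluxes
y = {r: LpVariable(f"y_{r}", cat="Binary") for r in reactions}             # reaction kept?

prob += lpSum(y.values())                                   # minimize the number of active reactions
for met, row in S.items():                                  # steady state: S v = 0
    prob += lpSum(coef * v[r] for r, coef in row.items()) == 0
for r in reactions:                                         # flux allowed only if the reaction is kept
    prob += v[r] <= UB * y[r]
prob += v["R4_export_B"] >= 1.0                             # biological requirement (target flux)

prob.solve()
print([r for r in reactions if y[r].value() > 0.5])         # e.g. uptake, one A->B route, export
```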
Baseline Water Demand at Forward Operating Bases
2013-09-15
population, often equaling or exceeding the military population: • Brigade: 6000 soldiers • Battalion: 1000 soldiers • Company: 150 soldiers. ERDC/CERL TR ... requirements for a company outpost (COP) of 120 personnel (PAX) in the format that the computer tool generates. This tool generates a basic sus ... facilities world-wide through several large contractors. One contractor, Kellogg, Brown, and Root (KBR), used a minimum planning factor of 18.4
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with delay and sum (DAS) beamformer. The weight vector of this beamformer should be calculated for each imaging point independently, with a cost of increasing computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied on several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
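A hedged sketch of an inversion-free, warm-started minimum-variance update by projected gradient steps; it illustrates the general idea of iterative weights reused across neighboring points, not the authors' exact update rule.

```python
# Minimize w^H R w subject to w^H a = 1 by projected gradient steps (O(L^2) each),
# warm-started from the weights of a neighboring imaging point. Data are synthetic.
import numpy as np

def mv_weights_iterative(R, a, w_init=None, mu=0.1, iters=30):
    """R: spatial covariance (L x L), a: steering vector, w_init: optional warm start."""
    w = w_init if w_init is not None else a / np.vdot(a, a)    # satisfies a^H w = 1
    for _ in range(iters):
        w = w - mu * (R @ w)                                   # gradient step on w^H R w
        w = w + a * (1.0 - np.vdot(a, w)) / np.vdot(a, a)      # project back onto a^H w = 1
    return w

L = 16
rng = np.random.default_rng(3)
X = rng.standard_normal((L, 40)) + 1j * rng.standard_normal((L, 40))
R = X @ X.conj().T / 40 + 0.1 * np.eye(L)      # sample covariance with diagonal loading
a = np.ones(L, dtype=complex)                  # steering vector after delay alignment

w_prev = mv_weights_iterative(R, a)            # weights of the previous (neighboring) point
w = mv_weights_iterative(R, a, w_init=w_prev, iters=5)   # warm start: far fewer iterations
print("output power:", np.real(np.vdot(w, R @ w)))
```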
Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U
2009-05-01
In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007 calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006 helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
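The analysis above was done in R; the following is a hedged Python equivalent of the same step, fitting Holt-Winters exponential smoothing to a monthly count series and forecasting the next year. The synthetic series stands in for the 1987-2006 single-center data.

```python
# Holt-Winters exponential smoothing of a monthly OLT count series and a 12-month
# forecast, mirroring the workflow in the abstract. The series is a placeholder.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
months = pd.date_range("1987-01", periods=240, freq="MS")
counts = pd.Series(rng.poisson(2 + np.linspace(0, 2, 240)),
                   index=months, dtype=float)          # placeholder monthly OLT counts

fit = ExponentialSmoothing(counts, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast_2007 = fit.forecast(12)                       # predicted OLTs per month for 2007
print(forecast_2007.round(1))
```

Comparing such a forecast with the procedures actually performed highlights the months with the largest shortfall, as the abstract describes.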
NASA Astrophysics Data System (ADS)
Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S.
2017-05-01
We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.
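A minimal Steinhardt-style spherical-harmonics power spectrum for one atom's neighborhood, sketched as a generic example of this kind of descriptor; it is not the authors' exact descriptor or network, and the neighbor geometry is a placeholder.

```python
# Rotationally invariant bond-order power spectrum p_l built from spherical harmonics
# of neighbor directions; a generic stand-in for a spherical-harmonics descriptor.
import numpy as np
from scipy.special import sph_harm

def power_spectrum(neighbors, l_max=6):
    """neighbors: (N, 3) Cartesian vectors from the central atom to its neighbors."""
    x, y, z = neighbors.T
    r = np.linalg.norm(neighbors, axis=1)
    theta = np.arctan2(y, x) % (2 * np.pi)        # azimuthal angle
    phi = np.arccos(np.clip(z / r, -1, 1))        # polar angle
    features = []
    for l in range(l_max + 1):
        q_lm = np.array([sph_harm(m, l, theta, phi).mean() for m in range(-l, l + 1)])
        features.append(float(np.sum(np.abs(q_lm) ** 2)))   # invariant p_l
    return np.array(features)

# Example: a slightly perturbed octahedral shell of six neighbors.
octa = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
octa += 0.02 * np.random.default_rng(5).standard_normal(octa.shape)
print(power_spectrum(octa).round(4))
```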
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
Sound production due to large-scale coherent structures
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds
NASA Technical Reports Server (NTRS)
Jardin, Matthew R.
2004-01-01
A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum- time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
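The sketch below illustrates only the basic wind-triangle bookkeeping behind wind-aware routing (not the NOWR algorithm itself): for each route segment, the heading is crabbed into the crosswind so the desired track is maintained, and the resulting ground speed gives the segment time. The route, speeds, and winds are toy values.

    # Hedged sketch of segment flight time in a wind field (not NOWR): crab into the
    # crosswind to hold the track; the along-track wind component changes ground speed.
    import math

    def segment_time(distance_km, track_deg, airspeed_kmh, wind_speed_kmh, wind_to_deg):
        track = math.radians(track_deg)
        wind_to = math.radians(wind_to_deg)
        # wind components along and across the desired track
        w_along = wind_speed_kmh * math.cos(wind_to - track)
        w_cross = wind_speed_kmh * math.sin(wind_to - track)
        ground_speed = math.sqrt(airspeed_kmh**2 - w_cross**2) + w_along
        return distance_km / ground_speed

    # toy route: two 500 km eastbound legs, 850 km/h airspeed, 150 km/h wind (tail then head)
    legs = [(500, 90, 850, 150, 90), (500, 90, 850, 150, 270)]
    total_h = sum(segment_time(*leg) for leg in legs)
    print("total flight time (h):", round(total_h, 3))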
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking the center stage with regard to efficient algorithm, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, using analytical gradients for a fast minimization to the next local minimum has been reported. Its efficiency as metaheuristic approach has also been compared with Gradient Tabu Search and others like: Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimal value potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
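The following is a loose sketch of the general idea of combining a stochastic, population-based global move with gradient-based local minimization; it is not the published GGS algorithm, and the test function, step sizes, and population settings are arbitrary assumptions.

    # Hedged sketch (not the authors' GGS code): a population explores the landscape
    # stochastically, and each candidate is relaxed to the nearest local minimum with a
    # gradient-based minimizer.
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        # Rastrigin-like test function, used here only for illustration
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(0)
    dim, pop_size, n_iter = 5, 20, 30
    population = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    best_x, best_f = None, np.inf

    for _ in range(n_iter):
        # global, population-based move (random attraction toward the current best)
        if best_x is not None:
            population += 0.5 * rng.random((pop_size, 1)) * (best_x - population)
        population += rng.normal(scale=0.1, size=population.shape)
        # local gradient-based refinement of each candidate
        for i in range(pop_size):
            res = minimize(objective, population[i], method="L-BFGS-B")
            population[i] = res.x
            if res.fun < best_f:
                best_x, best_f = res.x.copy(), res.fun

    print("best value found:", best_f)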
NASA Astrophysics Data System (ADS)
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by the deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size 0.5 micrometers , exposed on a MEBES tool at 10 KeV in 0.2 micrometers of PMMA resist on a silicon substrate.
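A minimal sketch of the iterative, steepest-descent deconvolution idea is given below; it assumes a single Gaussian stands in for the true proximity point spread function and is not the published correction scheme.

    # Illustrative sketch (assumption: one Gaussian replaces the measured proximity PSF).
    # Steepest-descent deconvolution: find a dose map d such that PSF * d approximates the
    # target pattern.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    target = np.zeros((256, 256))
    target[100:156, 120:136] = 1.0          # desired exposure pattern (a simple bar)

    sigma = 4.0                             # PSF width in pixels (assumed)
    dose = target.copy()                    # initial guess
    step = 1.5                              # steepest-descent step size

    for _ in range(50):
        exposure = gaussian_filter(dose, sigma)        # forward model: PSF convolved with dose
        residual = exposure - target
        gradient = gaussian_filter(residual, sigma)    # gradient of 0.5*||PSF*d - target||^2
        dose -= step * gradient
        np.clip(dose, 0.0, None, out=dose)             # doses cannot be negative

    print("residual RMS:", np.sqrt(np.mean((gaussian_filter(dose, sigma) - target) ** 2)))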
NASA Astrophysics Data System (ADS)
Feldmann, Daniel; Bauer, Christian; Wagner, Claus
2018-03-01
We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures in dependency of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra does not yet indicate sufficient scale separation between the most energetic and the very long motions.
Shumaker, L; Fetterolf, D E; Suhrie, J
1998-01-01
The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs, which allow computer entry of such scanned questionnaire results directly into PC based relational databases, have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys and a variety of other evidence gathering tools have been deployed.
A FORTRAN program for determining aircraft stability and control derivatives from flight data
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1975-01-01
A digital computer program written in FORTRAN IV for the estimation of aircraft stability and control derivatives is presented. The program uses a maximum likelihood estimation method, and two associated programs for routine, related data handling are also included. The three programs form a package that can be used by relatively inexperienced personnel to process large amounts of data with a minimum of manpower. This package was used to successfully analyze 1500 maneuvers on 20 aircraft, and is designed to be used without modification on as many types of computers as feasible. Program listings and sample check cases are included.
Minimum Conflict Mainstreaming.
ERIC Educational Resources Information Center
Awen, Ed; And Others
Computer technology is discussed as a tool for facilitating the implementation of the mainstreaming process. Minimum conflict mainstreaming/merging (MCM) is defined as an approach which utilizes computer technology to circumvent such structural obstacles to mainstreaming as transportation scheduling, screening and assignment of students, testing,…
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
Code of Federal Regulations, 2010 CFR
2010-04-01
... included in computing an annuity under the overall minimum. A divorced spouse's inclusion in the... spouse becomes entitled to a retirement or disability benefit under the Social Security Act based upon a...
OpenCL-based vicinity computation for 3D multiresolution mesh compression
NASA Astrophysics Data System (ADS)
Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri
2017-03-01
3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is constrained by hardware resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. To address this, the present work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
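For reference, a sequential Python sketch of the vicinity (triangle-neighborhood) computation that the paper offloads to the GPU is shown below; the OpenCL kernel itself is not reproduced.

    # Sequential reference sketch of the vicinity computation. Triangles are index triples;
    # two triangles are neighbors when they share an edge. This is the work to parallelize.
    from collections import defaultdict

    def triangle_neighbors(triangles):
        edge_to_tris = defaultdict(list)
        for t, (a, b, c) in enumerate(triangles):
            for edge in ((a, b), (b, c), (c, a)):
                edge_to_tris[tuple(sorted(edge))].append(t)
        neighbors = defaultdict(set)
        for tris in edge_to_tris.values():
            for t in tris:
                neighbors[t].update(u for u in tris if u != t)
        return neighbors

    mesh = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]        # toy mesh
    print(dict(triangle_neighbors(mesh)))           # {0: {1}, 1: {0, 2}, 2: {1}}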
Parallel Computational Protein Design.
Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang
2017-01-01
Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of the large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with the state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
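To illustrate the search component only, the sketch below is a generic A* over discrete per-position choices with an admissible heuristic; the costs are toy values, and it does not reproduce gOSPREY's energy function, heuristic, pruning, or GPU parallelization.

    # Generic A* skeleton over discrete choices (a stand-in for rotamer assignment).
    # The heuristic must never overestimate the remaining cost for the search to stay optimal.
    import heapq

    def a_star(n_positions, choices, step_cost, heuristic):
        # state: tuple of choices made so far for positions 0..k-1
        start = ()
        frontier = [(heuristic(start), 0.0, start)]
        while frontier:
            f, g, state = heapq.heappop(frontier)
            if len(state) == n_positions:
                return state, g                      # first complete state popped is optimal
            pos = len(state)
            for c in choices[pos]:
                new_state = state + (c,)
                new_g = g + step_cost(new_state)
                heapq.heappush(frontier, (new_g + heuristic(new_state), new_g, new_state))
        return None, float("inf")

    # toy usage: 3 positions, 2 options each, additive per-choice costs
    costs = [{0: 1.0, 1: 0.2}, {0: 0.5, 1: 0.9}, {0: 0.1, 1: 0.4}]
    best, energy = a_star(
        3, [list(c) for c in costs],
        step_cost=lambda s: costs[len(s) - 1][s[-1]],
        heuristic=lambda s: sum(min(c.values()) for c in costs[len(s):]),
    )
    print(best, round(energy, 3))   # (1, 0, 0) with total cost 0.8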
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
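As a generic illustration of pair-count estimation (not the specific weighting schemes developed in the paper), the sketch below computes the two-point correlation function of a toy catalogue with two standard estimators.

    # Hedged sketch: a brute-force pair-count estimate of the two-point correlation function.
    # The data, random catalogue, and bins are toy choices for illustration only.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    data = rng.random((500, 3))          # toy "galaxy" positions in a unit box
    rand = rng.random((2000, 3))         # random catalogue with the same geometry

    bins = np.linspace(0.01, 0.3, 16)

    def norm_counts(d1, d2=None):
        # normalized pair counts per separation bin
        if d2 is None:
            dist = pdist(d1)
            n_pairs = len(d1) * (len(d1) - 1) / 2
        else:
            dist = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=-1).ravel()
            n_pairs = len(d1) * len(d2)
        return np.histogram(dist, bins=bins)[0] / n_pairs

    DD, RR, DR = norm_counts(data), norm_counts(rand), norm_counts(data, rand)
    xi_ls = (DD - 2 * DR + RR) / RR          # Landy-Szalay estimator
    xi_h = DD * RR / DR**2 - 1               # DD*RR/DR^2 - 1 estimator
    print(np.round(xi_ls, 3))                # near zero for an unclustered toy catalogue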
Online mass storage system detailed requirements document
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements for an online high density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment is set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendor who presently market high density tape data storage systems are included.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
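The sketch below illustrates the configuration search under a deliberately simplified cost model (one teleportation whenever a gate's qubit is not already in the chosen partition); it is an assumption-laden toy, not the paper's procedure.

    # Simplified illustration: enumerate all configurations of execution locations for the
    # non-local gates and keep the cheapest. The qubit layout and gate list are toy values.
    from itertools import product

    home = {"q0": "A", "q1": "A", "q2": "B", "q3": "B"}            # partition of each qubit
    nonlocal_gates = [("q0", "q2"), ("q1", "q3"), ("q0", "q3")]    # two-qubit gates across parts

    def teleportations(config):
        location = dict(home)                       # current location of every qubit
        cost = 0
        for (qa, qb), side in zip(nonlocal_gates, config):
            for q in (qa, qb):
                if location[q] != side:             # qubit must be teleported to 'side'
                    location[q] = side
                    cost += 1
        return cost

    best = min(product("AB", repeat=len(nonlocal_gates)), key=teleportations)
    print(best, teleportations(best))               # e.g. ('A', 'A', 'A') 2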
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/ costing philosophy and what is meant by a minimum cost system is shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.
Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U
2010-05-01
Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a large period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86) followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed a whole incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreased trend in the series. The number of kidney transplants expected to be performed for 2008 by using the Holt-Winters exponential smoothing applied to the period 1983 to 2007 suggested 58 procedures, while in that year there were 52. The time series approach may be helpful to establish a minimum volume/y at a single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
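A hedged sketch of the forecasting step, using Holt-Winters exponential smoothing (here without a seasonal component) in Python rather than the authors' R workflow, is shown below; the yearly counts are made up for illustration.

    # Minimal sketch of trend forecasting with exponential smoothing (toy yearly counts).
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    counts = [40, 45, 52, 60, 58, 65, 75, 86, 70, 68, 73, 82, 61, 55, 50]  # made-up counts/year
    fit = ExponentialSmoothing(counts, trend="add", seasonal=None).fit()
    print("forecast for next year:", fit.forecast(1))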
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, several criteria are associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. The approach also incorporates an Adaptive Weight Approach (AWA) that utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
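The decoding step of a priority-based chromosome, as commonly defined for path problems, can be sketched as follows; the adjacency list and priorities are toy values, and the full GA with the adaptive weight approach is not reproduced.

    # Sketch of priority-based decoding: a chromosome assigns a priority to every node, and a
    # path is grown from the source by always moving to the adjacent unvisited node with the
    # highest priority.
    def decode_path(priorities, adjacency, source, sink):
        path, node, visited = [source], source, {source}
        while node != sink:
            candidates = [n for n in adjacency[node] if n not in visited]
            if not candidates:
                return None                      # dead end; such chromosomes get a penalty
            node = max(candidates, key=lambda n: priorities[n])
            visited.add(node)
            path.append(node)
        return path

    adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    priorities = [1, 4, 2, 3]                    # one chromosome (higher = preferred)
    print(decode_path(priorities, adjacency, source=0, sink=3))   # [0, 1, 3]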
26 CFR 1.55-1 - Alternative minimum taxable income.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative minimum taxable income. 1.55-1... TAXES Tax Surcharge § 1.55-1 Alternative minimum taxable income. (a) General rule for computing alternative minimum taxable income. Except as otherwise provided by statute, regulations, or other published...
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities which yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
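A toy sketch of the iterative velocity-matching idea is given below; it assumes that conduction velocity grows roughly as the square root of the bulk conductivity and uses a hypothetical simulate_velocity stand-in for the actual bidomain simulation.

    # Hedged sketch: iteratively adjust a bulk conductivity until the simulated conduction
    # velocity matches a prescribed target. 'simulate_velocity' is a hypothetical stand-in.
    def simulate_velocity(sigma):
        return 0.5 * sigma ** 0.5        # toy forward model (velocity vs. conductivity)

    target_v = 0.6                        # prescribed conduction velocity
    sigma = 0.2                           # initial bulk conductivity guess

    for it in range(20):
        v = simulate_velocity(sigma)
        if abs(v - target_v) / target_v < 1e-4:
            break
        sigma *= (target_v / v) ** 2      # update exploiting v ~ sqrt(sigma)

    print("converged conductivity:", round(sigma, 4),
          "velocity:", round(simulate_velocity(sigma), 4))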
An optical spectrum of a large isolated gas-phase PAH cation: C78H26+
Zhen, Junfeng; Mulas, Giacomo; Bonnamy, Anthony; Joblin, Christine
2016-01-01
A gas-phase optical spectrum of a large polycyclic aromatic hydrocarbon (PAH) cation, C78H26+, in the 410-610 nm range is presented. This large all-benzenoid PAH should be large enough to be stable with respect to photodissociation in the harsh conditions prevailing in the interstellar medium (ISM). The spectrum is obtained via multi-photon dissociation (MPD) spectroscopy of cationic C78H26 stored in the Fourier Transform Ion Cyclotron Resonance (FT-ICR) cell using the radiation from a mid-band optical parametric oscillator (OPO) laser. The experimental spectrum shows two main absorption peaks at 431 nm and 516 nm, in good agreement with a theoretical spectrum computed via time-dependent density functional theory (TD-DFT). DFT calculations indicate that the equilibrium geometry, with the absolute minimum energy, is of lowered, nonplanar C2 symmetry instead of the more symmetric planar D2h symmetry that is usually the minimum for similar PAHs of smaller size. This kind of slightly broken symmetry could produce some of the fine structure observed in some diffuse interstellar bands (DIBs). It can also favor the folding of C78H26+ fragments and ultimately the formation of fullerenes. This study opens up the possibility to identify the most promising candidates for DIBs amongst large cationic PAHs. PMID:26942230
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Rogers, J. L., Jr.
1986-01-01
A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, thus making the tasks of modifying, updating, or expanding the programming system much easier in a friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step in achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE) although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of the multilevel decomposition procedure to obtain optimum design for very large scale structural systems. Furthermore, other related research areas such as developing optimization algorithms (or in the larger level: a structural synthesis program) for future trends in using parallel computers may also benefit from this study.
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
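For orientation, the sketch below computes a plain steepest-descent path on the Mueller(-Brown) test potential by following the negative gradient with small steps; it does not implement the paper's local-global action-based algorithm.

    # Illustrative sketch: steepest-descent path on the Mueller(-Brown) potential.
    import numpy as np

    A  = np.array([-200.0, -100.0, -170.0, 15.0])
    a  = np.array([-1.0, -1.0, -6.5, 0.7])
    b  = np.array([0.0, 0.0, 11.0, 0.6])
    c  = np.array([-10.0, -10.0, -6.5, 0.7])
    X0 = np.array([1.0, 0.0, -0.5, -1.0])
    Y0 = np.array([0.0, 0.5, 1.5, 1.0])

    def grad(p):
        x, y = p
        dx, dy = x - X0, y - Y0
        e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
        gx = np.sum(e * (2 * a * dx + b * dy))
        gy = np.sum(e * (b * dx + 2 * c * dy))
        return np.array([gx, gy])

    p = np.array([-0.55, 1.0])            # starting point, displaced from a saddle region
    path = [p.copy()]
    for _ in range(20000):
        g = grad(p)
        if np.linalg.norm(g) < 1e-6:
            break
        p -= 1e-4 * g                     # small fixed step along the negative gradient
        path.append(p.copy())

    print("end point (a local minimum):", np.round(p, 3))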
Modal analysis of circular Bragg fibers with arbitrary index profiles
NASA Astrophysics Data System (ADS)
Horikis, Theodoros P.; Kath, William L.
2006-12-01
A finite-difference approach based upon the immersed interface method is used to analyze the mode structure of Bragg fibers with arbitrary index profiles. The method allows general propagation constants and eigenmodes to be calculated to a high degree of accuracy, while computation times are kept to a minimum by exploiting sparse matrix algebra. The method is well suited to handle complicated structures comprised of a large number of thin layers with high-index contrast and simultaneously determines multiple eigenmodes without modification.
SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution provides a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures and significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution
NASA Astrophysics Data System (ADS)
Liu, Yihang; Han, Yu; Li, Fengjiao; Zhang, Qican
2018-06-01
In three-dimensional (3D) shape measurement based on phase analysis, the phase analysis process usually produces a wrapped phase map ranging from -π to π with 2π discontinuities, so a phase unwrapping algorithm is necessary to recover the continuous, natural phase map from which the 3D height distribution can be restored. The minimum discontinuity phase unwrapping algorithm can solve many different kinds of phase unwrapping problems, but its main drawback is that it requires a large amount of computation and is inefficient in searching for the improving loop within the phase's discontinuity area. To overcome this drawback, an improvement that speeds up the minimum discontinuity phase unwrapping algorithm by using the phase distribution on a reference plane is proposed. In this improved algorithm, before the minimum discontinuity phase unwrapping is carried out, an integer K is calculated from the ratio of the wrapped phase to the natural phase on a reference plane. The jump counts of the unwrapped phase can then be reduced by adding 2Kπ, so the efficiency of the minimum discontinuity phase unwrapping algorithm is significantly improved. Both simulated and experimental results verify the feasibility of the proposed algorithm and clearly show that it works well and has high efficiency.
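A minimal numpy sketch of the pre-compensation step, in an assumed simplified one-dimensional form, is shown below; in practice the remaining residual would still be passed to the minimum discontinuity unwrapper.

    # Minimal sketch: estimate the integer fringe order K from a known continuous reference
    # phase, add 2*K*pi to the wrapped phase, and leave only a small residual to unwrap.
    import numpy as np

    x = np.linspace(0, 8 * np.pi, 1000)
    true_phase = x + 0.4 * np.sin(3 * x)           # "measured" continuous phase (toy example)
    wrapped = np.angle(np.exp(1j * true_phase))    # wrapped into (-pi, pi]

    reference = x                                  # known phase on the reference plane
    K = np.round((reference - wrapped) / (2 * np.pi))
    precompensated = wrapped + 2 * np.pi * K       # most 2*pi jumps are already removed here

    residual = true_phase - precompensated
    print("max residual after pre-compensation:", np.abs(residual).max())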
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on quantum computation provides quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, we present the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods in hyperspectral image analysis where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the computational problem for a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming methods and steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
Solar-Cycle Variability of Magnetosheath Fluctuations at Earth and Venus
NASA Astrophysics Data System (ADS)
Dwivedi, N. K.; Narita, Y.; Kovacs, P.
2014-12-01
The magnetosheath is the region between the bow shock and the magnetopause, and the magnetosheath plasma is mostly in a turbulent state. In the present investigation we closely examine the dependence of magnetosheath fluctuations on the solar cycle (solar maximum and solar minimum) at a magnetized planetary body (Earth) and compare them with an unmagnetized planetary body (Venus) for the solar minimum. We use CLUSTER FGM data for the solar maximum (2001-2002) and solar minimum (2006-2008) and Venus fluxgate magnetometer data for the solar minimum (2006-2008) to perform a comparative statistical study of the energy spectra and probability density functions (PDFs) and to assess the spectral features of the magnetic fluctuations at both planetary bodies. In the comparison we study the relation between the inertial ranges of the spectra and the temporal scales of non-Gaussian magnetic fluctuations derived from PDF analyses. The former can refer to turbulent cascade dynamics, while the latter may indicate intermittency. We first transform the magnetic field data into a mean-field-aligned coordinate system with respect to the large-scale magnetic field direction and then compute the power spectral density with the Welch algorithm. The computed energy spectra of Earth's magnetosheath show a moderate variability with the solar cycle and have a broad inertial range. The estimated energy spectra for the solar minimum at Venus, however, give clear evidence of a break point in the vicinity of the ion gyroradius. After the break point the energy spectra become steeper and show distinctive spectral scales, which is interpreted as the onset of the energy cascade. We also briefly address the influence of turbulence on plasma transport and the wave dynamics responsible for the spectral break, and predict spectral features of the energy spectra for the solar maximum at Venus based on the results obtained for the solar minimum. The research leading to these results has received funding from the European Community's Seventh Framework Programme ([FP7/2007-2013]) under grant agreement number 313038/STORM.
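As an illustration of the spectral step only (not the actual Cluster or Venus Express data processing), the sketch below estimates a power spectral density with Welch's method and fits a log-log slope.

    # Hedged sketch: Welch power spectral density of a toy, red-spectrum signal.
    import numpy as np
    from scipy.signal import welch

    fs = 22.0                                   # sampling frequency in Hz (assumed)
    t = np.arange(0, 600, 1 / fs)
    rng = np.random.default_rng(0)
    b = np.cumsum(rng.normal(size=t.size))      # toy signal with a red, turbulence-like spectrum

    f, psd = welch(b, fs=fs, nperseg=4096)
    slope = np.polyfit(np.log10(f[1:]), np.log10(psd[1:]), 1)[0]
    print("approximate spectral slope:", round(slope, 2))   # near -2 for a random walk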
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Limitations on certain capital losses and excess credits in computing alternative minimum tax. [Reserved] 1.383-2 Section 1.383-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Insolvency...
Code of Federal Regulations, 2011 CFR
2011-01-01
... interest rate and foreign exchange rate contracts are computed on the basis of the credit equivalent amounts of such contracts. Credit equivalent amounts are computed for each of the following off-balance... Equivalent Amounts a. The minimum capital components for interest rate and foreign exchange rate contracts...
Subgrid-scale models for large-eddy simulation of rotating turbulent flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel
2016-11-01
This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation, but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2ν_t S + μ_t (SΩ - ΩS), where the eddy viscosity ν_t is given by the minimum-dissipation model, μ_t represents a transport coefficient, S is the symmetric part of the velocity gradient, and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
Ferguson, Adam R.; Popovich, Phillip G.; Xu, Xiao-Ming; Snow, Diane M.; Igarashi, Michihiro; Beattie, Christine E.; Bixby, John L.
2014-01-01
The lack of reproducibility in many areas of experimental science has a number of causes, including a lack of transparency and precision in the description of experimental approaches. This has far-reaching consequences, including wasted resources and slowing of progress. Additionally, the large number of laboratories around the world publishing articles on a given topic makes it difficult, if not impossible, for individual researchers to read all of the relevant literature. Consequently, centralized databases are needed to facilitate the generation of new hypotheses for testing. One strategy to improve transparency in experimental description, and to allow the development of frameworks for computer-readable knowledge repositories, is the adoption of uniform reporting standards, such as common data elements (data elements used in multiple clinical studies) and minimum information standards. This article describes a minimum information standard for spinal cord injury (SCI) experiments, its major elements, and the approaches used to develop it. Transparent reporting standards for experiments using animal models of human SCI aim to reduce inherent bias and increase experimental value. PMID:24870067
NASA Technical Reports Server (NTRS)
vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.
2013-01-01
Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics-especially for the cases with HHC-and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.
NASA Technical Reports Server (NTRS)
vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.
2012-01-01
Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II Inter- national Workshop data base, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics - especially for the cases with HHC - and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.
Integrated computer-aided design using minicomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.
1980-01-01
Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configuration.
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
Du, Shichuan; Martinez, Aleix M.
2013-01-01
Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409
Radiative Transfer and Satellite Remote Sensing of Cirrus Clouds Using FIRE-2-IFO Data
NASA Technical Reports Server (NTRS)
2000-01-01
Under the support of this NASA grant, we have developed a new geometric-optics model (GOM2) for the calculation of the single-scattering and polarization properties of arbitrarily oriented hexagonal ice crystals. From comparisons with the results computed by the finite difference time domain (FDTD) method, we show that the novel geometric-optics model can be applied to the computation of the extinction cross section and single-scattering albedo for ice crystals with size parameters along the minimum dimension as small as approximately 6. We demonstrate that the present model converges to the conventional ray tracing method for large size parameters and produces single-scattering results close to those computed by the FDTD method for size parameters along the minimum dimension smaller than approximately 20. We demonstrate that neither the conventional geometric optics method nor the Lorenz-Mie theory can be used to approximate the scattering, absorption, and polarization features for hexagonal ice crystals with size parameters from approximately 5 to 20. Regarding satellite remote sensing algorithm development and validation, we have developed a numerical scheme to identify multilayer cirrus cloud systems using AVHRR data. We have applied this scheme to the satellite data collected over the FIRE-2-IFO area during nine overpasses within seven observation dates. Determination of the threshold values used in the detection scheme is based on statistical analyses of these satellite data.
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
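The reconstruction step implied above reduces to one sparse matrix-vector product; the sketch below uses a random sparse matrix purely as a placeholder for an actual localized minimum-variance reconstructor.

    # Minimal sketch: once a sparse reconstruction matrix R has been built, the wavefront
    # estimate is a single sparse matrix-vector product with the slope measurements.
    import numpy as np
    from scipy.sparse import random as sparse_random

    n_phase, n_slopes = 1000, 2000
    R = sparse_random(n_phase, n_slopes, density=0.01, format="csr", random_state=0)
    slopes = np.random.default_rng(0).normal(size=n_slopes)     # wavefront-sensor slopes (toy)

    wavefront = R @ slopes                                      # O(nnz) instead of O(n^2)
    print(wavefront.shape, R.nnz, "nonzeros")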
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2014-01-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities which yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximisation of KSE is analytical and easier to compute in general than mixing time, this link provides a new faster method to approximate the minimum mixing time dynamics. It could be interesting in computer sciences and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
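Under the usual definitions (not the paper's derivation), both quantities can be computed directly from the transition matrix, as in the sketch below.

    # Sketch: for an irreducible Markov chain with transition matrix P, the Kolmogorov-Sinai
    # entropy is H = -sum_i pi_i sum_j P_ij log P_ij with pi the stationary distribution, and
    # the mixing time scales like the inverse spectral gap 1/(1 - |lambda_2|).
    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])

    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()

    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    H_ks = -np.sum(pi[:, None] * P * logP)

    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    spectral_gap = 1.0 - eigvals[1]
    print("KS entropy:", round(H_ks, 4), " mixing-time scale ~", round(1 / spectral_gap, 2))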
25 CFR 542.14 - What are the minimum internal control standards for the cage?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for the cage? 542.14 Section 542.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.14 What are the minimum internal control standards for the cage? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
25 CFR 542.8 - What are the minimum internal control standards for pull tabs?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...
HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.
O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D
2015-04-01
The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
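A toy sketch of the virtual partitioning idea, with a hypothetical helper that pairs query chunks with database chunks into independent map tasks, is given below; it is not HBlast code.

    # Toy sketch: both the query set and the database are split, and every (query chunk,
    # database chunk) pair becomes an independent map task, so partition granularity can be
    # tuned to node memory without physically re-segmenting the database.
    from itertools import product

    def virtual_partitions(n_queries, n_db_records, query_chunk, db_chunk):
        query_blocks = [(i, min(i + query_chunk, n_queries))
                        for i in range(0, n_queries, query_chunk)]
        db_blocks = [(j, min(j + db_chunk, n_db_records))
                     for j in range(0, n_db_records, db_chunk)]
        return list(product(query_blocks, db_blocks))   # one map task per pair

    tasks = virtual_partitions(n_queries=10_000, n_db_records=1_000_000,
                               query_chunk=2_500, db_chunk=250_000)
    print(len(tasks), "map tasks, e.g.", tasks[0])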
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Wiley, Jeffrey B.
2006-01-01
Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologic-based low flow (7Q10) respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. 
Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
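For readers unfamiliar with the n-day, T-year low-flow notation (7Q10, 1B3, and so on), the sketch below shows a simplified empirical computation of an n-day, T-year low flow from a daily series: take the annual minima of the n-day moving average, then the quantile at non-exceedance probability 1/T. USGS practice fits a log-Pearson Type III distribution to the annual minima instead, and the flow data here are synthetic, so this is only an illustration of the statistic being compared across time periods.

```python
import numpy as np
import pandas as pd

def n_day_t_year_low_flow(daily_flow, n_days=7, return_period=10):
    """Empirical nQT estimate: annual minima of the n-day moving average,
    then the empirical quantile at non-exceedance probability 1/T."""
    rolling = daily_flow.rolling(n_days).mean()
    annual_min = rolling.groupby(daily_flow.index.year).min().dropna()
    return annual_min.quantile(1.0 / return_period)

# hypothetical daily record spanning 1930-2002
idx = pd.date_range("1930-01-01", "2002-12-31", freq="D")
rng = np.random.default_rng(0)
flow = pd.Series(np.exp(rng.normal(3.0, 0.8, len(idx))), index=idx)
print("7Q10 estimate:", n_day_t_year_low_flow(flow, 7, 10))
```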
Effects of forebody geometry on subsonic boundary-layer stability
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1990-01-01
As part of an effort to develop computational techniques for design of natural laminar flow fuselages, a computational study was made of the effect of forebody geometry on laminar boundary layer stability on axisymmetric body shapes. The effects of nose radius on the stability of the incompressible laminar boundary layer was computationally investigated using linear stability theory for body length Reynolds numbers representative of small and medium-sized airplanes. The steepness of the pressure gradient and the value of the minimum pressure (both functions of fineness ratio) govern the stability of laminar flow possible on an axisymmetric body at a given Reynolds number. It was found that to keep the laminar boundary layer stable for extended lengths, it is important to have a small nose radius. However, nose shapes with extremely small nose radii produce large pressure peaks at off-design angles of attack and can produce vortices which would adversely affect transition.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
26 CFR 1.6655-3 - Adjusted seasonal installment method.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1... under § 1.6655-2 apply to the computation of taxable income (and resulting tax) for purposes of... applying to alternative minimum taxable income, tentative minimum tax, and alternative minimum tax, the...
5 CFR 844.303 - Minimum disability annuity.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Minimum disability annuity. 844.303... Annuity § 844.303 Minimum disability annuity. Notwithstanding any other provision of this part, an annuity payable under this part cannot be less than the amount of an annuity computed under 5 U.S.C. 8415...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
Dynamic modeling and ascent flight control of Ares-I Crew Launch Vehicle
NASA Astrophysics Data System (ADS)
Du, Wei
This research focuses on dynamic modeling and ascent flight control of large flexible launch vehicles such as the Ares-I Crew Launch Vehicle (CLV). A complete set of six-degrees-of-freedom dynamic models of the Ares-I, incorporating its propulsion, aerodynamics, guidance and control, and structural flexibility, is developed. NASA's Ares-I reference model and the SAVANT Simulink-based program are utilized to develop a Matlab-based simulation and linearization tool for an independent validation of the performance and stability of the ascent flight control system of large flexible launch vehicles. A linearized state-space model as well as a non-minimum-phase transfer function model (which is typical for flexible vehicles with non-collocated actuators and sensors) are validated for ascent flight control design and analysis. This research also investigates fundamental principles of flight control analysis and design for launch vehicles, in particular the classical "drift-minimum" and "load-minimum" control principles. It is shown that an additional feedback of angle-of-attack can significantly improve overall performance and stability, especially in the presence of unexpected large wind disturbances. For a typical "non-collocated actuator and sensor" control problem for large flexible launch vehicles, non-minimum-phase filtering of "unstably interacting" bending modes is also shown to be effective. The uncertainty model of a flexible launch vehicle is derived. The robust stability of an ascent flight control system design, which directly controls the inertial attitude-error quaternion and also employs the non-minimum-phase filters, is verified by the framework of structured singular value (mu) analysis. Furthermore, nonlinear coupled dynamic simulation results are presented for a reference model of the Ares-I CLV as another validation of the feasibility of the ascent flight control system design. Another important issue for a single main engine launch vehicle is stability under mal-function of the roll control system. The roll motion of the Ares-I Crew Launch Vehicle under nominal flight conditions is actively stabilized by its roll control system employing thrusters. This dissertation describes the ascent flight control design problem of Ares-I in the event of disabled or failed roll control. A simple pitch/yaw control logic is developed for such a technically challenging problem by exploiting the inherent versatility of a quaternion-based attitude control system. The proposed scheme requires only the desired inertial attitude quaternion to be re-computed using the actual uncontrolled roll angle information to achieve an ascent flight trajectory identical to the nominal flight case with active roll control. Another approach that utilizes a simple adjustment of the proportional-derivative gains of the quaternion-based flight control system without active roll control is also presented. This approach doesn't require the re-computation of desired inertial attitude quaternion. A linear stability criterion is developed for proper adjustments of attitude and rate gains. The linear stability analysis results are validated by nonlinear simulations of the ascent flight phase. However, the first approach, requiring a simple modification of the desired attitude quaternion, is recommended for the Ares-I as well as other launch vehicles in the event of no active roll control. 
Finally, the method derived to stabilize a large flexible launch vehicle in the event of uncontrolled roll drift is generalized as a modified attitude quaternion feedback law. It is used to stabilize an axisymmetric rigid body by two independent control torques.
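The first (recommended) approach amounts to folding the measured, uncontrolled roll angle into the commanded quaternion so that the pitch/yaw controller still tracks the nominal inertial trajectory. The sketch below illustrates only that composition; the quaternion convention (scalar-last Hamilton product, rotation about the body roll axis) and the function names are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def quat_mult(q1, q2):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def recompute_command(q_cmd_nominal, roll_angle, roll_axis=np.array([1.0, 0.0, 0.0])):
    """Fold the measured, uncontrolled roll into the commanded quaternion so the
    pitch/yaw controller follows the same inertial trajectory as the nominal case."""
    half = 0.5 * roll_angle
    q_roll = np.append(np.sin(half) * roll_axis, np.cos(half))
    return quat_mult(q_cmd_nominal, q_roll)

q_nominal = np.array([0.0, 0.0, 0.0, 1.0])      # identity command, for illustration
print(recompute_command(q_nominal, np.deg2rad(37.0)))
```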
Exploiting Identical Generators in Unit Commitment
Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul
2017-12-14
Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.
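The bookkeeping behind the aggregation is simply grouping generators whose UC-relevant parameters coincide; each group can then be modeled with one integer "number committed" variable bounded by the group size, instead of one binary per generator. A sketch of the grouping step, with illustrative parameter names:

```python
from collections import defaultdict

def aggregate_identical(generators):
    """Group generators by the tuple of UC-relevant characteristics.
    Each group can be modeled as a single aggregated unit whose integer
    commitment variable is bounded by the group size."""
    groups = defaultdict(list)
    for name, g in generators.items():
        key = (g["p_min"], g["p_max"], g["min_up"], g["min_down"],
               tuple(g["cost_curve"]))
        groups[key].append(name)
    return list(groups.values())

fleet = {
    "G1": {"p_min": 50, "p_max": 200, "min_up": 4, "min_down": 2, "cost_curve": (10, 0.02)},
    "G2": {"p_min": 50, "p_max": 200, "min_up": 4, "min_down": 2, "cost_curve": (10, 0.02)},
    "G3": {"p_min": 80, "p_max": 300, "min_up": 6, "min_down": 3, "cost_curve": (12, 0.015)},
}
print(aggregate_identical(fleet))   # [['G1', 'G2'], ['G3']]
```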
Computing Trimmed, Mean-Camber Surfaces At Minimum Drag
NASA Technical Reports Server (NTRS)
Lamar, John E.; Hodges, William T.
1995-01-01
VLMD computer program determines subsonic mean-camber surfaces of trimmed noncoplanar planforms with minimum vortex drag at specified lift coefficient. Up to two planforms designed together. Method used is the subsonic vortex-lattice method, with the chord loading specification, ranging from rectangular to triangular, left to the user. Program versatile and applied to isolated wings, wing/canard configurations, tandem wings, and wing/winglet configurations. Written in FORTRAN.
Towards dynamic remote data auditing in computational clouds.
Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul
2014-01-01
Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To better streamline this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable for static archive data and are not subject to audit the dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable for large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.
Einstein-Home search for periodic gravitational waves in early S5 LIGO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, B. P.; Abbott, R.; Adhikari, R.
This paper reports on an all-sky search for periodic gravitational waves from sources such as deformed isolated rapidly spinning neutron stars. The analysis uses 840 hours of data from 66 days of the fifth LIGO science run (S5). The data were searched for quasimonochromatic waves with frequencies f in the range from 50 to 1500 Hz, with a linear frequency drift ḟ (measured at the solar system barycenter) in the range -f/τ ...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
Prognostic significance of lesion size for glioblastoma multiforme.
Reeves, G I; Marks, J E
1979-08-01
From March 1974 to December 1976, 56 patients with glioblastoma multiforme had precraniotomy computed tomography (CT) scans from which the lesion size was determined by measuring the cross-sectional area. Thirty-two patients underwent surgery followed by irradiation, and 24 had surgery followed by irradiation and chemotherapy. There was no difference in survival between the 16 patients with small lesions and the 16 patients with large lesions in the surgery plus radiation alone group, nor in the 16 patients with small and 8 patients with large lesions in the surgery, radiation and chemotherapy group. Minimum follow-up was one year. Other possible prognostic factors including age, tumor grade, radiation dose, and performance status were comparable for each subgroup. Lesion size in glioblastoma multiforme appears unrelated to prognosis.
Harmonic Fourier beads method for studying rare events on rugged energy surfaces.
Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L
2006-11-07
We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points, or beads, to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in the gas phase.
Effect of electromagnetic radiation on the coils used in aneurysm embolization.
Lv, Xianli; Wu, Zhongxue; Li, Youxiang
2014-06-01
This study evaluated the effects of electromagnetic radiation encountered in daily life on the coils used in aneurysm embolization. Faraday's electromagnetic induction principle was applied to analyze these effects. To induce a current of 0.5 mA, the level required to stimulate peripheral nerves, in platinum coils of less than 5 mm, the minimum magnetic field would be 0.86 μT. To induce a current of 0.5 mA in platinum coils with a hair dryer, the minimum aneurysm radius is 2.5 mm (a 5 mm aneurysm). To induce a current of 0.5 mA in platinum coils with a computer or TV, the minimum aneurysm radius is 8.6 mm (approximately a 17 mm aneurysm). The minimum magnetic field is much larger than the flux densities produced by a computer or TV, and the minimum aneurysm radius is much larger than most aneurysm sizes for the field levels produced by a computer or TV. At present, the electromagnetic radiation encountered in daily life does not produce a harmful reaction in intracranial coils. Patients with coiled aneurysms are advised to avoid using hair dryers. This conclusion needs to be confirmed by further detailed investigations. Doctors should give patients additional instructions before the procedure, based on this study.
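The estimates follow from Faraday's law applied to a coil loop of radius r. Assuming a single circular loop and a sinusoidal field of amplitude B0 at frequency f (my reading of the setup, not a formula quoted from the paper):

```latex
\varepsilon(t) = -\frac{d\Phi}{dt}, \qquad
\Phi(t) = \pi r^{2} B_{0}\sin(2\pi f t)
\;\Longrightarrow\;
\varepsilon_{\max} = 2\pi^{2} f r^{2} B_{0}, \qquad
I_{\max} = \frac{\varepsilon_{\max}}{R}.
```

For a fixed induced-current threshold (0.5 mA) and coil resistance R, the required field amplitude scales as 1/r², which is why the minimum aneurysm radius grows as the available flux density drops.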
Method and Apparatus for Powered Descent Guidance
NASA Technical Reports Server (NTRS)
Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)
2013-01-01
A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum error landing problem for a convexified constraints, then applies that solution to a minimum fuel landing problem for convexified constraints. The result is a solution that is a minimum error and minimum fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.
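The two-step structure can be illustrated on a toy, fully convex planar lander: first minimize the landing error under a convex thrust-magnitude bound, then minimize fuel subject to doing no worse than that error. The sketch below (using cvxpy as a convenient SOCP solver) omits the non-convex minimum-thrust bound, mass depletion, and glideslope constraints of the actual guidance problem; all numbers are illustrative, and in the described method the non-convex thruster constraints are convexified first.

```python
import cvxpy as cp
import numpy as np

N, dt = 40, 1.0
g = np.array([0.0, -3.71])            # planar "Mars-like" gravity, illustrative only
r = cp.Variable((N + 1, 2)); v = cp.Variable((N + 1, 2)); u = cp.Variable((N, 2))

cons = [r[0] == np.array([2000.0, 1500.0]), v[0] == np.array([-30.0, -75.0]),
        r[N, 1] == 0.0, v[N] == 0.0]
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g),
             cp.norm(u[k]) <= 12.0]   # convex upper bound on thrust acceleration

# Step 1: minimum landing error (distance from the target at x = 0)
err = cp.abs(r[N, 0])
cp.Problem(cp.Minimize(err), cons).solve()

# Step 2: minimum fuel, constrained to do no worse than the step-1 error
fuel = cp.sum(cp.norm(u, axis=1))
cp.Problem(cp.Minimize(fuel), cons + [cp.abs(r[N, 0]) <= err.value + 1e-6]).solve()
print("landing error:", err.value, "  fuel proxy:", fuel.value)
```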
Mirković, Sinisa; Budak, Igor; Puskar, Tatjana; Tadić, Ana; Sokac, Mario; Santosi, Zeljko; Djurdjević-Mirković, Tatjana
2015-12-01
An autologous bone (bone derived from the patient himself) is considered to be a "golden standard" in the treatment of bone defects and partial atrophic alveolar ridge. However, large defects and bone losses are difficult to restore in this manner, because extraction of large amounts of autologous tissue can cause donor-site problems. Alternatively, data from computed tomographic (CT) scan can be used to shape a precise 3D homologous bone block using a computer-aided design-computer-aided manufacturing (CAD-CAM) system. A 63-year old male patient referred to the Clinic of Dentistry of Vojvodina in Novi Sad, because of teeth loss in the right lateral region of the lower jaw. Clinical examination revealed a pronounced resorption of the residual ridge of the lower jaw in the aforementioned region, both horizontal and vertical. After clinical examination, the patient was referred for 3D cone beam (CB)CT scan that enables visualization of bony structures and accurate measurement of dimensions of the residual alveolar ridge. Considering the large extent of bone resorption, the required ridge augmentation was more than 3 mm in height and 2 mm in width along the length of some 2 cm, thus the use of granular material was excluded. After consulting prosthodontists and engineers from the Faculty of Technical Sciences in Novi Sad we decided to fabricate an individual (custom) bovine-derived bone graft designed according to the obtained-3D CBCT scan. Application of 3D CBCT images, computer-aided systems and software in manufacturing custom bone grafts represents the most recent method of guided bone regeneration. This method substantially reduces time of recovery and carries minimum risk of postoperative complications, yet the results fully satisfy the requirements of both the patient and the therapist.
Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia
2016-04-01
Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using the Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
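On already-channelized data the CHO computation itself is compact: estimate the Hotelling template from signal-present and signal-absent samples, apply it to obtain decision variables, and score Az as the area under the ROC curve via the rank-sum statistic. The sketch below uses synthetic Gaussian channel outputs and skips channel construction, train/test splitting, and the resampling over subsets of scans studied in the paper.

```python
import numpy as np

def cho_az(present, absent):
    """present, absent: (n_scans, n_channels) arrays of channelized ROI data.
    Returns the area under the ROC curve (Az) of the Hotelling observer."""
    mean_diff = present.mean(0) - absent.mean(0)
    pooled_cov = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
    template = np.linalg.solve(pooled_cov, mean_diff)       # Hotelling template
    t_p, t_a = present @ template, absent @ template        # decision variables
    # Az via the Mann-Whitney statistic: P(t_p > t_a), ties counted as 1/2
    diffs = t_p[:, None] - t_a[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

rng = np.random.default_rng(1)
n, c = 100, 10                                              # e.g. 100 repeated scans
absent = rng.normal(0.0, 1.0, (n, c))
present = rng.normal(0.3, 1.0, (n, c))                      # weak low-contrast signal
print("Az ≈", cho_az(present, absent))
```

In practice the template should be estimated on scans separate from those used to score Az; otherwise the estimate is optimistically biased, which is part of why the required number of repeated scans matters.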
NASA Astrophysics Data System (ADS)
Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj
2018-02-01
N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
Computer simulations of optimum boost and buck-boost converters
NASA Technical Reports Server (NTRS)
Rahman, S.
1982-01-01
The development of mathematical models suitable for minimum weight boost and buck-boost converter designs is presented. The facility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum weight design optimizations of boost and buck-boost power converters. ALAG-based computer simulation results for those two minimum weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.
Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo
2010-09-15
Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
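The MC principle is easy to state operationally: replace Euclidean distances by distances measured along the minimum spanning tree of the sample graph, then hand the resulting distance matrix to an embedding or clustering algorithm. The sketch below illustrates that pipeline with SciPy's MST and shortest-path routines and a classical MDS embedding; it is an illustration of the principle, not the authors' MCE/MCAP implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def minimum_curvilinear_distances(X):
    """Pairwise distances measured along the minimum spanning tree of the
    Euclidean distance graph (the 'Minimum Curvilinearity' approximation)."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D)                 # sparse, one direction per edge
    return shortest_path(mst, directed=False)      # path lengths over the tree

def classical_mds(D, dims=2):
    """Embed a distance matrix with classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))                     # small-n, high-dimensional toy data
embedding = classical_mds(minimum_curvilinear_distances(X))
print(embedding.shape)                             # (30, 2)
```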
Effect of Variable Spatial Scales on USLE-GIS Computations
NASA Astrophysics Data System (ADS)
Patil, R. J.; Sharma, S. K.
2017-12-01
Use of appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessment of annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scales affect USLE-GIS computations using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with Universal Soil Loss Equation (USLE) to predict spatial distribution of soil erosion in the study area at four different spatial scales viz; 30 m, 50 m, 100 m, and 200 m. Rainfall data, soil map, digital elevation model (DEM) and an executable C++ program, and satellite image of the area were used for preparation of the thematic maps for various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at four different grid sizes. The statistical analysis of four estimated datasets showed that sediment loss dataset at 30 m spatial scale has a minimum standard deviation (2.16), variance (4.68), percent deviation from observed values (2.68 - 18.91 %), and highest coefficient of determination (R2 = 0.874) among all the four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
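The per-cell USLE computation that is repeated at each grid size is just the product of the factor rasters, A = R·K·LS·C·P. A minimal raster sketch with synthetic factor arrays (the actual study derives R from rainfall records, LS from the DEM, and C from the satellite image):

```python
import numpy as np

def usle_soil_loss(R, K, LS, C, P):
    """Per-cell annual soil loss A = R * K * LS * C * P (USLE)."""
    return R * K * LS * C * P

rng = np.random.default_rng(0)
shape = (200, 200)                       # e.g. a 30 m grid over part of a watershed
R  = np.full(shape, 520.0)               # rainfall erosivity (assumed uniform here)
K  = rng.uniform(0.10, 0.35, shape)      # soil erodibility
LS = rng.uniform(0.3, 6.0, shape)        # slope length-steepness factor from the DEM
C  = rng.uniform(0.05, 0.6, shape)       # cover management
P  = np.ones(shape)                      # support practice
A = usle_soil_loss(R, K, LS, C, P)
print("mean annual soil loss per cell:", A.mean())
```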
Large-deformation modal coordinates for nonrigid vehicle dynamics
NASA Technical Reports Server (NTRS)
Likins, P. W.; Fleischer, G. E.
1972-01-01
The derivation of minimum-dimension sets of discrete-coordinate and hybrid-coordinate equations of motion of a system consisting of an arbitrary number of hinge-connected rigid bodies assembled in tree topology is presented. These equations are useful for the simulation of dynamical systems that can be idealized as tree-like arrangements of substructures, with each substructure consisting of either a rigid body or a collection of elastically interconnected rigid bodies restricted to small relative rotations at each connection. Thus, some of the substructures represent elastic bodies subjected to small strains or local deformations, but possibly large gross deformations, in the hybrid formulation, distributed coordinates referred to herein as large-deformation modal coordinates, are used for the deformations of these substructures. The equations are in a form suitable for incorporation into one or more computer programs to be used as multipurpose tools in the simulation of spacecraft and other complex electromechanical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary S.
2017-09-01
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain the atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating daily temperature from the average of the minimum and maximum daily readings overestimates the daily values by 10% or more when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (about 5-10% fewer trends detected in comparison with the reference data).
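The bias in question is simply the difference between the average of the 24 hourly readings and (Tmin + Tmax)/2 computed from the same day. A sketch of the comparison on a synthetic hourly series (the station data and thresholds used in the study are not reproduced):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2000-01-01", periods=24 * 365, freq="h")
# hypothetical hourly series: seasonal cycle + diurnal cycle + noise
hours = np.arange(len(idx))
temp = (12 + 10 * np.sin(2 * np.pi * hours / (24 * 365))
           + 6 * np.sin(2 * np.pi * (hours % 24) / 24 - np.pi / 2)
           + rng.normal(0, 1.5, len(idx)))
hourly = pd.Series(temp, index=idx)

daily_true = hourly.resample("D").mean()                       # average of 24 readings
daily_minmax = (hourly.resample("D").min() + hourly.resample("D").max()) / 2
bias = daily_minmax - daily_true
print("mean bias:", bias.mean(),
      "  bias on the hottest 5% of days:",
      bias[daily_true >= daily_true.quantile(0.95)].mean())
```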
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
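The judged-best serial combination, generalized minimum residual iteration with incomplete-LU preconditioning, can be reproduced in a few lines on a sparse test matrix. The sketch below is a serial SciPy illustration of GMRES with ILU; the block structure and the parallel, domain-decomposed aspects discussed in the paper are not represented.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# sparse 2-D Laplacian as a stand-in for a Newton-step Jacobian
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kronsum(T, T))
b = np.ones(A.shape[0])

ilu = spilu(A, drop_tol=1e-4)                        # incomplete LU factorization
M = LinearOperator(A.shape, ilu.solve)               # preconditioner as an operator

iters = []
x, info = gmres(A, b, M=M, restart=30,
                callback=lambda rk: iters.append(rk), callback_type="pr_norm")
print("converged" if info == 0 else "not converged", "after", len(iters), "iterations")
```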
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. The diversity of CI system designs available today or proposed for the near future poses significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. Therefore, the detectivity metric is designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
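The quantity being standardized is the Hotelling (optimal linear matched filter) SNR. In its textbook form, stated here for orientation rather than quoted from the paper:

```latex
\mathrm{SNR}^{2}_{\mathrm{Hot}}
  = \Delta\bar{\mathbf{g}}^{\mathsf T}\,\mathbf{K}_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}},
\qquad
\Delta\bar{\mathbf{g}} = \bar{\mathbf{g}}_{\mathrm{target}} - \bar{\mathbf{g}}_{\mathrm{background}},
```

where g is the data vector available to the system (here, in computational space) and K_g its covariance; the detectivity metric evaluates this quantity with the standardization considerations described above.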
NASA Astrophysics Data System (ADS)
Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.
2017-01-01
As part of research into the possibility and prospects of power density equalization in boiling water reactors (exemplified by the WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior work showed that critical state calculations have a deviation of calculated reactivity exceeding ±0.3% (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2% for maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor. Thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7% (ΔKef/Kef) and a dispersion of design values of ±1% (ΔK/K) shall be deemed acceptable. Further lowering of these parameters apparently requires root cause analysis of such large values and paying more attention to experimental measurement techniques.
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).
Heavy Analysis and Light Virtualization of Water Use Data with Python
NASA Astrophysics Data System (ADS)
Kim, H.; Bijoor, N.; Famiglietti, J. S.
2014-12-01
Water utilities possess a large amount of water data that could be used to inform urban ecohydrology, management decisions, and conservation policies, but such data are rarely analyzed owing to the difficulty of analysis, visualization, and interpretation. We have developed a high performance computing resource for this purpose. We partnered with 6 water agencies in Orange County who provided 10 years of parcel-level monthly water use billing data for a pilot study. The first challenge that we overcame was to correct human errors and unify the many different data formats across agencies. Second, we tested and applied experimental approaches to the data, including complex calculations, with high efficiency. Third, we developed a method to refine the data so it can be browsed along a time series index and/or with geo-spatial queries with high efficiency, no matter how large the data. Python scientific libraries were the best match for handling arbitrary data sets in our environment. Further milestones include agency entry, sets of formulae, and maintaining 15 M rows × 70 columns of data with high performance of CPU-bound processes. To deal with billions of rows, we built an analysis virtualization stack by leveraging iPython parallel computing. With this architecture, one agency could be considered one computing node or virtual machine that maintains its own data sets. For example, a big agency could use a large node, and a small agency could use a micro node. Under the minimum required raw data specs, more agencies could be analyzed. The program developed in this study simplifies data analysis, visualization, and interpretation of large water datasets, and can be used to analyze large data volumes from water agencies nationally or worldwide.
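A minimal pandas version of the per-agency pipeline described above: normalize each agency's billing export to one schema, then slice by the time index and aggregate by parcel. The column names and CSV layout are assumptions for illustration, not the project's actual schema.

```python
import pandas as pd

def load_agency(path, agency):
    """Normalize one agency's billing export to a common schema
    (hypothetical columns: parcel_id, month, use_ccf, lat, lon)."""
    df = pd.read_csv(path, parse_dates=["month"])
    df.columns = [c.strip().lower() for c in df.columns]
    df["agency"] = agency
    df["use_ccf"] = pd.to_numeric(df["use_ccf"], errors="coerce")   # drop entry errors
    return df.dropna(subset=["use_ccf"]).set_index("month").sort_index()

def summer_use_by_parcel(df, year):
    """Time-series slice plus a per-parcel aggregate: mean summer use for one year."""
    summer = df.loc[f"{year}-06":f"{year}-09"]
    return summer.groupby("parcel_id")["use_ccf"].mean()

# usage (paths and agency codes are illustrative):
# frames = [load_agency(p, a) for p, a in [("irvine.csv", "IRWD"), ("anaheim.csv", "APU")]]
# alldata = pd.concat(frames)
# print(summer_use_by_parcel(alldata, 2010).describe())
```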
The reliable solution and computation time of variable parameters logistic model
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Pan, Xinnong
2018-05-01
The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretical formula-predicted value.
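The notion of a reliable computation time can be illustrated for the fixed-parameter case by iterating the map in double precision alongside a much higher-precision reference and recording the first step at which the two disagree beyond a tolerance. The sketch below uses mpmath as the reference; the paper's variable-parameter map and 10,000-sample ensemble are not reproduced.

```python
from mpmath import mp, mpf

def reliable_steps(x0, r=4.0, tol=1e-6, max_iter=2000, dps=200):
    """First iteration at which float64 and a 200-digit reference diverge by > tol."""
    mp.dps = dps
    x_double, x_ref, r_ref = float(x0), mpf(str(x0)), mpf(str(r))
    for n in range(1, max_iter + 1):
        x_double = r * x_double * (1.0 - x_double)
        x_ref = r_ref * x_ref * (1 - x_ref)
        if abs(x_double - float(x_ref)) > tol:
            return n
    return max_iter

# mean Tc over a small sample of initial values
samples = [0.1 + 0.8 * k / 50 for k in range(50)]
tcs = [reliable_steps(x0) for x0 in samples]
print("mean reliable computation time:", sum(tcs) / len(tcs))
```

With r = 4 the separation grows roughly like 2^n times the initial rounding error, so a Tc of a few tens of iterations at this tolerance is the expected order of magnitude.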
Free energy decomposition of protein-protein interactions.
Noskov, S Y; Lim, C
2001-08-01
A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments, viz., the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400-1700 Å²) were found to drive antibody-antigen and protease-inhibitor binding.
Interferometer for Space Station Windows
NASA Technical Reports Server (NTRS)
Hall, Gregory
2003-01-01
Inspection of space station windows for micrometeorite damage would be a difficult task in situ using current inspection techniques. Commercially available optical profilometers and inspection systems are relatively large, about the size of a desktop computer tower, and require a stable platform to inspect the test object. Also, many devices currently available are designed for laboratory or controlled environments requiring external computer control. This paper presents an approach using a highly developed optical interferometer to inspect the windows from inside the space station itself using a self-contained hand-held device. The interferometer would be capable, at a minimum, of detecting damage as small as one ten-thousandth of an inch in diameter and depth while interrogating a relatively large area. The current developmental state of this device is still in the proof-of-concept stage. The background section of this paper discusses the current state of the art of profilometers as well as the desired configuration of the self-contained, hand-held device. The developments and findings that will allow the configuration change are then discussed, with suggested approaches appearing in the proof-of-concept section.
NASA Astrophysics Data System (ADS)
Maqui, Agustin Francisco
Turbulence in high-speed flows is an important problem in aerospace applications, yet extremely difficult from a theoretical, computational and experimental perspective. A main reason for the lack of complete understanding is the difficulty of generating turbulence in the lab at a range of speeds which can also include hypersonic effects such as thermal non-equilibrium. This work studies the feasibility of a new approach to generate turbulence based on laser-induced photo-excitation/dissociation of seeded molecules. A large database of incompressible and compressible direct numerical simulations (DNS) has been generated to systematically study the development and evolution of the flow towards realistic turbulence. Governing parameters and the conditions necessary for the establishment of turbulence, as well as the length and time scales associated with such process, are identified. For both the compressible and incompressible experiments a minimum Reynolds number is found to be needed for the flow to evolve towards fully developed turbulence. Additionally, for incompressible cases a minimum time scale is required, while for compressible cases a minimum distance from the grid and limit on the maximum temperature introduced are required. Through an extensive analysis of single and two point statistics, as well as spectral dynamics, the primary mechanisms leading to turbulence are shown. As commonly done in compressible turbulence, dilatational and solenoidal components are separated to understand the effect of acoustics on the development of turbulence. Finally, a large database of forced isotropic turbulence has been generated to study the effect of internal degrees of freedom on the evolution of turbulence.
A comparison of approaches for finding minimum identifying codes on graphs
NASA Astrophysics Data System (ADS)
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult with a standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored, consisting of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
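For concreteness, the brute-force baseline that the abstract says does not scale can be written in a few lines. This is a sketch assuming the standard definition of an identifying code (every vertex has a nonempty, unique intersection of its closed neighborhood with the code); it is not one of the three methods compared in the paper:

```python
from itertools import combinations
import networkx as nx

def is_identifying_code(G, code):
    """Every vertex must have a nonempty, unique signature N[v] ∩ code."""
    code = set(code)
    signatures = []
    for v in G.nodes:
        sig = frozenset(code & ({v} | set(G.neighbors(v))))  # closed neighborhood ∩ code
        if not sig:
            return False
        signatures.append(sig)
    return len(set(signatures)) == len(signatures)

def minimum_identifying_code(G):
    """Exhaustive search over subsets of increasing size (exponential; tiny graphs only)."""
    for k in range(1, G.number_of_nodes() + 1):
        for cand in combinations(G.nodes, k):
            if is_identifying_code(G, cand):
                return set(cand)
    return None  # graphs with twin vertices admit no identifying code

print(minimum_identifying_code(nx.cycle_graph(6)))   # a size-3 code for the 6-cycle
```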
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
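The nonminimum-phase test described above reduces to checking whether any zero of the fitted discrete transfer function lies outside the unit circle. A minimal sketch with hypothetical numerator coefficients (not the fitted kernels from the study):

```python
import numpy as np

def is_minimum_phase(b):
    """A discrete filter with numerator b[0] + b[1] z^-1 + b[2] z^-2 + ... is
    minimum-phase only if all zeros of its transfer function lie inside the unit circle."""
    zeros = np.roots(b)
    return bool(np.all(np.abs(zeros) < 1.0))

# Hypothetical numerator coefficients from a fitted rational transfer function.
b_fitted = [1.0, -2.5, 1.2]            # one zero near z = 1.85, outside the unit circle
print(is_minimum_phase(b_fitted))      # False -> nonminimum-phase behavior
print(np.abs(np.roots(b_fitted)))      # zero magnitudes
```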
A Microworld Approach to the Formalization of Musical Knowledge.
ERIC Educational Resources Information Center
Honing, Henkjan
1993-01-01
Discusses the importance of applying computational modeling and artificial intelligence techniques to music cognition and computer music research. Recommends three uses of microworlds to trim computational theories to their bare minimum, allowing for better and easier comparison. (CFR)
Automated design of minimum drag light aircraft fuselages and nacelles
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Fox, S. R.; Karlin, B. E.
1982-01-01
The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance, which greatly reduces the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that a high value of the efficiency criteria did not necessarily indicate excellent performance of the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated, although good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
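As a sketch of the second (variance-based) step, Sobol first-order and total indices can be computed with the SALib package. The parameter names, bounds, and the toy stand-in for a DHSVM run below are assumptions for illustration, not the configuration used in the study:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# The 16 screened DHSVM parameters would be listed here; three hypothetical ones shown.
problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "understory_min_resistance"],
    "bounds": [[1e-4, 1e-2], [0.3, 0.5], [100.0, 600.0]],
}

X = saltelli.sample(problem, 1024)          # Saltelli sampling for Sobol indices

def run_model(params):
    """Stand-in for a DHSVM run returning one hydrological signature (e.g. water yield)."""
    k, phi, r = params
    return 2.0 * np.log10(k) + 5.0 * phi + 0.001 * r

Y = np.apply_along_axis(run_model, 1, X)
Si = sobol.analyze(problem, Y)              # first-order (S1) and total (ST) indices
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))
```

In practice each `run_model` call would be a full distributed-model simulation, which is why the study parallelizes this loop.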
25 CFR 542.10 - What are the minimum internal control standards for keno?
Code of Federal Regulations, 2012 CFR
2012-04-01
... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...
25 CFR 542.10 - What are the minimum internal control standards for keno?
Code of Federal Regulations, 2013 CFR
2013-04-01
... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...
Adapting Teaching Strategies To Encompass New Technologies.
ERIC Educational Resources Information Center
Oravec, Jo Ann
2001-01-01
The explosion of special-purpose computing devices--Internet appliances, handheld computers, wireless Internet, networked household appliances--challenges business educators attempting to provide computer literacy education. At a minimum, they should address connectivity, expanded applications, and social and public policy implications of these…
Pressure fluctuation generated by the interaction of blade and tongue
NASA Astrophysics Data System (ADS)
Zheng, Lulu; Dou, Hua-Shu; Chen, Xiaoping; Zhu, Zuchao; Cui, Baoling
2018-02-01
Pressure fluctuation around the tongue has a large effect on the stable operation of a centrifugal pump. In this paper, the Reynolds-averaged Navier-Stokes (RANS) equations and the RNG k-epsilon turbulence model are employed to simulate the flow in a pump. The flow field in the centrifugal pump is computed for a range of flow rates. The simulation results have been compared with the experimental data and good agreement has been achieved. In order to study the interaction of the tongue with the impeller, fifteen monitor probes are evenly distributed circumferentially at three radii around the tongue. The pressure distribution is investigated at various blade positions as the blade approaches and leaves the tongue region. Results show that the pressure signal fluctuates strongly around the tongue, and more intensely near the tongue surface. At the design condition, the standard deviation of the pressure fluctuation is at its minimum. At large flow rates, the enlarged low-pressure region at the blade trailing edge increases the pressure fluctuation amplitude and the pressure spectra at the monitor probes. The minimum pressure is obtained when the blade faces the tongue. It is found that the amplitude of pressure fluctuation strongly depends on the blade position at large flow rates, and that the pressure fluctuation is caused by the relative movement between the blades and the tongue. At small flow rates, the pattern of pressure fluctuation depends mainly on the structure of the vortex flow at the blade passage exit, in addition to the influence of the relative position between the blade and the tongue.
Structural and mechanical properties of glassy water in nanoscale confinement.
Lombardo, Thomas G; Giovambattista, Nicolás; Debenedetti, Pablo G
2009-01-01
We investigate the structure and mechanical properties of glassy water confined between silica-based surfaces with continuously tunable hydrophobicity and hydrophilicity by computing and analyzing minimum energy, mechanically stable configurations (inherent structures). The structured silica substrate imposes long-range order on the first layer of water molecules under hydrophobic confinement at high density (ρ ≥ 1.0 g cm⁻³). This proximal layer is also structured in hydrophilic confinement at very low density (ρ ≈ 0.4 g cm⁻³). The ordering of water next to the hydrophobic surface greatly enhances the mechanical strength of thin films (0.8 nm). This leads to a substantial stress anisotropy; the transverse strength of the film exceeds the normal strength by 500 MPa. The large transverse strength results in a minimum in the equation of state of the energy landscape that does not correspond to a mechanical instability, but represents disruption of the ordered layer of water next to the wall. In addition, we find that the mode of mechanical failure is dependent on the type of confinement. Under large lateral strain, water confined by hydrophilic surfaces preferentially forms voids in the middle of the film and fails cohesively. In contrast, water under hydrophobic confinement tends to form voids near the walls and fails by loss of adhesion.
Activities of Research Institute for Advanced Computer Science
NASA Technical Reports Server (NTRS)
Gross, Anthony R. (Technical Monitor); Leiner, Barry M.
2001-01-01
The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: techniques are being developed to enable spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: advances in the performance of computing and networking continue to have a major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.
Cyclic Evolution of Coronal Fields from a Coupled Dynamo Potential-Field Source-Surface Model.
Dikpati, Mausumi; Suresh, Akshaya; Burkepile, Joan
The structure of the Sun's corona varies with the solar-cycle phase, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. It is widely accepted that the large-scale coronal structure is governed by magnetic fields that are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential-field source-surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation; these dynamo-generated fields are extended from the photosphere to the corona using a potential-field source-surface model. Assuming axisymmetry, we take linear combinations of associated Legendre polynomials that match the more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986 - 1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observations. We show that at minimum the dipole term dominates, but it fades as the cycle progresses; higher-order multipolar terms begin to dominate. The amplitudes of these terms are not exactly the same for the two limbs, indicating that there is a longitude dependence. While both the 1986 and the 1996 minimum coronas were dipolar, the minimum in 2008 was unusual, since there was a substantial departure from a dipole. We investigate the physical cause of this departure by including a North-South asymmetry in the surface source of the magnetic fields in our flux-transport dynamo model, and find that this asymmetry could be one of the reasons for departure from the dipole in the 2008 minimum.
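A minimal illustration of the Legendre decomposition step: for an axisymmetric (m = 0) brightness profile, coefficients up to degree eight can be fit by least squares. The profile below is synthetic, not MLSO data, and the paper's full treatment uses associated Legendre polynomials with limb-dependent fits:

```python
import numpy as np
from numpy.polynomial import legendre

theta = np.linspace(0.0, np.pi, 181)                     # colatitude
mu = np.cos(theta)
# Synthetic, dipole-weighted profile: 1*P0 + 0.8*P1 + 0.3*P2 in mu.
profile = 1.0 + 0.8 * mu + 0.3 * (3 * mu**2 - 1) / 2

coeffs = legendre.legfit(mu, profile, deg=8)             # coefficients of P_0 ... P_8
print(np.round(coeffs, 3))                               # ≈ [1.0, 0.8, 0.3, 0, ...]
```

At solar minimum the l = 1 (dipole) coefficient would dominate such a fit; as the cycle progresses, higher-degree coefficients grow, which is the behavior the paper quantifies.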
Carasik, Lane B.; Shaver, Dillon R.; Haefner, Jonah B.; ...
2017-08-21
The development of molten salt cooled reactors (MSRs) and fluoride-salt-cooled high temperature reactors (FHRs) requires the use of advanced design tools for the primary heat exchanger design. Due to geometric and flow characteristics, compact (pitch-to-diameter ratios equal to or less than 1.25) heat exchangers with a crossflow arrangement can become desirable for these reactors. Unfortunately, the available experimental data are limited for compact tube bundles or banks in crossflow. Computational Fluid Dynamics can be used to alleviate the lack of experimental data in these tube banks. Previous computational efforts have been focused primarily on large S/D ratios (larger than 1.4) using unsteady Reynolds-averaged Navier-Stokes and Large Eddy Simulation frameworks. These approaches are useful, but have large computational requirements that make comprehensive design studies impractical. A CFD study was therefore conducted with steady RANS to provide a starting point for future design work. The study was performed for an in-line tube bank geometry with FLiBe (LiF-BeF2), a frequently selected molten salt, as the working fluid. Based on the estimated pressure drops and the pressure and velocity distributions in the domain, an appropriate meshing strategy was determined and presented. Periodic boundaries in the spanwise direction transverse to the flow were determined to be an appropriate boundary condition for reduced computational domains. The domain size was investigated, and a minimum of two flow channels per domain is recommended to ensure the relevant flow behavior is captured. Finally, the standard low-Re κ-ε (Lien) turbulence model was determined to be the most appropriate for steady RANS of this case at the time of writing.
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
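For orientation, the quantity being bounded can be written down directly. The sketch below is a brute-force enumeration of the QUBO minimum on a small hypothetical instance; it is exponential in n and is unrelated to the polynomial-time lower bounds C2 and C3 discussed in the paper:

```python
from itertools import product

def qubo_minimum(Q):
    """Exhaustive minimization of x^T Q x over x in {0,1}^n (illustration only)."""
    n = len(Q)
    best_val, best_x = None, None
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Small hypothetical instance; diagonal entries act as linear terms since x_i^2 = x_i.
Q = [[-1,  2,  0],
     [ 0, -2,  1],
     [ 0,  0, -1]]
print(qubo_minimum(Q))   # -> (-2, (0, 1, 0)) for this Q
```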
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
NASA Astrophysics Data System (ADS)
Sarkar, Shubhra; Ramanathan, N.; Gopi, R.; Sundararajan, K.
2017-12-01
Hydrogen-bonded interactions of pyrrole multimers and acetylene-pyrrole complexes were studied in N2 and p-H2 matrixes. DFT computations showed that a T-shaped geometry for the pyrrole dimer and cyclic structures for the trimer and tetramer were the most stable, stabilized by N–H⋯π interactions. The experimental vibrational wavenumbers observed in N2 and p-H2 matrixes for the pyrrole multimers were correlated with the computed wavenumbers. Computations performed at the MP2/aug-cc-pVDZ level of theory showed that C2H2 and C4H5N form 1:1 hydrogen-bonded complexes stabilized by a C–H⋯π interaction (Complex A), an N–H⋯π interaction (Complex B) and a π⋯π interaction (Complex C), where the first complex is the global minimum and the latter two complexes are the first and second local minima, respectively. Experimentally, the 1:1 C2H2–C4H5N complexes A (global minimum) and B (first local minimum) were identified from the shifts in the N–H stretching, N–H bending and C–H bending regions of pyrrole and the C–H asymmetric stretching and bending regions of C2H2 in N2 and p-H2 matrixes. Computations were also performed for the higher complexes and found two minima corresponding to the 1:2 C2H2–C4H5N complexes and three minima for the 2:1 C2H2–C4H5N complexes. Experimentally, the global minimum 1:2 and 2:1 C2H2–C4H5N complexes were identified in N2 and p-H2 matrixes.
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2013 CFR
2013-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2012 CFR
2012-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2014 CFR
2014-04-01
... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei
2013-10-01
The minimum spanning tree (MST) problem is to find a minimum-weight connected edge subset containing all the vertices of a given undirected graph. It is a fundamental problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies, DNA molecular operations were usually used to solve head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral path structures, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem within a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values within very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
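For comparison, the conventional (silicon) baseline against which such DNA procedures are measured can be stated in a few lines; a sketch of Kruskal's algorithm, which solves MST in O(m log m) time on a standard computer (this is not the paper's DNA procedure):

```python
def kruskal_mst(n, edges):
    """Classical Kruskal MST with union-find; edges are (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                         # keep the edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return total, tree

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal_mst(4, edges))   # -> (6, [(1, 2, 1), (2, 3, 2), (0, 2, 3)])
```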
Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
Ball bearing heat analysis program (BABHAP)
NASA Technical Reports Server (NTRS)
1978-01-01
The Ball Bearing Heat Analysis Program (BABHAP) is an attempt to assemble a series of equations, some of which are non-linear algebraic systems, in a logical order which, when solved, provides a complete analysis of load distribution among the balls, ball velocities, heat generation resulting from friction, applied load, and ball spinning, minimum lubricant film thickness, and many additional characteristics of ball bearing systems. Although the initial design requirements for BABHAP were dictated by the core limitations of the PDP 11/45 computer (approximately 8K of real words with a limited number of instructions), the program dimensions can easily be expanded for large-core computers such as the UNIVAC 1108. The PDP version of BABHAP is also operational on the UNIVAC system, with the exception that the PDP uses an 029 punch and the UNIVAC uses an 026. A conversion program was written to allow transfer between machines.
Ascent velocity and dynamics of the Fiumicino mud eruption, Rome, Italy
NASA Astrophysics Data System (ADS)
Vona, A.; Giordano, G.; De Benedetti, A. A.; D'Ambrosio, R.; Romano, C.; Manga, M.
2015-08-01
In August 2013 drilling triggered the eruption of mud near the international airport of Fiumicino (Rome, Italy). We monitored the evolution of the eruption and collected samples for laboratory characterization of physicochemical and rheological properties. Over time, muds show a progressive dilution with water; the rheology is typical of pseudoplastic fluids, with a small yield stress that decreases as mud density decreases. The eruption, while not naturally triggered, shares several similarities with natural mud volcanoes, including mud componentry, grain-size distribution, gas discharge, and mud rheology. We use the size of large ballistic fragments ejected from the vent along with mud rheology to compute a minimum ascent velocity of the mud. Computed values are consistent with in situ measurements of gas phase velocities, confirming that the stratigraphic record of mud eruptions can be quantitatively used to infer eruption history and ascent rates and hence to assess (or reassess) mud eruption hazards.
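The ballistic estimate alluded to above can be sketched simply: neglecting drag, a fragment landing a horizontal distance d from the vent requires a launch speed of at least sqrt(g·d), the minimum over launch angles (attained at 45 degrees). The distance below is hypothetical, not a Fiumicino measurement:

```python
import math

def minimum_launch_speed(distance_m, g=9.81):
    """Minimum launch speed (drag neglected) for a ballistic fragment to reach a
    horizontal range distance_m; the optimum launch angle is 45 degrees."""
    return math.sqrt(g * distance_m)

# Hypothetical: a clast found 20 m from the vent implies an ejection speed of ~14 m/s.
print(round(minimum_launch_speed(20.0), 1))   # 14.0 m/s
```

Drag and the mud's yield stress raise this estimate, which is why such values serve as lower bounds on the ascent velocity.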
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
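A minimal global-best particle swarm sketch conveys the mechanics (inertia plus cognitive and social pulls). The coefficients and the test function below are conventional textbook choices, not the improved variant or the structural problems of the paper:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))        # initial positions
    v = np.zeros_like(x)                                        # initial velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)   # personal bests
    gbest = pbest[pbest_val.argmin()]                           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

# Example: sphere function in 5 dimensions; the minimum is at the origin.
print(pso(lambda z: float(np.sum(z**2)), bounds=[(-5, 5)] * 5))
```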
Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael
2016-01-01
Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.
29 CFR 541.604 - Minimum guarantee plus extras.
Code of Federal Regulations, 2010 CFR
2010-07-01
... DEFINING AND DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Salary Requirements § 541.604 Minimum guarantee plus extras. (a) An employer may provide... commission on sales. An exempt employee also may receive a percentage of the sales or profits of the employer...
Emadi Andani, Mehran; Bahrami, Fariba
2012-10-01
Flash and Hogan (1985) suggested that the CNS employs a minimum jerk strategy when planning any given movement. Later, Nakano et al. (1999) showed that minimum angle jerk predicts the actual arm trajectory curvature better than the minimum jerk model. Friedman and Flash (2009) confirmed this claim. Besides the behavioral support that we will discuss, we will show that this model allows simplicity in planning any given movement. In particular, we prove mathematically that each movement that satisfies the minimum joint angle jerk condition is reproducible by a linear combination of six functions. These functions are calculated independent of the type of the movement and are normalized in the time domain. Hence, we call these six universal functions the Movement Elements (ME). We also show that the kinematic information at the beginning and end of the movement determines the coefficients of the linear combination. On the other hand, in analyzing recorded data from sit-to-stand (STS) transfer, arm-reaching movement (ARM) and gait, we observed that minimum joint angle jerk condition is satisfied only during different successive phases of these movements and not for the entire movement. Driven by these observations, we assumed that any given ballistic movement may be decomposed into several successive phases without overlap, such that for each phase the minimum joint angle jerk condition is satisfied. At the boundaries of each phase the angular acceleration of each joint should obtain its extremum (zero third derivative). As a consequence, joint angles at each phase will be linear combinations of the introduced MEs. Coefficients of the linear combination at each phase are the values of the joint kinematics at the boundaries of that phase. Finally, we conclude that these observations may constitute the basis of a computational interpretation, put differently, of the strategy used by the Central Nervous System (CNS) for motor planning. We call this possible interpretation "Coordinated Minimum Angle jerk Policy" or COMAP. Based on this policy, the function of the CNS in generating the desired pattern of any given task (like STS, ARM or gait) can be described computationally using three factors: (1) the kinematics of the motor system at given body states, i.e., at certain movement events/instances, (2) the time length of each phase, and (3) the proposed MEs. From a computational point of view, this model significantly simplifies the processes of movement planning as well as feature abstraction for saving characterizing information of any given movement in memory. Copyright © 2012 Elsevier B.V. All rights reserved.
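The building block behind this discussion is the classic single-degree-of-freedom minimum-jerk profile of Flash and Hogan (1985), sketched below. The paper's six movement elements for the minimum joint-angle-jerk policy are a related but different construction and are not reproduced here:

```python
import numpy as np

def minimum_jerk(q0, qf, T, t):
    """Classic minimum-jerk trajectory between postures q0 and qf over duration T:
    q(t) = q0 + (qf - q0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t/T.
    It satisfies zero velocity and zero acceleration at both endpoints."""
    s = np.clip(t / T, 0.0, 1.0)
    return q0 + (qf - q0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 6)
print(np.round(minimum_jerk(0.0, 90.0, 1.0, t), 1))  # e.g. a 90-degree joint excursion
```

In the COMAP view described above, such a profile would be fit phase by phase, with the phase boundaries set where the joint angular acceleration reaches an extremum.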
Computer-aided design of high-frequency transistor amplifiers.
NASA Technical Reports Server (NTRS)
Hsieh, C.-C.; Chan, S.-P.
1972-01-01
A systematic step-by-step computer-aided procedure for designing high-frequency transistor amplifiers is described. The technique makes it possible to determine the optimum source impedance which gives a minimum noise figure.
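The optimum-source-impedance idea is usually expressed through the standard two-port noise-parameter model, F = Fmin + (Rn/Gs)|Ys − Yopt|². The sketch below uses that generic model with hypothetical noise parameters; it is not taken from the paper:

```python
def noise_figure(Ys, Yopt, Fmin, Rn):
    """Standard two-port noise model: F = Fmin + (Rn / Gs) * |Ys - Yopt|^2,
    where Gs is the real part of the source admittance Ys (in siemens)."""
    Gs = Ys.real
    return Fmin + (Rn / Gs) * abs(Ys - Yopt) ** 2

# Hypothetical transistor noise parameters; F is minimized when Ys = Yopt.
Fmin, Rn, Yopt = 1.6, 20.0, 0.01 + 0.005j
print(noise_figure(0.01 + 0.005j, Yopt, Fmin, Rn))   # 1.6, the minimum noise figure
print(noise_figure(0.02 - 0.003j, Yopt, Fmin, Rn))   # larger than Fmin
```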
36 CFR 1120.52 - Computerized records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... organizations and upon the particular types of computer and associated equipment and the amounts of time on such... from the computer which permits copying the printout, the material will be made available at the per... information from computerized records frequently involves a minimum computer time cost of approximately $100...
36 CFR 1120.52 - Computerized records.
Code of Federal Regulations, 2011 CFR
2011-07-01
... organizations and upon the particular types of computer and associated equipment and the amounts of time on such... from the computer which permits copying the printout, the material will be made available at the per... information from computerized records frequently involves a minimum computer time cost of approximately $100...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex... (GPS) Y RWY 20, Amdt 1B Cambridge, MN, Cambridge Muni, Takeoff Minimums and Obstacle DP, Orig Pipestone, MN, Pipestone Muni, NDB RWY 36, Amdt 7, CANCELLED Rushford, MN, Rushford Muni, Takeoff Minimums and...
Flow convergence caused by a salinity minimum in a tidal channel
Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey
2006-01-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. (3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
12 CFR 1750.4 - Minimum capital requirement computation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... amounts: (1) 2.50 percent times the aggregate on-balance sheet assets of the Enterprise; (2) 0.45 percent times the unpaid principal balance of mortgage-backed securities and substantially equivalent... last day of the quarter just ended (or the date for which the minimum capital report is filed, if...
Establishing Proficiency Standards for High School Graduation.
ERIC Educational Resources Information Center
Herron, Marshall D.
The Oregon State Board of Education has rejected the use of cut-off scores on a proficiency test to establish minimum performance standards for high school graduation. Instead, each school district is required to specify--by local board adoption--minimum competencies in reading, writing, listening, speaking, analyzing, and computing. These…
20 CFR 229.48 - Family maximum.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a) Family... month on one person's earnings record is limited. This limited amount is called the family maximum. The...
Robustness of mission plans for unmanned aircraft
NASA Astrophysics Data System (ADS)
Niendorf, Moritz
This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, the weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances; therefore, tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem give under-approximations, and sets of tours give over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities are obtainable likewise. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft are given.
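As a small illustration of the communication-topology result, an MST over pairwise transmission costs can be computed directly with networkx. The aircraft positions and the distance-squared power model below are assumptions for the sketch, not data from the thesis:

```python
import networkx as nx

# Hypothetical aircraft positions; transmission power between two aircraft is modeled
# as distance squared, so the MST minimizes cumulative transmission power.
positions = {0: (0, 0), 1: (3, 1), 2: (1, 4), 3: (5, 5), 4: (2, 2)}

G = nx.complete_graph(len(positions))
for u, v in G.edges:
    (xu, yu), (xv, yv) = positions[u], positions[v]
    G[u][v]["weight"] = (xu - xv) ** 2 + (yu - yv) ** 2

mst = nx.minimum_spanning_tree(G)          # the optimal communication topology
print(sorted(mst.edges(data="weight")))
print("total power:", mst.size(weight="weight"))
```

The stability analysis in the thesis then asks how much each edge weight (e.g. from aircraft motion) may change before this tree stops being optimal.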
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing, namely minimum cost trees and shortest path trees. A minimum cost tree (MCT) is one which connects the receivers and sources using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs) and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We have also evaluated multicast performance in small and large wireless mesh networks. For multicast performance in small networks, we have found that when the traffic load is moderate or high the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For multicast performance in large networks, the MSTs provide the minimum total edge cost and minimum number of transmissions. We have also found one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs do.
Computer Model for Sizing Rapid Transit Tunnel Diameters
DOT National Transportation Integrated Search
1976-01-01
A computer program was developed to assist the determination of minimum tunnel diameters for electrified rapid transit systems. Inputs include vehicle shape, walkway location, clearances, and track geometrics. The program written in FORTRAN IV calcul...
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
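The idea of the CI construction can be sketched numerically: fit a U-shaped curve, resample its coefficients from their approximate sampling distribution, and take percentiles of the resulting minima. The sketch below substitutes a quadratic fit on synthetic data for the spline and quasi-Poisson model of the paper, so it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily data: U-shaped temperature-mortality curve with a minimum near 21 C.
temp = rng.uniform(0, 35, 3000)
deaths = rng.poisson(np.exp(3.0 + 0.0015 * (temp - 21.0) ** 2))

# Quadratic fit on the log scale as a stand-in for the spline model used in the paper.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])
y = np.log(deaths + 0.5)
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
cov = np.var(y - X @ beta) * np.linalg.inv(X.T @ X)   # approximate covariance of beta

# Approximate parametric bootstrap: resample coefficients, locate each curve's minimum.
grid = np.linspace(0, 35, 701)
Xg = np.column_stack([np.ones_like(grid), grid, grid ** 2])
draws = rng.multivariate_normal(beta, cov, size=2000)
mmt_samples = grid[np.argmin(draws @ Xg.T, axis=1)]

mmt = grid[np.argmin(Xg @ beta)]
ci = np.percentile(mmt_samples, [2.5, 97.5])
print(f"MMT = {mmt:.1f} C, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}), SE = {mmt_samples.std():.2f}")
```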
An electrically reconfigurable logic gate intrinsically enabled by spin-orbit materials.
Kazemi, Mohammad
2017-11-10
The spin degree of freedom in magnetic devices has been discussed widely for computing, since it could significantly reduce energy dissipation, might enable beyond-Von Neumann computing, and could have applications in quantum computing. For spin-based computing to become widespread, however, energy-efficient logic gates comprising as few devices as possible are required. Considerable recent progress has been reported in this area. However, proposals for spin-based logic either require ancillary charge-based devices and circuits in each individual gate or adopt principles underlying charge-based computing by employing ancillary spin-based devices, which largely negates possible advantages. Here, we show that spin-orbit materials possess an intrinsic basis for the execution of logic operations. We present a spin-orbit logic gate that performs a universal logic operation utilizing the minimum possible number of devices, that is, the essential devices required for representing the logic operands. Also, whereas the previous proposals for spin-based logic require extra devices in each individual gate to provide reconfigurability, the proposed gate is 'electrically' reconfigurable at run-time simply by setting the amplitude of the clock pulse applied to the gate. We demonstrate, analytically and numerically with experimentally benchmarked models, that the gate performs logic operations and simultaneously stores the result, realizing 'stateful' spin-based logic scalable to ultralow energy dissipation.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs to solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows efficiency and accuracy comparable with an existing optimality criteria method which computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding the output of one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org. PMID:26678849
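The core proportional update can be sketched in a few lines. This is a simplified reading of the method (material redistributed in proportion to an element-wise driving quantity, then blended with the previous design), not the authors' MATLAB programs, and it omits the inner loop that re-imposes the target volume exactly after clipping:

```python
import numpy as np

def pto_update(x, quantity, vol_frac, x_min=1e-3, alpha=0.5, q=1.0):
    """One Proportional Topology Optimization-style update (sketch): material is
    distributed in proportion to each element's driving quantity (e.g. stress or
    compliance energy), then blended with the current design for stability."""
    target = vol_frac * x.size                       # target total material
    prop = quantity ** q
    x_new = target * prop / prop.sum()               # proportional distribution
    x_new = np.clip(x_new, x_min, 1.0)               # enforce density bounds
    return alpha * x + (1 - alpha) * x_new           # history blending

# Hypothetical element-wise compliance energies for a 10-element design.
energies = np.array([5.0, 4.0, 3.5, 3.0, 2.0, 1.5, 1.0, 0.8, 0.5, 0.2])
x = np.full(10, 0.5)
for _ in range(20):
    x = pto_update(x, energies, vol_frac=0.5)
print(np.round(x, 2))   # material concentrates in the high-energy elements
```

Because no sensitivities are computed, the per-iteration cost reduces to the finite element solve plus this redistribution, which is the simplicity the abstract emphasizes.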
Application-oriented offloading in heterogeneous networks for mobile cloud computing
NASA Astrophysics Data System (ADS)
Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.
2018-04-01
Internet applications have become so complicated that mobile devices need more computing resources to achieve shorter execution times, yet they are restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
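A first-order offloading criterion often used in MCC studies compares local execution time against remote execution plus transfer time. The sketch below uses that generic model with hypothetical task parameters; it is not the paper's MOTM/METC formulation:

```python
def should_offload(cycles, data_bits, f_local_hz, f_cloud_hz, bandwidth_bps, rtt_s=0.05):
    """Offload when remote execution plus transfer time beats local execution time."""
    t_local = cycles / f_local_hz
    t_remote = cycles / f_cloud_hz + data_bits / bandwidth_bps + rtt_s
    return t_remote < t_local, t_local, t_remote

# Hypothetical task: 2e9 CPU cycles, 4 MB of input data, 10 Mbit/s uplink.
# Here the transfer time dominates, so local execution wins and we do not offload.
print(should_offload(cycles=2e9, data_bits=4 * 8e6, f_local_hz=1.5e9,
                     f_cloud_hz=12e9, bandwidth_bps=10e6))
```

Application-oriented schemes like the one in the paper refine this by choosing the offloading link and the target machine per application category rather than using a single fixed channel.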
Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie
2018-02-20
Accurate and economic methods to predict the change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following recent progress in Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains in the change of free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that ignore or include conformational entropic contributions, on a large benchmark of experimental binding affinity measurements. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. Availability: open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Samec, Ronald G.; Su, Wen; Terrell, Dirk; Hube, Douglas P.
1993-01-01
A complete photometric analysis of BVRI Johnson-Cousins photometry of the high northern latitude galactic variable, CE Leo is presented. These observations were taken at Kitt Peak National Observatory on May 31, 1989-June 7, 1989. Three new precise epochs of minimum light were determined and a linear and a quadratic ephemeris were computed from these and previous data covering 28 years of observation. The light curves reveal that the system undergoes a brief 20 min totality in the primary eclipse, indicating that CE Leo is a W UMa W-type binary. A systemic velocity of about -40 km/s was determined. Standard magnitudes were found and a simultaneous solution of the B, V, R, I light curves was computed using the new Wilson-Devinney synthetic light curve code which has the capability of automatically adjusting star spots. The solution indicates that the system consists of two early K-type dwarfs in marginal contact with a fill-out factor less than 3 percent. Evidence for the presence of a large (45 deg radius) superluminous area on the cooler component is given.
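The ephemeris computation mentioned above follows the standard linear form T_min = T0 + P·E (with a quadratic term c·E² added when the period is changing). The sketch below uses hypothetical reference values rather than the CE Leo elements:

```python
def predicted_minimum(t0_hjd, period_days, epoch):
    """Linear ephemeris: T_min = T0 + P * E (a quadratic ephemeris adds c * E**2)."""
    return t0_hjd + period_days * epoch

# Hypothetical T0 and period for a W UMa-type binary (not the published CE Leo values).
t0, period = 2447678.5000, 0.303430
print(predicted_minimum(t0, period, epoch=1000))   # -> 2447981.93 (HJD)
```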
Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.
Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B
2013-01-01
A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
Exploration for porphyry copper deposits in Pakistan using digital processing of Landsat-1 data
NASA Technical Reports Server (NTRS)
Schmidt, R. G.
1976-01-01
Rock-type classification by digital-computer processing of Landsat-1 multispectral scanner data has been used to select 23 prospecting targets in the Chagai District, Pakistan, five of which have proved to be large areas of hydrothermally altered porphyry containing pyrite. Empirical maximum and minimum apparent reflectance limits were selected for each multispectral scanner band in each rock type classified, and a relatively unrefined classification table was prepared. Where the values for all four bands fitted within the limits designated for a particular class, a symbol for the presumed rock type was printed by the computer at the appropriate location. Drainage channels, areas of mineralized quartz diorite, areas of pyrite-rich rock, and the approximate limit of propylitic alteration were very well delineated on the computer-generated map of the test area. The classification method was used to evaluate 2,100 sq km in the Mashki Chah region. The results of the experiment show that outcrops of hydrothermally altered and mineralized rock can be identified from Landsat-1 data under favorable conditions.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and..., Takeoff Minimums and Obstacle DP, Amdt 2 Perham, MN, Perham Muni, RNAV (GPS) RWY 13, Orig Perham, MN, Perham Muni, RNAV (GPS) RWY 31, Amdt 1 Perham, MN, Perham Muni, Takeoff Minimums and Obstacle DP, Amdt 1...
Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.
1987-01-01
The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author's abstract)
Determining collective barrier operation skew in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
2015-11-24
Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time, and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
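The skew calculation itself reduces to taking the spread of the per-node barrier completion times. A minimal Python sketch of that step (the timing values below are hypothetical stand-ins; in the procedure described above they would come from one delayed-node experiment per compute node):

```python
# Minimal sketch: barrier operation skew from per-node completion times.
# The values are hypothetical measured timings in microseconds.

def barrier_skew(completion_times):
    """Skew = maximum barrier completion time minus minimum barrier completion time."""
    return max(completion_times) - min(completion_times)

times_us = [112.4, 108.9, 131.7, 110.2, 115.5]   # one timing per compute node
print(f"barrier skew: {barrier_skew(times_us):.1f} us")
```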
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
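The workflow outlined above (scan the prescribed domain for candidate points, then refine each candidate by bound-constrained minimization of the minimum-modulus eigenvalue) can be sketched as follows. This is not CCOMP itself: `system_matrix` is a hypothetical 2x2 stand-in for a real determinantal system, and the grid density, bounds and acceptance tolerance are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def system_matrix(z):
    # Hypothetical 2x2 determinantal system; a real application would assemble the
    # frequency/wavenumber-dependent matrix of its boundary-value problem here.
    return np.array([[z**2 - 1.0, 0.3],
                     [0.3, np.cos(z)]])

def min_modulus_eig(x, y):
    # Objective: modulus of the smallest-magnitude eigenvalue at z = x + iy.
    return np.min(np.abs(np.linalg.eigvals(system_matrix(complex(x, y)))))

# 1) Coarse scan of the prescribed rectangular domain [-2, 2] x [-1, 1].
xs, ys = np.linspace(-2, 2, 81), np.linspace(-1, 1, 41)
grid = np.array([[min_modulus_eig(x, y) for x in xs] for y in ys])

# 2) Candidate points: interior grid nodes that are local minima of the sampled objective.
candidates = [(xs[j], ys[i])
              for i in range(1, len(ys) - 1)
              for j in range(1, len(xs) - 1)
              if grid[i, j] <= grid[i - 1:i + 2, j - 1:j + 2].min()]

# 3) Refine each candidate by bound-constrained minimization and keep points where the
#    objective is driven close to zero, i.e. roots of det(M(z)) = 0.  The tolerance is
#    deliberately loose because |lambda_min| decreases only linearly near a simple root.
roots = []
for x0, y0 in candidates:
    res = minimize(lambda p: min_modulus_eig(*p), (x0, y0),
                   method="L-BFGS-B", bounds=[(-2, 2), (-1, 1)])
    if res.fun < 1e-4:
        roots.append(complex(*res.x))
print(roots)
```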
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.49 - Adjustment of benefits under family maximum for change in family group.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for change in family group. 229.49 Section 229.49 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.49 Adjustment of benefits under family maximum for change in family group. (a...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
NASA Astrophysics Data System (ADS)
Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.
2015-12-01
As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projections is crucial not only to scientists but also to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future AR is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API, its capabilities, provide implementation details, and discuss future developments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, X.; Florinski, V.
We present a new model that couples galactic cosmic-ray (GCR) propagation with magnetic turbulence transport and the MHD background evolution in the heliosphere. The model is applied to the problem of the formation of corotating interaction regions (CIRs) during the last solar minimum, in the period between 2007 and 2009. The numerical model simultaneously calculates the large-scale supersonic solar wind properties and its small-scale turbulent content from 0.3 au to the termination shock. Cosmic rays are then transported through the background computed in this way, with diffusion coefficients derived from the solar wind turbulent properties, using a stochastic Parker approach. Our results demonstrate that GCR variations depend on the ratio of diffusion coefficients in the fast and slow solar winds. Stream interfaces inside the CIRs always lead to depressions of the GCR intensity. On the other hand, heliospheric current sheet (HCS) crossings do not appreciably affect GCR intensities in the model, which is consistent with the observations under quiet solar wind conditions. Therefore, variations in diffusion coefficients associated with CIR stream interfaces are more important for GCR propagation than the drift effects of the HCS during a negative solar minimum.
Zhu, Hongjun; Feng, Guang; Wang, Qijun
2014-01-01
Accurate prediction of erosion thickness is essential for pipe engineering. The objective of the present paper is to study the temperature distribution in an eroded bend pipe and find a new method to predict the erosion reduced thickness. Computational fluid dynamics (CFD) simulations with FLUENT software are carried out to investigate the temperature field, and the effects of oil inlet rate, oil inlet temperature, and erosion reduced thickness are examined. The presence of an erosion pit brings about an obvious fluctuation of the temperature drop along the extrados of the bend, and the minimum temperature drop occurs at the most severe erosion point. A small inlet temperature or a large inlet velocity leads to a small temperature drop, while a shallow erosion pit causes a great temperature drop. The dimensionless minimum temperature drop is analyzed and a fitting formula is obtained. Using the formula we can calculate the erosion reduced thickness, which requires only monitoring the outer surface temperature of the bend pipe. This new method can provide useful guidance for pipeline monitoring and replacement. PMID:24719576
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
Quadratic Assignment Problems (QAP) are classified as NP-hard. The QAP has been used to model many problems in areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics and metaheuristic approaches to solve the QAP, and it is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem in which 8 facilities need to be assigned to 8 locations. Hence we have modeled a QAP instance with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized, where flow is the movement from one facility to another and distance is the distance between facility locations. For this problem, the QAP objective amounts to minimizing the total walking (flow) of lecturers from one destination to another (distance), as illustrated by the sketch below.
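The QAP objective stated above is compact enough to write down directly. The sketch below is not the authors' ACO solver; it only evaluates the cost of an assignment (sum over facility pairs of flow times distance) and, because n = 8 gives just 8! = 40,320 permutations, checks it by brute force. The flow and distance matrices are hypothetical placeholders.

```python
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
n = 8
flow = rng.integers(0, 10, size=(n, n)).astype(float)   # hypothetical lecturer movements between facilities
dist = rng.integers(1, 20, size=(n, n)).astype(float)    # hypothetical distances between locations
np.fill_diagonal(flow, 0.0)
np.fill_diagonal(dist, 0.0)

def qap_cost(perm):
    """perm[i] = location assigned to facility i; cost = sum_{i,j} flow[i,j] * dist[perm[i], perm[j]]."""
    perm = np.asarray(perm)
    return float((flow * dist[np.ix_(perm, perm)]).sum())

best = min(permutations(range(n)), key=qap_cost)          # exhaustive search is feasible at n = 8
print("best assignment:", best, "cost:", qap_cost(best))
```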
NASA Technical Reports Server (NTRS)
Snyder, C. T.; Fry, E. B.; Drinkwater, F. J., III; Forrest, R. D.; Scott, B. C.; Benefield, T. D.
1972-01-01
A ground-based simulator investigation was conducted in preparation for, and correlation with, an in-flight simulator program. The objective of these studies was to define minimum acceptable levels of static longitudinal stability for the landing approach following stability augmentation system failures. The airworthiness authorities are presently attempting to establish the requirements for civil transports with only the backup flight control system operating. Using a baseline configuration representative of a large delta-wing transport, 20 different configurations, many representing negative static margins, were assessed by three research test pilots in 33 hours of piloted operation. Verification of the baseline model to be used in the TIFS experiment was provided by computed and piloted comparisons with a well-validated reference airplane simulation. Pilot comments and ratings are included, as well as preliminary tracking performance and workload data.
Code of Federal Regulations, 2010 CFR
2010-04-01
... financial reporting and monthly computation by futures commission merchants and introducing brokers. 1.18... UNDER THE COMMODITY EXCHANGE ACT Minimum Financial and Related Reporting Requirements § 1.18 Records for and relating to financial reporting and monthly computation by futures commission merchants and...
25 CFR 542.10 - What are the minimum internal control standards for keno?
Code of Federal Regulations, 2014 CFR
2014-04-01
...) The random number generator shall be linked to the computer system and shall directly relay the... information shall be generated by the computer system. (2) This documentation shall be restricted to... to the computer system shall be adequately restricted (i.e., passwords are changed at least quarterly...
Evaluating Computer Integration in the Elementary School: A Step-by-Step Guide.
ERIC Educational Resources Information Center
Mowe, Richard
This handbook was written to enable elementary school educators to conduct formative evaluations of their computer integrated instruction (CII) programs in minimum time. CII is defined as the use of computer software, such as word processing, database, and graphics programs, to help students solve problems or work more productively. The first…
29 CFR 783.43 - Computation of seaman's minimum wage.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS... STANDARDS ACT TO EMPLOYEES EMPLOYED AS SEAMEN Computation of Wages and Hours § 783.43 Computation of seaman... all hours on duty in such period at the hourly rate prescribed for employees newly covered by the Act...
Code of Federal Regulations, 2011 CFR
2011-04-01
... financial reporting and monthly computation by futures commission merchants and introducing brokers. 1.18... UNDER THE COMMODITY EXCHANGE ACT Minimum Financial and Related Reporting Requirements § 1.18 Records for and relating to financial reporting and monthly computation by futures commission merchants and...
Numerical Computation of Homogeneous Slope Stability
Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong
2015-01-01
To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution of the minimum factor of safety (FOS) to solving of a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS). PMID:25784927
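As a companion to the abstract above, here is a bare-bones particle swarm minimizer of the kind it refers to. The `fos` function is a smooth hypothetical stand-in for a factor of safety evaluated on a two-parameter trial slip surface; a real analysis would substitute the limit-equilibrium expressions, and the swarm settings are illustrative.

```python
import numpy as np

def fos(x):
    # Hypothetical smooth stand-in for the factor of safety of a trial slip surface
    # parameterized by x = (x1, x2); a real analysis would evaluate the
    # limit-equilibrium expressions here instead.
    return 1.1 + 0.3 * (x[0] - 0.4) ** 2 + 0.2 * (x[1] + 0.1) ** 2

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

x_min, fos_min = pso_minimize(fos, bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print(x_min, fos_min)   # should approach (0.4, -0.1) with a minimum FOS near 1.1
```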
26 CFR 54.4980H-0 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
...) Applicable large employer member. (6) Applicable premium tax credit. (7) Bona fide volunteer. (8) Calendar... for certain employees. (27) Minimum essential coverage. (28) Minimum value. (29) Month. (30) New... measurement method applies, or vice versa. (2) Special rule for certain employees to whom minimum value...
APSIDAL MOTION AND A LIGHT CURVE SOLUTION FOR 13 LMC ECCENTRIC ECLIPSING BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zasche, P.; Wolf, M.; Vraštil, J.
2015-12-15
New CCD observations for 13 eccentric eclipsing binaries from the Large Magellanic Cloud were carried out using the Danish 1.54 m telescope located at the La Silla Observatory in Chile. These systems were observed for their times of minimum and 56 new minima were obtained. These are needed for accurate determination of the apsidal motion. Besides that, in total 436 times of minimum were derived from the photometric databases OGLE and MACHO. The O – C diagrams of minimum timings for these B-type binaries were analyzed and the parameters of the apsidal motion were computed. The light curves of these systems were fitted using the program PHOEBE, giving the light curve parameters. We derived for the first time relatively short periods of the apsidal motion ranging from 21 to 107 years. The system OGLE-LMC-ECL-07902 was also analyzed using the spectra and radial velocities, resulting in masses of 6.8 and 4.4 M⊙ for the eclipsing components. For one system (OGLE-LMC-ECL-20112), the third-body hypothesis was also used to describe the residuals after subtraction of the apsidal motion, resulting in a period of about 22 years. For several systems an additional third light was also detected, which makes these systems suspect for triplicity.
Predictive minimum description length principle approach to inferring gene regulatory networks.
Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping
2011-01-01
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
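The information-theoretic first step that the PMDL criterion builds on can be sketched quickly: discretize the expression profiles, estimate pairwise mutual information, and keep edges above a threshold. The sketch below uses a plain user-set threshold, which is exactly the dependence the PMDL approach is designed to remove; the expression data and bin count are hypothetical.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, bins=4):
    # Equal-width binning of an expression profile into integer labels 0..bins-1.
    edges = np.histogram_bin_edges(x, bins=bins)
    return np.digitize(x, edges[1:-1])

def mi_network(expr, threshold):
    """expr: (genes x samples) array. Returns edges (i, j) whose mutual information exceeds the threshold."""
    labels = [discretize(row) for row in expr]
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if mutual_info_score(labels[i], labels[j]) > threshold:
                edges.add((i, j))
    return edges

rng = np.random.default_rng(1)
expr = rng.normal(size=(10, 50))                      # hypothetical 10 genes x 50 time points
expr[1] = expr[0] + 0.1 * rng.normal(size=50)         # make gene 1 track gene 0
print(mi_network(expr, threshold=0.3))                # expected to contain the edge (0, 1)
```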
Connectivity ranking of heterogeneous random conductivity models
NASA Astrophysics Data System (ADS)
Rizzo, C. B.; de Barros, F.
2017-12-01
To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state-of-the-art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields or non-Gaussian fields, training image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strictly correlated with the early time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
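A small sketch of the graph-based measure used above: treat each cell of a 2D conductivity field as a node, weight each edge by a local hydraulic resistance (here simply the average of 1/K of the two cells, which may differ from the exact convention of the cited work), and compute the minimum hydraulic resistance between the inflow and outflow boundaries as a shortest path; repeating this over Monte-Carlo realizations of the K-field gives its distribution.

```python
import numpy as np
import networkx as nx

def min_hydraulic_resistance(logK):
    """Shortest-path 'resistance' from the left to the right boundary of a 2D log-conductivity field."""
    n_rows, n_cols = logK.shape
    K = np.exp(logK)
    G = nx.Graph()
    for i in range(n_rows):
        for j in range(n_cols):
            for di, dj in ((0, 1), (1, 0)):                       # 4-neighbour grid
                ii, jj = i + di, j + dj
                if ii < n_rows and jj < n_cols:
                    w = 0.5 * (1.0 / K[i, j] + 1.0 / K[ii, jj])   # local resistance of the edge
                    G.add_edge((i, j), (ii, jj), weight=w)
    for i in range(n_rows):                                       # tie boundaries to a source and a sink
        G.add_edge("src", (i, 0), weight=0.0)
        G.add_edge((i, n_cols - 1), "snk", weight=0.0)
    return nx.shortest_path_length(G, "src", "snk", weight="weight")

rng = np.random.default_rng(2)
samples = [min_hydraulic_resistance(rng.normal(0.0, 1.0, size=(30, 30)))
           for _ in range(20)]                                    # small Monte-Carlo ensemble
print(np.mean(samples), np.std(samples))
```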
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, P.; Purdue University, West Lafayette, Indiana 47907; Verma, K.
Borazine is isoelectronic with benzene and is popularly referred to as inorganic benzene. The study of non-covalent interactions with borazine and comparison with its organic counterpart promises to show interesting similarities and differences. The motivation of the present study of the borazine-water interaction, for the first time, stems from such interesting possibilities. Hydrogen-bonded complexes of borazine and water were studied using matrix isolation infrared spectroscopy and quantum chemical calculations. Computations were performed at M06-2X and MP2 levels of theory using 6-311++G(d,p) and aug-cc-pVDZ basis sets. At both the levels of theory, the complex involving an N–H⋯O interaction, where the N–H of borazine serves as the proton donor to the oxygen of water, was found to be the global minimum, in contrast to the benzene-water system, which showed an H–π interaction. The experimentally observed infrared spectra of the complexes corroborated well with our computations for the complex corresponding to the global minimum. In addition to the global minimum, our computations also located two local minima on the borazine-water potential energy surface. Of the two local minima, one corresponded to a structure where the water was the proton donor to the nitrogen of borazine, approaching the borazine ring from above the plane of the ring; a structure that resembled the global minimum in the benzene-water H–π complex. The second local minimum corresponded to an interaction of the oxygen of water with the boron of borazine, which can be termed as the boron bond. Clearly the borazine-water system presents a richer landscape than the benzene-water system.
The use of computers in a materials science laboratory
NASA Technical Reports Server (NTRS)
Neville, J. P.
1990-01-01
The objective is to make available a method of easily recording the microstructure of a sample by means of a computer. The method requires a minimum investment and little or no instruction on the operation of a computer. An outline of the setup involving a black and white TV camera, a digitizer control box, a metallurgical microscope and a computer screen, printer, and keyboard is shown.
NASA Astrophysics Data System (ADS)
Lauvergnat, David; Nauts, André; Justum, Yves; Chapuisat, Xavier
2001-04-01
The harmonic adiabatic approximation (HADA), an efficient and accurate quantum method to calculate highly excited vibrational levels of molecular systems, is presented. It is well-suited to applications to "floppy molecules" with a rather large number of atoms (N>3). A clever choice of internal coordinates naturally suggests their separation into active, slow, or large amplitude coordinates q', and inactive, fast, or small amplitude coordinates q″, which leads to an adiabatic (or Born-Oppenheimer-type) approximation (ADA), i.e., the total wave function is expressed as a product of active and inactive total wave functions. However, within the framework of the ADA, potential energy data concerning the inactive coordinates q″ are required. To reduce this need, a minimum energy domain (MED) is defined by minimizing the potential energy surface (PES) for each value of the active variables q', and a quadratic or harmonic expansion of the PES, based on the MED, is used (MED harmonic potential). In other words, the overall picture is that of a harmonic valley about the MED. In the case of only one active variable, we have a minimum energy path (MEP) and a MEP harmonic potential. The combination of the MED harmonic potential and the adiabatic approximation (harmonic adiabatic approximation: HADA) greatly reduces the size of the numerical computations, so that rather large molecules can be studied. In the present article however, the HADA is applied to our benchmark molecule HCN/CNH, to test the validity of the method. Thus, the HADA vibrational energy levels are compared and are in excellent agreement with the ADA calculations (adiabatic approximation with the full PES) of Light and Bačić [J. Chem. Phys. 87, 4008 (1987)]. Furthermore, the exact harmonic results (exact calculations without the adiabatic approximation but with the MEP harmonic potential) are compared to the exact calculations (without any sort of approximation). In addition, we compare the densities of the bending motion during the HCN/CNH isomerization, computed with the HADA and the exact wave function.
Large eddy simulations of a transcritical round jet submitted to transverse acoustic modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Flesca, M.; CNES DLA, 52 Rue Jacques Hillairet, 75612 Paris Cedex; Schmitt, T.
This article reports numerical computations of a turbulent round jet of transcritical fluid (low temperature nitrogen injected under high pressure conditions) surrounded by the same fluid at rest under supercritical conditions (high temperature and high pressure) and submitted to transverse acoustic modulations. The numerical framework relies on large eddy simulation in combination with a real-gas description of thermodynamics and transport properties. A stationary acoustic field is obtained by modulating the normal acoustic velocity at the lateral boundaries of the computational domain. This study specifically focuses on the interaction of the jet with the acoustic field to investigate how the round transcritical jet changes its shape and mixes with the surrounding fluid. Different modulation amplitudes and frequencies are used to sweep a range of conditions. When the acoustic field is established in the domain, the jet length is notably reduced and the jet is flattened in the spanwise direction. Two regimes of oscillation are identified: for low Strouhal numbers a large amplitude motion is observed, while for higher Strouhal numbers the jet oscillates with a small amplitude around the injector axis. The minimum length is obtained for a Strouhal number of 0.3 and the jet length increases with increasing Strouhal numbers after reaching this minimum value. The mechanism of spanwise deformation is shown to be linked with dynamical effects resulting from reduction of the pressure in the transverse direction in relation with increased velocities on the two sides of the jet. A propagative wave is then introduced in the domain leading to similar effects on the jet, except that a bending is also observed in the acoustic propagation direction. A kinematic model, combining hydrodynamic and acoustic contributions, is derived in a second stage to represent the motion of the jet centerline. This model captures details of the numerical simulations quite well. These various results can serve to interpret observations made on more complex flow configurations such as coaxial jets or jet flames formed by coaxial injectors.
MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences
Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.
2016-01-01
Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
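The central device described above can be illustrated compactly: build a graph whose edge weights measure how dissimilar two frames are, take its minimum spanning tree, and register frames along the tree edges so that poor-quality frames end up as leaves rather than links in a long chain. This is not the MISTICA code; the mean-squared-difference dissimilarity and the synthetic frame stack below are stand-ins for the perceptual measure and microscopy data used in the paper.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_registration_order(frames):
    """frames: (n, h, w) stack. Returns MST edges (i, j) along which to chain the registration."""
    n = len(frames)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = np.mean((frames[i] - frames[j]) ** 2)   # stand-in dissimilarity measure
    mst = minimum_spanning_tree(d)                             # upper-triangular weights suffice
    return list(zip(*mst.nonzero()))

rng = np.random.default_rng(3)
base = rng.random((64, 64))
# Hypothetical sequence: slowly drifting copies of one scene, with one badly corrupted frame.
frames = np.stack([np.roll(base, k, axis=1) + 0.01 * rng.random((64, 64)) for k in range(6)])
frames[3] += 0.5 * rng.random((64, 64))                        # the "poor quality" frame
print(mst_registration_order(frames))                          # frame 3 tends to attach as a leaf
```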
A Computer Analysis of Library Postcards. (CALP)
ERIC Educational Resources Information Center
Stevens, Norman D.
1974-01-01
A description of a sophisticated application of computer techniques to the analysis of a collection of picture postcards of library buildings in an attempt to establish the minimum architectural requirements needed to distinguish one style of library building from another. (Author)
Computer program optimizes design of nuclear radiation shields
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...
Computational studies of the helium-lithium hydride system
NASA Astrophysics Data System (ADS)
Taylor, Brian Keith
2000-12-01
We have computed an ab initio potential energy surface for the He-LiH system. We compute the He-LiH interaction energy at the CCSD(T) level using large correlation consistent atomic basis sets supplemented with bond functions. To capture the severe anisotropy of the He-LiH potential, we interpolate our ab initio points in the angular direction with cubic splines, then expand the splines in terms of Legendre polynomials. We have constructed both a He-LiH rigid rotor potential and a complete He-LiH potential where the LiH bond length is allowed to change. The resulting potential surface has a unique shape. The He-LiH rigid rotor colinear geometry has a very attractive minimum of -176.7 cm-1, while the LiH-He colinear geometry has a local minimum of only -9.8 cm-1. Using our computed He-LiH potential energy surface, we investigate the collision dynamics of He-LiH. Using a totally quantum mechanical treatment of collision dynamics, we compute both pure rotational and rovibrational state-to-state cross sections. We integrate our rovibrational cross sections over a Maxwell-Boltzmann distribution of energies to obtain temperature dependent vibrational excitation and relaxation rate constants. The vibrational excitation rate constants are very small for temperatures below 400 K, but become significant at higher temperatures. These results suggest that He-LiH collisions probably were important in the very early Universe, especially in the larger primordial gas clouds. We also investigate the structure and dynamics of small HeN-LiH clusters using diffusion quantum Monte Carlo techniques. We find that three body effects are negligible, so we take the HeN-LiH potential to be a pairwise additive potential; we use the HFD-B3-FCI1 He-He potential of Aziz and Janzen [R. A. Aziz and A. R. Janzen, Phys. Rev. Lett. 74, 1586 (1995)] and our He-LiH potential. Because of the strong He-LiH attraction, one helium is always located in the attractive well at the lithium end of the LiH.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...
Developing Digital Immigrants' Computer Literacy: The Case of Unemployed Women
ERIC Educational Resources Information Center
Ktoridou, Despo; Eteokleous-Grigoriou, Nikleia
2011-01-01
Purpose: The purpose of this study is to evaluate the effectiveness of a 40-hour computer course for beginners provided to a group of unemployed women learners with no/minimum computer literacy skills who can be characterized as digital immigrants. The aim of the study is to identify participants' perceptions and experiences regarding technology,…
Analysis and Design of Launch Vehicle Flight Control Systems
NASA Technical Reports Server (NTRS)
Wie, Bong; Du, Wei; Whorton, Mark
2008-01-01
This paper describes the fundamental principles of launch vehicle flight control analysis and design. In particular, the classical concept of "drift-minimum" and "load-minimum" control principles is re-examined and its performance and stability robustness with respect to modeling uncertainties and a gimbal angle constraint is discussed. It is shown that an additional feedback of angle-of-attack or lateral acceleration can significantly improve the overall performance and robustness, especially in the presence of unexpected large wind disturbance. Non-minimum-phase structural filtering of "unstably interacting" bending modes of large flexible launch vehicles is also shown to be effective and robust.
Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.
Huson, Daniel H; Linz, Simone
2018-01-01
A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.
45 CFR 158.210 - Minimum medical loss ratio.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Minimum medical loss ratio. 158.210 Section 158.210 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS... § 158.210 Minimum medical loss ratio. Subject to the provisions of § 158.211 of this subpart: (a) Large...
45 CFR 158.210 - Minimum medical loss ratio.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Minimum medical loss ratio. 158.210 Section 158.210 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS... § 158.210 Minimum medical loss ratio. Subject to the provisions of § 158.211 of this subpart: (a) Large...
45 CFR 158.210 - Minimum medical loss ratio.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Minimum medical loss ratio. 158.210 Section 158.210 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS... § 158.210 Minimum medical loss ratio. Subject to the provisions of § 158.211 of this subpart: (a) Large...
29 CFR 780.301 - Other pertinent statutory provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Employment in Agriculture That Is Exempted From the Minimum Wage and Overtime Pay Requirements Under Section... minimum wage protection (section 6(a)(5)) for agriculture workers for the first time sought to provide a minimum wage floor for the farmworkers on large farms or agri-business enterprises. The section 13(a)(6)(A...
Efficient volume computation for three-dimensional hexahedral cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dukowicz, J.K.
1988-02-01
Currently, algorithms for computing the volume of hexahedral cells with ''ruled'' surfaces require a minimum of 122 FLOPs (floating point operations) per cell. A new algorithm is described which reduces the operation count to 57 FLOPs per cell. copyright 1988 Academic Press, Inc.
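For context, the sketch below is not the 57-FLOP algorithm of the abstract; it is the straightforward reference computation that splits a hexahedral cell into five tetrahedra. It is exact for planar-faced cells and serves as a baseline against which a faster closed-form volume formula can be checked (for "ruled" non-planar faces the two generally differ slightly).

```python
import numpy as np

def tet_volume(a, b, c, d):
    # Absolute volume of a tetrahedron from its four vertices.
    return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

def hex_volume(v):
    """v: (8, 3) array of hexahedron corners, bottom face 0-1-2-3, top face 4-5-6-7
    (node k+4 above node k). Exact for planar faces; an approximation for ruled faces."""
    tets = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6), (3, 4, 6, 7), (1, 3, 4, 6)]
    return sum(tet_volume(*(v[i] for i in t)) for t in tets)

cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(hex_volume(cube))   # 1.0 for the unit cube
```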
Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed
NASA Astrophysics Data System (ADS)
Carrasco, V. M. S.; Vaquero, J. M.
2016-11-01
We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.
Unterhofer, Claudia; Wipplinger, Christoph; Verius, Michael; Recheis, Wolfgang; Thomé, Claudius; Ortler, Martin
Reconstruction of large cranial defects after craniectomy can be accomplished with free-hand poly-methyl-methacrylate (PMMA) or industrially manufactured implants. The free-hand technique is inexpensive but often does not achieve satisfactory cosmetic results. In an attempt to combine the accuracy of specifically manufactured implants with the low cost of PMMA, forty-six consecutive patients with large skull defects after trauma or infection were retrospectively analyzed. The defects were reconstructed using computer-aided design/computer-aided manufacturing (CAD/CAM) techniques. The computer file was imported into a rapid prototyping (RP) machine to produce an acrylonitrile-butadiene-styrene (ABS) model of the patient's bony head. The gas-sterilized model was used as a template for the intraoperative modeling of the PMMA cranioplasty. Thus, it is not the PMMA implant that is generated by the CAD/CAM technique but a model of the patient's head, on which a well-fitting implant can easily be formed. Cosmetic outcome was rated on a six-tiered scale by the patients after a minimum follow-up of three months. The mean size of the defect was 74.36 cm². The implants fitted well in all patients. Seven patients had a postoperative complication and underwent reoperation. The mean follow-up period was 41 months (range 2-91 months). Results were excellent in 42 patients, good in three, and not satisfactory in one. Costs per implant were approximately 550 Euros. PMMA implants fabricated in-house by direct molding on a bio-model of the patient's bony head are easily produced, fit properly, and are inexpensive compared to cranial implants fabricated with other RP or milling techniques. Copyright © 2017 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.; Ku, T. J.
1981-01-01
A two dimensional advanced panel far-field potential flow model of the undistorted, interacting wakes of multiple lifting surfaces was developed which allows the determination of the spanwise bound circulation distribution required for minimum induced drag. This model was implemented in a FORTRAN computer program, the use of which is documented in this report. The nonplanar wakes are broken up into variable sized, flat panels, as chosen by the user. The wake vortex sheet strength is assumed to vary linearly over each of these panels, resulting in a quadratic variation of bound circulation. Panels are infinite in the streamwise direction. The theory is briefly summarized herein; sample results are given for multiple, nonplanar, lifting surfaces, and the use of the computer program is detailed in the appendixes.
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The NIK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
Optimal cube-connected cube multiprocessors
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Wu, Jie
1993-01-01
Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. However, in general the partitioned subproblems are very large. They demand high performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube does on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, enhance the overall performance. The concept and the method to obtain optimal CCCubes, which are the CCCubes with a minimum number of links under a given total number of nodes, are proposed. The superiority of optimal CCCubes over standard hypercubes was also shown in terms of the link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer type of parallel algorithms was identified. It was shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
Optimal short-range trajectories for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, G.L.; Erzberger, H.
1982-12-01
An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum fuel and minimum cost trajectories for a helicopter flying a fixed range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
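Written out explicitly, the cost function discussed above takes the following form (a hedged rendering consistent with the abstract; the symbols J for the random quadratic form and h for the random magnetic field, and the normalizations, are our own conventions and may differ from the paper's):

```latex
H(\mathbf{x}) \;=\; \tfrac{1}{2}\,\mathbf{x}^{\mathsf{T}} J\,\mathbf{x} \;+\; \mathbf{h}\cdot\mathbf{x},
\qquad \mathbf{x}\in\mathbb{R}^{N},\quad \|\mathbf{x}\|^{2}=N,
```

and the quantity whose statistics are discussed above is the ground-state energy E_min = min_{||x||^2 = N} H(x).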
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-15
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need... DP, Amdt 2 Alexandria, MN, Chandler Field, RNAV (GPS) RWY 22, Orig Bemidji, MN, Bemidji Rgnl, RNAV (GPS) RWY 25, Orig Granite Falls, MN, Granite Falls Muni/Lenzen-Roe Meml Fld, Takeoff Minimums and...
Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong
2012-03-01
Although pattern classification has been extensively studied in the past decades, how to effectively perform the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n²) (or even up to O(n³)) computational complexity, where n is the size of the training set, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, such that all testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as a kernelized Minimum Enclosing Ball (MEB) problem, which leads to a distinctive merit of MAMC: it has the flexibility of controlling the number of support vectors, like ν-SVC, and it may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC, such that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
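As a hedged illustration of the core idea (not the MAMC/MAMCVM algorithm itself), the Badoiu-Clarkson procedure below computes a (1 + ε)-approximate minimum enclosing ball; core-vector-machine-style training exploits exactly this kind of cheap MEB approximation.

```python
import numpy as np

def minimum_enclosing_ball(points, eps=1e-3):
    """Badoiu-Clarkson style (1+eps)-approximate MEB of a point set.

    Generic illustration of the core-set idea behind CVM-type training;
    this is not the MAMC/MAMCVM algorithm from the paper.
    """
    center = points.mean(axis=0)
    n_iter = int(np.ceil(1.0 / eps**2))
    for k in range(1, n_iter + 1):
        # move the center a shrinking step toward the farthest point
        dists = np.linalg.norm(points - center, axis=1)
        farthest = points[np.argmax(dists)]
        center = center + (farthest - center) / (k + 1)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 5))
c, r = minimum_enclosing_ball(pts, eps=0.05)
print("approximate MEB radius:", round(r, 3))
```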
12 CFR 1750.4 - Minimum capital requirement computation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... amounts: (1) 2.50 percent times the aggregate on-balance sheet assets of the Enterprise; (2) 0.45 percent times the unpaid principal balance of mortgage-backed securities and substantially equivalent... current market value of posted qualifying collateral, computed in accordance with appendix A to this...
20 CFR 226.3 - Other regulations related to this part.
Code of Federal Regulations, 2010 CFR
2010-04-01
... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES General § 226.3 Other regulations related to this... primary insurance amounts (PIA's) used in computing the employee, spouse and divorced spouse annuity rates... increased under the social security overall minimum. The creditable service and compensation used in...
Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning
García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor
2015-01-01
This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure. PMID:26656107
Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms
NASA Astrophysics Data System (ADS)
Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.
2006-03-01
In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
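The consensus step itself reduces to a logical "and" of candidate maps. A minimal, hypothetical sketch (synthetic masks standing in for the AFUM and bilateral-subtraction outputs) is:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary candidate maps from two stage-1 detectors
# (True = pixel flagged as suspicious); shapes must match.
afum_candidates = np.zeros((64, 64), dtype=bool)
bilateral_candidates = np.zeros((64, 64), dtype=bool)
afum_candidates[10:20, 10:20] = True       # detection seen by both methods
bilateral_candidates[12:22, 12:22] = True
bilateral_candidates[40:45, 40:45] = True  # detection seen by only one method

# Consensus: keep only regions flagged by BOTH detectors (logical AND),
# then label the surviving connected components as final mass candidates.
consensus = afum_candidates & bilateral_candidates
labels, n_regions = ndimage.label(consensus)
print("consensus candidate regions:", n_regions)   # -> 1
```

Note that this is a pixel-wise variant of the consensus logic; the study combines detections at the region level, requiring each region of suspicion to be independently identified by both techniques.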
An Integrated Crustal Dynamics Simulator
NASA Astrophysics Data System (ADS)
Xing, H. L.; Mora, P.
2007-12-01
Numerical modelling offers an outstanding opportunity to gain an understanding of crustal dynamics and complex crustal system behaviour. This presentation describes our long-term and ongoing effort in finite element based computational modelling and software development to simulate interacting fault systems for earthquake forecasting. An R-minimum strategy based finite-element computational model and software tool, PANDAS, for modelling 3-dimensional nonlinear frictional contact behaviour between multiple deformable bodies with an arbitrarily-shaped contact element strategy has been developed by the authors. It builds up a virtual laboratory to simulate interacting fault systems including crustal boundary conditions and various nonlinearities (e.g. from frictional contact, materials, geometry and thermal coupling). It has been successfully applied to large-scale computing of complex nonlinear phenomena in non-continuum media involving nonlinear frictional instability, multiple material properties and complex geometries on supercomputers, such as the South Australia (SA) interacting fault system, the South California fault model and the Sumatra subduction model. It has also been extended to simulate the hot fractured rock (HFR) geothermal reservoir system, in collaboration with Geodynamics Ltd, which is constructing the first geothermal reservoir system in Australia, and to model tsunami generation induced by earthquakes. Both are supported by the Australian Research Council.
Optimum structural design with plate bending elements - A survey
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1981-01-01
A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such constraints as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.
GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography
NASA Technical Reports Server (NTRS)
Roark, J. H.; Masuoka, C. M.; Frey, H. V.
2004-01-01
GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded from http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been used successfully for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require more RAM to function adequately.
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was managed by a soft computing methodology. The main goal was to predict gross domestic product (GDP) from several factors of health care expenditure. Soft computing methodologies were applied since GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities, which help avoid local-minimum issues.
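A minimal sketch of the kind of SVR predictor evaluated in such a study is given below; the features and data are synthetic placeholders, not the actual health-expenditure dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
# Placeholder features standing in for health-expenditure factors
X = rng.uniform(0.0, 10.0, size=(200, 3))
# Synthetic target standing in for GDP
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] + rng.normal(0.0, 0.5, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```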
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
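As a rough, hedged stand-in for the described design loop, the sketch below couples SciPy's differential evolution with a placeholder analytic objective; in the dissertation the objective is evaluated by the fast finite-element model and the problem is genuinely multi-objective, so the simple weighted sum here is only illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def machine_cost(x):
    """Placeholder objective standing in for the FE-based performance model.

    x = (magnet thickness, slot depth) in arbitrary units; the real study
    evaluates torque ripple, losses, efficiency, etc. with an FE solver.
    """
    magnet, slot = x
    torque_ripple = (magnet - 1.5) ** 2 + 0.3 * np.sin(4.0 * slot) ** 2
    material_cost = 0.8 * magnet + 0.2 * slot
    return torque_ripple + 0.5 * material_cost   # simple weighted-sum scalarization

bounds = [(0.5, 3.0), (0.5, 4.0)]
result = differential_evolution(machine_cost, bounds, seed=1, maxiter=200, tol=1e-8)
print("best design:", np.round(result.x, 3), "objective:", round(result.fun, 4))
```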
New Numerical Approaches To thermal Convection In A Compositionally Stratified Fluid
NASA Astrophysics Data System (ADS)
Puckett, E. G.; Turcotte, D. L.; Kellogg, L. H.; Lokavarapu, H. V.; He, Y.; Robey, J.
2016-12-01
Seismic imaging of the mantle has revealed large and small scale heterogeneities in the lower mantle; specifically, structures known as large low shear velocity provinces (LLSVP) below Africa and the South Pacific. Most interpretations propose that the heterogeneities are compositional in nature, differing from the overlying mantle, an interpretation that would be consistent with chemical geodynamic models. The LLSVPs are thought to be very old, meaning they have persisted throughout much of Earth's history. Numerical modeling of persistent compositional interfaces presents challenges to even state-of-the-art numerical methodology. It is extremely difficult to maintain sharp composition boundaries, which migrate and distort with time-dependent fingering, without compositional diffusion and/or artificial diffusion, and the compositional boundary must persist indefinitely. In this work we present computations of an initially compositionally stratified fluid that is subject to a thermal gradient ΔT = T1 - T0 across the height D of a rectangular domain over a range of buoyancy numbers B and Rayleigh numbers Ra. In these computations we compare three numerical approaches to modeling the movement of two distinct, thermally driven, compositional fields; namely, a high-order Finite Element Method (FEM) that employs artificial viscosity to preserve the maximum and minimum values of the compositional field, a Discontinuous Galerkin (DG) method with a Bound Preserving (BP) limiter, and a Volume-of-Fluid (VOF) interface tracking algorithm. Our computations demonstrate that the FEM approach has far too much numerical diffusion to yield meaningful results, the DGBP method yields much better results but with small amounts of each compositional field being (numerically) entrained within the other compositional field, while the VOF method maintains a sharp interface between the two compositions throughout the computation. In the figure we show a comparison between the three methods for a computation made with B = 1.111 and Ra = 10,000 after the flow has reached 'steady state': (R) the images computed with the standard FEM method (with artificial viscosity), (C) the images computed with the DGBP method (with no artificial viscosity or diffusion due to discretization errors), and (L) the images computed with the VOF algorithm.
Koltun, G.F.
2013-01-01
This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
NASA Astrophysics Data System (ADS)
Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.
2018-04-01
Recent analyses of Solar-Terrestrial Relations Observatory (STEREO) imaging observations have described the early stages of the development of turbulence in the young solar wind in solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and the first plasma beta equals unity surfaces. The compatibility of the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R⊙. We affirm that the isotropization occurs in the vicinity of the first beta unity surface. The interpretation based on early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial-range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Duchovic, Ronald J.; Rohlfing, Celeste Mcmichael
1989-01-01
Results are reported from CASSCF externally contracted CI ab initio computations of the minimum-energy path for the addition of H to N2. The theoretical basis and numerical implementation of the computations are outlined, and the results are presented in extensive tables and graphs and characterized in detail. The zero-point-corrected barrier for HN2 dissociation is estimated as 8.5 kcal/mol, and the lifetime of the lowest-lying quasi-bound vibrational state of HN2 is found to be between 88 psec and 5.8 nsec (making experimental observation of this species very difficult).
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.
2009-01-01
This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Considerations in the design of the polishing lap, and the optimization of the process parameters (speeds, stroke, etc.), to keep the residual mid-spatial-frequency error to a minimum are also presented.
NASA Astrophysics Data System (ADS)
Salazar, F. J. T.; Masdemont, J. J.; Gómez, G.; Macau, E. E.; Winter, O. C.
2014-11-01
Assume a constellation of satellites is flying near a given nominal trajectory around L4 or L5 in the Earth-Moon system in such a way that there is some freedom in the selection of the geometry of the constellation. We are interested in avoiding large variations of the mutual distances between spacecraft. In this case, the existence of regions of zero and minimum relative radial acceleration with respect to the nominal trajectory will prevent the expansion or contraction of the constellation. Conversely, the existence of regions of maximum relative radial acceleration with respect to the nominal trajectory will produce a larger expansion and contraction of the constellation. The goal of this paper is to study these regions in the scenario of the Circular Restricted Three Body Problem by means of a linearization of the equations of motion relative to the periodic orbits around L4 or L5. This study constitutes a preliminary analysis of planar formation flight dynamics about the triangular libration points in the Earth-Moon system. Additionally, the cost estimate to maintain the constellation in the regions of zero and minimum relative radial acceleration, or to keep a rigid configuration, is computed with the use of the residual acceleration concept. Finally, the results are compared with the dynamical behavior of the deviation of the constellation from a periodic orbit.
Simulation of Earth-Moon-Mars Environments for the Assessment of Organ Doses
NASA Astrophysics Data System (ADS)
Kim, M. Y.; Schwadron, N. A.; Townsend, L.; Cucinotta, F. A.
2010-12-01
Space radiation environments for historically large solar particle events (SPE) and galactic cosmic rays (GCR) at solar minimum and solar maximum are simulated in order to characterize exposures to radio-sensitive organs for missions to low-Earth orbit (LEO), moon, and Mars. Primary and secondary particles for SPE and GCR are transported through the respective atmosphere of Earth or Mars, space vehicle, and astronaut’s body tissues using the HZETRN/QMSFRG computer code. In LEO, exposures are reduced compared to deep space because particles are deflected by the Earth’s magnetic field and absorbed by the solid body of the Earth. Geomagnetic transmission function as a function of altitude was applied for the particle flux of charged particles, and the shift of the organ exposures to higher velocity or lower stopping powers compared to those in deep space was analyzed. In the transport through Mars atmosphere, a vertical distribution of atmospheric thickness was calculated from the temperature and pressure data of Mars Global Surveyor, and the directional cosine distribution was implemented to describe the spherically distributed atmospheric distance along the slant path at each altitude. The resultant directional shielding by Mars atmosphere at solar minimum and solar maximum was used for the particle flux simulation at various altitudes on the Martian surface. Finally, atmospheric shielding was coupled with vehicle and body shielding for organ dose estimates. We made predictions of radiation dose equivalents and evaluated acute symptoms at LEO, moon, and Mars at solar minimum and solar maximum.
Parametric study of minimum reactor mass in energy-storage dc-to-dc converters
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Closed-form analytical solutions for the design equations of a minimum-mass reactor for a two-winding voltage-or-current step-up converter are derived. A quantitative relationship between the three parameters - minimum total reactor mass, maximum output power, and switching frequency - is extracted from these analytical solutions. The validity of the closed-form solution is verified by a numerical minimization procedure. A computer-aided design procedure using commercially available toroidal cores and magnet wires is also used to examine how the results from practical designs follow the predictions of the analytical solutions.
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and... Three Rivers, MI, Three Rivers Muni Dr. Haines, Takeoff Minimums and Obstacle DP, Orig Brainerd, MN, Brainerd Lakes Rgnl, ILS OR LOC/DME RWY 34, Amdt 1 Park Rapids, MN, Park Rapids Muni-Konshok Field, NDB RWY...
40 CFR 63.5400 - How do I measure the quantity of leather processed?
Code of Federal Regulations, 2012 CFR
2012-07-01
... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...
40 CFR 63.5400 - How do I measure the quantity of leather processed?
Code of Federal Regulations, 2014 CFR
2014-07-01
... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...
40 CFR 63.5400 - How do I measure the quantity of leather processed?
Code of Federal Regulations, 2013 CFR
2013-07-01
... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...
40 CFR 63.5400 - How do I measure the quantity of leather processed?
Code of Federal Regulations, 2011 CFR
2011-07-01
... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...
40 CFR 63.5400 - How do I measure the quantity of leather processed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...
Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P
2011-01-01
Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496
Use of computer code for dose distribution studies in a 60Co industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes, with an apparent density of 0.13 g/cm³; this product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting was done by geometrical progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to the source simulation: point sources were used instead of pencils, and an energy spectrum and an anisotropic emission spectrum were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of a Gaussian spectral approximation, the MVM will always underestimate the width, and can misappropriate the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere; artifacts and artificial narrowing of turbulent layers are examples of such impacts.
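For readers unfamiliar with the estimator being compared, a minimal numerical sketch of the minimum variance (Capon) spectrum is shown below (our notation; the filter order p is the user-chosen number of degrees of freedom discussed above).

```python
import numpy as np

def capon_spectrum(x, p, freqs, fs=1.0):
    """Minimum variance (Capon) spectral estimate of a 1-D signal.

    P(f) = 1 / (e(f)^H R^{-1} e(f)), with R the p x p sample covariance
    of the signal and e(f) the complex steering vector.
    """
    # estimate the p x p autocovariance matrix from sliding windows
    segments = np.lib.stride_tricks.sliding_window_view(x, p)
    R = (segments.conj().T @ segments) / segments.shape[0]
    R_inv = np.linalg.inv(R + 1e-10 * np.eye(p))   # small diagonal load for stability
    n = np.arange(p)
    power = []
    for f in freqs:
        e = np.exp(2j * np.pi * f / fs * n)
        power.append(1.0 / np.real(e.conj() @ R_inv @ e))
    return np.asarray(power)

fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = (np.sin(2 * np.pi * 12.5 * t) + 0.5 * np.sin(2 * np.pi * 20.0 * t)
     + 0.1 * np.random.default_rng(0).normal(size=t.size))
freqs = np.linspace(0, 50, 501)
P = capon_spectrum(x, p=30, freqs=freqs, fs=fs)
print("strongest peak near", freqs[np.argmax(P)], "Hz")
```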
Simultaneous multislice refocusing via time optimal control.
Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf
2018-02-09
Joint design of minimum duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.
Trp zipper folding kinetics by molecular dynamics and temperature-jump spectroscopy
Snow, Christopher D.; Qiu, Linlin; Du, Deguo; Gai, Feng; Hagen, Stephen J.; Pande, Vijay S.
2004-01-01
We studied the microsecond folding dynamics of three β hairpins (Trp zippers 1–3, TZ1–TZ3) by using temperature-jump fluorescence and atomistic molecular dynamics in implicit solvent. In addition, we studied TZ2 by using time-resolved IR spectroscopy. By using distributed computing, we obtained an aggregate simulation time of 22 ms. The simulations included 150, 212, and 48 folding events at room temperature for TZ1, TZ2, and TZ3, respectively. The all-atom optimized potentials for liquid simulations (OPLSaa) potential set predicted TZ1 and TZ2 properties well; the estimated folding rates agreed with the experimentally determined folding rates and native conformations were the global potential-energy minimum. The simulations also predicted reasonable unfolding activation enthalpies. This work, directly comparing large simulated folding ensembles with multiple spectroscopic probes, revealed both the surprising predictive ability of current models as well as their shortcomings. Specifically, for TZ1–TZ3, OPLS for united atom models had a nonnative free-energy minimum, and the folding rate for OPLSaa TZ3 was sensitive to the initial conformation. Finally, we characterized the transition state; all TZs fold by means of similar, native-like transition-state conformations. PMID:15020773
Apparatus and method for closed-loop control of reactor power in minimum time
Bernard, Jr., John A.
1988-11-01
Closed-loop control law for altering the power level of nuclear reactors in a safe manner, without overshoot, and in minimum time. Apparatus is provided for moving a fast-acting control element, such as a control rod or a control drum, for altering the nuclear reactor power level. A computer computes at short time intervals either the function

dρ/dt = (β − ρ)ω − λₑ′ρ − Σᵢ βᵢ(λᵢ − λₑ′) + l*(dω/dt) + l*[ω² + λₑ′ω]

or the function

dρ/dt = (β − ρ)ω − λₑρ − (dλₑ/dt)(β − ρ)/λₑ + l*(dω/dt) + l*[ω² + λₑω − (dλₑ/dt)ω/λₑ]

These functions each specify the rate of change of reactivity that is necessary to achieve a specified rate of change of reactor power. The direction and speed of motion of the control element are altered so as to provide the rate of reactivity change calculated using either or both of these functions, thereby resulting in the attainment of a new power level without overshoot and in minimum time. These functions are computed at intervals of approximately 0.01-1.0 seconds depending on the specific application.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.
Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P
2015-01-01
Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs in which a given edge is present. The metric provides a per-edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well-known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric with respect to both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing (MLST) data and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results, since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented results from criteria implemented in the algorithm, which must be based on biologically plausible models.
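As a simplified illustration of the matrix-tree machinery (for an unweighted graph and uniform spanning trees, rather than the weighted-MST setting of the paper), the fraction of spanning trees containing a given edge can be computed exactly from two determinants:

```python
import numpy as np

def spanning_tree_count(adj):
    """Spanning-tree count via Kirchhoff's matrix-tree theorem.

    `adj` is a (multi)graph adjacency matrix whose entries give edge
    multiplicities; the count is any cofactor of the graph Laplacian.
    """
    L = np.diag(adj.sum(axis=1)) - adj
    return round(float(np.linalg.det(L[1:, 1:])))

def contract(adj, u, v):
    """Contract vertices u and v into one (self-loops dropped, parallel edges kept)."""
    keep = [i for i in range(adj.shape[0]) if i not in (u, v)]
    new = np.zeros((len(keep) + 1, len(keep) + 1))
    for a, i in enumerate(keep):
        new[0, a + 1] = new[a + 1, 0] = adj[u, i] + adj[v, i]
        for b, j in enumerate(keep):
            new[a + 1, b + 1] = adj[i, j]
    return new

def edge_presence_fraction(adj, u, v):
    """Fraction of spanning trees of an unweighted graph containing edge (u, v)."""
    return spanning_tree_count(contract(adj, u, v)) / spanning_tree_count(adj)

# 4-cycle: each edge lies in 3 of the 4 spanning trees
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(edge_presence_fraction(C4, 0, 1))   # -> 0.75
```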
SAD5 Stereo Correlation Line-Striping in an FPGA
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopoulos, Arin C.
2011-01-01
High-precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly with image size in the new algorithm, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 of the image size. In the new algorithm, BRAM usage scales linearly with the disparity search range and linearly again with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4x increase in BRAM usage: 2x for line width and 2x again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So, assuming a disparity search range of 1/10 of the image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1280 x 960 SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex 5 LX330T (25% and 24% of those resources, respectively). Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results, even for very large image sizes, roughly 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms in addition to SAD5 to be run on the same FPGA.
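For context, a software sketch of the underlying SAD block-matching kernel is given below; it is a naive reference-style illustration, not the FPGA line-striping design, and the window size and search range used here are arbitrary.

```python
import numpy as np

def sad_disparity_strip(left, right, window=5, max_disp=64):
    """Brute-force SAD block matching for a rectified image strip.

    Software illustration of the kernel the FPGA pipelines; the real design
    streams line buffers and stitches overlapping vertical strips together.
    """
    h, w = left.shape
    half = window // 2
    disparity = np.zeros((h, w), dtype=np.uint16)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp):
        # absolute difference of the left image and the right image shifted by d
        diff = np.full((h, w), np.inf)
        diff[:, d:] = np.abs(left[:, d:].astype(np.float32) - right[:, :w - d].astype(np.float32))
        # window sum (box filter); cumulative-sum speedups omitted for clarity
        cost = np.full((h, w), np.inf)
        for y in range(half, h - half):
            for x in range(half + d, w - half):
                cost[y, x] = diff[y - half:y + half + 1, x - half:x + half + 1].sum()
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity

rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(16, 128)).astype(np.uint8)
left = np.roll(right, 7, axis=1)           # synthetic 7-pixel shift
disp = sad_disparity_strip(left, right, window=5, max_disp=16)
print("median disparity:", int(np.median(disp[5:-5, 20:-5])))   # ~7
```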
Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models
Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.
2016-01-01
IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE NA. PMID:27124818
NASA Astrophysics Data System (ADS)
Hren, Rok
1998-06-01
Using computer simulations, we systematically investigated the limitations of an inverse solution that employs the potential distribution on the epicardial surface as an equivalent source model in localizing pre-excitation sites in Wolff-Parkinson-White syndrome. A model of the human ventricular myocardium that features an anatomically accurate geometry, an intramural rotating anisotropy and a computational implementation of the excitation process based on electrotonic interactions among cells, was used to simulate body surface potential maps (BSPMs) for 35 pre-excitation sites positioned along the atrioventricular ring. Two individualized torso models were used to account for variations in torso boundaries. Epicardial potential maps (EPMs) were computed using the L-curve inverse solution. The measure for accuracy of the localization was the distance between the position of the minimum in the inverse EPMs and the actual site of pre-excitation in the ventricular model. When the volume conductor properties and lead positions of the torso were precisely known and measurement noise was added to the simulated BSPMs, the minimum in the inverse EPMs at 12 ms after the onset was on average within cm of the pre-excitation site. When the standard torso model was used to localize the sites of onset of the pre-excitation sequence initiated in individualized male and female torso models, the mean distance between the minimum and the pre-excitation site was cm for the male torso and cm for the female torso. The findings of our study indicate that the location of the minimum in EPMs computed using the inverse solution can offer a non-invasive means for pre-interventional planning of the ablative treatment.
Magnetic pattern at supergranulation scale: the void size distribution
NASA Astrophysics Data System (ADS)
Berrilli, F.; Scardigli, S.; Del Moro, D.
2014-08-01
The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.
Entanglement of 3000 atoms by detecting one photon
NASA Astrophysics Data System (ADS)
Vuletic, Vladan
2016-05-01
Quantum-mechanically correlated (entangled) states of many particles are of interest in quantum information, quantum computing and quantum metrology. In particular, entangled states of many particles can be used to overcome limits on measurements performed with ensembles of independent atoms (the standard quantum limit). Metrologically useful entangled states of large atomic ensembles (spin-squeezed states) have been experimentally realized. These states display Gaussian spin distribution functions with a non-negative Wigner quasiprobability distribution function. We report the generation of entanglement in a large atomic ensemble via an interaction with a very weak laser pulse; remarkably, the detection of a single photon prepares several thousand atoms in an entangled state. We reconstruct a negative-valued Wigner function, and verify an entanglement depth (the minimum number of mutually entangled atoms) that comprises 90% of the atomic ensemble containing 3100 atoms. Further technical improvement should allow the generation of more complex Schrödinger cat states, and of states that overcome the standard quantum limit.
Cosmic Microwave Background Mapmaking with a Messenger Field
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.; Næss, Sigurd K.
2018-01-01
We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.
Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas
2017-03-01
We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of commonly used piezoelectric transducers, fiber-optic based ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is in the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
Application of Twin Beams in Mach-Zehnder Interferometer
NASA Technical Reports Server (NTRS)
Zhang, J. X.; Xie, C. D.; Peng, K. C.
1996-01-01
Using the twin beams generated from a parametric amplifier to drive the two ports of a Mach-Zehnder interferometer, it is shown that the minimum detectable optical phase shift can be largely reduced to the Heisenberg limit (1/n), which is far below the shot-noise limit (1/√n), in the large-gain limit. The dependence of the minimum detectable phase shift on the parametric gain and on inefficient photodetectors is discussed.
Fast secant methods for the iterative solution of large nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Deuflhard, Peter; Freund, Roland; Walter, Artur
1990-01-01
A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection diffusion type in 2-D with integral layers give a first impression of the possible power of the derived good Broyden variant.
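To make the distinction concrete, here is a minimal sketch of the "bad" Broyden variant paired with the minimum residual line-search principle mentioned above, written for a dense linear system; it illustrates the idea only and is not the derived algorithm of the report.

import numpy as np

def broyden_bad_minres(A, b, x0=None, tol=1e-8, maxit=200):
    # Broyden's 'bad' rank-1 update of an approximate inverse H of A,
    # combined with a one-dimensional minimum-residual line search.
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    H = np.eye(n)                      # approximate inverse of A
    r = A @ x - b                      # residual F(x)
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        p = -H @ r                     # quasi-Newton step direction
        Ap = A @ p
        alpha = -(r @ Ap) / (Ap @ Ap)  # minimizes ||r + alpha * A p||_2
        s = alpha * p
        x_new = x + s
        r_new = A @ x_new - b
        y = r_new - r
        # Broyden's 'bad' update of the inverse approximation
        H += np.outer(s - H @ y, y) / (y @ y)
        x, r = x_new, r_new
    return x

The line-search coefficient is the one-dimensional minimizer of the residual norm along the quasi-Newton direction, which is what makes this variant comparable in spirit to minimum-residual Krylov methods such as GMRES.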
Reducing tobacco use and access through strengthened minimum price laws.
McLaughlin, Ian; Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt
2014-10-01
Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry's discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price "markups" over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements.
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, V.
2012-09-01
As a result of continual space activity since the 1950s, there are now a large number of man-made Resident Space Objects (RSOs) orbiting the Earth. Because of the large number of items and their relative speeds, the possibility of destructive collisions involving important space assets is now of significant concern to users and operators of space-borne technologies. As a result, a growing number of international agencies are researching methods for improving techniques to maintain Space Situational Awareness (SSA). Computer simulation is a method commonly used by many countries to validate competing methodologies prior to full-scale adoption. The use of supercomputing and/or reduced-scale testing is often necessary to effectively simulate such a complex problem on today's computers. Recently, the authors presented a simulation aimed at reducing the computational burden by selecting the minimum level of fidelity necessary for contrasting methodologies and by utilising multi-core CPU parallelism for increased computational efficiency. The resulting simulation runs on a single PC while maintaining the ability to effectively evaluate competing methodologies. Nonetheless, the ability to control the scale and expand upon the computational demands of the sensor management system is limited. In this paper, we examine the advantages of increasing the parallelism of the simulation by means of General Purpose computing on Graphics Processing Units (GPGPU). As many sub-processes pertaining to SSA management are independent, we demonstrate how parallelisation via GPGPU has the potential to significantly enhance not only research into techniques for maintaining SSA, but also the level of sophistication of existing space surveillance sensors and sensor management systems. Nonetheless, the use of GPGPU imposes certain limitations and adds to the implementation complexity, both of which require consideration to achieve an effective system. We discuss these challenges and how they can be overcome. We further describe an application of the parallelised system where visibility prediction is used to enhance sensor management. This facilitates significant improvement in maximum catalogue error when RSOs become temporarily unobservable. The objective is to demonstrate the enhanced scalability and increased computational capability of the system.
Optimum spaceborne computer system design by simulation
NASA Technical Reports Server (NTRS)
Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.
1973-01-01
A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demands of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.
NASA Astrophysics Data System (ADS)
Khan, Akhtar Nawaz
2017-11-01
Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities because of the wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities, due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks, in which the path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignment. Finally, numerical results are computed in different network scenarios and are approximately equal to the values obtained from simulations.
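The assignment rule itself is simple enough to sketch. The following illustrative function (the data layout is assumed, not taken from the paper) picks the highest-indexed wavelength that is free on every link of the route, honouring the wavelength continuity constraint and the fact that links may carry different numbers of channels.

def last_fit_first(route, link_free):
    # route     -- list of link ids along the path
    # link_free -- dict: link id -> list of booleans, True if that wavelength
    #              channel is free; lists may differ in length because the
    #              link capacities are heterogeneous
    # wavelength continuity: only indexes that exist on every link qualify
    w_max = min(len(link_free[l]) for l in route)
    # scan from the highest common index downwards (last fit first)
    for w in range(w_max - 1, -1, -1):
        if all(link_free[l][w] for l in route):
            for l in route:
                link_free[l][w] = False   # reserve the channel on every link
            return w
    return None                            # no common free channel: request is blocked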
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
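To give a feel for the "simple, sequential and unoptimized O(N²) program" that a user supplies and the framework then parallelizes, here is an illustrative direct-summation gravity kernel in Python; the function name and layout are hypothetical and do not reflect the actual FDPS C++ interface.

import numpy as np

def direct_gravity(pos, mass, eps=1e-3):
    # Direct O(N^2) gravitational accelerations in G = 1 units, with Plummer
    # softening eps; pos has shape (N, 3) and mass has shape (N,).
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                          # vectors from particle i to all others
        r2 = (dr * dr).sum(axis=1) + eps * eps     # softened squared distances
        r2[i] = 1.0                                # placeholder to avoid division by zero
        w = mass / (r2 * np.sqrt(r2))
        w[i] = 0.0                                 # exclude self-interaction
        acc[i] = (w[:, None] * dr).sum(axis=0)
    return acc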
25 CFR 542.9 - What are the minimum internal control standards for card games?
Code of Federal Regulations, 2013 CFR
2013-04-01
... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...
25 CFR 542.9 - What are the minimum internal control standards for card games?
Code of Federal Regulations, 2012 CFR
2012-04-01
... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...
25 CFR 542.12 - What are the minimum internal control standards for table games?
Code of Federal Regulations, 2010 CFR
2010-04-01
... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...
25 CFR 542.9 - What are the minimum internal control standards for card games?
Code of Federal Regulations, 2014 CFR
2014-04-01
... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...
25 CFR 542.12 - What are the minimum internal control standards for table games?
Code of Federal Regulations, 2014 CFR
2014-04-01
... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...
25 CFR 542.12 - What are the minimum internal control standards for table games?
Code of Federal Regulations, 2011 CFR
2011-04-01
... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...
25 CFR 542.9 - What are the minimum internal control standards for card games?
Code of Federal Regulations, 2011 CFR
2011-04-01
... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...
25 CFR 542.9 - What are the minimum internal control standards for card games?
Code of Federal Regulations, 2010 CFR
2010-04-01
... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...
25 CFR 542.12 - What are the minimum internal control standards for table games?
Code of Federal Regulations, 2012 CFR
2012-04-01
... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...
25 CFR 542.12 - What are the minimum internal control standards for table games?
Code of Federal Regulations, 2013 CFR
2013-04-01
... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...
12 CFR 226.6 - Account-opening disclosures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... compute the finance charge, the range of balances to which it is applicable,11 and the corresponding... required to adjust the range of balances disclosure to reflect the balance below which only a minimum... balance on which the finance charge may be computed. (iv) An explanation of how the amount of any finance...
ERIC Educational Resources Information Center
Texas Education Agency, Austin. Div. of Educational Assessment.
This document lists the objectives for the Texas educational assessment program in mathematics. Eighteen objectives for exit level mathematics are listed, by category: number concepts (4); computation (3); applied computation (5); statistical concepts (3); geometric concepts (2); and algebraic concepts (1). Then general specifications are listed…
Computer optimization of cutting yield from multiple ripped boards
A.R. Stern; K.A. McDonald
1978-01-01
RIPYLD is a computer program that optimizes the cutting yield from multiple-ripped boards. Decisions are based on automatically collected defect information, cutting bill requirements, and sawing variables. The yield of clear cuttings from a board is calculated for every possible permutation of specified rip widths and both the maximum and minimum percent yield...
46 CFR 42.25-20 - Computation for freeboard.
Code of Federal Regulations, 2014 CFR
2014-10-01
... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...
46 CFR 42.25-20 - Computation for freeboard.
Code of Federal Regulations, 2011 CFR
2011-10-01
... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...
46 CFR 42.25-20 - Computation for freeboard.
Code of Federal Regulations, 2010 CFR
2010-10-01
... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...
46 CFR 42.25-20 - Computation for freeboard.
Code of Federal Regulations, 2012 CFR
2012-10-01
... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...
46 CFR 42.25-20 - Computation for freeboard.
Code of Federal Regulations, 2013 CFR
2013-10-01
... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...
Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes
NASA Technical Reports Server (NTRS)
Heiser, P. C.; Ricks, L. O.
1968-01-01
Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.
An Analysis of a Puff Dispersion Model for a Coastal Region.
1982-06-01
grid is determined by computing their movement for a finite time step using a measured wind field. The growth and buoyancy of the puffs are computed...advection step. The grid concentrations can be allowed to accumulate or simply be updated with the latest instantaneous value. A minimum grid concentration
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control machines is crucial in guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
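The paper's specific algorithm is not reproduced here, but the non-negative dwell-time problem it refines can be sketched as follows: discretize the convolution of the tool influence function against the desired removal and solve it under a positivity constraint, with a uniform "extra removal" offset standing in, very loosely, for the equal-extra-material-removal idea. The 1-D setup, helper names, and use of SciPy's nnls are illustrative assumptions; the published algorithm additionally enforces machine dynamics limits on the dwell-time distribution.

import numpy as np
from scipy.optimize import nnls

def positive_dwell_time(removal_target, tif, extra=0.0):
    # removal_target -- desired material removal at each surface point (1-D)
    # tif            -- tool influence function (removal-rate footprint, 1-D)
    # extra          -- uniform extra removal added so a feasible positive
    #                   dwell-time solution exists
    n = removal_target.size
    # build the convolution matrix: column j is the tif centred on dwell point j
    A = np.zeros((n, n))
    half = len(tif) // 2
    for j in range(n):
        lo, hi = max(0, j - half), min(n, j - half + len(tif))
        A[lo:hi, j] = tif[lo - (j - half):hi - (j - half)]
    t, _ = nnls(A, removal_target + extra)   # dwell times constrained to be >= 0
    return t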
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
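For reference, the root-mean-square coefficient of variation quoted above is computed, in the usual way for precision studies, as the RMS of the per-specimen coefficients of variation across repeated scans; the array layout below is an assumption for illustration.

import numpy as np

def rms_cv(repeat_measurements):
    # repeat_measurements: shape (n_specimens, n_repeat_scans); returns the
    # root-mean-square coefficient of variation in percent.
    x = np.asarray(repeat_measurements, dtype=float)
    cv = x.std(axis=1, ddof=1) / x.mean(axis=1)   # per-specimen CV
    return 100.0 * np.sqrt(np.mean(cv ** 2))      # RMS across specimens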
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions, this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts), we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
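The pairwise structure that makes MEE training expensive, and at the same time highly parallelizable, is visible in a direct implementation. The sketch below trains a single-output linear filter by stochastic gradient ascent on the quadratic information potential of the errors over a sliding window; the window size, kernel width, and learning rate are illustrative choices, and the paper's hardware design targets the MIMO version of this same O(window²) kernel computation.

import numpy as np

def mee_train(X, d, sigma=1.0, mu=0.1, epochs=50, window=64):
    # X: (n_samples, n_inputs) spike-count features; d: (n_samples,) target.
    # Maximizing the information potential of the errors is equivalent to
    # minimizing Renyi's quadratic error entropy (the MEE criterion).
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        for start in range(0, n - window + 1, window):
            Xi = X[start:start + window]
            ei = d[start:start + window] - Xi @ w
            de = ei[:, None] - ei[None, :]                 # pairwise error differences
            G = np.exp(-de**2 / (2 * sigma**2))            # Gaussian kernel values
            coeff = -(de / sigma**2) * G                   # derivative of the kernel
            # gradient of the information potential with respect to the weights
            grad = (coeff[:, :, None] * (Xi[None, :, :] - Xi[:, None, :])).mean(axis=(0, 1))
            w += mu * grad                                  # ascend the information potential
    return w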
Cerezo, Javier; Santoro, Fabrizio
2016-10-11
Vertical models for the simulation of spectroscopic line shapes expand the potential energy surface (PES) of the final state around the equilibrium geometry of the initial state. These models provide, in principle, a better approximation of the region of the band maximum. In contrast, adiabatic models expand each PES around its own minimum. In the harmonic approximation, when the minimum energy structures of the two electronic states are connected by large structural displacements, adiabatic models can break down and are outperformed by vertical models. However, the practical application of vertical models faces the issues related to the necessity to perform a frequency analysis at a nonstationary point. In this contribution we revisit vertical models in the harmonic approximation adopting both Cartesian (x) and valence internal curvilinear coordinates (s). We show that when x coordinates are used, the vibrational analysis at nonstationary points leads to a deficient description of low-frequency modes, for which spurious imaginary frequencies may even appear. This issue is solved when s coordinates are adopted. It is however necessary to account for the second derivative of s with respect to x, which here we compute analytically. We compare the performance of the vertical model in the s-frame with respect to adiabatic models and previously proposed vertical models in the x- or Q₁-frame, where Q₁ are the normal coordinates of the initial state computed as a combination of Cartesian coordinates. We show that for rigid molecules the vertical approach in the s-frame provides a description of the final state very close to the adiabatic picture. For sizable displacements it is a solid alternative to adiabatic models, and it is not affected by the issues of vertical models in the x- and Q₁-frames, which mainly arise when temperature effects are included. In principle the G matrix depends on s, and this creates nonorthogonality problems for the Duschinsky matrix connecting the normal modes of the initial and final states in adiabatic approaches. We highlight that such a dependence of G on s is also an issue in vertical models, due to the necessity to approximate the kinetic term in the Hamiltonian when setting up the so-called GF problem. When large structural differences exist between the initial- and final-state minima, the changes in the G matrix can become too large to be disregarded.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hau, L.N.; Wolf, R.A.
A two-dimensional, resistive-MHD computer code is used to investigate the spontaneous reconnection of magnetotail-like configurations. The initial conditions adopted in the simulations are of two types: (1) those in which the equatorial normal magnetic field component B_ze declines monotonically down the tail, and (2) those in which B_ze exhibits a deep minimum in the near-earth plasma sheet. Configurations of the second type have been suggested by Erickson (1984, 1985) to be the inevitable result of adiabatic, earthward convection of the plasma sheet. To represent the case where the earthward convection stops before the X line forms, i.e., the case where the interplanetary magnetic field turns northward after a period of southward orientation, the authors impose zero-flow boundary conditions at the edges of the computational box. The initial configurations are in equilibrium and stable within ideal MHD. The dynamic evolution of the system starts after the resistivity is turned on. The main results of the simulations basically support the neutral-line model of substorms and confirm Birn's (1980) computer studies. Specifically, they find spontaneous formation of an X-type neutral point and a single O-type plasmoid with strong tailward flow on the tailward side of the X point. In addition, the results show that the formation of the X point for the configurations of type 2 is clearly associated with the assumed initial B_z minimum. Furthermore, the time interval from turning on the resistivity to the formation of a plasmoid is much shorter in the case where there is an initial deep minimum.
Kwiek, Bartłomiej; Ambroziak, Marcin; Osipowicz, Katarzyna; Kowalewski, Cezary; Rożalski, Michał
2018-06-01
Current treatment of facial capillary malformations (CM) has limited efficacy. The objective was to assess the efficacy of large-spot 532 nm lasers for the treatment of previously treated facial CM with the use of 3-dimensional (3D) image analysis. Forty-three white patients aged 6 to 59 were included in this study. Patients had 3D photography performed before and after treatment with a 532 nm Nd:YAG laser with a large spot and contact cooling. Objective analysis of percentage improvement, based on 3D digital assessment of combined color and area improvement (global clearance effect [GCE]), was performed. The median maximal improvement achieved during the treatment (GCE) was 59.1%. The mean number of laser procedures required to achieve this improvement was 6.2 (range 1-16). Improvement of a minimum of 25% (GCE25) was achieved by 88.4% of patients, a minimum of 50% (GCE50) by 61.1%, a minimum of 75% (GCE75) by 25.6%, and a minimum of 90% (GCE90) by 4.6%. Patients previously treated with pulsed dye lasers showed significantly less response than those treated with other modalities (GCE 37.3% vs 61.8%, respectively). A large-spot 532 nm laser is effective in previously treated patients with facial CM.
C-semiring Frameworks for Minimum Spanning Tree Problems
NASA Astrophysics Data System (ADS)
Bistarelli, Stefano; Santini, Francesco
In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges.
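The flavour of such a framework can be sketched with a Kruskal-style procedure parameterized by a semiring-like cost structure; the class names and layout below are illustrative, and the sketch assumes the order induced by the semiring's "plus" is total, whereas the paper also handles partially ordered costs (which would require keeping a frontier of incomparable trees).

from dataclasses import dataclass
from functools import cmp_to_key
from typing import Any, Callable, List, Tuple

@dataclass
class CSemiring:
    # 'plus' picks the preferred of two costs (it induces the ordering),
    # 'times' composes costs along the tree, 'one' is the unit of 'times'.
    plus: Callable[[Any, Any], Any]
    times: Callable[[Any, Any], Any]
    one: Any

def kruskal_semiring(n: int, edges: List[Tuple[int, int, Any]], S: CSemiring):
    def better_first(e1, e2):
        a, b = e1[2], e2[2]
        if a == b:
            return 0
        return -1 if S.plus(a, b) == a else 1
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    tree, cost = [], S.one
    for u, v, w in sorted(edges, key=cmp_to_key(better_first)):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
            cost = S.times(cost, w)          # compose the tree cost in the semiring
    return tree, cost

Instantiating the structure as (min, +, 0) recovers the classical minimum spanning tree, while (max, min, float('inf')) yields a maximum-capacity (bottleneck) spanning tree, illustrating how one procedure covers different cost criteria.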
NASA Technical Reports Server (NTRS)
Rivera, J. M.; Simpson, R. W.
1980-01-01
The aerial relay system network design problem is discussed. A generalized branch-and-bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is mainly useful for small networks, because its computation time increases exponentially with the number of variables.
Near-Field Magnetic Dipole Moment Analysis
NASA Technical Reports Server (NTRS)
Harris, Patrick K.
2003-01-01
This paper describes the data analysis technique used for magnetic testing at the NASA Goddard Space Flight Center (GSFC). Excellent results have been obtained using this technique to convert a spacecraft's measured magnetic field data into its respective magnetic dipole moment model. The model is most accurate with the earth's geomagnetic field cancelled in a spherical region bounded by the measurement magnetometers with a minimum radius large enough to enclose the magnetic source. Considerably enhanced spacecraft magnetic testing is offered by using this technique in conjunction with a computer-controlled magnetic field measurement system. Such a system, with real-time magnetic field display capabilities, has been incorporated into other existing magnetic measurement facilities and is also used at remote locations where transport to a magnetics test facility is impractical.
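Because the dipole field is linear in the moment, the core of such a conversion can be illustrated with an ordinary least-squares fit; the single-dipole setup and variable names below are simplifying assumptions, whereas the GSFC analysis builds a full multi-dipole moment model from the measured field maps.

import numpy as np

def fit_dipole_moment(positions, fields):
    # positions: (n, 3) magnetometer locations relative to the assumed dipole
    # location, in metres; fields: (n, 3) measured field vectors in tesla.
    # Returns the best-fit dipole moment in A*m^2.
    mu0 = 4e-7 * np.pi
    rows, rhs = [], []
    for r_vec, b_vec in zip(positions, fields):
        r = np.linalg.norm(r_vec)
        rhat = r_vec / r
        # B = mu0 / (4 pi r^3) * (3 rhat rhat^T - I) m, i.e. linear in m
        M = mu0 / (4 * np.pi * r**3) * (3 * np.outer(rhat, rhat) - np.eye(3))
        rows.append(M)
        rhs.append(b_vec)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m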
Transonic cascade flow calculations using non-periodic C-type grids
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1991-01-01
A new kind of C-type grid is proposed for turbomachinery flow calculations. This grid is nonperiodic on the wake and results in minimum skewness for cascades with high turning and large camber. Euler and Reynolds averaged Navier-Stokes equations are discretized on this type of grid using a finite volume approach. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. Jameson's explicit Runge-Kutta scheme is adopted for the integration in time, and computational efficiency is achieved through accelerating strategies such as multigriding and residual smoothing. A detailed numerical study was performed for a turbine rotor and for a vane. A grid dependence analysis is presented and the effect of artificial dissipation is also investigated. Comparison of calculations with experiments clearly demonstrates the advantage of the proposed grid.
Ultralow Thermal Conductivity in Full Heusler Semiconductors.
He, Jiangang; Amsler, Maximilian; Xia, Yi; Naghavi, S Shahab; Hegde, Vinay I; Hao, Shiqiang; Goedecker, Stefan; Ozoliņš, Vidvuds; Wolverton, Chris
2016-07-22
Semiconducting half and, to a lesser extent, full Heusler compounds are promising thermoelectric materials due to their compelling electronic properties with large power factors. However, intrinsically high thermal conductivity resulting in a limited thermoelectric efficiency has so far impeded their widespread use in practical applications. Here, we report the computational discovery of a class of hitherto unknown stable semiconducting full Heusler compounds with ten valence electrons (X_{2}YZ, X=Ca, Sr, and Ba; Y=Au and Hg; Z=Sn, Pb, As, Sb, and Bi) through high-throughput ab initio screening. These new compounds exhibit ultralow lattice thermal conductivity κ_{L} close to the theoretical minimum due to strong anharmonic rattling of the heavy noble metals, while preserving high power factors, thus resulting in excellent phonon-glass electron-crystal materials.
Computer search for binary cyclic UEP codes of odd length up to 65
NASA Technical Reports Server (NTRS)
Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu
1990-01-01
Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
Energy star. (Latest citations from the Computer database). Published Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as a vendor of green computers. (Contains a minimum of 81 citations and includes a subject term index and title list.)
Energy star. (Latest citations from the Computer database). Published Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as a vendor of green computers. (Contains a minimum of 234 citations and includes a subject term index and title list.)
Minimum-domain impulse theory for unsteady aerodynamic force
NASA Astrophysics Data System (ADS)
Kang, L. L.; Liu, L. Q.; Su, W. D.; Wu, J. Z.
2018-01-01
We extend the impulse theory for unsteady aerodynamics from its classic global form to a finite-domain formulation, then to a minimum-domain form, and from incompressible to compressible flows. For incompressible flow, the minimum-domain impulse theory raises the finding of Li and Lu ["Force and power of flapping plates in a fluid," J. Fluid Mech. 712, 598-613 (2012)] to a theorem: the entire force with a discrete wake is completely determined by only the time rate of impulse of those vortical structures still connecting to the body, along with the Lamb-vector integral thereof that captures the contribution of all the remaining, disconnected vortical structures. For compressible flows, we find that the global form in terms of the curl of momentum ∇ × (ρu), obtained by Huang [Unsteady Vortical Aerodynamics (Shanghai Jiaotong University Press, 1994)], can be generalized to an arbitrary finite domain, but the formula is cumbersome, and in general ∇ × (ρu) no longer has discrete structures, so no minimum-domain theory exists. Nevertheless, as the measure of the transverse process only, the unsteady field of vorticity ω or ρω may still have a discrete wake. This leads to a minimum-domain compressible vorticity-moment theory in terms of ρω (but it is beyond the classic concept of impulse). These new findings and applications have been confirmed by our numerical experiments. The results not only open an avenue to combine the theory with computation and experiment in a wide range of applications, but also reveal a physical truth: it is no longer necessary to account for all wake vortical structures in computing the force and moment.
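For orientation, the classic global impulse result that the abstract takes as its starting point can be written, for incompressible flow with all vorticity contained in the unbounded fluid-plus-body volume, in the standard form below (k is the number of spatial dimensions); the paper's minimum-domain theorem replaces the full-space integral by the body-connected vortical structures plus a Lamb-vector integral over the disconnected remainder.

\mathbf{F} \;=\; -\,\rho\,\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t},
\qquad
\mathbf{I} \;=\; \frac{1}{k-1}\int_{V_\infty}\mathbf{x}\times\boldsymbol{\omega}\,\mathrm{d}V,
\qquad k = 2,\,3.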
van Iersel, Leo; Kelk, Steven; Lekić, Nela; Scornavacca, Celine
2014-05-05
Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data.
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C¹-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1973-01-01
The computer-aided design of a compensator for a control system is considered from a frequency-domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of those functions which have values below minimum standards. To do this, several definitions in regard to measuring the performance of a system in the frequency domain are given, e.g., relative stability, relative attenuation, proper phasing, etc. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. This tool is called the constraint improvement algorithm. Then, for applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated by two large system examples.
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
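A single-electron model of the kind mentioned above can be illustrated, in Python rather than the paper's Mathematica, by integrating the Lorentz-force equation for one electron in prescribed electric and magnetic fields and tracking its kinetic energy. The field callables, tolerances, and initial conditions below are assumptions for illustration; the paper's modified model extends this idea to an energy distribution over the device geometry.

import numpy as np
from scipy.integrate import solve_ivp

Q_E, M_E = 1.602e-19, 9.109e-31   # electron charge magnitude and mass (SI)

def electron_energy(E_field, B_field, t_span, r0, v0):
    # E_field(t, r) and B_field(t, r) are user-supplied callables returning
    # 3-vectors; returns the times and kinetic-energy history of the electron.
    def rhs(t, y):
        r, v = y[:3], y[3:]
        a = -(Q_E / M_E) * (E_field(t, r) + np.cross(v, B_field(t, r)))
        return np.concatenate([v, a])
    sol = solve_ivp(rhs, t_span, np.concatenate([r0, v0]), rtol=1e-8, atol=1e-12)
    ke = 0.5 * M_E * np.sum(sol.y[3:] ** 2, axis=0)   # kinetic energy vs time
    return sol.t, ke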
Local sharpening and subspace wavefront correction with predictive dynamic digital holography
NASA Astrophysics Data System (ADS)
Sulaiman, Sennan; Gibson, Steve
2017-09-01
Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to using digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
The future X-ray observatory missions, such as the International X-ray Observatory, require grazing incidence replicated optics of extremely large collecting area (3 m²) in combination with angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrel from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on a dwell-time approach is developed. Using the inputs from the mathematical model, a mandrel having a conical approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munro, J.K. Jr.
1980-05-01
The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate along with the graphics sequence, segments containing title, credits to those who performed the work, text to describe the graphics, and acknowledgement of funding agency. The current version is designed to generate 3D plots and allows one to select type of display (linear or semi-log scales), choice of normalization of function values for display purposes, viewing perspective, and an option to allow continuous rotations of surfaces. This program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California.
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since the powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, which is consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we performed validation of the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This research was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then, to quantify forecast performance, we computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
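The scalar metrics and contingency-table skill scores named above are standard; the following sketch computes them for paired predicted and observed Kp series, with the storm threshold, the choice of the Heidke skill score, and the array inputs all being assumptions made for illustration.

import numpy as np

def forecast_metrics(kp_pred, kp_obs, event_threshold=5.0):
    err = kp_pred - kp_obs
    me = err.mean()                                 # mean error (bias)
    mae = np.abs(err).mean()                        # mean absolute error
    rmse = np.sqrt((err ** 2).mean())               # root mean square error
    mult_bias = kp_pred.mean() / kp_obs.mean()      # multiplicative bias
    corr = np.corrcoef(kp_pred, kp_obs)[0, 1]       # correlation coefficient
    # contingency table for events at or above the threshold
    hit = np.sum((kp_pred >= event_threshold) & (kp_obs >= event_threshold))
    miss = np.sum((kp_pred < event_threshold) & (kp_obs >= event_threshold))
    false = np.sum((kp_pred >= event_threshold) & (kp_obs < event_threshold))
    cn = np.sum((kp_pred < event_threshold) & (kp_obs < event_threshold))
    pod = hit / max(hit + miss, 1)                  # probability of detection
    far = false / max(hit + false, 1)               # false alarm ratio
    hss_num = 2 * (hit * cn - miss * false)
    hss_den = (hit + miss) * (miss + cn) + (hit + false) * (false + cn)
    hss = hss_num / max(hss_den, 1)                 # Heidke skill score
    return dict(ME=me, MAE=mae, RMSE=rmse, bias=mult_bias, corr=corr,
                POD=pod, FAR=far, HSS=hss)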
ERIC Educational Resources Information Center
Hadi, Marham Jupri
2013-01-01
The researcher's observation of his ESL class indicates the main issues concerning writing skills: learners' low motivation to write, minimal interaction in writing, and poor writing skills. These limitations have led them to be less confident writing in English. This article discusses how computers can be used for the purpose of increasing…
29 CFR Appendix B to Part 510 - Nonmanufacturing Industries Eligible for Minimum Wage Phase-In
Code of Federal Regulations, 2010 CFR
2010-07-01
.... 7374 1 Computer processing and data preparation and processing services. 7379 1 Computer related... industries (except those in major groups 01, 02, 08, and 09, pertaining to agriculture) for which data were... incorporated by reference in these regulations (§ 510.21). The data in this appendix are presented by major...
29 CFR Appendix B to Part 510 - Nonmanufacturing Industries Eligible for Minimum Wage Phase-In
Code of Federal Regulations, 2011 CFR
2011-07-01
.... 7374 1 Computer processing and data preparation and processing services. 7379 1 Computer related... industries (except those in major groups 01, 02, 08, and 09, pertaining to agriculture) for which data were... incorporated by reference in these regulations (§ 510.21). The data in this appendix are presented by major...
Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N
2011-01-01
Computer usage capabilities of medical students for introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data was classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. The computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, to calculate Computer usage score (CUS - maximum 55, minimum 11) and Attitude score (AS - maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly lesser computer resources as compared to local students (P<0.0001). The mean CUS for local students (27.0±9.2, Mean±SD) was significantly higher than outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive, but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with lesser computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
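As a rough illustration of what counting statistics in an interval means in practice, the following sketch estimates the mean and variance of N_I for a β = 1 Gaussian ensemble by direct sampling; it is an assumption of mine and not the authors' Coulomb-gas formalism:

```python
# A minimal Monte Carlo sketch: count eigenvalues of a Gaussian (beta = 1, GOE)
# ensemble inside an interval I = [a, b] and estimate mean and variance of N_I.
import numpy as np

rng = np.random.default_rng(0)

def count_in_interval(N, a, b):
    # Sample a GOE matrix, scaled so the spectrum lies roughly in [-2, 2].
    A = rng.normal(size=(N, N))
    H = (A + A.T) / np.sqrt(2.0 * N)
    eig = np.linalg.eigvalsh(H)
    return np.sum((eig >= a) & (eig <= b))

N, a, b, trials = 100, -0.5, 0.5, 2000
counts = np.array([count_in_interval(N, a, b) for _ in range(trials)])
print("mean N_I:", counts.mean(), "var(N_I):", counts.var())
```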
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
Design and fabrication of a boron reinforced intertank skirt
NASA Technical Reports Server (NTRS)
Henshaw, J.; Roy, P. A.; Pylypetz, P.
1974-01-01
Analytical and experimental studies were performed to evaluate the structural efficiency of a boron reinforced shell, where the medium of reinforcement consists of hollow aluminum extrusions infiltrated with boron epoxy. Studies were completed for the design of a one-half scale minimum weight shell using boron reinforced stringers and boron reinforced rings. Parametric and iterative studies were completed for the design of minimum weight stringers, rings, shells without rings and shells with rings. Computer studies were completed for the final evaluation of a minimum weight shell using highly buckled minimum gage skin. The detailed design of a practical minimum weight test shell is described, which demonstrates a weight saving of 30% compared to an all-aluminum longitudinally stiffened shell. Sub-element tests were conducted on representative segments of the compression surface at maximum stress and also on segments of the load transfer joint. A 10 foot long, 77 inch diameter shell was fabricated from the design and delivered for further testing.
Quantum proofs can be verified using only single-qubit measurements
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Nagaj, Daniel; Schuch, Norbert
2016-02-01
Quantum Merlin Arthur (QMA) is the class of problems which, though potentially hard to solve, have a quantum solution that can be verified efficiently using a quantum computer. It thus forms a natural quantum version of the classical complexity class NP (and its probabilistic variant MA, Merlin-Arthur games), where the verifier has only classical computational resources. In this paper, we study what happens when we restrict the quantum resources of the verifier to the bare minimum: individual measurements on single qubits received as they come, one by one. We find that despite this grave restriction, it is still possible to soundly verify any problem in QMA for the verifier with the minimum quantum resources possible, without using any quantum memory or multiqubit operations. We provide two independent proofs of this fact, based on measurement-based quantum computation and the local Hamiltonian problem. The former construction also applies to QMA1, i.e., QMA with one-sided error.
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
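Procedure (c), Monte Carlo-plus-energy minimization, is the ancestor of what is now often called basin hopping; a minimal sketch on a toy one-dimensional energy surface follows (an illustration of the idea only, not the polypeptide force-field calculations described above):

```python
# A minimal basin-hopping (Monte Carlo-plus-minimization) sketch on a rugged toy
# potential with several local minima; assumptions: toy energy function, scipy.
import numpy as np
from scipy.optimize import basinhopping

def energy(x):
    # Rugged toy "energy surface": a quadratic bowl plus an oscillatory term.
    x = np.atleast_1d(x)[0]
    return 0.1 * x**2 + np.sin(3.0 * x)

# Random steps escape local minima; each step is followed by a local minimization.
result = basinhopping(energy, x0=5.0, niter=200, stepsize=1.0)
print("global minimum estimate:", result.x, "energy:", result.fun)
```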
Cip, Johannes; Widemschek, Mark; Luegmair, Matthias; Sheinkop, Mitchell B; Benesch, Thomas; Martin, Arno
2014-09-01
In the literature, studies of computer-assisted total knee arthroplasty (TKA) beyond the mid-term period are not conclusive and long-term data are rare. In a prospective, randomized, comparative study, 100 conventional TKAs (group REG) were compared with 100 computer-assisted TKAs (group NAV). Minimum follow-up was 5 years. No difference in implant failure was found, with 1.1% in group NAV versus 4.6% in group REG (P=0.368). Group NAV showed a significantly smaller mean deviation of the mechanical limb axis (P=0.015), more TKAs (90% versus 81% in group REG) were within 3° varus/valgus, and higher tibial slope and lateral distal femoral angle (LDFA) accuracy was found (P≤0.034). Clinical investigational parameters showed no differences (P≥0.058). Total Insall and HSS scores were also higher in group NAV (P≤0.016). Copyright © 2014 Elsevier Inc. All rights reserved.
2016-09-01
…Laboratory: Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan. As expected, accuracy generally tended to decline as the large-scale data aged, but appeared to improve slightly as the age of the large… [Table 7: minimum and maximum mean RMDs for each WRF time (or GFS data age) category.]
Remote observing with NASA's Deep Space Network
NASA Astrophysics Data System (ADS)
Kuiper, T. B. H.; Majid, W. A.; Martinez, S.; Garcia-Miro, C.; Rizzo, J. R.
2012-09-01
The Deep Space Network (DSN) communicates with spacecraft as far away as the boundary between the Solar System and the interstellar medium. To make this possible, large sensitive antennas at Canberra, Australia, Goldstone, California, and Madrid, Spain, provide for constant communication with interplanetary missions. We describe the procedures for radioastronomical observations using this network. Remote access to science monitor and control computers by authorized observers is provided by two-factor authentication through a gateway at the Jet Propulsion Laboratory (JPL) in Pasadena. To make such observations practical, we have devised schemes based on SSH tunnels and distributed computing. At the very minimum, one can use SSH tunnels and VNC (Virtual Network Computing, a remote desktop software suite) to control the science hosts within the DSN Flight Operations network. In this way we have controlled up to three telescopes simultaneously. However, X-window updates can be slow and there are issues involving incompatible screen sizes and multi-screen displays. Consequently, we are now developing SSH tunnel-based schemes in which instrument control and monitoring, and intense data processing, are done on-site by the remote DSN hosts while data manipulation and graphical display are done at the observer's host. We describe our approaches to various challenges, our experience with what worked well and lessons learned, and directions for future development.
A hardware implementation of the discrete Pascal transform for image processing
NASA Astrophysics Data System (ADS)
Goodman, Thomas J.; Aburdene, Maurice F.
2006-02-01
The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
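For reference, a minimal software sketch of the transform itself, assuming the common sign convention P_ij = (-1)^j C(i, j); this illustrates only the arithmetic, not the FPGA design described above:

```python
# A minimal sketch of an N-point discrete Pascal transform: a lower-triangular
# matrix of binomial coefficients with alternating column signs, applied to an
# 8-point test signal (assumed sign convention).
import numpy as np
from math import comb

def pascal_transform_matrix(N):
    P = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1):
            P[i, j] = (-1) ** j * comb(i, j)
    return P

x = np.arange(8)          # an 8-point test signal
P = pascal_transform_matrix(8)
print(P @ x)              # forward discrete Pascal transform coefficients
```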
NASA Astrophysics Data System (ADS)
Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François
2018-04-01
Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, the classical implementation of PSO can get trapped in local minima at later iterations as the particles' inertia diminishes. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
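A minimal sketch of generic particle swarm optimization on a toy misfit function follows; this is plain PSO with assumed parameter values, not the Competitive PSO variant proposed above:

```python
# A minimal PSO sketch on a toy 2-D multimodal misfit function (generic PSO,
# assumed inertia/acceleration coefficients).
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):                      # toy Rastrigin-like misfit, many local minima
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

n_particles, n_dims, n_iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

pos = rng.uniform(-5.0, 5.0, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), misfit(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = misfit(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best model:", gbest, "misfit:", pbest_val.min())
```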
Computed Potential Energy Surfaces and Minimum Energy Pathway for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)
1994-01-01
Computed potential energy surfaces are often required for computation of such observables as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method with the Dunning correlation consistent basis sets to obtain accurate energetics, gives useful results for a number of chemically important systems. Applications to complex reactions leading to NO and soot formation in hydrocarbon combustion are discussed.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually a code length shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
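A highly simplified sketch of the general idea follows, binarizing each dimension to one bit and keeping only the lowest-cost bits; the cost function here is a toy proxy and this is not the authors' MCR algorithm:

```python
# A minimal "one bit per dimension, keep the top-ranked bits" sketch (my own
# simplification; toy data and a toy cost, not the MCR cost function).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))                     # toy data: 1000 samples, 256 dims

bits = (X > np.median(X, axis=0)).astype(np.uint8)   # one bit per dimension
cost = -X.var(axis=0)                                # toy proxy: lower cost = more informative
top = np.argsort(cost)[:32]                          # keep the 32 lowest-cost bits
short_codes = bits[:, top]
print(short_codes.shape)                             # (1000, 32) short binary codes
```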
A root-mean-square approach for predicting fatigue crack growth under random loading
NASA Technical Reports Server (NTRS)
Hudson, C. M.
1981-01-01
A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
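A minimal sketch of the root-mean-square bookkeeping, with a hypothetical random loading history (an illustration only, not the original program):

```python
# Given per-cycle maximum and minimum stresses from a random load history, compute
# the RMS maximum and minimum stresses, which would then be used as constant-
# amplitude levels in a crack-growth prediction (hypothetical sample data, MPa).
import numpy as np

rng = np.random.default_rng(2)
s_max = rng.uniform(80.0, 160.0, size=500)   # hypothetical cycle maximum stresses
s_min = rng.uniform(10.0, 60.0, size=500)    # hypothetical cycle minimum stresses

s_max_rms = np.sqrt(np.mean(s_max**2))
s_min_rms = np.sqrt(np.mean(s_min**2))
print(f"RMS max stress: {s_max_rms:.1f} MPa, RMS min stress: {s_min_rms:.1f} MPa")
```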
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
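A minimal sketch of a generic minimum variance analysis follows (not the author's production code): the phase-plane normal is taken as the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue; the propagation delay can then be estimated from the spacecraft-to-Earth separation projected onto this normal, divided by the projected solar wind speed.

```python
# Generic minimum variance analysis of IMF samples (toy data; the suppressed-z
# construction is only to make the expected normal obvious).
import numpy as np

def minimum_variance_normal(B):
    """B: array of shape (n_samples, 3) of IMF vectors (e.g., GSE components)."""
    M = np.cov(B, rowvar=False)              # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    return eigvecs[:, 0]                     # minimum-variance direction

rng = np.random.default_rng(3)
B = rng.normal(size=(600, 3))
B[:, 2] *= 0.05                              # little variance along z -> normal ~ z
print("phase-plane normal:", minimum_variance_normal(B))
```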
Singular perturbation techniques for real time aircraft trajectory optimization and control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1982-01-01
The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.
When Gravity Fails: Local Search Topology
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Cheeseman, Peter; Stutz, John; Lau, Sonie (Technical Monitor)
1997-01-01
Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e. global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
Kaneta, Tomohiro; Ogawa, Matsuyoshi; Motomura, Nobutoku; Iizuka, Hitoshi; Arisawa, Tetsu; Hino-Shishikura, Ayako; Yoshida, Keisuke; Inoue, Tomio
2017-10-11
The goal of this study was to evaluate the performance of the Celesteion positron emission tomography/computed tomography (PET/CT) scanner, which is characterized by a large bore and a time-of-flight (TOF) function, in accordance with the NEMA NU-2 2012 standard and version 2.0 of the Japanese guideline for the oncology fluorodeoxyglucose PET/CT data acquisition protocol. Spatial resolution, sensitivity, count rate characteristic, scatter fraction, energy resolution, TOF timing resolution, and image quality were evaluated according to the NEMA NU-2 2012 standard. Phantom experiments were performed using an 18F solution and an IEC body phantom of the type described in the NEMA NU-2 2012 standard. The minimum scanning time required for the detection of a 10-mm hot sphere with a 4:1 target-to-background ratio, the phantom noise equivalent count (NEC_phantom), % background variability (N_10mm), % contrast (Q_H,10mm), and recovery coefficient (RC) were calculated according to the Japanese guideline. The measured spatial resolution ranged from 4.5- to 5-mm full width at half maximum (FWHM). The sensitivity and scatter fraction were 3.8 cps/kBq and 37.3%, respectively. The peak noise-equivalent count rate was 70 kcps in the presence of 29.6 kBq mL^-1 in the phantom. The system energy resolution was 12.4% and the TOF timing resolution was 411 ps at FWHM. Minimum scanning times of 2, 7, 6, and 2 min per bed position, respectively, are recommended for the visual score, NEC_phantom, N_10mm, and the Q_H,10mm to N_10mm ratio (QNR) by the Japanese guideline. The RC of a 10-mm-diameter sphere was 0.49, which exceeded the minimum recommended value. The Celesteion large-bore PET/CT system had low sensitivity and NEC, but good spatial and time resolution when compared to other PET/CT scanners. The QNR met the recommended values of the Japanese guideline even at 2 min. The Celesteion is therefore thought to provide acceptable image quality with 2 min/bed position acquisition, which is the most common scan protocol in Japan.
Diamond, Alan; Nowotny, Thomas; Schmuker, Michael
2016-01-01
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950
Parallel computation of GA search for the artery shape determinants with CFD
NASA Astrophysics Data System (ADS)
Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.
2010-06-01
We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and a genetic algorithm (GA). The most difficult problem is reducing the turn-around time of the GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master", which quickly finds an available job queue and dispatches jobs to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days on one core, was reduced to less than two days on the RIKEN supercomputer system, RICC, with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and the sum of circumference for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb has the feature of minimum local mean WSS and minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.
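A minimal sketch of the level-2 idea, a master dispatching independent evaluations to a pool of workers as they become free; this is my illustration with a cheap stand-in for the CFD evaluation, not the RICC job system:

```python
# A minimal master/worker dispatch sketch: independent candidate evaluations are
# handed to a pool of workers as soon as a worker is free (the "CFD" evaluation
# here is a cheap analytic stand-in).
from multiprocessing import Pool

def evaluate_design(params):
    # Placeholder for one candidate artery-shape evaluation.
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

if __name__ == "__main__":
    candidates = [(i * 0.1, -i * 0.2) for i in range(40)]   # one GA generation
    with Pool(processes=8) as pool:
        # imap_unordered yields results as workers finish, keeping all cores busy
        # (result order is not preserved, which is fine for this illustration).
        fitness = list(pool.imap_unordered(evaluate_design, candidates))
    print(min(fitness))
```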
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
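For contrast with the space-domain design discussed above, the conventional frequency-domain MACE filter has the closed form h = D^{-1} X (X^H D^{-1} X)^{-1} u; a minimal numerical sketch with toy training images follows (the image sizes and data are assumptions, and this is the textbook expression rather than the authors' space-domain method):

```python
# A minimal frequency-domain MACE filter sketch: X holds vectorized 2-D FFTs of
# the training images, D is the diagonal average power spectrum, u the desired
# correlation-peak values (toy data).
import numpy as np

rng = np.random.default_rng(4)
n, size = 5, (16, 16)                            # 5 toy training images, 16x16 pixels
imgs = rng.normal(size=(n, *size))

X = np.stack([np.fft.fft2(im).ravel() for im in imgs], axis=1)   # (256, 5)
d = np.mean(np.abs(X) ** 2, axis=1)                              # average power spectrum
u = np.ones(n)                                                   # unit peak constraints

Dinv_X = X / d[:, None]                                          # D^{-1} X
h = Dinv_X @ np.linalg.solve(X.conj().T @ Dinv_X, u)             # MACE filter (freq. domain)
print(h.shape)                                                   # (256,) filter coefficients
```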
75 FR 60333 - Hazardous Material; Miscellaneous Packaging Amendments
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... minimum thickness requirements for remanufactured steel and plastic drums; (2) reinstate the previous... communication problem for emergency responders in that it may interfere with them discovering a large amount of... prescribed in Sec. 178.2(c). D. Minimum Thickness Requirement for Remanufactured Steel and Plastic Drums...
Recent Studies of the Behavior of the Sun's White-Light Corona Over Time
NASA Technical Reports Server (NTRS)
SaintCyr, O. C.; Young, D. E.; Pesnell, W. D.; Lecinski, A.; Eddy, J.
2008-01-01
Predictions of upcoming solar cycles are often related to the nature and dynamics of the Sun's polar magnetic field and its influence on the corona. For the past 30 years we have a more-or-less continuous record of the Sun's white-light corona from groundbased and spacebased coronagraphs. Over that interval, the large scale features of the corona have varied in what we now consider a 'predictable' fashion--complex, showing multiple streamers at all latitudes during solar activity maximum; and a simple dipolar shape aligned with the rotational pole during solar minimum. Over the past three decades the white-light corona appears to be a better indicator of 'true' solar minimum than sunspot number since sunspots disappear for months (even years) at solar minimum. Since almost all predictions of the timing of the next solar maximum depend on the timing of solar minimum, the white-light corona is a potentially important observational discriminator for future predictors. In this contribution we describe recent work quantifying the large-scale appearance of the Sun's corona to correlate it with the sunspot record, especially around solar minimum. These three decades can be expanded with the HAO archive of eclipse photographs which, although sparse compared to the coronagraphic coverage, extends back to 1869. A more extensive understanding of this proxy would give researchers confidence in using the white-light corona as an indicator of solar minimum conditions.
Koltun, G.F.
2014-01-01
This report presents the results of a study to assess potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data (where available) and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for Charles Mill, Clendening, and Piedmont Lakes to 74 calendar years for Pleasant Hill, Senecaville, and Wills Creek Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate typically increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. 
Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
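A minimal sketch of the daily bookkeeping implied above, in which water discharged in excess of the target minimum flow-by is available for withdrawal up to the pumping capacity (my reading of the report, not the USGS analysis code; the unit conversion is approximate):

```python
# Split each day's lake outflow into a potential withdrawal (limited by pumping
# capacity) and the remaining flow-by, subject to a target minimum flow-by.
import numpy as np

MGD_TO_CFS = 1.547                      # approx. conversion, million gal/day to ft^3/s

def split_outflow(outflow_cfs, target_flowby_cfs, pump_capacity_mgd):
    outflow = np.asarray(outflow_cfs, dtype=float)
    capacity_cfs = pump_capacity_mgd * MGD_TO_CFS
    withdrawal = np.minimum(np.maximum(outflow - target_flowby_cfs, 0.0), capacity_cfs)
    flow_by = outflow - withdrawal
    return withdrawal, flow_by

# Hypothetical daily outflows (cfs), a 20 cfs flow-by target, 2 Mgal/d pumping capacity.
w, f = split_outflow([10, 25, 80, 300], 20.0, 2.0)
print(w, f)
```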
Documentation of a deep percolation model for estimating ground-water recharge
Bauer, H.H.; Vaccaro, J.J.
1987-01-01
A deep percolation model, which operates on a daily basis, was developed to estimate long-term average groundwater recharge from precipitation. It has been designed primarily to simulate recharge in large areas with variable weather, soils, and land uses, but it can also be used at any scale. The physical and mathematical concepts of the deep percolation model, its subroutines and data requirements, and input data sequence and formats are documented. The physical processes simulated are soil moisture accumulation, evaporation from bare soil, plant transpiration, surface water runoff, snow accumulation and melt, and accumulation and evaporation of intercepted precipitation. The minimum data sets for the operation of the model are daily values of precipitation and maximum and minimum air temperature, soil thickness and available water capacity, soil texture, and land use. Long-term average annual precipitation, actual daily stream discharge, monthly estimates of base flow, Soil Conservation Service surface runoff curve numbers, land surface altitude-slope-aspect, and temperature lapse rates are optional. The program is written in the FORTRAN 77 language with no enhancements and should run on most computer systems without modifications. Documentation has been prepared so that program modifications may be made for inclusion of additional physical processes or deletion of ones not considered important. (Author's abstract)
A superconducting magnet mandrel with minimum symmetry laminations for proton therapy
NASA Astrophysics Data System (ADS)
Caspi, S.; Arbelaez, D.; Brouwer, L.; Dietderich, D. R.; Felice, H.; Hafalia, R.; Prestemon, S.; Robin, D.; Sun, C.; Wan, W.
2013-08-01
The size and weight of ion-beam cancer therapy gantries are frequently determined by a large aperture, curved, ninety degree, dipole magnet. The higher fields achievable with superconducting technology promise to greatly reduce the size and weight of this magnet and therefore also the gantry as a whole. This paper reports advances in the design of winding mandrels for curved, canted cosine-theta (CCT) magnets in the context of a preliminary magnet design for a proton gantry. The winding mandrel is integral to the CCT design and significantly affects the construction cost, stress management, winding feasibility, eddy current power losses, and field quality of the magnet. A laminated mandrel design using a minimum symmetry in the winding path is introduced and its feasibility demonstrated by a rapid prototype model. Piecewise construction of the mandrel using this laminated approach allows for increased manufacturing techniques and material choices. Sectioning the mandrel also reduces eddy currents produced during field changes accommodating the scan of beam energies during treatment. This symmetry concept can also greatly reduce the computational resources needed for 3D finite element calculations. It is shown that the small region of symmetry forming the laminations combined with periodic boundary conditions can model the entire magnet geometry disregarding the ends.
Optimal shield mass distribution for space radiation protection
NASA Technical Reports Server (NTRS)
Billings, M. P.
1972-01-01
Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in computer program SWORD for dose evaluation in complex geometry, and iteratively calculating the optimum distribution for (minimum) shield mass satisfying multiple acute and protected dose constraints associated with each of several body organs.
Using Testbanking To Implement Classroom Management/Extension through the Use of Computers.
ERIC Educational Resources Information Center
Thommen, John D.
Testbanking provides teachers with an effective, low-cost, time-saving opportunity to improve the testing aspect of their classes. Testbanking, which involves the use of a testbank program and a computer, allows teachers to develop and generate tests and test-forms with a minimum of effort. Teachers who test using true and false, multiple choice,…
14 CFR Appendix E to Part 125 - Airplane Flight Recorder Specifications
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Airplane Flight Recorder Specifications E... air data computer when practicable. 3. Indicated airspeed or Calibrated airspeed 50 KIAS or minimum value to Max Vso, to 1.2 V.D ±5% and ±3% 1 1 kt Data should be obtained from the air data computer when...
14 CFR Appendix E to Part 125 - Airplane Flight Recorder Specifications
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Airplane Flight Recorder Specifications E... air data computer when practicable. 3. Indicated airspeed or Calibrated airspeed 50 KIAS or minimum value to Max Vso, to 1.2 V.D ±5% and ±3% 1 1 kt Data should be obtained from the air data computer when...
14 CFR Appendix E to Part 125 - Airplane Flight Recorder Specifications
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Airplane Flight Recorder Specifications E... air data computer when practicable. 3. Indicated airspeed or Calibrated airspeed 50 KIAS or minimum value to Max Vso, to 1.2 V.D ±5% and ±3% 1 1 kt Data should be obtained from the air data computer when...
14 CFR Appendix E to Part 125 - Airplane Flight Recorder Specifications
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Airplane Flight Recorder Specifications E... air data computer when practicable. 3. Indicated airspeed or Calibrated airspeed 50 KIAS or minimum value to Max Vso, to 1.2 V.D ±5% and ±3% 1 1 kt Data should be obtained from the air data computer when...
ERIC Educational Resources Information Center
Lund, David M.; Hildreth, Donna
A case study investigated an instructional model that incorporated the personal computer and Hyperstudio (tm) software into an assignment to write and illustrate an interactive, multimedia story. Subjects were 21 students in a fifth-grade homeroom in a public school (with a state-mandated minimum 45% ratio of minority students achieved by busing…
Excitation of nucleobases from a computational perspective I: reaction paths.
Giussani, Angelo; Segarra-Martí, Javier; Roca-Sanjuán, Daniel; Merchán, Manuela
2015-01-01
The main intrinsic photochemical events in nucleobases can be described on theoretical grounds within the realm of non-adiabatic computational photochemistry. From a static standpoint, the photochemical reaction path approach (PRPA), through the computation of the respective minimum energy path (MEP), can be regarded as the most suitable strategy in order to explore the electronically excited isolated nucleobases. Unfortunately, the PRPA does not appear widely in the studies reported in the last decade. The main ultrafast decay observed experimentally for the gas-phase excited nucleobases is related to the computed barrierless MEPs from the bright excited state connecting the initial Franck-Condon region and a conical intersection involving the ground state. At the highest level of theory currently available (CASPT2//CASPT2), the lowest excited ¹(ππ*) hypersurface for cytosine has a shallow minimum along the MEP deactivation pathway. In any case, the internal conversion processes in all the natural nucleobases are attained by means of interstate crossings, a self-protection mechanism that prevents the occurrence of photoinduced damage of nucleobases by ultraviolet radiation. Many alternative and secondary paths have been proposed in the literature, which ultimately provide a rich and constructive interplay between experimentally and theoretically oriented research.
The evaluation of alternate methodologies for land cover classification in an urbanizing area
NASA Technical Reports Server (NTRS)
Smekofski, R. M.
1981-01-01
The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The question of what was the best technique for classification was the primary focus of the study. The many computer-assisted techniques available to analyze LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer and binary masks) were tested with minimum distance to the mean, maximum likelihood and canonical analysis with minimum distance to the mean classifiers. The twelve output images were compared to photointerpreted samples, ground verified samples and a current land use data base. Results indicate that for a reconnaissance inventory, the unsupervised training with canonical analysis-minimum distance classifier is the most efficient. If more detailed ground truth and ground verification is available, the polygons from the digitizer training with the canonical analysis minimum distance is more accurate.
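A minimal sketch of a generic minimum-distance-to-the-mean classifier follows (toy spectral data, not the original LANDSAT processing chain):

```python
# Assign each pixel to the land cover class whose training mean is nearest in
# spectral space (toy 4-band data for two hypothetical classes).
import numpy as np

def train_class_means(samples, labels):
    classes = np.unique(labels)
    return classes, np.stack([samples[labels == c].mean(axis=0) for c in classes])

def classify_min_distance(pixels, classes, means):
    # Euclidean distance from every pixel to every class mean; pick the closest.
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(5)
train = np.vstack([rng.normal(20, 3, (50, 4)), rng.normal(60, 5, (50, 4))])
labels = np.repeat([0, 1], 50)                     # 0 = "water", 1 = "urban" (toy)
classes, means = train_class_means(train, labels)
print(classify_min_distance(rng.normal(22, 3, (5, 4)), classes, means))
```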
NASA Astrophysics Data System (ADS)
Alemadi, Nasser Ahmed
Deregulation has brought opportunities for increasing the efficiency of production and delivery and reducing costs to customers. Deregulation has also brought great challenges in providing the reliability and security customers have come to expect and demand from the electrical delivery system. One of the challenges in the deregulated power system is voltage instability. Voltage instability has become the principal constraint on power system operation for many utilities. Voltage instability is a unique problem because it can produce an uncontrollable, cascading instability that results in blackout for a large region or an entire country. In this work we define a system of advanced analytical methods and tools for secure and efficient operation of the power system in the deregulated environment. The work consists of two modules: (a) a contingency selection module and (b) a security constrained optimization module. The contingency selection module to be used for voltage instability is the Voltage Stability Security Assessment and Diagnosis (VSSAD). VSSAD shows that each voltage control area and its reactive reserve basin describe a subsystem or agent that has a unique voltage instability problem. VSSAD identifies each such agent. VSSAD assesses proximity to voltage instability for each agent and ranks voltage instability agents for each contingency simulated. Contingency selection and ranking for each agent is also performed. Diagnosis of where, why, when, and what can be done to cure voltage instability for each equipment outage and transaction change combination that has no load flow solution is also performed. The security constrained optimization module developed solves a minimum control solvability problem. A minimum control solvability problem obtains the reactive reserves, through the action of voltage control devices, that VSSAD determines are needed in each agent to obtain a solution of the load flow. VSSAD makes a physically impossible recommendation of adding reactive generation capability to specific generators to allow a load flow solution to be obtained. The minimum control solvability problem can also obtain a solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. A minimum control solvability problem will be implemented as a corrective control that will achieve the above objectives by using minimum control changes. The control includes: (1) voltage setpoints on generator bus voltage terminals; (2) under-load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain the feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by Security Constrained Dispatch. The corrective control would be computed off-line and stored for each contingency that produces voltage instability. The control is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages and the requirements to implement the corrective control are also discussed.
Apparatus and method for classifying fuel pellets for nuclear reactor
Wilks, Robert S.; Sternheim, Eliezer; Breakey, Gerald A.; Sturges, Jr., Robert H.; Taleff, Alexander; Castner, Raymond P.
1984-01-01
Control for the operation of a mechanical handling and gauging system for nuclear fuel pellets. The pellets are inspected for diameters, lengths, surface flaws and weights in successive stations. The control includes a computer for commanding the operation of the system and its electronics and for storing and processing the complex data derived at the required high rate. In measuring the diameter, the computer enables the measurement of a calibration pellet, stores that calibration data, and computes and stores diameter-correction factors and their addresses along a pellet. To each diameter measurement a correction factor is applied at the appropriate address. The computer commands verification that all critical parts of the system and control are set for inspection and that each pellet is positioned for inspection. During each cycle of inspection, the measurement operation proceeds normally irrespective of whether or not a pellet is present in each station. If a pellet is not positioned in a station, a measurement is recorded, but the recorded measurement indicates maloperation. In measuring diameter and length, a light pattern including successive shadows of slices (transverse for diameter, longitudinal for length) is projected on a photodiode array. The light pattern is scanned electronically by a train of pulses. The pulses are counted during the scan of the lighted diodes. For evaluation of diameter, the maximum diameter count and the number of slices for which the diameter exceeds a predetermined minimum are determined. For acceptance, the maximum must be less than a maximum level and the minimum must exceed a set number. For evaluation of length, the maximum length is determined. For acceptance, the length must be within maximum and minimum limits.
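A minimal sketch of the diameter acceptance test as I read it, taking "the minimum must exceed a set number" to mean that the number of slices exceeding the predetermined minimum must exceed a set count (an illustration only, not the patent's control software):

```python
# Accept a pellet only if the maximum per-slice diameter count stays below an
# upper limit and enough slices exceed the minimum diameter count (toy numbers).
def accept_pellet(slice_counts, max_count_limit, min_count, required_slices):
    max_ok = max(slice_counts) < max_count_limit
    slices_above_min = sum(1 for c in slice_counts if c > min_count)
    return max_ok and slices_above_min > required_slices

# Hypothetical counts from a photodiode-array scan of one pellet.
print(accept_pellet([118, 121, 120, 119, 122], max_count_limit=125,
                    min_count=115, required_slices=3))
```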
Computational Role of Tunneling in a Programmable Quantum Annealer
NASA Technical Reports Server (NTRS)
Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut
2016-01-01
Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA Quantum Master Equation to describe the multi-qubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.
NASA Technical Reports Server (NTRS)
Svalbonas, V.
1973-01-01
The User's manual for the shell theory automated for rotational structures (STARS) 2B and 2V (buckling, vibrations) is presented. Several features of the program are: (1) arbitrary branching of the shell meridians, (2) arbitrary boundary conditions, (3) minimum input requirements to describe a complex, practical shell of revolution structure, and (4) accurate analysis capability using a minimum number of degrees of freedom.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
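For orientation, a minimal recursive least-squares sketch of on-line parameter identification for a discrete-time model follows; this is a generic illustration only, since the paper's minimum-variance identifier additionally handles multiplicative noise and uses the covariance of the parameter vector itself:

```python
# Generic recursive least-squares identification of an AR(2) model
# y_k = a1*y_{k-1} + a2*y_{k-2} + noise, with toy true parameters.
import numpy as np

rng = np.random.default_rng(6)
theta_true = np.array([0.8, -0.3])          # unknown parameters to identify

theta = np.zeros(2)                         # parameter estimate
P = np.eye(2) * 100.0                       # covariance of the estimation error

y_prev1, y_prev2 = 0.0, 0.0
for k in range(500):
    phi = np.array([y_prev1, y_prev2])      # regressor of past outputs
    y = phi @ theta_true + 0.05 * rng.normal()
    K = P @ phi / (1.0 + phi @ P @ phi)     # gain
    theta = theta + K * (y - phi @ theta)   # update estimate from prediction error
    P = P - np.outer(K, phi @ P)            # update error covariance
    y_prev2, y_prev1 = y_prev1, y

print("identified parameters:", theta)
```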
On bound-states of the Gross Neveu model with massive fundamental fermions
NASA Astrophysics Data System (ADS)
Frishman, Yitzhak; Sonnenschein, Jacob
2018-01-01
In the search for QFTs that admit bound states, we reinvestigate the two-dimensional Gross-Neveu model, but with massive fermions. By computing the self-energy of the auxiliary bound-state field and the effective potential, we show that there are no bound states around the lowest minimum, but there is a meta-stable bound state around the other minimum, a local one. The latter decays by tunneling. We determine the dependence of its lifetime on the fermion mass and coupling constant.
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ˜ 2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ˜ 2050, and in the tropics and high latitudes deep into the second half of the 21st century.
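A minimal sketch of the regression step, removing toy proxy cycles from a synthetic column-ozone series and fitting a trend to the residuals (a generic illustration, not the UM-UKCA analysis):

```python
# Remove natural variability from a synthetic ozone time series via multiple
# linear regression against proxy indices, then fit a linear trend to the residuals.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1980, 2018)
solar = np.sin(2 * np.pi * (years - 1980) / 11.0)      # toy solar-cycle proxy
enso = rng.normal(size=years.size)                     # toy ENSO proxy
ozone = 300 - 0.3 * (years - 1980) + 4.0 * solar + 1.5 * enso + rng.normal(0, 1, years.size)

# Regress ozone on the proxies (plus a constant) and keep the residuals.
A = np.column_stack([np.ones_like(years, dtype=float), solar, enso])
coeffs, *_ = np.linalg.lstsq(A, ozone, rcond=None)
residuals = ozone - A @ coeffs

# Linear trend of the residuals, expressed per decade.
slope, _ = np.polyfit(years, residuals, 1)
print(f"residual trend: {10 * slope:.2f} DU per decade")
```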
Minimum Energy Pathways for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, S. P.; Langhoff, S. R. (Technical Monitor)
1995-01-01
Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications to reactions leading to NOx and soot formation in hydrocarbon combustion.
Neonatal records and the computer.
Walker, C H
1977-01-01
To use a combined single document clinical case sheet/computer record which can form the basic document for a life medical record is a practical proposition. With adequate briefing doctors and nurses soon become familiar with the record and appreciate its value. Secretarial and clerical requirements are reduced to a minimum as transcription of medical data is eliminated, so greatly speeding up processing and feed back to the medical services. A few illustrations of trends in neonatal statistics and of computer linked maternal/neonatal data are presented. PMID:879830
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...
NASA Astrophysics Data System (ADS)
Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao
In order to support the maximum number of users and elastic services with minimum resources, Internet service providers invented cloud computing. Within a few years, the emerging cloud computing paradigm has become the hottest technology. From the publication of core papers by Google since 2003, to the commercialization of Amazon EC2 in 2006, and to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from an internal IT system to a public service, from a cost-saving tool to a revenue generator, and from ISP to telecom. This paper introduces the concept, history, pros and cons of cloud computing, as well as the value chain and standardization efforts.
Generation of structural topologies using efficient technique based on sorted compliances
NASA Astrophysics Data System (ADS)
Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan
2018-01-01
Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention since large computational capability has become available to designers. This process is stimulated simultaneously by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper the engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. To cope with engineering problems, the algorithm has been combined with the structural analysis system Ansys.
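To make the idea of a compliance-sorted update rule concrete, the following toy sketch thins elements whose compliance contribution ranks low and reinforces those ranking high while holding an approximate target volume fraction. It is a generic illustration under those assumptions, not the authors' algorithm; in a real workflow the element compliances would come from a finite-element solver (e.g. Ansys) at every iteration, whereas here a dummy field stands in for that step.

```python
import numpy as np

def update_densities(density, elem_compliance, target_volfrac, step=0.05):
    order = np.argsort(elem_compliance)          # elements sorted by compliance
    n_mod = int(0.2 * density.size)              # modify the extreme 20% at each end
    density[order[:n_mod]] -= step               # remove material where it works least
    density[order[-n_mod:]] += step              # add material where it works hardest
    density = np.clip(density, 0.001, 1.0)
    # approximately re-enforce the prescribed volume fraction
    return density * target_volfrac * density.size / density.sum()

rng = np.random.default_rng(0)
density = np.full(100, 0.5)
for it in range(10):
    elem_compliance = rng.random(100) * density  # placeholder for FE strain-energy results
    density = update_densities(density, elem_compliance, target_volfrac=0.5)
print(density.round(2))
```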
Good initialization model with constrained body structure for scene text recognition
NASA Astrophysics Data System (ADS)
Zhu, Anna; Wang, Guoyou; Dong, Yangbo
2016-09-01
Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters in various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in cropped text regions. The evaluation results on the benchmark datasets demonstrate that our proposed scheme outperforms the state-of-the-art methods on both scene character recognition and word recognition.
NASA Technical Reports Server (NTRS)
1979-01-01
A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
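The damping behavior described above, steepest-descent-like far from the minimum and Newton-like near it, can be illustrated with a minimal Levenberg-Marquardt loop on a toy nonlinear least-squares problem. The exponential-decay model and all values below are placeholders and have nothing to do with the rotorcraft equations of motion used by NLSCIDNT.

```python
import numpy as np

# Fit y = p0 * exp(-p1 * t) by Levenberg-Marquardt; lam is the damping parameter.
t = np.linspace(0, 4, 50)
p_true = np.array([2.0, 1.3])
y = p_true[0] * np.exp(-p_true[1] * t)

def residual(p):
    return p[0] * np.exp(-p[1] * t) - y

def jacobian(p):
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

p, lam = np.array([1.0, 0.5]), 1e-2
for _ in range(50):
    r, J = residual(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5   # success: reduce damping, act like Gauss-Newton
    else:
        lam *= 2.0                     # failure: raise damping, act like steepest descent
print(p)  # approaches p_true
```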
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.
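For orientation, the brute-force alternative that CV-SES is designed to outperform is a plain grid search over the two cost parameters. The sketch below uses scikit-learn's SVC with a per-class weight as a stand-in for the second regularization parameter; the dataset and grids are illustrative, and this is not the solution-surface algorithm of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Grid search over (C, class-1 weight); the effective C_+ for class 1 is C * w.
X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)
best = (None, -np.inf)
for C in [0.1, 1.0, 10.0]:
    for w in [1.0, 2.0, 5.0, 10.0]:
        clf = SVC(kernel="rbf", C=C, class_weight={0: 1.0, 1: w})
        score = cross_val_score(clf, X, y, cv=5).mean()
        if score > best[1]:
            best = ((C, w), score)
print("best (C, class-1 weight):", best[0], "CV accuracy:", round(best[1], 3))
```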
Test Duration for Water Intake, Average Daily Gain, and Dry Matter Intake in Beef Cattle.
Ahlberg, C M; Allwardt, K; Broocks, A; Bruno, K; McPhillips, L; Taylor, A; Krehbiel, C R; Calvo-Lorenzo, M; Richards, C J; Place, S E; DeSilva, U; VanOverbeke, D L; Mateescu, R G; Kuehn, L A; Weaber, R L; Bormann, J M; Rolf, M M
2018-05-22
Water is an essential nutrient, but the effect it has on performance generally receives little attention. There are few systems and guidelines for collection of water intake phenotypes in beef cattle, which makes large-scale research on water intake a challenge. The Beef Improvement Federation has established guidelines for feed intake and average daily gain tests, but no guidelines exist for water intake. The goal of this study was to determine the test duration necessary for collection of accurate water intake phenotypes. To facilitate this goal, individual daily water intake (WI) and feed intake (FI) records were collected on 578 crossbred steers for a total of 70 d using an Insentec system at the Oklahoma State University Willard Sparks Beef Research Unit. Steers were fed in 5 groups and were individually weighed every 14 days. Within each group, steers were blocked by body weight (low and high) and randomly assigned to 1 of 4 pens containing approximately 30 steers per pen. Each pen provided 103.0 m2 of shade and included an Insentec system containing 6 feed bunks and 1 water bunk. Steers were fed a constant diet across groups and dry matter intake was calculated using the average of weekly percent dry matter within group. Average feed and water intakes for each animal were computed for increasingly large test durations (7, 14, 21, 28, 35, 42, 49, 56, 63 and 70 d), and ADG was calculated using a regression formed from body weights (BW) taken every 14 d (0, 14, 28, 42, 56, and 70 d). Intervals for all traits were computed starting from both the beginning (d 0) and the end of the testing period (d 70). Pearson and Spearman correlations were computed for phenotypes from each shortened test period and for the full 70-d test. Minimum test duration was determined when the Pearson correlations were greater than 0.95 for each trait. Our results indicated that the minimum test durations for WI, DMI, and ADG were 35, 42, and 70 d, respectively. No comparable studies exist for WI; however, our results for FI and ADG are consistent with those in the literature. Although further testing in other populations of cattle and areas of the country should take place, our results suggest that WI phenotypes can be collected concurrently with DMI, without extending test duration, even if following procedures for decoupled intake and gain tests.
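The shortened-test analysis can be sketched as follows: correlate each animal's mean daily intake over the first N days with its full 70-d mean, and report the shortest window whose Pearson correlation exceeds 0.95. The daily intake values below are synthetic stand-ins for the Insentec records, so the printed duration is illustrative only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_animals, n_days = 578, 70
animal_mean = rng.normal(35.0, 6.0, size=(n_animals, 1))            # litres/day
daily_intake = animal_mean + rng.normal(0.0, 8.0, size=(n_animals, n_days))

full_mean = daily_intake.mean(axis=1)                                # 70-d phenotype
for window in (7, 14, 21, 28, 35, 42, 49, 56, 63, 70):
    r, _ = pearsonr(daily_intake[:, :window].mean(axis=1), full_mean)
    if r > 0.95:
        print(f"minimum test duration ~ {window} d (r = {r:.3f})")
        break
```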
NASA Astrophysics Data System (ADS)
Poon, Eric; Thondapu, Vikas; Barlis, Peter; Ooi, Andrew
2017-11-01
Coronary artery disease remains a major cause of mortality in developed countries, and is most often due to a localized flow-limiting stenosis, or narrowing, of coronary arteries. Patients often undergo invasive procedures such as X-ray angiography and fractional flow reserve to diagnose flow-limiting lesions. Even though such diagnostic techniques are well-developed, the effects of diseased coronary segments on local flow are still poorly understood. Therefore, this study investigated the effect of irregular geometries of diseased coronary segments on the macro-recirculation and local pressure minimum regions. We employed an idealized coronary artery model with a diameter of stenosis of 75%. By systematically adjusting the eccentricity and the asymmetry of the coronary stenosis, we uncovered an increase in macro-recirculation size. Most importantly, the presence of this macro-recirculation signifies a local pressure minimum (identified by λ2 vortex identification method). This local pressure minimum has a profound effect on the pressure drops in both longitudinal and planar directions, which has implications for diagnosis and treatment of coronary artery disease. Supported by Australian Research Council LP150100233 and National Computational Infrastructure m45.
Support for User Interfaces for Distributed Systems
NASA Technical Reports Server (NTRS)
Eychaner, Glenn; Niessner, Albert
2005-01-01
An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.
Discriminant WSRC for Large-Scale Plant Species Recognition.
Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong
2017-01-01
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
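The core step that DWSRC applies on the chosen subdictionary, classification by minimum reconstruction error of a sparse code, can be sketched as below. The dictionary, features and Lasso-based coder are placeholders; the weighting and subdictionary-selection stages of the proposed method are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse-representation classification: code the test sample over training columns,
# then assign the class whose atoms reconstruct it with minimum error.
rng = np.random.default_rng(0)
n_feat, per_class, classes = 64, 10, 3
D = rng.normal(size=(n_feat, per_class * classes))        # training samples as columns
labels = np.repeat(np.arange(classes), per_class)
test = D[:, 4] + 0.05 * rng.normal(size=n_feat)           # a noisy class-0 sample

coder = Lasso(alpha=0.01, max_iter=5000, fit_intercept=False)
coder.fit(D, test)                                         # solve min ||test - D x||^2 + a||x||_1
x = coder.coef_                                            # sparse representation

errors = [np.linalg.norm(test - D[:, labels == c] @ x[labels == c]) for c in range(classes)]
print("predicted class:", int(np.argmin(errors)))          # minimum reconstruction error
```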
Microstrip reflectarray antenna for the SCANSCAT radar application
NASA Technical Reports Server (NTRS)
Huang, John
1990-01-01
This publication presents an antenna system that has been proposed as one of the candidates for the SCANSCAT (Scanned Scatterometer) radar application. It is the mechanically steered planar microstrip reflectarray. Due to its thin, lightweight structure, the antenna's mechanical rotation will impose minimum angular momentum for the spacecraft. Since no power-dividing circuitry is needed for its many radiating microstrip patches, this electrically large array antenna demonstrates excellent power efficiency. In addition, this fairly new antenna concept can provide many significant advantages over a conventional parabolic reflector. The basic formulation for the radiation fields of the microstrip reflectarray is presented. This formulation is based on the array theory augmented by the Uniform Geometrical Theory of Diffraction (UTD). A computer code for analyzing the microstrip reflectarray's performances, such as far-field patterns, efficiency, etc., is also listed in this report. It is proposed here that a breadboard unit of this microstrip reflectarray should be constructed and tested in the future to validate the calculated performance. The antenna concept presented here can also be applied in many other types of radars where a large array antenna is needed.
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for the gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law, to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
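A basic building block of such a procedure is the discrete Lyapunov solve: for a stable closed-loop system x(k+1) = A_cl x(k) + w(k), the steady-state state covariance X satisfies X = A_cl X A_cl' + W, and an LQG-type cost is trace(Q X). The sketch below shows that evaluation with SciPy on toy matrices; it is not the gradient computation or the aircraft model of the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A_cl = np.array([[0.9, 0.1],
                 [0.0, 0.8]])            # closed-loop dynamics (stable), toy values
W = 0.01 * np.eye(2)                     # process-noise covariance
Q = np.diag([1.0, 0.5])                  # response weighting

X = solve_discrete_lyapunov(A_cl, W)     # solves A X A' - X + W = 0
cost = np.trace(Q @ X)                   # LQG-type quadratic cost
rms_responses = np.sqrt(np.diag(X))      # per-state RMS responses (compare to limits)
print(cost, rms_responses)
```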
Classification of epilepsy types through global network analysis of scalp electroencephalograms
NASA Astrophysics Data System (ADS)
Lee, Uncheol; Kim, Seunghwan; Jung, Ki-Young
2006-04-01
Epilepsy is a dynamic disease in which self-organization and emergent structures occur dynamically at multiple levels of neuronal integration. Therefore, the transient relationship within multichannel electroencephalograms (EEGs) is crucial for understanding epileptic processes. In this paper, we show that the global relationship within multichannel EEGs provides us with more useful information in classifying two different epilepsy types than pairwise relationships such as cross correlation. To demonstrate this, we determine the global network structure within channels of the scalp EEG based on the minimum spanning tree method. The topological dissimilarity of the network structures from different types of temporal lobe epilepsy is described in the form of the divergence rate and is computed for 11 patients with left (LTLE) and right temporal lobe epilepsy (RTLE). We find that patients with LTLE and RTLE exhibit different large scale network structures, which emerge at the epoch immediately before the seizure onset, not in the preceding epochs. Our results suggest that patients with the two different epilepsy types display distinct large scale dynamical networks with characteristic epileptic network structures.
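The global network construction described above can be sketched with SciPy's minimum spanning tree routine: convert a pairwise synchronization matrix into a distance matrix and keep the tree of strongest links. The signals below are random placeholders for scalp EEG channels, and the divergence-rate comparison between patient groups is not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
eeg = rng.normal(size=(19, 2000))                 # 19 channels x samples (synthetic)
corr = np.corrcoef(eeg)
dist = 1.0 - np.abs(corr)                         # strong coupling -> short edge
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)                 # sparse matrix holding the tree edges
rows, cols = mst.nonzero()
edges = list(zip(rows.tolist(), cols.tolist()))
print(len(edges), "tree edges")                   # n - 1 = 18 edges for 19 channels
```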
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum time maneuver to bring the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast; and a control scheme on a laboratory representation of the structure modelled on the control laws. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer based central processing units with appropriate analog interfaces for implementation of the primary control system, and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
Parallel Geospatial Data Management for Multi-Scale Environmental Data Analysis on GPUs
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, J.; Wei, Y.
2013-12-01
As the spatial and temporal resolutions of Earth observatory data and Earth system simulation outputs are getting higher, in-situ and/or post-processing of such large amounts of geospatial data increasingly becomes a bottleneck in scientific inquiries of Earth systems and their human impacts. Existing geospatial techniques that are based on outdated computing models (e.g., serial algorithms and disk-resident systems), as have been implemented in many commercial and open source packages, are incapable of processing large-scale geospatial data and achieving the desired level of performance. In this study, we have developed a set of parallel data structures and algorithms that are capable of utilizing massively data parallel computing power available on commodity Graphics Processing Units (GPUs) for a popular geospatial technique called Zonal Statistics. Given two input datasets, with one representing measurements (e.g., temperature or precipitation) and the other representing polygonal zones (e.g., ecological or administrative zones), Zonal Statistics computes major statistics (or complete distribution histograms) of the measurements in all regions. Our technique has four steps and each step can be mapped to GPU hardware by identifying its inherent data parallelisms. First, a raster is divided into blocks and per-block histograms are derived. Second, the Minimum Bounding Boxes (MBRs) of polygons are computed and are spatially matched with raster blocks; matched polygon-block pairs are tested and blocks that are either inside or intersect with polygons are identified. Third, per-block histograms are aggregated to polygons for blocks that are completely within polygons. Finally, for blocks that intersect with polygon boundaries, all the raster cells within the blocks are examined using a point-in-polygon test and cells that are within polygons are used to update the corresponding histograms. As the task becomes I/O bound after applying spatial indexing and GPU hardware acceleration, we have developed a GPU-based data compression technique by reusing our previous work on Bitplane Quadtree (or BPQ-Tree) based indexing of binary bitmaps. Results have shown that our GPU-based parallel Zonal Statistics technique on 3000+ US counties over 20+ billion NASA SRTM 30 meter resolution Digital Elevation Model (DEM) raster cells has achieved impressive end-to-end runtimes: 101 seconds and 46 seconds on a low-end workstation equipped with a Nvidia GTX Titan GPU using cold and hot cache, respectively; and 60-70 seconds using a single OLCF TITAN computing node and 10-15 seconds using 8 nodes. Our experimental results clearly show the potential of using high-end computing facilities for large-scale geospatial processing.
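For reference, the end result of the pipeline above, per-zone statistics and histograms of a measurement raster, can be written as a short serial sketch. The zone-label raster stands in for the output of the MBR-matching and point-in-polygon steps, and the arrays are random placeholders; the GPU blocking and BPQ-Tree compression are intentionally omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
elevation = rng.normal(500.0, 150.0, size=(1000, 1000))   # stand-in for DEM cells
zones = rng.integers(0, 50, size=elevation.shape)         # stand-in for county labels

flat_z, flat_v = zones.ravel(), elevation.ravel()
counts = np.bincount(flat_z)
means = np.bincount(flat_z, weights=flat_v) / counts       # per-zone mean

# Per-zone histograms over a common set of bins.
bins = np.linspace(flat_v.min(), flat_v.max(), 33)
hist = np.zeros((counts.size, bins.size - 1), dtype=np.int64)
bin_idx = np.clip(np.digitize(flat_v, bins) - 1, 0, bins.size - 2)
np.add.at(hist, (flat_z, bin_idx), 1)                      # accumulate counts per (zone, bin)
print(means[:5], hist.sum(axis=1)[:5])
```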
UNCOVERING THE INTRINSIC VARIABILITY OF GAMMA-RAY BURSTS
NASA Astrophysics Data System (ADS)
Golkhou, V. Zach; Butler, Nathaniel R
2014-08-01
We develop a robust technique to determine the minimum variability timescale for gamma-ray burst (GRB) light curves, utilizing Haar wavelets. Our approach averages over the data for a given GRB, providing an aggregate measure of signal variation while also retaining sensitivity to narrow pulses within complicated time series. In contrast to previous studies using wavelets, which simply define the minimum timescale in reference to the measurement noise floor, our approach identifies the signature of temporally smooth features in the wavelet scaleogram and then additionally identifies a break in the scaleogram on longer timescales as a signature of a true, temporally unsmooth light curve feature or features. We apply our technique to the large sample of Swift GRB gamma-ray light curves and for the first time—due to the presence of a large number of GRBs with measured redshift—determine the distribution of minimum variability timescales in the source frame. We find a median minimum timescale for long-duration GRBs in the source frame of Δtmin = 0.5 s, with the shortest timescale found being on the order of 10 ms. This short timescale suggests a compact central engine (3000 km). We discuss further implications for the GRB fireball model and present a tantalizing correlation between the minimum timescale and redshift, which may in part be due to cosmological time dilation.
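A toy version of the Haar-difference statistic underlying this method is sketched below: for each timescale, average the squared difference between the mean count rates in adjacent windows of that width. The light curve is synthetic, and the paper's key step of locating the break away from the noise floor (which defines the minimum variability timescale) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
t_bin = 0.001                                          # 1 ms bins
rate = 50 + 40 * np.exp(-0.5 * ((np.arange(20000) * t_bin - 5.0) / 0.2) ** 2)
counts = rng.poisson(rate * t_bin)                     # synthetic GRB-like light curve

for n in (1, 2, 4, 8, 16, 32, 64, 128):                # window widths in bins
    m = counts[: counts.size // (2 * n) * 2 * n].reshape(-1, n).mean(axis=1)
    haar = np.mean((m[1::2] - m[0::2]) ** 2)           # Haar difference of adjacent windows
    print(f"dt = {n * t_bin * 1e3:6.1f} ms   sigma_Haar^2 = {haar:.4f}")
```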
Kusano, Kristofer D; Chen, Rong; Montgomery, Jade; Gabler, Hampton C
2015-09-01
Forward collision warning (FCW) systems are designed to mitigate the effects of rear-end collisions. Driver acceptance of these systems is crucial to their success, as perceived "nuisance" alarms may cause drivers to disable the systems. In order to make customizable FCW thresholds, system designers need to quantify the variation in braking behavior in the driving population. The objective of this study was to quantify the time to collision (TTC) at which drivers applied the brakes during car-following scenarios from a large-scale naturalistic driving study (NDS). Because of the large amount of data generated by NDS, an automated algorithm was developed to identify lead vehicles using radar data recorded as part of the study. Using the search algorithm, all trips from 64 drivers from the 100-Car NDS were analyzed. A comparison of the algorithm to 7135 brake applications where the presence of a lead vehicle was manually identified found that the algorithm agreed with the human review 90.6% of the time. This study examined 72,123 trips that resulted in 2.6 million brake applications. Population distributions of the minimum, 1st, and 10th percentiles were computed for each driver in speed ranges between 3 and 60 mph in 10 mph increments. As speed increased, so did the minimum TTC experienced by drivers, as well as the variance in TTC. Younger drivers (18-30) had lower TTC at brake application compared to older drivers (30-51+), especially at speeds between 40 mph and 60 mph. This is one of the first studies to use large-scale NDS data to quantify braking behavior during car following. The results of this study can be used to design and evaluate FCW systems and calibrate traffic simulation models. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
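The TTC quantity itself is simply range divided by closing speed at the instant of brake application, defined only while the host vehicle is closing on the lead vehicle. The values below are synthetic placeholders for radar range and range-rate samples at brake onsets.

```python
import numpy as np

range_m = np.array([28.0, 45.0, 12.5, 60.0])            # distance to lead vehicle (m)
range_rate = np.array([-4.0, -2.5, -6.0, 0.8])          # m/s; negative means closing

closing = range_rate < 0
ttc = np.full(range_m.shape, np.inf)                     # undefined when not closing
ttc[closing] = range_m[closing] / -range_rate[closing]   # seconds

print(ttc)
print(np.percentile(ttc[np.isfinite(ttc)], [0, 1, 10]))  # minimum, 1st, 10th percentiles
```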
1982-12-01
A computer program which simulates the PATRIOT battalion UH1F communication system is described, with a detailed description of how the model performs this simulation. Master of Science thesis by Gregory H. Swanson, Captain, USA, Graduate Computer Science. Table-of-contents fragments: Model Application; Thesis Overview; Previous Studies.
Analytical Design of Evolvable Software for High-Assurance Computing
2001-02-14
Mathematical expression for the Total Sum of Squares which measures the variability that results when all values are treated as a combined sample coming from...primarily interested in background on software design and high-assurance computing, research in software architecture generation or evaluation...respectively. Those readers solely interested in the validation of a software design approach should at the minimum read Chapter 6 followed by Chapter
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi; Hixon, Duane
1993-01-01
The work done under this project was documented in detail in the Ph.D. dissertation of Dr. Duane Hixon. The objectives of the research project were to evaluate the generalized minimum residual method (GMRES) as a tool for accelerating 2-D and 3-D unsteady flow computations, and to assess the suitability of the GMRES algorithm for unsteady flows computed on parallel computer architectures.
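For readers unfamiliar with GMRES, the following minimal example solves a sparse nonsymmetric system with SciPy's implementation, the kind of implicit-stage solve that arises in unsteady flow solvers. The 1-D convection-diffusion matrix is only a stand-in for the linearized flow equations.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 200
A = diags([-1.2, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b, restart=30, maxiter=500)           # restarted GMRES(30)
print("converged" if info == 0 else f"info = {info}",
      "residual =", np.linalg.norm(A @ x - b))
```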
Probabilistic QoS Analysis In Wireless Sensor Networks
2012-04-01
and A.O. Fapojuwo. TDMA scheduling with optimized energy efficiency and minimum delay in clustered wireless sensor networks . IEEE Trans. on Mobile...Research Computer Science and Engineering, Department of 5-1-2012 Probabilistic QoS Analysis in Wireless Sensor Networks Yunbo Wang University of...Wang, Yunbo, "Probabilistic QoS Analysis in Wireless Sensor Networks " (2012). Computer Science and Engineering: Theses, Dissertations, and Student
ERIC Educational Resources Information Center
Gasparinatou, Alexandra; Grigoriadou, Maria
2013-01-01
In this study, we examine the effect of background knowledge and local cohesion on learning from texts. The study is based on the construction-integration model. Participants were 176 undergraduate students who read a Computer Science text. Half of the participants read a text of maximum local cohesion and the other half a text of minimum local cohesion.…
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Bounds on strain in large Tertiary shear zones of SE Asia from boudinage restoration
NASA Astrophysics Data System (ADS)
Lacassin, R.; Leloup, P. H.; Tapponnier, P.
1993-06-01
We have used surface-balanced restoration of stretched, boudinaged layers to estimate minimum amounts of finite strain in the mylonitic gneisses of the Oligo-Miocene Red River-Ailao Shan shear zone (Yunnan, China) and of the Wang Chao shear zone (Thailand). The layer-parallel extension values thus obtained range between 250 and 870%. We discuss how to use such extension values to place bounds on amounts of finite shear strain in these large crustal shear zones. Assuming simple shear, these values imply minimum total and late shear strains of, respectively, 33 ± 6 and 7 ± 3 at several sites along the Red River-Ailao Shan shear zone. For the Wang Chao shear zone a minimum shear strain of 7 ± 4 is deduced. Assuming homogeneous shear would imply that minimum strike-slip displacements along these two left-lateral shear zones, which have been interpreted to result from the India-Asia collision, have been of the order of 330 ± 60 km (Red River-Ailao Shan) and 35 ± 20 km (Wang Chao).
NASA Astrophysics Data System (ADS)
Parkin, E. R.; Pittard, J. M.; Corcoran, M. F.; Hamaguchi, K.
2011-01-01
Three-dimensional adaptive mesh refinement hydrodynamical simulations of the wind-wind collision between the enigmatic supermassive star η Car and its mysterious companion star are presented which include radiative driving of the stellar winds, gravity, optically thin radiative cooling, and orbital motion. Simulations with static stars with a periastron passage separation reveal that the preshock companion star's wind speed is sufficiently reduced so that radiative cooling in the postshock gas becomes important, permitting the runaway growth of nonlinear thin-shell instabilities (NTSIs) which massively distort the wind-wind collision region (WCR). However, large-scale simulations, which include the orbital motion of the stars, show that orbital motion reduces the impact of radiative inhibition and thus increases the acquired preshock velocities. As such, the postshock gas temperature and cooling time see a commensurate increase, and sufficient gas pressure is preserved to stabilize the WCR against catastrophic instability growth. We then compute synthetic X-ray spectra and light curves and find that, compared to previous models, the X-ray spectra agree much better with XMM-Newton observations just prior to periastron. The narrow width of the 2009 X-ray minimum can also be reproduced. However, the models fail to reproduce the extended X-ray minimum from previous cycles. We conclude that the key to explaining the extended X-ray minimum is the rate of cooling of the companion star's postshock wind. If cooling is rapid then powerful NTSIs will heavily disrupt the WCR. Radiative inhibition of the companion star's preshock wind, albeit with a stronger radiation-wind coupling than explored in this work, could be an effective trigger.
20 CFR 229.48 - Family maximum.
Code of Federal Regulations, 2014 CFR
2014-04-01
... maximum. The spouse's and child(ren)'s share of the Overall Minimum PIA are reduced if the total benefits... adjustment is before adjustment for age or other benefits. The spouse and child(ren)'s benefits are computed...
20 CFR 229.48 - Family maximum.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maximum. The spouse's and child(ren)'s share of the Overall Minimum PIA are reduced if the total benefits... adjustment is before adjustment for age or other benefits. The spouse and child(ren)'s benefits are computed...
20 CFR 229.48 - Family maximum.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maximum. The spouse's and child(ren)'s share of the Overall Minimum PIA are reduced if the total benefits... adjustment is before adjustment for age or other benefits. The spouse and child(ren)'s benefits are computed...
49 CFR 1152.27 - Financial assistance procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (including the cost of transporting removed materials to point of sale or point of storage for relay use... constitutional minimum value is computed without regard to labor protection costs. (7) Within 10 days of the...
49 CFR 1152.27 - Financial assistance procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (including the cost of transporting removed materials to point of sale or point of storage for relay use... constitutional minimum value is computed without regard to labor protection costs. (7) Within 10 days of the...
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages like the mixture, it seems to have many desirable properties. Recognition invariance with respect to shifted, rotated and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even if extracting the topology of a shape by mapping the shortest path connecting all the pixels seems to be powerful, the construction of the graph incurs an expensive algorithmic cost. In this article we discuss ways to reduce the computing time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the Discrete Cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results obtained on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with acceptable computing time.
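The compression idea, working in a blockwise Discrete Cosine space rather than the raw image space, can be sketched as follows: transform 8x8 blocks and keep only a few low-frequency coefficients per block, so subsequent matching operates on a much smaller feature vector. The image and the number of retained coefficients are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
img = rng.random((64, 64))                               # placeholder for a symbol image
block, keep = 8, 3                                       # keep a keep x keep corner per block

features = []
for i in range(0, img.shape[0], block):
    for j in range(0, img.shape[1], block):
        coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
        features.append(coeffs[:keep, :keep].ravel())    # low-frequency coefficients only
features = np.concatenate(features)
print(img.size, "->", features.size, "coefficients")     # 4096 -> 576
```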
2012-01-01
Background: Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods: Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results: All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions: Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
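The regularity indices come from the normalized autocorrelation of the acceleration signal: Ad1 is its value at the dominant step lag and Ad2 its value at the stride lag (roughly twice the step lag). The sketch below illustrates that computation on a synthetic walking-like waveform with deliberately simple peak picking; it is not the paper's exact pipeline.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 60, 1 / fs)
acc = (np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.sin(2 * np.pi * 0.9 * t)
       + 0.1 * np.random.default_rng(0).normal(size=t.size))   # ~1.8 steps/s, 0.9 strides/s

acc = acc - acc.mean()
ac = np.correlate(acc, acc, mode="full")[acc.size - 1:]
ac /= ac[0]                                                     # normalized autocorrelation

step_lag = np.argmax(ac[int(0.3 * fs):int(0.9 * fs)]) + int(0.3 * fs)
stride_lag = np.argmax(ac[int(1.5 * step_lag):int(2.5 * step_lag)]) + int(1.5 * step_lag)
Ad1, Ad2 = ac[step_lag], ac[stride_lag]
print(f"Ad1 (step) = {Ad1:.2f}, Ad2 (stride) = {Ad2:.2f}")
```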
Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody
2018-04-01
To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.
NASA Astrophysics Data System (ADS)
Leon-Rios, S.; Aguiar, A. L.; Bie, L.; Edwards, B.; Fuenzalida Velasco, A. J.; Holt, J.; Garth, T.; González, P. J.; Rietbrock, A.; Agurto-Detzel, H.; Charvis, P.; Font, Y.; Nocquet, J. M.; Regnier, M. M.; Renouard, A.; Mercerat, D.; Pernoud, M.; Beck, S. L.; Meltzer, A.; Soto-Cordero, L.; Alvarado, A. P.; Perrault, M.; Ruiz, M. C.; Santo, J.
2017-12-01
On 16th April 2016, a Mw 7.8 mega-thrust earthquake occurred in northern Ecuador, close to the city of Pedernales. The event, which ruptured an area of 120 x 60 km, led to the deployment of a large array of seismic instruments as part of a collaborative project between the Geophysical Institute of Ecuador (IGEPN), Lehigh University (USA), University of Arizona (USA), Geoazur (France) and the University of Liverpool (UK). This dense seismic network, with more than 80 stations, includes broadband, short-period, strong-motion and OBS instruments that were recording up to one year after the mainshock. Using the recorded data set, we manually analysed and located 450 events. Selection was based on the largest aftershocks (Ml > 3.5 from the IGEPN catalogue) and additional preliminary automatic locations to increase the observation density in the southern part of the network. High quality P and S arrival times plus several reference velocity structures were used to create more than 80,000 input models in order to obtain a minimum 1D velocity model and associated P- and S-wave station correction terms. Aftershock locations are concentrated in NW-SE striking lineaments reaching the trench. Additionally, we computed moment tensor solutions for a subset of earthquakes to independently confirm hypocentre depths using a full waveform simulation approach. Based on this analysis we can identify normal and strike-slip events located in the marine forearc and close to the trench. This type of activity has been observed in previous megathrust earthquakes (e.g. Maule 2010 and Tohoku-Oki 2011), and might be associated with extensional re-activation of existing fault systems due to a large event located on the megathrust fault.
Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999
NASA Technical Reports Server (NTRS)
Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)
1999-01-01
The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.
Research Institute for Advanced Computer Science
NASA Technical Reports Server (NTRS)
Gross, Anthony R. (Technical Monitor); Leiner, Barry M.
2000-01-01
The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10-2-1010 Hz and range 10-3-105 m in length scale. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
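The matrix-free idea used with the QMR solver, never storing the finite volume matrix and instead supplying the matrix-vector product (and, for QMR, the transpose product) as callbacks, can be sketched with SciPy as below. A real-valued 1-D finite-difference stencil stands in for the complex-valued finite-volume Helmholtz operator of the actual solver; all sizes and coefficients are placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 10_000
lower, main, upper = -1.2, 2.5, -1.0           # nonsymmetric tridiagonal "stencil"

def matvec(x):                                  # y = A @ x, computed on the fly
    y = main * x
    y[1:] += lower * x[:-1]
    y[:-1] += upper * x[1:]
    return y

def rmatvec(x):                                 # y = A.T @ x, required by QMR
    y = main * x
    y[:-1] += lower * x[1:]
    y[1:] += upper * x[:-1]
    return y

A = LinearOperator((n, n), matvec=matvec, rmatvec=rmatvec, dtype=float)
b = np.ones(n)
x, info = qmr(A, b, maxiter=2000)
print("info =", info, "residual =", np.linalg.norm(matvec(x) - b))
```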
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of the floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
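The underlying idea, replacing the inverse in the LMMSE estimate h = R (R + S)^{-1} y with a low-degree matrix polynomial so that only matrix-vector products are needed, can be illustrated numerically as below. The scaled, truncated Neumann series is a simple stand-in for the MSE-optimized coefficients of the PEACH estimators, and the covariances are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                               # number of base-station antennas
A = rng.normal(size=(n, n))
R = A @ A.T / n + np.eye(n)                           # channel covariance (SPD, toy)
S = 0.1 * np.eye(n)                                   # noise/interference covariance
y = rng.normal(size=n)                                # received pilot observation

M = R + S
ev = np.linalg.eigvalsh(M)                            # used only to pick a safe scaling here
alpha = 2.0 / (ev[0] + ev[-1])
L = 8                                                 # polynomial degree

# Approximate M^{-1} y by sum_{l=0}^{L} alpha * (I - alpha M)^l y using only matvecs.
term, approx = alpha * y, np.zeros(n)
for _ in range(L + 1):
    approx += term
    term = term - alpha * (M @ term)

h_exact = R @ np.linalg.solve(M, y)
h_poly = R @ approx
print("relative error:", np.linalg.norm(h_poly - h_exact) / np.linalg.norm(h_exact))
```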
Dark matter statistics for large galaxy catalogs: power spectra and covariance matrices
NASA Astrophysics Data System (ADS)
Klypin, Anatoly; Prada, Francisco
2018-06-01
Large-scale surveys of galaxies require accurate theoretical predictions of the dark matter clustering for thousands of mock galaxy catalogs. We demonstrate that this goal can be achieved with the new Parallel Particle-Mesh (PM) N-body code GLAM at a very low computational cost. We run ˜22,000 simulations with ˜2 billion particles that provide ˜1% accuracy of the dark matter power spectra P(k) for wave-numbers up to k ˜ 1hMpc-1. Using this large data-set we study the power spectrum covariance matrix. In contrast to many previous analytical and numerical results, we find that the covariance matrix normalised to the power spectrum C(k, k′)/P(k)P(k′) has a complex structure of non-diagonal components: an upturn at small k, followed by a minimum at k ≈ 0.1 - 0.2 hMpc-1, and a maximum at k ≈ 0.5 - 0.6 hMpc-1. The normalised covariance matrix strongly evolves with redshift: C(k, k′) ∝ δ^α(t)P(k)P(k′), where δ is the linear growth factor and α ≈ 1 - 1.25, which indicates that the covariance matrix depends on cosmological parameters. We also show that waves longer than 1h-1Gpc have very little impact on the power spectrum and covariance matrix. This significantly reduces the computational costs and complexity of theoretical predictions: relatively small volume ˜(1h-1Gpc)3 simulations capture the necessary properties of dark matter clustering statistics. As our results also indicate, achieving ˜1% errors in the covariance matrix for k < 0.50 hMpc-1 requires a resolution better than ɛ ˜ 0.5h-1Mpc.
Analysis of a generalized dual reflector antenna system using physical optics
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Lagin, Alan R.
1992-01-01
Reflector antennas are widely used in communication satellite systems because they provide high gain at low cost. Offset-fed single paraboloids and dual reflector offset Cassegrain and Gregorian antennas with multiple focal region feeds provide a simple, blockage-free means of forming multiple, shaped, and isolated beams with low sidelobes. Such antennas are applicable to communications satellite frequency reuse systems and earth stations requiring access to several satellites. While the single offset paraboloid has been the most extensively used configuration for the satellite multiple-beam antenna, the trend toward large apertures requiring minimum scanned beam degradation over the 18-degree field of view needed for full earth coverage from geostationary orbit may lead to impractically long focal lengths and large feed arrays. Dual reflector antennas offer packaging advantages and more degrees of design freedom to improve beam scanning and cross-polarization properties. The Cassegrain and Gregorian antennas are the most commonly used dual reflector antennas. A computer program for calculating the secondary pattern and directivity of a generalized dual reflector antenna system was developed and implemented at LeRC. The theoretical foundation for this program is based on the use of physical optics methodology for describing the induced currents on the sub-reflector and main reflector. The resulting induced currents on the main reflector are integrated to obtain the antenna far-zone electric fields. The computer program is verified with other physical optics programs and with measured antenna patterns. The comparison shows good agreement in far-field sidelobe reproduction and directivity.
Giegerich, Robert; Voss, Björn; Rehmsmeier, Marc
2004-01-01
The function of a non-protein-coding RNA is often determined by its structure. Since experimental determination of RNA structure is time-consuming and expensive, its computational prediction is of great interest, and efficient solutions based on thermodynamic parameters are known. Frequently, however, the predicted minimum free energy structures are not the native ones, leading to the necessity of generating suboptimal solutions. While this can be accomplished by a number of programs, the user is often confronted with large outputs of similar structures, although he or she is interested in structures with more fundamental differences, or, in other words, with different abstract shapes. Here, we formalize the concept of abstract shapes and introduce their efficient computation. Each shape of an RNA molecule comprises a class of similar structures and has a representative structure of minimal free energy within the class. Shape analysis is implemented in the program RNAshapes. We applied RNAshapes to the prediction of optimal and suboptimal abstract shapes of several RNAs. For a given energy range, the number of shapes is considerably smaller than the number of structures, and in all cases, the native structures were among the top shape representatives. This demonstrates that the researcher can quickly focus on the structures of interest, without processing up to thousands of near-optimal solutions. We complement this study with a large-scale analysis of the growth behaviour of structure and shape spaces. RNAshapes is available for download and as an online version on the Bielefeld Bioinformatics Server.
NASA Astrophysics Data System (ADS)
Suresh, A.; Dikpati, M.; Burkepile, J.; de Toma, G.
2013-12-01
The structure of the Sun's corona varies with solar cycle, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. Why does this pattern occur? It is widely accepted that large-scale coronal structure is governed by magnetic fields, which are most likely generated by the dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential field source surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation and above the photosphere these dynamo-generated fields are extended from the photosphere to the corona by using a potential field source surface model. Under the assumption of axisymmetry, the large-scale poloidal fields can be written in terms of the curl of a vector potential. Since from the photosphere and above the magnetic diffusivity is essentially infinite, the evolution of the vector potential is given by Laplace's Equation, the solution of which is obtained in the form of a first order Associated Legendre Polynomial. By taking linear combinations of these polynomial terms, we find solutions that match more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986-1991), we compute the coefficients of the Associated Legendre Polynomials up to degree eight and compare with observation. We reproduce some previous results that at minimum the dipole term dominates, but that this term fades with the progress of the cycle and higher order multipole terms begin to dominate. We find that the amplitudes of these terms are not exactly the same in the two limbs, indicating that there is some phi dependence. Furthermore, by comparing the solar minimum corona during the past three minima (1986, 1996, and 2008), we find that, while both the 1986 and 1996 minima were dipolar, the minimum in 2008 was unusual, as there was departure from a dipole. In order to investigate the physical cause of this departure from dipole, we implement north-south asymmetry in the surface source of the magnetic fields in our model, and find that such n/s asymmetry in solar cycle could be one of the reasons for this departure. This work is partially supported by NASA's LWS grant with award number NNX08AQ34G. NCAR is sponsored by the NSF.
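The multipole decomposition described above amounts to a linear least-squares fit of a latitudinal profile to first-order associated Legendre polynomials P_l^1(cos theta). The sketch below fits a synthetic, dipole-dominated profile (a stand-in for a coronal measurement at one Carrington rotation) for degrees l = 1 to 8 and inspects the coefficients; it is illustrative only and not the analysis pipeline of the study.

```python
import numpy as np
from scipy.special import lpmv

theta = np.linspace(0.05, np.pi - 0.05, 180)            # colatitude
profile = (1.0 * lpmv(1, 1, np.cos(theta)) + 0.2 * lpmv(1, 3, np.cos(theta))
           + 0.02 * np.random.default_rng(0).normal(size=theta.size))

# Design matrix of P_l^1(cos theta) for l = 1..8, then linear least squares.
X = np.column_stack([lpmv(1, l, np.cos(theta)) for l in range(1, 9)])
coeffs, *_ = np.linalg.lstsq(X, profile, rcond=None)
for l, c in enumerate(coeffs, start=1):
    print(f"l = {l}: {c:+.3f}")                          # the dipole (l = 1) term dominates
```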
2015-12-15
Atmospheric behavior from the ground to space under solar minimum and solar maximum conditions (Contract No.: N00173-12-1-G010, NRL). Project Summary: Dynamical response to solar radiative forcing is a crucial and poorly understood mechanism. We propose to study the impacts of large dynamical events
Energy consumption program: A computer model simulating energy loads in buildings
NASA Technical Reports Server (NTRS)
Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.
1978-01-01
The JPL energy consumption computer program, developed as a useful tool in the ongoing building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin of readings from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimal information from the user and eliminating many time-consuming internal computational loops. Many unique features not found in any other program were added to handle two-level electronics control rooms.
Guidance of a Solar Sail Spacecraft to the Sun - L(2) Point.
NASA Astrophysics Data System (ADS)
Hur, Sun Hae
The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.
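The locally optimal control law used for the spiral segment can be illustrated with a simple planar, ideal-sail calculation (a sketch under simplifying assumptions, not the thesis code): at each instant, choose the sail cone angle that maximizes the component of the cos-squared thrust law along the velocity vector, i.e. the instantaneous rate of orbital energy increase.

```python
import numpy as np

def optimal_cone_angle(psi, a0=1e-6):
    """Cone angle maximizing dE/dt ~ a0 * cos(alpha)**2 * cos(psi - alpha),
    where psi is the angle between the sunline and the velocity vector and
    alpha is the sail cone angle (planar, ideal flat sail; illustrative only)."""
    alphas = np.linspace(-np.pi / 2, np.pi / 2, 2001)
    rate = a0 * np.cos(alphas) ** 2 * np.cos(psi - alphas)
    return alphas[np.argmax(rate)]

# for a velocity perpendicular to the sunline the optimum is about 35.26 degrees
print(np.degrees(optimal_cone_angle(np.radians(90.0))))
```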
NASA Astrophysics Data System (ADS)
Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark
2010-05-01
Characterizing the spatial pattern of large fire occurrence and severity is an important feature of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes is a key component of quantitative risk assessment and of prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g., Portugal 2003 and 2005; Italy and Greece 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine-scale modeling of wildfire severity and probability. Several new applications have resulted from the development of the minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High-performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire-prone areas of the Mediterranean Basin, specifically the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate conditions (97th and 70th percentiles, July and August weather, 1995-2007). We used both random ignition locations and ignition probability grids (1000 x 1000 m) built from historical fire data (1995-2007). The simulation outputs were then examined to understand relationships between burn probability and specific vegetation types and ignition sources. Wildfire threats to specific values of human interest were quantified to map landscape patterns of wildfire risk. The simulation outputs also allowed us to differentiate between areas of the landscape that were progenitors of fires versus "victims" of large fires. The results provide spatially explicit data on wildfire likelihood and intensity that can be used in a variety of strategic and tactical planning forums to mitigate wildfire threats to human and other values in the Mediterranean Basin.
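For readers unfamiliar with the MTT idea, the sketch below computes minimum fire travel times over a raster of spread rates with a Dijkstra-style search on an 8-neighbour grid network. It is a simplified illustration of the Finney (2002) concept, not the FlamMap implementation; the cell size, rate grid and function names are hypothetical.

```python
import heapq
import math

def minimum_travel_times(rate, ignition, cell_size=500.0):
    """Dijkstra-style minimum fire travel time (h) from an ignition cell to every
    other cell of a grid of spread rates (m/h); simplified 8-neighbour version."""
    rows, cols = len(rate), len(rate[0])
    INF = float('inf')
    t = [[INF] * cols for _ in range(rows)]
    t[ignition[0]][ignition[1]] = 0.0
    heap = [(0.0, ignition)]
    while heap:
        t0, (r, c) = heapq.heappop(heap)
        if t0 > t[r][c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = cell_size * math.hypot(dr, dc)
                    # harmonic mean of the two cells' rates along the edge
                    edge_rate = 2.0 / (1.0 / rate[r][c] + 1.0 / rate[nr][nc])
                    cand = t0 + dist / edge_rate
                    if cand < t[nr][nc]:
                        t[nr][nc] = cand
                        heapq.heappush(heap, (cand, (nr, nc)))
    return t

rates = [[100.0, 100.0, 20.0],
         [100.0,  50.0, 20.0],
         [100.0, 100.0, 20.0]]          # hypothetical spread rates (m/h) per 500 m cell
times = minimum_travel_times(rates, (0, 0))
```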
A Study of Minimum Competency Testing Programs. Final Program Development Resource Document.
ERIC Educational Resources Information Center
Gorth, William Phillip; Perkins, Marcy R.
This resource document represents the integration of both practice and theory related to minimum competency testing (MCT), and is largely based on information collected in a nationwide survey of MCT programs. Chapter 1, To Implement or Not to Implement MCT, by Marcy R. Perkins, presents a definition of MCT and a discussion of the perceived…
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provides valuable fire behavior functions, while enabling multi-core utilization for the...
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure for model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution of Cochran's. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.
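As a generic illustration of backward deletion (using ordinary p-values rather than the report's optimized F/order-statistic criterion), the sketch below repeatedly drops the least significant regressor; the data, names and significance threshold are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_deletion(y, X, alpha=0.05):
    """Drop the least significant term until every remaining term is significant.
    Illustrative only; the report's criterion combines the F-distribution with an
    order-statistics distribution rather than plain p-values."""
    terms = list(X.columns)
    while terms:
        fit = sm.OLS(y, sm.add_constant(X[terms])).fit()
        pvalues = fit.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= alpha:
            return fit, terms
        terms.remove(worst)
    return None, []

# synthetic illustration: x3 is pure noise and should be deleted
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])
y = 2.0 * X["x1"] - 1.0 * X["x2"] + rng.normal(scale=0.5, size=200)
fit, kept = backward_deletion(y, X)
print(kept)   # typically ["x1", "x2"]
```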
Parametric study of minimum converter loss in an energy-storage dc-to-dc converter
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1982-01-01
Through a combination of analytical and numerical minimization procedures, a converter design that results in the minimum total converter loss (including core loss, winding loss, capacitor and energy-storage-reactor loss, and various losses in the semiconductor switches) is obtained. Because the initial phase involves analytical minimization, the computation time required by the subsequent phase of numerical minimization is considerably reduced in this combination approach. The effects of various loss parameters on the optimum values of the design variables are also examined.
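As a sketch of the numerical-minimization phase only (the two-variable loss model below is made up for illustration and is not the paper's converter loss expression), one might minimize a total-loss function over the design variables:

```python
from scipy.optimize import minimize

def total_loss(x, i_out=5.0):
    """Hypothetical loss model in two design variables (turns N, switching
    frequency f in kHz); illustrative only, not the paper's loss expressions."""
    n_turns, f_khz = x
    core_loss = 4.0e3 / (n_turns * f_khz)         # falls as the flux swing shrinks
    winding_loss = 2.0e-3 * n_turns * i_out ** 2  # rises with the number of turns
    switching_loss = 5.0e-3 * f_khz               # rises with switching frequency
    return core_loss + winding_loss + switching_loss

result = minimize(total_loss, x0=[30.0, 50.0], bounds=[(5, 200), (10, 500)])
print(result.x, result.fun)
```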
About neighborhood counting measure metric and minimum risk metric.
Argentini, Andrea; Blanzieri, Enrico
2010-04-01
In a 2006 TPAMI paper, Wang proposed the Neighborhood Counting Measure (NCM), a similarity measure for the k-NN algorithm. In his paper, Wang mentioned the Minimum Risk Metric (MRM), an earlier distance measure based on minimizing the risk of misclassification, but did not compare NCM to MRM because of its allegedly excessive computational load. In this comment paper, we complete the comparison that was missing in Wang's paper and, from our empirical evaluation, we show that MRM outperforms NCM and that, contrary to Wang's suggestion, its running time is not prohibitive.
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
NASA Technical Reports Server (NTRS)
Grantham, W. D.; Deal, P. L.
1974-01-01
A fixed-base simulator study was conducted to determine the minimum acceptable level of longitudinal stability for a representative turbofan STOL (short take-off and landing) transport airplane during the landing approach. Real-time digital simulation techniques were used. The computer was programed with equations of motion for six degrees of freedom, and the aerodynamic inputs were based on measured wind-tunnel data. The primary piloting task was an instrument approach to a breakout at a 60-m (200-ft) ceiling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
Study for application of a sounding rocket experiment to spacelab/shuttle mission
NASA Technical Reports Server (NTRS)
Code, A. D.
1975-01-01
An inexpensive adaptation of rocket-size packages to Spacelab/Shuttle use was studied. A two-flight project extending over two years was baselined, requiring 80 man-months of effort. It was concluded that testing should be held to a minimum since rocket packages seem to be able to tolerate shuttle vibration and noise levels. A standard, flexible control and data collection language such as FORTH should be used rather than a computation language such as FORTRAN in order to hold programming costs to a minimum.
Constantin, Dragoş E.; Fahrig, Rebecca; Keall, Paul J.
2011-01-01
Purpose: Using magnetic resonance imaging (MRI) for real-time guidance during radiotherapy is an active area of research and development. One aspect of the problem is the influence of the MRI scanner, modeled here as an external magnetic field, on the medical linear accelerator (linac) components. The present work characterizes the behavior of two medical linac electron guns with external magnetic fields for in-line and perpendicular orientations of the linac with respect to the MRI scanner. Methods: Two electron guns, Litton L-2087 and Varian VTC6364, are considered as representative models for this study. Emphasis was placed on the in-line design approach in which case the MRI scanner and the linac axes of symmetry coincide and assumes no magnetic shielding of the linac. For the in-line case, the magnetic field from a 0.5 T open MRI (GE Signa SP) magnet with a 60 cm gap between its poles was computed and used in full three dimensional (3D) space charge simulations, whereas for the perpendicular case the magnetic field was constant. Results: For the in-line configuration, it is shown that the electron beam is not deflected from the axis of symmetry of the gun and the primary beam current does not vanish even at very high values of the magnetic field, e.g., 0.16 T. As the field strength increases, the primary beam current has an initial plateau of constant value after which its value decreases to a minimum corresponding to a field strength of approximately 0.06 T. After the minimum is reached, the current starts to increase slowly. For the case when the beam current computation is performed at the beam waist position the initial plateau ends at 0.016 T for Litton L-2087 and at 0.012 T for Varian VTC6364. The minimum value of the primary beam current is 27.5% of the initial value for Litton L-2087 and 22.9% of the initial value for Varian VTC6364. The minimum current is reached at 0.06 and 0.062 T for Litton L-2087 and Varian VTC6364, respectively. At 0.16 T the beam current increases to 40.2 and 31.4% from the original value of the current for Litton L-2087 and Varian VTC6364, respectively. In contrast, for the case when the electron gun is perpendicular to the magnetic field, the electron beam is deflected from the axis of symmetry even at small values of the magnetic field. As the strength of the magnetic field increases, so does the beam deflection, leading to a sharp decrease of the primary beam current which vanishes at about 0.007 T for Litton L-2087 and at 0.006 T for Varian VTC6364, respectively. At zero external field, the beam rms emittance computed at beam waist is 1.54 and 1.29π-mm-mrad for Litton L-2087 and Varian VTC6364, respectively. For the in-line configuration, there are two particular values of the external field where the beam rms emittance reaches a minimum. Litton L-2087 rms emittance reaches a minimum of 0.72π and 2.01π-mm-mrad at 0.026 and 0.132 T, respectively. Varian VTC6364 rms emittance reaches a minimum of 0.34π and 0.35π-mm-mrad at 0.028 and 0.14 T, respectively. Beam radius dependence on the external field is shown for the in-line configuration for both electron guns. Conclusions: 3D space charge simulation of two electron guns, Litton L-2087 and Varian VTC6364, were performed for in-line and perpendicular external magnetic fields. A consistent behavior of Pierce guns in external magnetic fields was proven. 
For the in-line configuration, the primary beam current does not vanish but a large reduction of beam current (up to 77.1%) is observed at higher field strengths; the beam directionality remains unchanged. It was shown that for a perpendicular configuration the current vanishes due to beam bending under the action of the Lorentz force. For in-line configuration it was determined that the rms beam emittance reaches two minima for relatively high values of the external magnetic field. PMID:21859019
NASA Astrophysics Data System (ADS)
Jain, A.
2017-08-01
Computer-based methods can help in the discovery of leads and can potentially eliminate the chemical synthesis and screening of many irrelevant compounds, saving both time and cost. Molecular modeling systems are powerful tools for building, visualizing, analyzing and storing models of complex molecular structures that can help to interpret structure-activity relationships. The use of molecular mechanics and dynamics techniques and software in computer-aided drug design, together with statistical analysis, is a powerful tool for medicinal chemists to synthesize effective therapeutic drugs with minimal side effects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... a minimum, include the air transportation and electronics industries in the following North American... [table excerpt: Electronics Mechanic (11); Electronic Computer Mechanic (11); Television Station Mechanic (11)] (d) The...
Electron Optics Cannot Be Taught through Computation?
ERIC Educational Resources Information Center
van der Merwe, J. P.
1980-01-01
Describes how certain concepts basic to electron optics may be introduced to undergraduate physics students by calculating trajectories of charged particles through electrostatic fields which can be evaluated on minicomputers with a minimum of programing effort. (Author/SA)
The maximum rate of mammal evolution
NASA Astrophysics Data System (ADS)
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were about half as long (1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Giovannini, Federico; Savino, Giovanni; Pierini, Marco; Baldanzini, Niccolò
2013-10-01
In recent years the autonomous emergency brake (AEB) was introduced in the automotive field to mitigate injury severity in unavoidable collisions. A crucial element for the activation of the AEB is establishing when the obstacle is no longer avoidable by lateral evasive maneuvers (swerving). In the present paper a model to compute the minimum swerving distance needed by a powered two-wheeler (PTW) to avoid a collision with a fixed obstacle, named the last-second swerving model (Lsw), is proposed. The effectiveness of the model was investigated in an experimental campaign involving 12 volunteers riding a scooter equipped with a prototype autonomous emergency braking system, the motorcycle autonomous emergency braking (MAEB) system. The tests showed the performance of the model in evasive trajectory computation for different riding styles and fixed obstacles. Copyright © 2013 Elsevier Ltd. All rights reserved.
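A much simpler constant-lateral-acceleration estimate conveys the basic geometry behind such a model (this is a generic kinematic approximation, not the authors' Lsw formulation; the numbers are hypothetical):

```python
import math

def min_swerve_distance(speed_mps, lateral_offset_m, max_lateral_acc_mps2):
    """Simplified constant-lateral-acceleration estimate of the minimum distance
    needed to clear an obstacle by swerving: the time to shift laterally by y at
    acceleration a is sqrt(2*y/a), so the longitudinal distance covered is
    v * sqrt(2*y/a). Illustrative only, not the authors' Lsw model."""
    t = math.sqrt(2.0 * lateral_offset_m / max_lateral_acc_mps2)
    return speed_mps * t

# hypothetical example: 50 km/h scooter, 1 m lateral clearance, 4 m/s^2 lateral limit
print(round(min_swerve_distance(50 / 3.6, 1.0, 4.0), 1), "m")
```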
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are affected or not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC=0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
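The flavor of an exact search over a discrete energy landscape can be conveyed with a toy branch-and-bound over rotamer choices (an illustrative sketch, not the paper's method, which additionally uses arc consistency, tree decomposition and a real CPD energy function; the bound below is admissible only if pairwise energies are non-negative, which is assumed here).

```python
def min_energy_assignment(unary, pairwise):
    """Tiny branch-and-bound for discrete energy minimization. unary[i][r] is the
    self energy of rotamer r at position i; pairwise[(i, j)][r][s] (i < j) is the
    interaction energy, assumed >= 0 so the bound is a valid lower bound."""
    n = len(unary)
    best = [float('inf'), None]

    def bound(assigned, energy):
        # optimistic completion: best rotamer per remaining position against
        # the already-assigned positions only
        extra = 0.0
        for i in range(len(assigned), n):
            extra += min(
                unary[i][r] + sum(pairwise[(j, i)][assigned[j]][r]
                                  for j in range(len(assigned)))
                for r in range(len(unary[i])))
        return energy + extra

    def dfs(assigned, energy):
        if len(assigned) == n:
            if energy < best[0]:
                best[0], best[1] = energy, tuple(assigned)
            return
        if bound(assigned, energy) >= best[0]:
            return                      # prune: cannot beat the incumbent
        i = len(assigned)
        for r in range(len(unary[i])):
            e = unary[i][r] + sum(pairwise[(j, i)][assigned[j]][r]
                                  for j in range(i))
            dfs(assigned + [r], energy + e)

    dfs([], 0.0)
    return best

# hypothetical three-position, two-rotamer toy problem
unary = [[0.0, 1.0], [0.5, 0.0], [0.2, 0.3]]
pairwise = {(0, 1): [[0.0, 2.0], [1.0, 0.0]],
            (0, 2): [[0.1, 0.0], [0.0, 0.1]],
            (1, 2): [[0.0, 0.5], [0.5, 0.0]]}
print(min_energy_assignment(unary, pairwise))   # -> [minimum energy, (r0, r1, r2)]
```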
NASA Astrophysics Data System (ADS)
Farrokhabadi, Amin; Abadian, Naeimeh; Kanjouri, Faramarz; Abadyan, Mohamadreza
2014-05-01
The quantum vacuum fluctuation, i.e., the Casimir attraction, can induce mechanical instability in ultra-small devices. Previous researchers have focused on investigating this instability in structures with planar or rectangular cross-sections. However, to the best knowledge of the authors, no attention has been paid to modeling this phenomenon in structures made of nanowires with cylindrical geometry. In this regard, the present work is dedicated to simulating the Casimir force-induced instability of a freestanding nanoactuator and nanotweezers made of conductive nanowires with circular cross-sections. To compute the quantum vacuum fluctuations, two approaches, i.e., the proximity force approximation (for small separations) and the scattering theory approximation (for large separations), are considered. The Euler beam model is employed, in conjunction with the size-dependent modified couple stress continuum theory, to derive the governing equations of the nanostructures. The governing nonlinear equations are solved via three different approaches, i.e., a lumped parameter model, the modified variational iteration method (MVIM), and numerical solution. The deflection of the nanowire from zero to the final stable position is simulated as the Casimir force is increased from zero to its critical value. The detachment length and minimum gap, which prevent the instability, are computed for both nanosystems.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
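The classical cost function that the problem Hamiltonian encodes is simple to state and, for small n, to enumerate by brute force (illustrative sketch only; the paper analyzes the quantum dynamics, not this enumeration):

```python
import itertools

def partition_energies(numbers):
    """Cost of every spin configuration for Number Partitioning:
    E(s) = (sum_i a_i s_i)^2 with s_i = +/-1. This is the classical cost the
    problem Hamiltonian encodes; brute force is exponential in n."""
    energies = {}
    for signs in itertools.product((1, -1), repeat=len(numbers)):
        energies[signs] = sum(a * s for a, s in zip(numbers, signs)) ** 2
    return energies

e = partition_energies([8, 7, 6, 5, 4])
best = min(e, key=e.get)
print(best, e[best])   # a perfect partition has energy 0
```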
Development of software for computing forming information using a component based approach
NASA Astrophysics Data System (ADS)
Ko, Kwang Hee; Park, Jiing Seo; Kim, Jung; Kim, Young Bum; Shin, Jong Gye
2009-12-01
In the shipbuilding industry, manufacturing technology has advanced at an unprecedented pace over the last decade. As a result, many automatic systems for cutting, welding, etc. have been developed and employed in the manufacturing process, and accordingly productivity has increased drastically. Despite such improvement in manufacturing technology, however, the development of an automatic system for fabricating curved hull plates remains at an early stage, since the hardware and software for automating the curved hull fabrication process must be developed differently depending on the plate dimensions, forming methods and manufacturing processes of each shipyard. To deal with this problem, it is necessary to create a "plug-in" framework which can adopt various kinds of hardware and software to construct a fully automatic fabrication system. In this paper, a framework for automatic fabrication of curved hull plates is proposed, which consists of four components and related software. In particular, the software module for computing fabrication information is developed using the ooCBD development methodology, which can interface with other hardware and software with minimum effort. Examples of the proposed framework applied to medium and large shipyards are presented.
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE and degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity is effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain antenna diversity gain through space-time coding and achieve better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This study provides the important result that CDTD has the advantage of offering higher throughput than STTD: computer simulation results show that CDTD achieves higher throughput than STTD when ICI cancellation is introduced.
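For reference, a textbook single-antenna MMSE-FDE step looks like the sketch below (illustrative; the paper's scheme adds cyclic delay transmit diversity and frequency-domain ICI cancellation on top of such an equalizer, and the variable names are arbitrary):

```python
import numpy as np

def mmse_fde(received_block, channel_impulse_response, noise_var, signal_var=1.0):
    """Textbook single-antenna MMSE frequency-domain equalizer:
    W(k) = H*(k) / (|H(k)|^2 + sigma^2 / Es). Not the paper's CDTD/ICI scheme."""
    n = len(received_block)
    H = np.fft.fft(channel_impulse_response, n)   # channel frequency response
    R = np.fft.fft(received_block)                # received block in the frequency domain
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var / signal_var)
    return np.fft.ifft(W * R)                     # equalized time-domain block
```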
NASA Astrophysics Data System (ADS)
Sable, Peter; Helminiak, Nathaniel; Harstad, Eric; Gullerud, Arne; Hollenshead, Jeromy; Hertel, Eugene; Sandia National Laboratories Collaboration; Marquette University Collaboration
2017-06-01
With the increasing use of hydrocodes in modeling and system design, experimental benchmarking of software has never been more important. While this has been a large area of focus since the inception of computational design, comparisons with temperature data are sparse due to experimental limitations. A novel temperature measurement technique, magnetic diffusion analysis, has enabled the acquisition of in-flight temperature measurements of hypervelocity projectiles. Using this, an AC-14 bare shaped charge and an LX-14 EFP, both with copper linings, were simulated using CTH to benchmark temperature against experimental results. Particular attention was given to the slug temperature profiles after separation and to the effect of varying equation-of-state and strength models. Simulations are in agreement with experiment, attaining better than 2% error relative to observed shaped charge temperatures; this varied notably depending on the strength model used. Similar observations were made when simulating the EFP case, with a minimum 4% deviation. Jet structures compare well with radiographic images and are consistent with ALEGRA simulations previously conducted. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Single event upset vulnerability of selected 4K and 16K CMOS static RAM's
NASA Technical Reports Server (NTRS)
Kolasinski, W. A.; Koga, R.; Blake, J. B.; Brucker, G.; Pandya, P.; Petersen, E.; Price, W.
1982-01-01
Upset thresholds for bulk CMOS and CMOS/SOS RAMs were deduced after bombardment of the devices with 140 MeV Kr, 160 MeV Ar, and 33 MeV O beams in a cyclotron. The trials were performed to test prototype devices intended for space applications, to relate feature size to the critical upset charge, and to check the validity of computer simulation models. The tests were run on 4K and 16K memories with six-transistor cells, in either hardened or unhardened configurations. The upset cross sections were calculated to determine the critical charge for upset from the soft errors observed in the irradiated cells. Computer simulations of the critical charge were found to deviate from the experimentally observed variation of the critical charge as the square of the feature size. Modeling of series resistors decoupling the inverter pairs of memory cells showed that, above some minimum resistance value, a small increase in resistance produces a large increase in the critical charge; the experimental data showed this result to be of questionable validity unless the resistance value is made dependent on the maximum allowed read-write time.
NASA Astrophysics Data System (ADS)
Tian, Lei; Waller, Laura
2017-05-01
Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scattering. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minima, we further develop a novel physical model-based initialization technique that accounts for both geometric-optics and first-order phase effects. The result is robust reconstruction of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope were demonstrated.
Parametric-Studies and Data-Plotting Modules for the SOAP
NASA Technical Reports Server (NTRS)
2008-01-01
"Parametric Studies" and "Data Table Plot View" are the names of software modules in the Satellite Orbit Analysis Program (SOAP). Parametric Studies enables parameterization of as many as three satellite or ground-station attributes across a range of values and computes the average, minimum, and maximum of a specified metric, the revisit time, or 21 other functions at each point in the parameter space. This computation produces a one-, two-, or three-dimensional table of data representing statistical results across the parameter space. Inasmuch as the output of a parametric study in three dimensions can be a very large data set, visualization is a paramount means of discovering trends in the data (see figure). Data Table Plot View enables visualization of the data table created by Parametric Studies or by another data source: this module quickly generates a display of the data in the form of a rotatable three-dimensional-appearing plot, making it unnecessary to load the SOAP output data into a separate plotting program. The rotatable three-dimensionalappearing plot makes it easy to determine which points in the parameter space are most desirable. Both modules provide intuitive user interfaces for ease of use.
Fast autonomous holographic adaptive optics
NASA Astrophysics Data System (ADS)
Andersen, G.
2010-07-01
We have created a new adaptive optics system using a holographic modal wavefront sensing method capable of autonomous (computer-free) closed-loop control of a MEMS deformable mirror. A multiplexed hologram is recorded using the maximum and minimum actuator positions on the deformable mirror as the "modes". On reconstruction, an input beam is diffracted into pairs of focal spots; the ratio within a particular pair determines the absolute wavefront phase at the corresponding actuator location. The wavefront measurement is made using a fast, sensitive photo-detector array such as a multi-pixel photon counter. This information is then used to directly control each actuator in the MEMS DM without the need for any computer in the loop. We present initial results of a 32-actuator prototype device. We further demonstrate that, being an all-optical, parallel-processing scheme, the approach has a speed independent of the number of actuators. In fact, the limitations on speed are ultimately determined by the maximum driving speed of the DM actuators themselves. Finally, being modal in nature, the system is largely insensitive to both obscuration and scintillation. This should make it ideal for laser beam transmission or imaging under highly turbulent conditions.
Large Scale Geologic Controls on Hydraulic Stimulation
NASA Astrophysics Data System (ADS)
McLennan, J. D.; Bhide, R.
2014-12-01
When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic fracture simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representations of stresses, natural fractures, and stratigraphy) with decision-making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low-permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).
Minimalist Design of Allosterically Regulated Protein Catalysts.
Makhlynets, O V; Korendovych, I V
2016-01-01
Nature facilitates chemical transformations with exceptional selectivity and efficiency. Despite tremendous progress in understanding and predicting protein function, the overall problem of designing a protein catalyst for a given chemical transformation is far from solved. Over the years, many design techniques with various degrees of complexity and rational input have been developed. The minimalist approach to protein design, which focuses on the bare minimum requirements to achieve activity, presents several important advantages. By focusing on basic physicochemical properties and the strategic placement of only a few highly active residues, one can feasibly evaluate in silico a very large variety of possible catalysts. In more general terms, the minimalist approach looks for the mere possibility of catalysis, rather than trying to identify the most active catalyst possible. Even very basic designs that utilize a single residue introduced into nonenzymatic proteins or peptide bundles are surprisingly active. Because of the inherent simplicity of the minimalist approach, computational tools greatly enhance its efficiency. No complex calculations need to be set up, and even a beginner can master this technique in a very short time. Here, we present a step-by-step protocol for minimalist design of functional proteins using basic, easily available, and free computational tools. © 2016 Elsevier Inc. All rights reserved.
RNAmutants: a web server to explore the mutational landscape of RNA secondary structures
Waldispühl, Jerome; Devadas, Srinivas; Berger, Bonnie; Clote, Peter
2009-01-01
The history and mechanism of molecular evolution in DNA have been greatly elucidated by contributions from genetics, probability theory and bioinformatics; indeed, mathematical developments such as Kimura's neutral theory and Kingman's coalescent theory, and efficient software such as BLAST, ClustalW, Phylip, etc., provide the foundation for modern population genetics. In contrast to DNA, the function of most noncoding RNA depends on tertiary structure, experimentally known to be largely determined by secondary structure, for which dynamic programming can efficiently compute the minimum free energy secondary structure. For this reason, understanding the effect of pointwise mutations on RNA secondary structure could reveal fundamental properties of structural RNA molecules and improve our understanding of the molecular evolution of RNA. The web server RNAmutants provides several efficient tools to compute the ensemble of low-energy secondary structures for all k-mutants of a given RNA sequence, where k is bounded by a user-specified upper bound. As we have previously shown, these tools can be used to predict putative deleterious mutations and to analyze regulatory sequences from the hepatitis C and human immunodeficiency virus genomes. The web server is available at http://bioinformatics.bc.edu/clotelab/RNAmutants/, and downloadable binaries at http://rnamutants.csail.mit.edu/. PMID:19531740
Using NCLab-karel to improve computational thinking skill of junior high school students
NASA Astrophysics Data System (ADS)
Kusnendar, J.; Prabawa, H. W.
2018-05-01
Increasing human interaction with technology and the increasingly complex development of the digital world make computer science education an interesting theme to study. Previous studies on computer literacy and competency reveal that Indonesian teachers in general have fairly high computational skill, but their use of that skill is limited to a few applications. This leads to limited and minimal computer-related learning for students. On the other hand, computer science education is often considered unrelated to real-world solutions. This paper addresses the use of NCLab-Karel in shaping computational thinking in students; this computational thinking is believed to help students learn about technology. The implementation shows that Karel is able to increase student interest in studying computational material, especially algorithms. Observations made during the learning process also indicate the growth and development of a computational mindset in students.