Sample records for minimum computation time

  1. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of direct great-circle routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts.
For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
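The NOWR abstract gives timings but not the method itself. As a rough illustration of why lateral deviations from the direct route can pay off in a wind field, here is a toy flat-plane sketch (not NOWR — just a grid search over a single midpoint waypoint with a simple along-track wind model; all names and numbers are illustrative):

```python
import math

def travel_time(p, q, airspeed, wind):
    """Time to fly segment p->q: ground speed is airspeed plus the
    along-track component of the wind sampled at the segment midpoint."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    ux, uy = dx / dist, dy / dist
    wx, wy = wind(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    gs = airspeed + wx * ux + wy * uy   # crude tailwind/headwind model
    return dist / gs

def best_route(start, end, airspeed, wind, offsets):
    """Grid-search the lateral offset of one midpoint waypoint --
    a stand-in for the heading adjustments NOWR performs."""
    mid = ((start[0] + end[0]) / 2, (start[1] + end[1]) / 2)
    best = None
    for off in offsets:
        wp = (mid[0], mid[1] + off)
        t = travel_time(start, wp, airspeed, wind) + travel_time(wp, end, airspeed, wind)
        if best is None or t < best[0]:
            best = (t, off)
    return best

# Jet-stream-like band of tailwind lying north of the direct track.
wind = lambda p: (60.0 if p[1] > 50 else 0.0, 0.0)
t_direct, _ = best_route((0, 0), (1000, 0), 450.0, wind, [0.0])
t_opt, off = best_route((0, 0), (1000, 0), 450.0, wind, range(0, 301, 25))
```

Detouring into the tailwind band beats the direct route despite the longer path, which is the effect the abstract quantifies.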

  2. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  3. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  4. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  5. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  6. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  7. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since maximizing the KSE is analytical and generally easier than computing the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
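Both quantities in the abstract are easy to compute for a small chain. The sketch below (standard definitions, not the paper's code) uses the entropy rate h = -Σ_i π_i Σ_j P_ij log P_ij and the second-largest eigenvalue modulus as a mixing-speed proxy:

```python
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a distribution."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate h = -sum_i pi_i sum_j P_ij log P_ij."""
    pi = stationary(P)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log(P), 0.0)
    return -float(pi @ terms.sum(axis=1))

def slem(P):
    """Second-largest eigenvalue modulus; smaller means faster mixing."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return float(mods[1])

P_fast = np.array([[0.5, 0.5], [0.5, 0.5]])   # maximal-entropy 2-state chain
P_slow = np.array([[0.9, 0.1], [0.1, 0.9]])   # sticky, slow-mixing chain
```

For these two chains the higher-KSE chain is also the faster-mixing one, in line with the link the paper establishes.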

  8. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  9. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms are proposed for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm using convexity of attainable sets and Helly's theorem is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run-time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
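The paper's constructions are involved, but a heavily simplified stand-in conveys the idea of a minimax consensus point: for speed-bounded single integrators on a line, the minimum-time meeting point is the midpoint of the two extreme agents. This 1-D sketch is our own illustration, not the paper's algorithm:

```python
def min_time_consensus_1d(x, vmax):
    """Agents at positions x with speed bound vmax meet soonest at the
    midpoint of the two extreme agents (the Chebyshev centre on a line):
    that point minimises the farthest distance any agent must cover."""
    lo, hi = min(x), max(x)
    target = (lo + hi) / 2
    return target, (hi - lo) / (2 * vmax)

def control(xi, target, vmax):
    """Local time-optimal feedback: full speed toward the target, then stop.
    Uses only the agent's own state, as in the paper's final phase."""
    if xi < target:
        return vmax
    if xi > target:
        return -vmax
    return 0.0
```

Each agent needs only the precomputed target and its own position, mirroring the communication-free feedback phase described above.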

  10. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  11. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays Internet applications have become so complicated that mobile devices need more computing resources for shorter execution times, yet they are restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments using an offloading scheme. It is vital to MCC which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with resources matching the requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
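The abstract does not spell out MOTM/METC, but the two selection steps it describes reduce to picking the fastest link and the fastest machine. A minimal greedy sketch (interfaces and units are our assumptions, not the paper's):

```python
def min_offloading_time(data_mb, links):
    """MOTM-like greedy step: pick the link that moves the task's data
    soonest.  links: {link_name: bandwidth in MB/s} (illustrative units)."""
    return min((data_mb / bw, name) for name, bw in links.items())

def min_execution_time(cycles, machines):
    """METC-like greedy step: pick the VM/host that executes the task
    soonest.  machines: {machine_name: speed in cycles/s}."""
    return min((cycles / s, name) for name, s in machines.items())

# A 100 MB task over two candidate links, then 1e9 cycles on two VMs.
t_link, link = min_offloading_time(100, {"wifi": 50, "lte": 20})
t_exec, vm = min_execution_time(1e9, {"vm1": 2e9, "vm2": 5e8})
```

The real algorithms additionally condition these choices on application category and resource requirements; this sketch only shows the minimum-time selection at the core.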

  12. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows show absolute differences approximately equal to or smaller than those of the biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis.
The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for an individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970, when minimum flows were greater than the average between 1930 and 2002, and (2) some short-term station records fall mostly during dry periods, whereas others fall mostly during wet periods. A criterion-based sampling of the individual stations' record periods was taken to reduce the effects of statistics computed for entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and the areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 in which the average of the regional minimum flows is nearly equal to the average for 1930-2002 are determined to be representative of 1930-2002. Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow at ungaged stream locations.

  13. Determining collective barrier operation skew in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    2015-11-24

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes, until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, by the delayed node after a delay, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time, and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
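Once per-node barrier completion times have been measured by the delay-each-node-in-turn procedure, the skew computation itself is a one-liner. A toy sketch (the fake timing model is ours, not the patent's):

```python
def barrier_skew(completion_times):
    """Barrier operation skew: spread between the slowest and the fastest
    node's barrier completion time, as the record describes."""
    return max(completion_times) - min(completion_times)

def measure_skew(nodes, run_barrier):
    """Delay each node in turn, record the delayed node's barrier
    completion time, then take max - min over those measurements."""
    times = [run_barrier(delayed=n) for n in nodes]
    return barrier_skew(times)

# Toy stand-in for a real barrier: fixed per-node completion times.
fake_times = {"n0": 1.10, "n1": 1.50, "n2": 1.20}
skew = measure_skew(fake_times, lambda delayed: fake_times[delayed])
```

In a real MPI-style setting `run_barrier` would enter the collective and time the exit signal; here it just looks up a canned value.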

  14. Determining collective barrier operation skew in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraj, Daniel A.

    Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes, until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, by the delayed node after a delay, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time, and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.

  15. 36 CFR 1120.52 - Computerized records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... organizations and upon the particular types of computer and associated equipment and the amounts of time on such... from the computer which permits copying the printout, the material will be made available at the per... information from computerized records frequently involves a minimum computer time cost of approximately $100...

  16. 36 CFR 1120.52 - Computerized records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... organizations and upon the particular types of computer and associated equipment and the amounts of time on such... from the computer which permits copying the printout, the material will be made available at the per... information from computerized records frequently involves a minimum computer time cost of approximately $100...

  17. 12 CFR 1750.4 - Minimum capital requirement computation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... amounts: (1) 2.50 percent times the aggregate on-balance sheet assets of the Enterprise; (2) 0.45 percent times the unpaid principal balance of mortgage-backed securities and substantially equivalent... last day of the quarter just ended (or the date for which the minimum capital report is filed, if...

  18. Temporal modulation transfer functions in auditory receptor fibres of the locust ( Locusta migratoria L.).

    PubMed

    Prinz, P; Ronacher, B

    2002-08-01

    The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation to the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
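The abstract reports integration times derived from TMTF cut-off frequencies without giving the conversion. A sketch under the assumption of a first-order low-pass model (the 1/(2πfc) form is our assumption, not necessarily the paper's), plus the temperature ratio implied by the 1.7 ms to 0.95 ms drop over 10 degrees C:

```python
import math

def min_integration_time(f_cutoff_hz):
    """Assumed first-order low-pass relation between an upper cut-off
    frequency and an integration time: t ~ 1/(2*pi*f_c).  Illustrative
    model, not necessarily the formula used in the study."""
    return 1.0 / (2.0 * math.pi * f_cutoff_hz)

def q10(t_cold, t_warm, delta_t_deg=10.0):
    """Q10 temperature coefficient for a timescale that shortens from
    t_cold to t_warm over a temperature rise of delta_t_deg."""
    return (t_cold / t_warm) ** (10.0 / delta_t_deg)

# The reported 1.7 ms (20 C) -> 0.95 ms (30 C) drop gives Q10 ~ 1.8.
q = q10(1.7e-3, 0.95e-3)
```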

  19. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
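The abstract does not give the solver, so here is a minimal positivity-constrained dwell-time sketch via projected gradient descent: a stand-in for the proposed algorithm, with an illustrative removal matrix and targets (a dwell floor models the machine-dynamics limitation that dwell cannot go negative):

```python
import numpy as np

def positive_dwell_time(A, target, t_floor=0.0, iters=5000, lr=None):
    """Solve A @ t ~= target for dwell times t >= t_floor by projected
    gradient descent on 0.5*||A t - target||^2.
    A[i, j]: material removed at surface point i per unit dwell at tool
    position j (toy model, not the paper's MRF removal function)."""
    A = np.asarray(A, float)
    r = np.asarray(target, float)
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the spectral norm
    t = np.full(A.shape[1], max(t_floor, 0.0))
    for _ in range(iters):
        g = A.T @ (A @ t - r)                  # least-squares gradient
        t = np.maximum(t - lr * g, t_floor)    # project onto t >= t_floor
    return t

A = np.eye(2)                                  # decoupled toy removal matrix
t_ok = positive_dwell_time(A, [2.0, 3.0])      # feasible target
t_clip = positive_dwell_time(A, [2.0, -1.0])   # infeasible entry clips to 0
```

Where the target is unreachable without negative dwell, the constraint leaves extra material behind, which is the trade-off the paper's equal-extra-removal model manages explicitly.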

  20. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
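Of the six heuristics compared, Min-min is easy to state compactly: repeatedly schedule the task whose earliest completion time is smallest. A reference sketch over an expected-time-to-compute (ETC) matrix (our own implementation with illustrative data, not the paper's code):

```python
def min_min(etc):
    """Min-min heuristic on etc[task][machine] (expected execution times):
    at each step, pick the (task, machine) pair with the smallest
    completion time given current machine ready times."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines              # machine available times
    schedule = {}
    unassigned = set(range(n_tasks))
    while unassigned:
        best = None
        for t in unassigned:
            for m in range(n_machines):
                ct = ready[m] + etc[t][m]   # completion time of t on m
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        ready[m] = ct
        schedule[t] = m
        unassigned.remove(t)
    return schedule, max(ready)             # assignment and makespan
```

MCT and MET differ only in the inner selection rule (earliest completion for the next task in line, versus smallest raw execution time), so the same skeleton covers them with small changes.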

  21. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  22. Singular perturbation techniques for real time aircraft trajectory optimization and control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1982-01-01

    The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.

  23. 12 CFR 1750.4 - Minimum capital requirement computation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... amounts: (1) 2.50 percent times the aggregate on-balance sheet assets of the Enterprise; (2) 0.45 percent times the unpaid principal balance of mortgage-backed securities and substantially equivalent... current market value of posted qualifying collateral, computed in accordance with appendix A to this...

  24. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT) and ultrasonic and eddy-current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared L2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational cost.
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the nonlinear inversion. (Abstract shortened by UMI.)
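The minimum-support idea can be seen in miniature: the functional Σ x_i²/(x_i²+ε) approximately counts nonzero model cells, so among models fitting the data equally well it favors compact ones, where the squared L2 norm favors smeared ones. Toy numbers below, not the dissertation's formulation:

```python
import numpy as np

def minimum_support(x, eps=1e-6):
    """Smooth 'support' of a model: sum x_i^2/(x_i^2 + eps), which
    approaches the count of nonzero entries as eps -> 0."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2 / (x ** 2 + eps)))

# Two models fitting the same datum A @ x = b with A = [[1, 1, 1, 1]], b = [1]:
compact = [1.0, 0.0, 0.0, 0.0]      # one cell, like a void or crack
spread = [0.25, 0.25, 0.25, 0.25]   # the minimum-L2 solution, smeared out
```

The squared L2 norm prefers `spread` (0.25 vs 1.0), while the support functional prefers `compact` (about 1 vs about 4), which is exactly the bias reversal the abstract argues for.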

  25. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1982-01-01

    The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.

  26. Apparatus and method for closed-loop control of reactor power in minimum time

    DOEpatents

    Bernard, Jr., John A.

    1988-11-01

    Closed-loop control law for altering the power level of nuclear reactors in a safe manner, without overshoot, and in minimum time. Apparatus is provided for moving a fast-acting control element such as a control rod or a control drum for altering the nuclear reactor power level. A computer computes at short time intervals either the function

    ρ̇ = (β − ρ)ω − λe′ρ − Σ βi(λi − λe′) + l*ω̇ + l*[ω² + λe′ω]

    or the function

    ρ̇ = (β − ρ)ω − λeρ − (λ̇e/λe)(β − ρ) + l*ω̇ + l*[ω² + λeω − (λ̇e/λe)ω]

    These functions each specify the rate of change of reactivity that is necessary to achieve a specified rate of change of reactor power. The direction and speed of motion of the control element are altered so as to provide the rate of reactivity change calculated using either or both of these functions, thereby resulting in the attainment of a new power level without overshoot and in minimum time. These functions are computed at intervals of approximately 0.01-1.0 seconds, depending on the specific application.
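The first of the patent's two control-law functions can be transcribed directly as code. Every numeric value in the example call below is illustrative, not taken from the patent:

```python
def reactivity_rate(rho, omega, omega_dot, beta, betas, lams, lam_e, l_star):
    """Required rate of change of reactivity for a demanded inverse
    period omega and its derivative omega_dot (first function of the
    patent record).  betas/lams: delayed-neutron group fractions and
    decay constants; lam_e: effective decay constant; l_star: prompt
    neutron lifetime.  All example values used below are hypothetical."""
    s = sum(b * (l - lam_e) for b, l in zip(betas, lams))
    return ((beta - rho) * omega - lam_e * rho - s
            + l_star * omega_dot + l_star * (omega ** 2 + lam_e * omega))

# Illustrative two-group call at zero power change demand (omega = 0):
rdot = reactivity_rate(rho=0.0, omega=0.0, omega_dot=0.0, beta=0.007,
                       betas=[0.003, 0.004], lams=[0.1, 1.0],
                       lam_e=0.4, l_star=1e-4)
```

At ω = ω̇ = ρ = 0 only the summation term survives, so the sign of `rdot` just reflects how the group decay constants sit relative to λe.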

  27. 20 CFR 404.261 - Computing your special minimum primary insurance amount.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your special minimum primary..., SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary Insurance Amounts § 404.261 Computing your special minimum primary insurance amount. (a) Years of coverage...

  28. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
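The minimum-variance technique referred to is standard in space physics: the phase-plane normal is taken as the eigenvector of the field covariance matrix with the smallest eigenvalue. A sketch with synthetic data (our own illustration of the method, not the report's code):

```python
import numpy as np

def minimum_variance_normal(B):
    """Minimum-variance analysis: the normal to an IMF phase plane is the
    eigenvector of the field covariance matrix with the smallest
    eigenvalue.  B: (n_samples, 3) array of magnetic-field vectors."""
    B = np.asarray(B, float)
    M = np.cov(B, rowvar=False)      # 3x3 covariance of the components
    w, v = np.linalg.eigh(M)         # eigenvalues in ascending order
    n = v[:, 0]                      # minimum-variance direction
    return n / np.linalg.norm(n)

# Synthetic field varying only in x and y: the normal should be +/- z.
B = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0], [2, 1, 0]],
             dtype=float)
n = minimum_variance_normal(B)
```

Given the normal and the solar wind velocity, the propagation delay to a downstream point follows from the plane's transit geometry, which is the quantity the report predicts from a single L1 satellite.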

  29. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight edge-connected subset containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve head-to-tail path search problems, rarely for problems whose solutions are multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
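For contrast with the DNA approach, the MST problem is solvable conventionally in near-linear time. A standard Kruskal reference implementation (not the paper's method):

```python
def kruskal_mst(n, edges):
    """Conventional (silicon, not DNA) Kruskal's algorithm: scan edges by
    increasing weight and keep each edge that joins two different
    components.  edges: iterable of (weight, u, v), vertices 0..n-1."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            chosen.append((u, v))
            total += w
    return total, chosen
```

Usage: on a 4-vertex graph with edges (1,0,1), (2,1,2), (3,2,3), (4,0,3), (5,0,2), the MST keeps the three cheapest non-cycle edges for total weight 6.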

  30. Evaluating Computer Integration in the Elementary School: A Step-by-Step Guide.

    ERIC Educational Resources Information Center

    Mowe, Richard

    This handbook was written to enable elementary school educators to conduct formative evaluations of their computer integrated instruction (CII) programs in minimum time. CII is defined as the use of computer software, such as word processing, database, and graphics programs, to help students solve problems or work more productively. The first…

  31. Theoretical study of network design methodologies for the aerial relay system. [energy consumption and air traffic control]

    NASA Technical Reports Server (NTRS)

    Rivera, J. M.; Simpson, R. W.

    1980-01-01

The aerial relay system network design problem is discussed. A generalized branch-and-bound algorithm is developed that can accommodate a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical only for small networks, because its computation time grows exponentially with the number of variables.

  12. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off-line and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
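For intuition, the steady-state Kalman gain the abstract refers to can be obtained by iterating the Riccati recursion to a fixed point; a scalar sketch under an assumed model x[k+1] = a·x[k] + w (process variance q), y[k] = c·x[k] + v (measurement variance r). Real AO systems are large matrix-valued versions of this, which is exactly why the paper's structured approximation is needed:

```python
# Hedged sketch: steady-state Kalman gain for a scalar system by
# fixed-point iteration of the Riccati recursion (stands in for the
# algebraic Riccati solve described in the abstract).

def steady_state_kalman_gain(a, c, q, r, iters=500):
    p = q  # prior state-error variance, iterated to the Riccati fixed point
    for _ in range(iters):
        k = p * c / (c * c * p + r)        # Kalman gain at this iterate
        p = a * a * (1.0 - k * c) * p + q  # Riccati recursion for prior covariance
    return p * c / (c * c * p + r)
```

With a = c = q = r = 1 the fixed point is P = (1+√5)/2, so the gain converges to (√5-1)/2 ≈ 0.618, the golden-ratio value familiar from scalar Kalman filtering.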

  13. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
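The Pareto-optimality notion defined in parentheses can be made concrete with a small filter; a sketch assuming each candidate trajectory has been reduced to a (flight time, fuel) pair, both to be minimized:

```python
# A candidate dominates another if it is at least as good in both
# objectives and strictly better in one; the Pareto front is the set of
# undominated candidates. (Illustration only; the paper finds the front
# by GA / simulated-annealing search, not enumeration.)

def pareto_front(candidates):
    """candidates: list of (time, fuel) tuples; both minimized."""
    front = []
    for c in candidates:
        dominated = any(o != c and o[0] <= c[0] and o[1] <= c[1]
                        for o in candidates)
        if not dominated:
            front.append(c)
    return front
```

For instance, a trajectory taking 3 time units and 4 fuel units is dominated by one taking 2 and 3, so it drops out of the front.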

  14. 20 CFR 704.103 - Removal of certain minimums when computing or paying compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Removal of certain minimums when computing or... PROVISIONS FOR LHWCA EXTENSIONS Defense Base Act § 704.103 Removal of certain minimums when computing or... benefits are to be computed under section 9 of the LHWCA, 33 U.S.C. 909, shall not apply in computing...

  15. Parallel computation of GA search for the artery shape determinants with CFD

    NASA Astrophysics Data System (ADS)

    Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.

    2010-06-01

We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and a genetic algorithm (GA). The most difficult problem in doing so is reducing the turn-around time of GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master", which quickly finds an available job queue and dispatches jobs to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days on one core, was reduced to less than two days on the RIKEN supercomputer system, RICC, with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and the sum of the circumference for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb exhibits both the minimum local mean WSS and the minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.
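The level-2 pattern above, a master that dispatches one job per GA individual to whichever worker is free, maps onto a standard work-queue executor; a minimal sketch in which a thread pool and a toy objective stand in for the actual 3D unsteady CFD runs:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate_population(objective, population, max_workers=4):
    """Dispatch one 'CFD job' per GA individual to a worker pool and
    collect results as they finish, so no worker idles while jobs remain
    (a sketch of the abstract's master/dispatcher idea; the real jobs
    are long-running parallel CFD computations)."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(objective, ind): ind for ind in population}
        for fut in as_completed(futures):        # results in completion order
            results[futures[fut]] = fut.result()
    return results
```

For CPU-bound CFD-like workloads, a process pool (or MPI job scheduler, as on RICC) would replace the thread pool, but the dispatch logic is the same.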

  16. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  17. A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence

    PubMed Central

    Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M

    2014-01-01

Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult, if not impossible, to compute in practice. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no-net-loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, on the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in the literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
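The core discounting argument can be illustrated with a one-line model (a deliberate simplification of mine, not the paper's full method): if offset gains arrive only after a delay of t years and future value is discounted at rate r, no net loss forces the multiplier up by a factor of (1+r)^t:

```python
def min_offset_multiplier(discount_rate, delay_years,
                          gain_per_unit=1.0, loss_per_unit=1.0):
    """Minimum area multiplier m so that discounted future gains offset
    an immediate loss:  m * gain * (1+r)**(-t) >= loss
    =>  m = (loss / gain) * (1+r)**t.
    (Toy model; the paper additionally treats additionality, permanence
    and gradual emergence of gains.)"""
    return (loss_per_unit / gain_per_unit) * (1.0 + discount_rate) ** delay_years
```

Even modest parameters already produce large multipliers: at a 3% discount rate and a 40-year restoration delay the minimum multiplier exceeds 3, and at 5% over 80 years it is in the dozens, consistent with the abstract's claim.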

  18. 20 CFR 229.41 - When a spouse can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE When... annuity rate under the overall minimum. A spouse's inclusion in the computation of the overall minimum...

  19. An Analysis of a Puff Dispersion Model for a Coastal Region.

    DTIC Science & Technology

    1982-06-01

grid is determined by computing their movement for a finite time step using a measured wind field. The growth and buoyancy of the puffs are computed...advection step. The grid concentrations can be allowed to accumulate or simply be updated with the latest instantaneous value. A minimum grid concentration

  20. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  1. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  2. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  3. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  4. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    NASA Astrophysics Data System (ADS)

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance under shift, rotation, and noise was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph incurs an expensive algorithmic cost. In this article we discuss ways to reduce computing time. An alternative solution based on image-compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the Discrete Cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results on the GREC2003 database show that the proposed method has good discrimination power and real robustness to noise, with acceptable computing time.
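The compression idea can be sketched with the Discrete Cosine Transform itself: project a signal onto the DCT basis and keep only the low-frequency coefficients, so matching operates in a compact space rather than the image space. A pure-Python 1D version for illustration (the paper uses a 2D block DCT on images):

```python
import math

def dct2_coeffs(x):
    """DCT-II coefficients of a sequence x (unnormalized, textbook form):
    X[k] = sum_i x[i] * cos(pi * k * (2i+1) / (2n))."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def compress(x, keep):
    """Compact representation: truncate to the `keep` lowest frequencies."""
    return dct2_coeffs(x)[:keep]
```

A constant signal concentrates all its energy in the k = 0 coefficient, which is why smooth shapes survive aggressive truncation: most of their information sits in a few low-frequency terms.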

  5. 20 CFR 225.15 - Overall Minimum PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Security Act based on combined railroad and social security earnings. The Overall Minimum PIA is used in computing the social security overall minimum guaranty amount. The overall minimum guaranty rate annuity... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Employee, Spouse and Divorced Spouse Annuities § 225...

  6. COMAP: a new computational interpretation of human movement planning level based on coordinated minimum angle jerk policies and six universal movement elements.

    PubMed

    Emadi Andani, Mehran; Bahrami, Fariba

    2012-10-01

Flash and Hogan (1985) suggested that the CNS employs a minimum jerk strategy when planning any given movement. Later, Nakano et al. (1999) showed that minimum angle jerk predicts the actual arm trajectory curvature better than the minimum jerk model. Friedman and Flash (2009) confirmed this claim. Besides the behavioral support that we will discuss, we will show that this model allows simplicity in planning any given movement. In particular, we prove mathematically that each movement satisfying the minimum joint angle jerk condition is reproducible by a linear combination of six functions. These functions are calculated independently of the type of movement and are normalized in the time domain. Hence, we call these six universal functions the Movement Elements (ME). We also show that the kinematic information at the beginning and end of the movement determines the coefficients of the linear combination. On the other hand, in analyzing recorded data from sit-to-stand (STS) transfer, arm-reaching movement (ARM), and gait, we observed that the minimum joint angle jerk condition is satisfied only during different successive phases of these movements and not for the entire movement. Driven by these observations, we assumed that any given ballistic movement may be decomposed into several successive non-overlapping phases, such that the minimum joint angle jerk condition is satisfied within each phase. At the boundaries of each phase the angular acceleration of each joint attains its extremum (zero third derivative). As a consequence, joint angles in each phase are linear combinations of the introduced MEs, with coefficients given by the values of the joint kinematics at the boundaries of that phase. Finally, we conclude that these observations may constitute the basis of a computational interpretation of the strategy used by the central nervous system (CNS) for motor planning. We call this possible interpretation the "Coordinated Minimum Angle jerk Policy" or COMAP. Based on this policy, the function of the CNS in generating the desired pattern of any given task (such as STS, ARM, or gait) can be described computationally using three factors: (1) the kinematics of the motor system at given body states, i.e., at certain movement events/instances, (2) the time length of each phase, and (3) the proposed MEs. From a computational point of view, this model significantly simplifies the processes of movement planning as well as feature abstraction for storing the characterizing information of any given movement in memory. Copyright © 2012 Elsevier B.V. All rights reserved.
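The minimum-jerk idea underlying this line of work can be illustrated with the classic rest-to-rest profile of Flash and Hogan (1985), which has zero velocity and acceleration at both endpoints (the paper's own contribution, the six MEs for joint-angle jerk, is not reproduced here):

```python
def minimum_jerk(x0, xf, T, t):
    """Rest-to-rest minimum-jerk position at time t over duration T
    (Flash & Hogan 1985): x(t) = x0 + (xf-x0)*(10*tau^3 - 15*tau^4 + 6*tau^5),
    tau = t/T. Velocity and acceleration vanish at t = 0 and t = T."""
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

The quintic is the unique polynomial satisfying the six boundary conditions (position, velocity, acceleration at both ends), which is the same counting argument behind the six Movement Elements in the abstract.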

  7. A 20-year period of orthotopic liver transplantation activity in a single center: a time series analysis performed using the R Statistical Software.

    PubMed

    Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U

    2009-05-01

    In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007 calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006 helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
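The Holt-Winters forecast used above can be sketched in its simplest, non-seasonal form (Holt's linear smoothing of level and trend); the study used R's full `HoltWinters` with a seasonal component, so this is only the core of the method:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear exponential smoothing: maintain a smoothed level and
    trend, then extrapolate `horizon` steps ahead. (Level + trend only;
    the study's R fit additionally models monthly seasonality.)"""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)   # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
    return level + horizon * trend
```

On a perfectly linear series the smoothed trend locks onto the true slope, so the one-step forecast is exact; on real transplant counts the forecast-minus-actual residuals are what the authors inspected month by month.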

  8. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  9. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
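For very small instances, the QUBO minimum that the bounds C(2), C(3), … approximate from below can simply be enumerated; a brute-force reference sketch (exponential in n, which is exactly why polynomial-time lower bounds matter):

```python
from itertools import product

def qubo_minimum(Q):
    """Minimize x^T Q x over x in {0,1}^n by enumeration.
    Q is an n x n matrix (list of lists); returns (min_value, argmin)."""
    n = len(Q)
    best = None
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if best is None or val < best[0]:
            best = (val, x)
    return best
```

Because x[i] is 0/1, diagonal entries Q[i][i] act as linear terms (x[i]² = x[i]); the flow-based bounds in the paper exploit this same encoding.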

  10. Energy consumption program: A computer model simulating energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

The JPL energy consumption computer program, developed as a useful tool in the ongoing building-modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin of those read from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimal information from the user and eliminating many time-consuming internal computational loops. Many unique features not found in any other program were added to handle two-level electronics control rooms.

  11. Connectivity ranking of heterogeneous random conductivity models

    NASA Astrophysics Data System (ADS)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields, and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
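The graph-based measure can be sketched as a shortest-path problem: weight each grid cell by its resistance 1/K and run Dijkstra between inlet and outlet. This is an illustrative simplification; the paper's exact edge weighting and graph construction may differ:

```python
import heapq

def min_hydraulic_resistance(K, src, dst):
    """Minimum hydraulic resistance across a 2D conductivity grid K,
    taking cell resistance as 1/K and 4-neighbour moves (Dijkstra sketch
    of the graph-based connectivity measure)."""
    rows, cols = len(K), len(K[0])
    dist = {src: 1.0 / K[src[0]][src[1]]}      # source cell's own resistance
    heap = [(dist[src], src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                            # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 / K[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")
```

In a Monte-Carlo setting, one such shortest-path solve per realization of the random K-field yields the distribution (and hence the uncertainty) of the minimum hydraulic resistance.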

  12. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

With the development trends of modern satellites towards both macro- and micro-scale, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and as advances in computer technology enable new optimal algorithms to be used, a powerful tool for solving the problem is provided. Many papers about attitude adjustment have been published, in which the spacecraft is modeled as a rigid body with flexible parts or as a gyrostat-type system; the objective function usually includes minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum-exchange devices, performed by a sequential single-axis slewing strategy. Recently, the simultaneous three-axis minimum-time maneuver (reorientation) problem has been studied by many researchers. It is important to study the minimum-time maneuver of a rigid spacecraft within onboard power limits, both for potential space applications, such as surveying multiple targets in space, and for its academic value. The minimum-time maneuver of a rigid spacecraft is a basic problem because solutions for maneuvering flexible spacecraft are built on the solution to the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible, and the switching line passing through the state-space origin is parabolic; for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. As modern research has shown, Euler-axis rotations are in general only quasi-time-optimal.
On the basis of the minimum principle, the problem of reorienting an inertially symmetric spacecraft from an initial state of rest to a final state of rest with a time cost function is addressed. The solution proceeds as follows. First, the necessary conditions for optimality are derived from the minimum principle; they yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case the solution is a bang-bang maneuver, with saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense: physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are used in the singular case as well. Second, since the controls are always at their maximum, the key problem is to determine the switch points; the original problem thus reduces to finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust global search method, is used to determine the switching structure without the gyroscopic coupling; the traditional GA is improved upon in this research. The homotopy method for solving nonlinear algebraic equations rests on rigorous topological continuum theory; following the homotopy idea, relaxation parameters are introduced and the switch points are found with simulated annealing. Computer simulation results for a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.
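The single-axis bang-bang case described above admits a closed form for a rest-to-rest slew: with maximum angular acceleration u, apply full positive torque to the half-angle point and full negative torque thereafter, giving total time T = 2·sqrt(θ/u). A sketch:

```python
import math

def bang_bang_slew(theta, u_max):
    """Single-axis rest-to-rest time-optimal slew through angle theta with
    |angular acceleration| <= u_max: saturated positive torque until the
    half-angle switch point, then saturated negative torque.
    Accelerating for T/2 covers 0.5*u*(T/2)^2 = theta/2, so T = 2*sqrt(theta/u).
    Returns (total_time, switch_time)."""
    T = 2.0 * math.sqrt(theta / u_max)
    return T, T / 2.0
```

The multi-axis problem has no such closed form because of gyroscopic coupling, which is why the abstract resorts to GA and simulated-annealing searches over the switching times.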

  13. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs that solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to three numerical examples for each type of problem and shows efficiency and accuracy comparable to an existing optimality criteria method that computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding the output of one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance than the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org. PMID:26678849

  14. Numerical Computation of Homogeneous Slope Stability

    PubMed Central

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study used the limit equilibrium method to derive expressions for the overall and partial factors of safety. The study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) to it. In simple slope examples, the computational results using the EM and PSO were close to those obtained with other methods. Compared to the EM, the PSO had a small computational error and a significantly shorter computation time; as a result, the PSO could precisely calculate the slope FOS with high efficiency. The multistage slope example indicated that this slope had two potential slip surfaces, with factors of safety of 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS). PMID:25784927

  15. Numerical computation of homogeneous slope stability.

    PubMed

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study used the limit equilibrium method to derive expressions for the overall and partial factors of safety. The study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) to it. In simple slope examples, the computational results using the EM and PSO were close to those obtained with other methods. Compared to the EM, the PSO had a small computational error and a significantly shorter computation time; as a result, the PSO could precisely calculate the slope FOS with high efficiency. The multistage slope example indicated that this slope had two potential slip surfaces, with factors of safety of 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
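The PSO used in the study can be sketched in one dimension; here the slope-stability objective is replaced by a simple quadratic with a known minimum at x = 3, purely to show the mechanics (inertia, cognitive, and social terms):

```python
import random

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1D particle swarm optimization: each particle is pulled
    toward its personal best and the swarm's global best, with inertia w.
    (Sketch only; the study's PSO searches over slip-surface parameters.)"""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)
    gbest = min(xs, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])   # cognitive pull
                     + c2 * rng.random() * (gbest - xs[i]))     # social pull
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))             # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

Unlike the exhaustive method, PSO evaluates only n_particles × iters candidate points, which is the source of the shorter computation time reported in the abstract.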

  16. Using Testbanking To Implement Classroom Management/Extension through the Use of Computers.

    ERIC Educational Resources Information Center

    Thommen, John D.

    Testbanking provides teachers with an effective, low-cost, time-saving opportunity to improve the testing aspect of their classes. Testbanking, which involves the use of a testbank program and a computer, allows teachers to develop and generate tests and test-forms with a minimum of effort. Teachers who test using true and false, multiple choice,…

  17. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a large period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86), followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 predicted 58 procedures for 2008, while 52 were actually performed. The time series approach may be helpful to establish a minimum volume/y at the single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  18. Driver face tracking using semantics-based feature of eyes on single FPGA

    NASA Astrophysics Data System (ADS)

    Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming

    2017-06-01

    Tracking the driver's face is essential for driving safety control. Systems of this kind are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design challenge concerns not only the detection rate but also damage to components under rigorous environments with vibration, heat, and humidity. A feasible strategy to counteract such damage is to integrate the entire system onto a single chip in order to achieve minimum installation dimension, weight, power consumption, and exposure to air. Meanwhile, an extraordinary methodology is also indispensable to overcome the dilemma of low computing capability versus real-time performance on a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA, while real-time performance is guaranteed at the same time. Our experimental results indicate that the proposed face tracking system is viable and promising for future smart car designs.

  19. An analysis of potential water availability from the Atwood, Leesville, and Tappan Lakes in the Muskingum River Watershed, Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2013-01-01

    This report presents the results of a study to assess potential water availability from the Atwood, Leesville, and Tappan Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for the Atwood Lake to 73 calendar years for the Leesville and Tappan Lakes. The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. 
In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October and February. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
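
    The daily allocation rule described above (water in excess of the target minimum flow-by is available for withdrawal, up to the pumping capacity) can be sketched as follows; the function name and sample outflows are illustrative, not the study's data.

```python
def split_outflow(outflow_mgd, target_flow_by_mgd, pump_capacity_mgd):
    """Split a day's lake outflow into a potential withdrawal and the
    remaining flow-by: only water above the target minimum flow-by is
    available, and withdrawal is capped at the pumping capacity."""
    surplus = max(0.0, outflow_mgd - target_flow_by_mgd)
    withdrawal = min(pump_capacity_mgd, surplus)
    return withdrawal, outflow_mgd - withdrawal

# Hypothetical daily outflows in million gallons per day (MGD):
for q in [0.5, 2.0, 10.0]:
    w, fb = split_outflow(q, target_flow_by_mgd=1.0, pump_capacity_mgd=3.0)
    print(q, w, fb)
```

    Note how a low-flow day yields no withdrawal at all, which is why the report finds the lower percentiles of withdrawal most sensitive to the flow-by target.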

  20. 20 CFR 404.260 - Special minimum primary insurance amounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... 404.260 Section 404.260 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary... compute your primary insurance amount, if the special minimum primary insurance amount described in § 404...

  1. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
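
    The max-flow/min-cut reduction at the heart of the algorithm can be illustrated on a toy network. The sketch below is a generic Edmonds-Karp implementation, not the authors' code; the node names and capacities are hypothetical stand-ins for interaction energies.

```python
from collections import defaultdict, deque

def max_flow(cap, source, sink):
    """Edmonds-Karp max-flow; by the max-flow/min-cut theorem the value
    returned equals the capacity of a minimum source-sink cut."""
    flow = 0
    residual = defaultdict(lambda: defaultdict(int))
    for u in cap:
        for v, c in cap[u].items():
            residual[u][v] += c
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in list(residual[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Recover the path, find its bottleneck, and push flow along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy network; edge weights stand in for (hypothetical) interaction energies.
cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
print(max_flow(cap, 's', 't'))
```

    In the paper's construction, the two sides of the minimum cut encode the optimal assignment of discrete states; here the returned flow value (5) equals the cut capacity.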

  2. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  3. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    PubMed Central

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  4. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  5. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
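
    For illustration, a generic scalar recursive least-squares estimator in the same on-line, minimum-variance spirit is sketched below. This is a textbook form, not the paper's filter (which also handles multiplicative noise), and the data and parameter names are hypothetical.

```python
def recursive_least_squares(pairs, theta0=0.0, p0=100.0, noise_var=1.0):
    """On-line estimation of theta in y = theta * x + noise.
    p tracks the estimate's error covariance; each sample updates the
    estimate by a gain-weighted innovation, as in standard RLS."""
    theta, p = theta0, p0
    for x, y in pairs:
        k = p * x / (noise_var + x * p * x)   # gain
        theta += k * (y - theta * x)          # innovation update
        p *= (1 - k * x)                      # covariance update
    return theta

data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]  # noiseless y = 2x
print(recursive_least_squares(data))
```

    Each new sample refines the estimate without reprocessing old data, which is the property that makes such identifiers attractive for on-line use.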

  6. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate... second month after the month the child's disability ends, if the child is 18 years old or older, and not...

  7. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
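
    The decision statistic can be illustrated under a simple assumption. The sketch below takes degradation to grow linearly with the number of steps since the last remapping (a hypothetical cost model, not the paper's) and searches for the fixed interval n that minimizes W(n).

```python
def best_remap_interval(step_penalty, remap_cost, horizon=200):
    """Find the n minimizing W(n) = (accumulated degradation + remap
    cost) / n, where degradation grows by `step_penalty` per step since
    the last remap and each remap costs `remap_cost` (both hypothetical)."""
    def W(n):
        degradation = step_penalty * n * (n - 1) / 2  # sum of k*penalty, k < n
        return (degradation + remap_cost) / n
    return min(range(1, horizon + 1), key=W)

print(best_remap_interval(step_penalty=0.1, remap_cost=20.0))
```

    The trade-off is visible in the formula: remapping too often amortizes the remap cost poorly, remapping too rarely lets degradation accumulate, and W(n) has a single interior minimum, consistent with the result stated above.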

  8. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    PubMed

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  9. Computational analysis of particle reinforced viscoelastic polymer nanocomposites - statistical study of representative volume element

    NASA Astrophysics Data System (ADS)

    Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine

    2018-05-01

    The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
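
    The margin-of-error criterion for the minimum number of realizations can be sketched with the standard normal-approximation sample-size formula; the numeric values below are hypothetical, not taken from the paper.

```python
import math

def min_realizations(sample_std, margin_of_error, z=1.96):
    """Minimum number of RVE realizations so that the half-width of the
    normal-approximation confidence interval for the mean property is at
    most `margin_of_error`; z = 1.96 corresponds to ~95% confidence."""
    return math.ceil((z * sample_std / margin_of_error) ** 2)

# Hypothetical: property estimates scatter with std 0.8 (arbitrary units)
# and we want the mean known to within 0.25 of those units.
print(min_realizations(0.8, 0.25))
```

    Because the required n grows with the scatter across realizations, microstructures with higher volume fraction or clustering (which scatter more at a given volume size) demand more realizations, as the study reports.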

  10. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  11. An algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  12. Time-domain wavefield reconstruction inversion

    NASA Astrophysics Data System (ADS)

    Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan

    2017-12-01

    Wavefield reconstruction inversion (WRI) is an improved full waveform inversion theory proposed in recent years. The WRI method expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high computational memory demand and its requirement for time-frequency transformation with additional computational cost. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Examples with synthetic data illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.

  13. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on the average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
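
    The first, preprocessing-free algorithm amounts to a direct character-by-character scan, which can be sketched as:

```python
def lce(s, i, j):
    """Length of the longest common extension of positions i and j in s:
    the longest substring of s that starts at both i and j, found by
    direct comparison with no preprocessing."""
    k = 0
    while i + k < len(s) and j + k < len(s) and s[i + k] == s[j + k]:
        k += 1
    return k

s = "abcabcaxy"
print(lce(s, 0, 3))
```

    The scan costs time proportional to the answer for each query, yet on average the answer is short, which is why this naive method is competitive with constant-time-per-query schemes in practice.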

  14. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  15. Alzheimer Classification Using a Minimum Spanning Tree of High-Order Functional Network on fMRI Dataset

    PubMed Central

    Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang

    2017-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926
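
    The minimum spanning tree step can be illustrated with a generic Kruskal implementation; the toy network and weights below are hypothetical and not derived from fMRI data.

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm with union-find: keep the n-1 lightest edges
    that do not close a cycle, reducing a dense weighted network to its
    core skeleton (the role the MST plays for the networks above)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):  # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Toy 4-node network; weights might be, e.g., 1 - |correlation| (hypothetical)
edges = [(0.2, 0, 1), (0.9, 0, 2), (0.4, 1, 2), (0.3, 1, 3), (0.8, 2, 3)]
print(minimum_spanning_tree(4, edges))
```

    Because the MST is determined entirely by the edge weights, it avoids the arbitrary choices (initial centers, number of clusters) that the clustering approach criticized above requires.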

  16. Aircraft symmetric flight optimization. [gradient techniques for supersonic aircraft control

    NASA Technical Reports Server (NTRS)

    Falco, M.; Kelley, H. J.

    1973-01-01

    Review of the development of gradient techniques and their application to aircraft optimal performance computations in the vertical plane of flight. Results obtained using the method of gradients are presented for attitude- and throttle-control programs which extremize the fuel, range, and time performance indices subject to various trajectory and control constraints, including boundedness of engine throttle control. A penalty function treatment of state inequality constraints which generally appear in aircraft performance problems is outlined. Numerical results for maximum-range, minimum-fuel, and minimum-time climb paths for a hypothetical supersonic turbojet interceptor are presented and discussed. In addition, minimum-fuel climb paths subject to various levels of ground overpressure intensity constraint are indicated for a representative supersonic transport. A variant of the Gel'fand-Tsetlin 'method of ravines' is reviewed, and two possibilities for further development of continuous gradient processes are cited - namely, a projection version of conjugate gradients and a curvilinear search.

  17. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.

  18. Computational and experimental studies of LEBUs at high device Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Bertelrud, Arild; Watson, R. D.

    1988-01-01

    The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.

  19. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, performance is becoming constrained by material resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of the latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is expensive in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can improve performance by a speedup factor of 5 compared to the sequential CPU implementation.
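
    A sequential CPU sketch of the triangle-neighborhood (vicinity) computation, using edge hashing, may clarify what is being accelerated; this is an assumption about the computation's general shape, not the authors' OpenCL kernel.

```python
def triangle_neighbors(triangles):
    """Map each triangle index to the indices of triangles sharing an
    edge with it, by hashing undirected edges; each bucket then yields
    the mutual neighbors in a single pass."""
    edge_to_tris = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(frozenset(e), []).append(t)
    neighbors = {t: set() for t in range(len(triangles))}
    for tris in edge_to_tris.values():
        for t in tris:
            neighbors[t].update(u for u in tris if u != t)
    return neighbors

# Two triangles sharing the edge (1, 2):
print(triangle_neighbors([(0, 1, 2), (1, 3, 2)]))
```

    Each edge bucket is independent, which is what makes this computation a natural fit for the GPU parallelization described in the paper.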

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P.; Purdue University, West Lafayette, Indiana 47907; Verma, K.

    Borazine is isoelectronic with benzene and is popularly referred to as inorganic benzene. The study of non-covalent interactions with borazine and comparison with its organic counterpart promises to show interesting similarities and differences. The motivation of the present study of the borazine-water interaction, for the first time, stems from such interesting possibilities. Hydrogen-bonded complexes of borazine and water were studied using matrix isolation infrared spectroscopy and quantum chemical calculations. Computations were performed at M06-2X and MP2 levels of theory using 6-311++G(d,p) and aug-cc-pVDZ basis sets. At both the levels of theory, the complex involving an N–H⋯O interaction, where the N–H of borazine serves as the proton donor to the oxygen of water, was found to be the global minimum, in contrast to the benzene-water system, which showed an H–π interaction. The experimentally observed infrared spectra of the complexes corroborated well with our computations for the complex corresponding to the global minimum. In addition to the global minimum, our computations also located two local minima on the borazine-water potential energy surface. Of the two local minima, one corresponded to a structure where the water was the proton donor to the nitrogen of borazine, approaching the borazine ring from above the plane of the ring; a structure that resembled the global minimum in the benzene-water H–π complex. The second local minimum corresponded to an interaction of the oxygen of water with the boron of borazine, which can be termed a boron bond. Clearly the borazine-water system presents a richer landscape than the benzene-water system.

  1. 33 CFR 401.20 - Automatic Identification System.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Recommendation M.1371-1: 2000, Technical Characteristics For A Universal Shipborne AIS Using Time Division... power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD... AIS position reports using differential GPS corrections from the U.S. and Canadian Coast Guards...

  2. 33 CFR 401.20 - Automatic Identification System.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Recommendation M.1371-1: 2000, Technical Characteristics For A Universal Shipborne AIS Using Time Division... power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD... AIS position reports using differential GPS corrections from the U.S. and Canadian Coast Guards...

  3. 33 CFR 401.20 - Automatic Identification System.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Recommendation M.1371-1: 2000, Technical Characteristics For A Universal Shipborne AIS Using Time Division... power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD... AIS position reports using differential GPS corrections from the U.S. and Canadian Coast Guards...

  4. 33 CFR 401.20 - Automatic Identification System.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Recommendation M.1371-1: 2000, Technical Characteristics For A Universal Shipborne AIS Using Time Division... power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD... AIS position reports using differential GPS corrections from the U.S. and Canadian Coast Guards...

  5. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  6. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to otherwise computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum completion time of the last individual to finish. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem with basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological operations and obtain solutions of the task scheduling problem in the proper length range with less than O(n^2) time complexity. Copyright © 2017. Published by Elsevier B.V.
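
    The problem being solved can be stated conventionally: assign n jobs to m individuals so that the completion time of the last one to finish (the makespan) is minimal. A brute-force sketch in ordinary Python (not the paper's DNA-based parallel algorithm, which explores the assignment space via molecular operations) makes the objective precise:

```python
from itertools import product

def min_makespan(jobs, m):
    """Assign each of the n jobs (with given execution times) to one of
    m individuals; return the minimum possible completion time of the
    last individual to finish. Exhaustive search over all m**n
    assignments, mirroring the solution space the DNA strands encode."""
    best = float("inf")
    for assignment in product(range(m), repeat=len(jobs)):
        loads = [0] * m
        for time, worker in zip(jobs, assignment):
            loads[worker] += time
        best = min(best, max(loads))
    return best

print(min_makespan([3, 5, 2, 7], 2))  # -> 9  (e.g. split as {7, 2} vs {3, 5})
```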

  7. A multidimensional finite element method for CFD

    NASA Technical Reports Server (NTRS)

    Pepper, Darrell W.; Humphrey, Joseph W.

    1991-01-01

    A finite element method is used to solve the equations of motion for 2- and 3-D fluid flow. The time-dependent equations are solved explicitly using quadrilateral (2-D) and hexahedral (3-D) elements, mass lumping, and reduced integration. A Petrov-Galerkin technique is applied to the advection terms. The method requires a minimum of computational storage, executes quickly, and is scalable for execution on computer systems ranging from PCs to supercomputers.

  8. Parametric study of minimum converter loss in an energy-storage dc-to-dc converter

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1982-01-01

    Through a combination of analytical and numerical minimization procedures, a converter design that results in the minimum total converter loss (including core loss, winding loss, capacitor and energy-storage-reactor loss, and various losses in the semiconductor switches) is obtained. Because the initial phase involves analytical minimization, the computation time required by the subsequent phase of numerical minimization is considerably reduced in this combination approach. The effects of various loss parameters on the optimum values of the design variables are also examined.

  9. About neighborhood counting measure metric and minimum risk metric.

    PubMed

    Argentini, Andrea; Blanzieri, Enrico

    2010-04-01

    In a 2006 TPAMI paper, Wang proposed the Neighborhood Counting Measure, a similarity measure for the k-NN algorithm. In his paper, Wang mentioned the Minimum Risk Metric (MRM), an early distance measure based on the minimization of the risk of misclassification. Wang did not compare NCM to MRM because of its allegedly excessive computational load. In this comment paper, we complete the comparison that was missing in Wang's paper and, from our empirical evaluation, we show that MRM outperforms NCM and that its running time is not prohibitive as Wang suggested.

  10. Simulator study of minimum acceptable level of longitudinal stability for a representative STOL configuration during landing approach

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Deal, P. L.

    1974-01-01

    A fixed-base simulator study was conducted to determine the minimum acceptable level of longitudinal stability for a representative turbofan STOL (short take-off and landing) transport airplane during the landing approach. Real-time digital simulation techniques were used. The computer was programmed with equations of motion for six degrees of freedom, and the aerodynamic inputs were based on measured wind-tunnel data. The primary piloting task was an instrument approach to a breakout at a 60-m (200-ft) ceiling.

  11. Recommendations on Model Fidelity for Wind Turbine Gearbox Simulations; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.; Lacava, W.; Austin, J.

    2015-02-01

    This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for minimum model fidelities are provided.

  12. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small, making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

  13. Computer aided drug design

    NASA Astrophysics Data System (ADS)

    Jain, A.

    2017-08-01

    Computer-based methods can help in the discovery of lead compounds and can potentially eliminate the chemical synthesis and screening of many irrelevant compounds, saving both time and cost. Molecular modeling systems are powerful tools for building, visualizing, analyzing and storing models of complex molecular structures, which can help interpret structure-activity relationships. The use of molecular mechanics and dynamics techniques and software in computer-aided drug design, together with statistical analysis, is a powerful tool for the medicinal chemist to synthesize therapeutic and effective drugs with minimum side effects.

  14. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  15. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  16. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  17. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  18. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  19. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded by an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
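
    A (d, k) constraint as defined above can be checked or counted with a small dynamic program over the number of zeroes emitted since the last one. The sketch below counts the binary sequences of length n that satisfy the constraint, applying the maximum-run bound k to leading and trailing runs as well (an assumption; formulations differ on how boundary runs are treated):

```python
def count_dk_sequences(d, k, n):
    """Count length-n binary sequences whose runs of zeroes between ones
    have length >= d, and in which no run of zeroes (anywhere) exceeds k.
    DP state: (zeroes since last one, whether a one has occurred yet)."""
    states = {(0, False): 1}
    for _ in range(n):
        nxt = {}
        for (z, seen), cnt in states.items():
            if z < k:  # may emit a zero without exceeding the maximum run
                key = (z + 1, seen)
                nxt[key] = nxt.get(key, 0) + cnt
            if not seen or z >= d:  # may emit a one after at least d zeroes
                key = (0, True)
                nxt[key] = nxt.get(key, 0) + cnt
        states = nxt
    return sum(states.values())

print(count_dk_sequences(1, 2, 3))  # -> 4  (001, 010, 100, 101)
```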

  20. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities, in which activities must be processed subject to resource availability within time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: an initial solution obtained with a minimum-slack-time rule, a lower bound that accounts for both time and resource constraints, and an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.

  1. VizieR Online Data Catalog: Evolution of solar irradiance during Holocene (Vieira+, 2011)

    NASA Astrophysics Data System (ADS)

    Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.

    2011-05-01

    This is a composite total solar irradiance (TSI) time series for 9495 BC to 2007 AD, constructed as described in Sect. 3.3 of the paper. Since the TSI is the main external heat input into the Earth's climate system, a consistent record covering as long a period as possible is needed for climate models. This was our main motivation for constructing this composite TSI time series. In order to produce a representative time series, we divided the Holocene into four periods according to the available data for each period. Table 4 (see below) summarizes the periods considered and the models available for each period. After the end of the Maunder Minimum we compute daily values, while prior to the end of the Maunder Minimum we compute 10-year averages. For the period for which both solar disk magnetograms and continuum images are available (period 1) we employ the SATIRE-S reconstruction (Krivova et al. 2003A&A...399L...1K; Wenzler et al. 2006A&A...460..583W). The SATIRE-T reconstruction (Krivova et al. 2010JGRA..11512112K) is used from the beginning of the Maunder Minimum (approximately 1640 AD) to 1977 AD. Prior to 1640 AD, reconstructions are based on cosmogenic isotopes (this paper). Different models of the Earth's geomagnetic field are available before and after approximately 5000 BC; we therefore treat periods 3 and 4 (before and after 5000 BC) separately. Further details can be found in the paper. We emphasize that the reconstructions based on different proxies have different time resolutions. (1 data file).

  2. An analysis of potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes in the Muskingum River Watershed, Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2014-01-01

    This report presents the results of a study to assess potential water availability from the Charles Mill, Clendening, Piedmont, Pleasant Hill, Senecaville, and Wills Creek Lakes, located within the Muskingum River Watershed, Ohio. The assessment was based on the criterion that water withdrawals should not appreciably affect maintenance of recreation-season pool levels in current use. To facilitate and simplify the assessment, it was assumed that historical lake operations were successful in maintaining seasonal pool levels, and that any discharges from lakes constituted either water that was discharged to prevent exceeding seasonal pool levels or discharges intended to meet minimum in-stream flow targets downstream from the lakes. It further was assumed that the volume of water discharged in excess of the minimum in-stream flow target is available for use without negatively impacting seasonal pool levels or downstream water uses and that all or part of it is subject to withdrawal. Historical daily outflow data for the lakes were used to determine the quantity of water that potentially could be withdrawn and the resulting quantity of water that would flow downstream (referred to as “flow-by”) on a daily basis as a function of all combinations of three hypothetical target minimum flow-by amounts (1, 2, and 3 times current minimum in-stream flow targets) and three pumping capacities (1, 2, and 3 million gallons per day). Using both U.S. Geological Survey streamgage data (where available) and lake-outflow data provided by the U.S. Army Corps of Engineers resulted in analytical periods ranging from 51 calendar years for Charles Mill, Clendening, and Piedmont Lakes to 74 calendar years for Pleasant Hill, Senecaville, and Wills Creek Lakes. 
The observed outflow time series and the computed time series of daily flow-by amounts and potential withdrawals were analyzed to compute and report order statistics (95th, 75th, 50th, 25th, 10th, and 5th percentiles) and means for the analytical period, in aggregate, and broken down by calendar month. In addition, surplus-water mass curve data were tabulated for each of the lakes. Monthly order statistics of computed withdrawals indicated that, for the three pumping capacities considered, increasing the target minimum flow-by amount tended to reduce the amount of water that can be withdrawn. The reduction was greatest in the lower percentiles of withdrawal; however, increasing the flow-by amount had no impact on potential withdrawals during high flow. In addition, for a given target minimum flow-by amount, increasing the pumping rate typically increased the total amount of water that could be withdrawn; however, that increase was less than a direct multiple of the increase in pumping rate for most flow statistics. Potential monthly withdrawals were observed to be more variable and more limited in some calendar months than others. Monthly order statistics and means of computed daily mean flow-by amounts indicated that flow-by amounts generally tended to be lowest during June–October. Increasing the target minimum flow-by amount for a given pumping rate resulted in some small increases in the magnitudes of the mean and 50th percentile and lower order statistics of computed mean flow-by, but had no effect on the magnitudes of the higher percentile statistics. Increasing the pumping rate for a given target minimum flow-by amount resulted in decreases in magnitudes of higher-percentile flow-by statistics by an amount equal to the flow equivalent of the increase in pumping rate; however, some lower percentile statistics remained unchanged.
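
    The daily bookkeeping implied by the assessment can be sketched as follows, assuming the simple rule that only outflow above the target minimum flow-by is withdrawable, capped by pumping capacity (our reading of the stated criterion, not the report's exact accounting):

```python
def split_outflow(outflow_mgd, target_flowby_mgd, pump_capacity_mgd):
    """Split one day's observed lake outflow (million gallons per day)
    into a potential withdrawal and the remaining downstream flow-by.
    Only water above the target minimum flow-by is withdrawable, and
    the withdrawal is capped by the pumping capacity."""
    surplus = max(0.0, outflow_mgd - target_flowby_mgd)
    withdrawal = min(pump_capacity_mgd, surplus)
    return withdrawal, outflow_mgd - withdrawal

print(split_outflow(10.0, 3.0, 2.0))  # -> (2.0, 8.0): pump-limited day
print(split_outflow(2.5, 3.0, 2.0))   # -> (0.0, 2.5): below target, no withdrawal
```

    On high-outflow days the surplus exceeds pump capacity, so raising the target flow-by leaves the withdrawal unchanged, consistent with the finding above that increasing the flow-by target had no impact on withdrawals during high flow.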

  3. Evaluating Technology Integration in the Elementary School: A Site-Based Approach.

    ERIC Educational Resources Information Center

    Mowe, Richard

    This book enables educators at the elementary level to conduct formative evaluations of their technology programs in minimum time. Most of the technology is computer related, including word processing, graphics, desktop publishing, spreadsheets, databases, instructional software, programming, and telecommunications. The design of the book is aimed…

  4. NLSCIDNT user's guide: maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
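
    The damping behavior described (steepest-descent-like far from the minimum, Newton-like near it) can be sketched for a one-parameter least-squares fit; this is a generic Levenberg-Marquardt illustration, not the NLSCIDNT program or its rotorcraft model:

```python
def levenberg_marquardt_1d(xs, ys, a0=0.0, iters=50):
    """Fit y ~ a*x by least squares with a scalar Levenberg-Marquardt loop.
    With large damping lam the step approaches steepest descent; with
    small lam it approaches the (modified) Newton-Raphson step."""
    def sse(a):
        return sum((y - a * x) ** 2 for x, y in zip(xs, ys))
    a, lam = a0, 1.0
    for _ in range(iters):
        g = sum(x * (y - a * x) for x, y in zip(xs, ys))  # J^T r (gradient)
        H = sum(x * x for x in xs)                        # J^T J (Gauss-Newton Hessian)
        step = g / (H + lam)
        if sse(a + step) < sse(a):
            a, lam = a + step, lam * 0.5  # success: damp less (more Newton-like)
        else:
            lam *= 2.0                    # failure: damp more (more gradient-like)
    return a

a_hat = levenberg_marquardt_1d([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
```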

  5. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
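
    As a baseline against which such fast iterative methods are measured, the MOID can be estimated by brute force: sample both orbits on a grid of anomalies and take the minimum pairwise distance. The sketch below uses hypothetical test orbits; the paper's method is far more efficient and precise:

```python
import math

def moid_estimate(orbit_a, orbit_b, n=360):
    """Brute-force MOID estimate: sample each orbit (a closed curve in
    3-D space, parameterized by an angle) at n points and return the
    minimum pairwise distance between the two point sets."""
    pts_b = [orbit_b(2 * math.pi * j / n) for j in range(n)]
    best = float("inf")
    for i in range(n):
        pa = orbit_a(2 * math.pi * i / n)
        for pb in pts_b:
            best = min(best, math.dist(pa, pb))
    return best

# Hypothetical test orbits: a unit circle in the xy-plane and a circle of
# radius 2 in the xz-plane; their true MOID is exactly 1.
circle = lambda t: (math.cos(t), math.sin(t), 0.0)
tilted = lambda t: (2 * math.cos(t), 0.0, 2 * math.sin(t))
print(round(moid_estimate(circle, tilted), 6))  # -> 1.0
```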

  6. Computerisation of diabetic clinic records.

    PubMed Central

    Watkins, G B; Sutcliffe, T; Pyke, D A; Watkins, P J

    1980-01-01

    A simple system for putting diabetic records on a computer file is achieved by using stationery that combines the usual handwritten records (not computerised) with the minimum of essential data suitable for punching on to computer tape. The record may be brought up to date at a selected time interval. This simple, cheap system has been in use in a busy clinic for six years. The information on about 8000 diabetics now held in the computer file is used chiefly to help research by creating registers of patients with specified characteristics, such as treatment, heredity, complications, and pregnancy. A complete up-to-date index of the entire clinic population is always available, and routine clinic statistics are returned every six months. PMID:7437814

  7. Minimum Conflict Mainstreaming.

    ERIC Educational Resources Information Center

    Awen, Ed; And Others

    Computer technology is discussed as a tool for facilitating the implementation of the mainstreaming process. Minimum conflict mainstreaming/merging (MCM) is defined as an approach which utilizes computer technology to circumvent such structural obstacles to mainstreaming as transportation scheduling, screening and assignment of students, testing,…

  8. Dependence of the quantum speed limit on system size and control complexity

    NASA Astrophysics Data System (ADS)

    Lee, Juneseo; Arenz, Christian; Rabitz, Herschel; Russell, Benjamin

    2018-06-01

    We extend the work in 2017 New J. Phys. 19 103015 by deriving a lower bound for the minimum time necessary to implement a unitary transformation on a generic, closed quantum system with an arbitrary number of classical control fields. This bound is explicitly analyzed for a specific N-level system similar to those used to represent simple models of an atom, or the first excitation sector of a Heisenberg spin chain, both of which are of interest in quantum control for quantum computation. Specifically, it is shown that the resultant bound depends on the dimension of the system and on the number of controls used to implement a specific target unitary operation. The value of the bound, determined numerically, and an estimate of the true minimum gate time are systematically compared for a range of system dimensions and numbers of controls; special attention is drawn to the relationship between these two variables. The bound is seen to capture the scaling of the minimum time well for the systems studied, and to be quantitatively correct in order of magnitude.

  9. Design and Analysis of Scheduling Policies for Real-Time Computer Systems

    DTIC Science & Technology

    1992-01-01

    This report analyzes scheduling policies for real-time computer systems with exponentially distributed service times and deadlines; a similar model was developed for the ED policy for a single-processor system. Cited references include C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads" (to appear in Micro. Rel.) and J. F. Kurose, "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems."

  10. Nonlinear Prediction As A Tool For Determining Parameters For Phase Space Reconstruction In Meteorology

    NASA Astrophysics Data System (ADS)

    Miksovsky, J.; Raidl, A.

    Time-delay phase space reconstruction is one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its use requires the value of the time delay to be known, as well as the value of the embedding dimension, and there are several methods to estimate both parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our approach is slightly different: we reconstruct the phase space for various combinations of the two parameters and use it for prediction by means of the nearest neighbours in the phase space. Some measure of the prediction's success is then computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate a suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program implementing this approach; its basic features will be presented as well.
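
    The procedure described (reconstruct, predict with nearest neighbours, score the prediction) can be sketched as follows; the function names and the RMSE scoring are our illustration, not the authors' code:

```python
import math

def embed(series, dim, delay):
    """Time-delay embedding: map x[t] to (x[t], x[t-delay], ..., x[t-(dim-1)*delay])."""
    start = (dim - 1) * delay
    return [tuple(series[t - k * delay] for k in range(dim))
            for t in range(start, len(series))]

def nn_forecast_rmse(series, dim, delay, train_frac=0.7):
    """Score one (delay, dimension) pair: reconstruct the phase space,
    predict each test state from the successor of its nearest training
    neighbour, and return the RMSE of those one-step predictions."""
    pts = embed(series, dim, delay)
    split = int(len(pts) * train_frac)
    errs = []
    for t in range(split, len(pts) - 1):
        # nearest training neighbour (so that its successor stays in training)
        i = min(range(split - 1),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(pts[j], pts[t])))
        errs.append((pts[i + 1][0] - pts[t + 1][0]) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

# A clean sine series should be predicted almost perfectly.
series = [math.sin(0.3 * t) for t in range(200)]
score = nn_forecast_rmse(series, dim=3, delay=2)
```

    Scanning over candidate delays and dimensions and taking the combination with the smallest score reproduces the selection procedure described above.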

  11. NASA/ESA CV-990 Spacelab simulation. Appendixes: C, data-handling: Planning and implementation; D, communications; E, mission documentation

    NASA Technical Reports Server (NTRS)

    Reller, J. O., Jr.

    1976-01-01

    Data handling, communications, and documentation aspects of the ASSESS mission are described. Most experiments provided their own data handling equipment, although some used the airborne computer for backup, and one experiment required real-time computations. Communications facilities were set up to simulate those to be provided between Spacelab and the ground, including a downlink TV system. Mission documentation was kept to a minimum and proved sufficient. Examples are given of the basic documents of the mission.

  12. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V, E, c, w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D0, we seek a minimum-cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves of the tree. This problem is called the minimum-cost diameter-constrained Steiner tree problem. It is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum-cost diameter-constrained Steiner tree under a fixed topology.
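
    The notion of diameter used here (maximum weight of a leaf-to-leaf path) can be computed for a given tree by the standard double-sweep technique; this is an illustration of the definition, not the FPTAS of the paper:

```python
def tree_diameter(adj):
    """Diameter of an edge-weighted tree in the sense used above: the
    maximum weight of a path connecting two leaves. Uses the standard
    double sweep: the farthest node from any node is one endpoint of a
    diameter, and the farthest node from that endpoint gives the diameter.
    adj: {node: [(neighbour, weight), ...]}."""
    def farthest(src):
        dist, best = {src: 0.0}, (src, 0.0)
        stack = [src]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + w
                    stack.append(v)
                    if dist[v] > best[1]:
                        best = (v, dist[v])
        return best
    u, _ = farthest(next(iter(adj)))  # first sweep: find one diameter endpoint
    _, d = farthest(u)               # second sweep: measure the diameter
    return d

path = {'a': [('b', 2)], 'b': [('a', 2), ('c', 3)], 'c': [('b', 3)]}
print(tree_diameter(path))  # -> 5.0
```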

  13. 26 CFR 53.4942(a)-2 - Computation of undistributed income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... any taxable year as of any time, the amount by which: (1) The distributable amount (as defined in paragraph (b) of this section) for such taxable year, exceeds (2) The qualifying distributions (as defined...: (i) For taxable years beginning before January 1, 1982, an amount equal to the greater of the minimum...

  14. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  15. 12 CFR 615.5330 - Minimum surplus ratios.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) and weighted on the basis of risk in accordance with § 615.5210. (b) Core surplus. (1) Each institution shall achieve and at all times maintain a ratio of core surplus to the risk-adjusted asset base of... otherwise includible pursuant to § 615.5301(b). (2) Each association shall compute its core surplus ratio by...

  16. 20 CFR 229.43 - When a divorced spouse can no longer be included in computing an annuity under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... included in computing an annuity under the overall minimum. A divorced spouse's inclusion in the... spouse becomes entitled to a retirement or disability benefit under the Social Security Act based upon a...

  17. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
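
    The fast index computation described above rests on inverse iteration: the smallest singular value of the Jacobian J is the square root of the smallest eigenvalue of JᵀJ, which power iteration on (JᵀJ)⁻¹ isolates. A dense sketch of that idea is below; the paper's method instead factors the sparse Jacobian once and reuses it, which is what keeps its cost and memory low.

```python
import numpy as np

def min_singular_value(J, iters=200, seed=0):
    """Estimate the smallest singular value of J and the associated right
    singular vector by inverse iteration on J^T J: repeatedly apply (J^T J)^-1
    and renormalize, so the iterate converges to the smallest-sigma direction.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[1]
    JtJ = J.T @ J
    inv = np.linalg.inv(JtJ)  # stand-in for a reusable sparse LU factorization
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = inv @ v
        v /= np.linalg.norm(v)
    sigma_min = np.linalg.norm(J @ v)  # ||J v|| -> sigma_min as v converges
    return sigma_min, v

# Diagonal test matrix: singular values are 5, 2, and 0.3.
A = np.diag([5.0, 2.0, 0.3])
s, v = min_singular_value(A)
```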

  19. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  20. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase estimate error. The optimal resolution for the maximum visibility and minimum phase error are found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMS and 29.7 times smaller than that of N00N states.

  1. Computing smallest intervention strategies for multiple metabolic networks in a boolean model.

    PubMed

    Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya

    2015-02-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies for external perturbation advance in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
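
    The Boolean producibility notion underlying MKMN can be computed as a fixed point: a compound is producible if it is a source, or if it is a product of some reaction whose substrates are all producible. A small sketch with a hypothetical two-reaction network (the ILP formulation in the article optimizes over which reactions to knock out; this only shows the producibility check it builds on):

```python
def producible(reactions, sources):
    """Boolean-model producibility computed as a fixed point.

    `reactions` is a list of (substrates, products) pairs of sets; a compound
    is producible if it is a source or is yielded by a reaction all of whose
    substrates are already producible.
    """
    prod = set(sources)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if subs <= prod and not prods <= prod:
                prod |= prods
                changed = True
    return prod

# Hypothetical toy network: A -> B, then B + C -> D.
rxns = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
p = producible(rxns, {"A"})        # D is blocked: C is unavailable
p2 = producible(rxns, {"A", "C"})  # now D becomes producible
```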

  2. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.

  3. 26 CFR 1.55-1 - Alternative minimum taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative minimum taxable income. 1.55-1... TAXES Tax Surcharge § 1.55-1 Alternative minimum taxable income. (a) General rule for computing alternative minimum taxable income. Except as otherwise provided by statute, regulations, or other published...

  4. Effect of a localized minimum in equatorial field strength on resistive tearing instability in the geomagnetotail

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hau, L.N.; Wolf, R.A.

    A two-dimensional, resistive-MHD computer code is used to investigate the spontaneous reconnection of magnetotaillike configurations. The initial conditions adopted in the simulations are of two types: (1) in which the equatorial normal magnetic field component B{sub ze} declines monotonically down the tail, and (2) in which B{sub ze} exhibits a deep minimum in the near-earth plasma sheet. Configurations of the second type have been suggested by Erickson (1984, 1985) to be the inevitable result of adiabatic, earthward convection of the plasma sheet. To represent the case where the earthward convection stops before the X line forms, i.e., the case where the interplanetary magnetic field turns northward after a period of southward orientation, the authors impose zero-flow boundary conditions at the edges of the computational box. The initial configurations are in equilibrium and stable within ideal MHD. The dynamic evolution of the system starts after the resistivity is turned on. The main results of the simulations basically support the neutral-line model of substorms and confirm Birn's (1980) computer studies. Specifically, they find spontaneous formation of an X-type neutral point and a single O-type plasmoid with strong tailward flow on the tailward side of the X point. In addition, the results show that the formation of the X point for the configurations of type 2 is clearly associated with the assumed initial B{sub z} minimum. Furthermore, the time interval from the turning on of the resistivity to the formation of a plasmoid is much shorter in the case where there is an initial deep minimum.

  5. A preliminary design for flight testing the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1986-01-01

    This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.

  6. Minimum-domain impulse theory for unsteady aerodynamic force

    NASA Astrophysics Data System (ADS)

    Kang, L. L.; Liu, L. Q.; Su, W. D.; Wu, J. Z.

    2018-01-01

    We extend the impulse theory for unsteady aerodynamics from its classic global form to a finite-domain formulation and then to a minimum-domain form, and from incompressible to compressible flows. For incompressible flow, the minimum-domain impulse theory raises the finding of Li and Lu ["Force and power of flapping plates in a fluid," J. Fluid Mech. 712, 598-613 (2012)] to a theorem: The entire force with discrete wake is completely determined by only the time rate of impulse of those vortical structures still connecting to the body, along with the Lamb-vector integral thereof that captures the contribution of all the remaining, disconnected vortical structures. For compressible flows, we find that the global form in terms of the curl of momentum ∇ × (ρu), obtained by Huang [Unsteady Vortical Aerodynamics (Shanghai Jiaotong University Press, 1994)], can be generalized to an arbitrary finite domain, but the formula is cumbersome and in general ∇ × (ρu) no longer has discrete structures, so no minimum-domain theory exists. Nevertheless, as the measure of transverse process only, the unsteady field of vorticity ω or ρω may still have a discrete wake. This leads to a minimum-domain compressible vorticity-moment theory in terms of ρω (but it is beyond the classic concept of impulse). These new findings and applications have been confirmed by our numerical experiments. The results not only open an avenue to combine the theory with computation and experiment in wide applications but also reveal a physical truth: it is no longer necessary to account for all wake vortical structures in computing the force and moment.

  7. On the numerical solution of the dynamically loaded hydrodynamic lubrication of the point contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.

    1990-01-01

    The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.

  8. Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.

    PubMed

    Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron

    2017-10-21

    The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
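
    The Mueller potential mentioned as a numerical example is the standard two-dimensional benchmark for path-finding methods. The sketch below simply follows a steepest-descent trajectory on it into a local minimum; the finite-difference gradients, step size, and starting point are choices of this illustration, not the local-global algorithm of the paper (which optimizes an action to find the full reactant-to-product path).

```python
import math

# Mueller-Brown test potential: a sum of four anisotropic Gaussians.
A  = [-200.0, -100.0, -170.0, 15.0]
a  = [-1.0, -1.0, -6.5, 0.7]
b  = [0.0, 0.0, 11.0, 0.6]
c  = [-10.0, -10.0, -6.5, 0.7]
x0 = [1.0, 0.0, -0.5, -1.0]
y0 = [0.0, 0.5, 1.5, 1.0]

def V(x, y):
    return sum(A[i] * math.exp(a[i]*(x - x0[i])**2
                               + b[i]*(x - x0[i])*(y - y0[i])
                               + c[i]*(y - y0[i])**2) for i in range(4))

def grad(x, y, h=1e-6):
    # Central finite differences keep the sketch short; an analytic gradient
    # would be used in earnest calculations.
    return ((V(x + h, y) - V(x - h, y)) / (2*h),
            (V(x, y + h) - V(x, y - h)) / (2*h))

def descend(x, y, step=1e-4, iters=20000):
    """Follow the steepest-descent trajectory until the gradient nearly vanishes."""
    path = [(x, y)]
    for _ in range(iters):
        gx, gy = grad(x, y)
        if math.hypot(gx, gy) < 1e-3:
            break
        x, y = x - step*gx, y - step*gy
        path.append((x, y))
    return path

path = descend(-0.3, 1.0)  # start on the slope between basins
xf, yf = path[-1]
```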

  9. Application of wildfire simulation models for risk analysis

    Treesearch

    Alan A. Ager; Mark A. Finney

    2009-01-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of...
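
    At its core, a minimum-travel-time computation is a shortest-path problem: each cell has a spread rate, and fire arrival times follow least-time paths. The Dijkstra-style grid sketch below illustrates that principle only; Finney's MTT algorithm itself works on a node network with elliptical fire spread, so the four-neighbor cost model here is an assumption of this illustration.

```python
import heapq

def min_travel_time(speed, src):
    """Dijkstra-style minimum travel time across a grid.

    `speed[r][c]` > 0 is the spread speed in each cell; moving between two
    adjacent cells costs the average of their per-cell traversal times.
    Returns a grid of earliest arrival times from the ignition cell `src`.
    """
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    t = [[INF]*cols for _ in range(rows)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > t[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cost = 0.5*(1.0/speed[r][c] + 1.0/speed[nr][nc])
                if d + cost < t[nr][nc]:
                    t[nr][nc] = d + cost
                    heapq.heappush(pq, (d + cost, (nr, nc)))
    return t

# Uniform speed 1: arrival time equals Manhattan distance from the ignition.
times = min_travel_time([[1.0]*4 for _ in range(3)], (0, 0))
```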

  10. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture.

    PubMed

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-22

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  11. Universal Quantum Computing with Measurement-Induced Continuous-Variable Gate Sequence in a Loop-Based Architecture

    NASA Astrophysics Data System (ADS)

    Takeda, Shuntaro; Furusawa, Akira

    2017-09-01

    We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.

  12. Optimal Trajectories For Orbital Transfers Using Low And Medium Thrust Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1992-01-01

    For many problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high thrust transfers resemble the well-known impulsive transfers where the burn arcs are of very short duration. The low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs which will require more revolutions, thus making the low thrust transfer computationally intensive. Here, we consider optimal low and medium thrust orbital transfers.

  13. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    The rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance prediction with a minimum of computer execute time. The current computer execute time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.

  14. Simple geometric algorithms to aid in clearance management for robotic mechanisms

    NASA Technical Reports Server (NTRS)

    Copeland, E. L.; Ray, L. D.; Peticolas, J. D.

    1981-01-01

    Global geometric shapes such as lines, planes, circles, spheres, cylinders, and the associated computational algorithms which provide relatively inexpensive estimates of minimum spatial clearance for safe operations were selected. The Space Shuttle, remote manipulator system, and the Power Extension Package are used as an example. Robotic mechanisms operate in quarters limited by external structures and the problem of clearance is often of considerable interest. Safe clearance management is simple and suited to real time calculation, whereas contact prediction requires more precision, sophistication, and computational overhead.

  15. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
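
    The notion of a reliable computation time, i.e. how long a double-precision trajectory of the logistic map can be trusted, can be illustrated by comparing float64 iterates against a high-precision reference. This is a sketch in the spirit of the article, for the fixed-parameter map only; the tolerance, precision, and parameters are assumptions, not the paper's settings.

```python
from decimal import Decimal, getcontext

def reliable_time(x0=0.1, r=4.0, tol=0.01, max_iter=200):
    """Estimate the reliable computation time of the logistic map x -> r*x*(1-x)
    in double precision: the first iterate at which the float64 trajectory
    departs from a 60-digit reference by more than `tol`.
    """
    getcontext().prec = 60      # high-precision reference trajectory
    xd = x0                     # float64 trajectory
    xr = Decimal(repr(x0))
    rd = Decimal(repr(r))
    for n in range(1, max_iter + 1):
        xd = r * xd * (1.0 - xd)
        xr = rd * xr * (1 - xr)
        if abs(Decimal(repr(xd)) - xr) > Decimal(repr(tol)):
            return n            # trajectories no longer agree
    return max_iter

tc = reliable_time()
```

    For r = 4 the map's Lyapunov exponent is ln 2, so an initial error of order 1e-16 doubles each step and reaches 0.01 after roughly 45-55 iterations, which is the scale of Tc this sketch recovers.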

  16. Scheduling algorithms for automatic control systems for technological processes

    NASA Astrophysics Data System (ADS)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.

    2017-01-01

    Wide use of automatic process control systems and the usage of high-performance systems containing a number of computers (processors) give opportunities for creation of high-quality and fast production that increases competitiveness of an enterprise. Exact and fast calculations, control computation, and processing of big data arrays all require a high level of productivity and, at the same time, minimum time for data handling and receiving results. In order to achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are identified, and some considerations for their use in developing software for automatic process control systems are given.
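
    A classic example of the list-scheduling family such surveys cover is the longest-processing-time (LPT) heuristic for identical machines: sort jobs by decreasing duration and always hand the next job to the machine that frees up first. It is shown here as a generic illustration, not as a method singled out by the authors.

```python
import heapq

def lpt_schedule(durations, m):
    """Longest-Processing-Time list scheduling on m identical machines.

    A classic heuristic whose makespan is at most (4/3 - 1/(3m)) times the
    optimum. Returns (assignment, makespan), where assignment[i] lists the
    job indices placed on machine i.
    """
    loads = [(0.0, i) for i in range(m)]  # (current load, machine id) min-heap
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for dur, job in sorted(((d, j) for j, d in enumerate(durations)), reverse=True):
        load, i = heapq.heappop(loads)    # machine that frees up first
        assignment[i].append(job)
        heapq.heappush(loads, (load + dur, i))
    return assignment, max(load for load, _ in loads)

# Hypothetical job durations on 2 machines; optimum is 9, LPT yields 10.
assign, makespan = lpt_schedule([5, 4, 3, 3, 3], 2)
```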

  17. Improving computer security for authentication of users: influence of proactive password restrictions.

    PubMed

    Proctor, Robert W; Lien, Mei-Ching; Vu, Kim-Phuong L; Schultz, E Eugene; Salvendy, Gavriel

    2002-05-01

    Entering a username-password combination is a widely used procedure for identification and authentication in computer systems. However, it is a notoriously weak method, in that the passwords adopted by many users are easy to crack. In an attempt to improve security, proactive password checking may be used, in which passwords must meet several criteria to be more resistant to cracking. In two experiments, we examined the influence of proactive password restrictions on the time that it took to generate an acceptable password and to use it subsequently to log in. The required length was a minimum of five characters in Experiment 1 and eight characters in Experiment 2. In both experiments, one condition had only the length restriction, and the other had additional restrictions. The additional restrictions greatly increased the time it took to generate the password but had only a small effect on the time it took to use it subsequently to log in. For the five-character passwords, 75% were cracked when no other restrictions were imposed, and this was reduced to 33% with the additional restrictions. For the eight-character passwords, 17% were cracked with no other restrictions, and 12.5% with restrictions. The results indicate that increasing the minimum character length reduces crackability and increases security, regardless of whether additional restrictions are imposed.
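
    A proactive password checker of the kind studied can be sketched as below. The composition rules used here (a lower-case letter, an upper-case letter, and a digit) are illustrative assumptions; the exact restrictions imposed in the two experiments are described in the paper.

```python
import re

def acceptable(password, min_len=8, extra_restrictions=True):
    """Proactive password check: enforce a minimum length and, optionally,
    additional composition rules that make passwords harder to crack.
    The specific rules below are illustrative, not the study's own set.
    """
    if len(password) < min_len:
        return False
    if extra_restrictions:
        if not re.search(r"[a-z]", password):
            return False        # require a lower-case letter
        if not re.search(r"[A-Z]", password):
            return False        # require an upper-case letter
        if not re.search(r"\d", password):
            return False        # require a digit
    return True

ok = acceptable("Blue7car")     # meets length and all composition rules
bad = acceptable("bluescar")    # long enough, but no upper case or digit
```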

  18. Experimental validation of pulsed column inventory estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contractors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze this data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs.

  19. Study of cryogenic propellant systems for loading the space shuttle. Part 2: Hydrogen systems

    NASA Technical Reports Server (NTRS)

    Steward, W. G.

    1975-01-01

    Computer simulation studies of liquid hydrogen fill and vent systems for the space shuttle are reported. The computer programs calculate maximum and minimum permissible flow rates during cooldown as limited by thermal stress considerations, fill line cooldown time, pressure drop, flow rates, vapor content, vent line pressure drop and vent line discharge temperature. The input data for these programs are selected through graphic displays which schematically depict the part of the system being analyzed. The computed output is also displayed in the form of printed messages and graphs. Digital readouts of graph coordinates may also be obtained. Procedures are given for operation of the graphic display unit and the associated minicomputer and timesharing computer.

  20. A Scheme for Short-Term Prediction of Hydrometeors Using Advection and Physical Forcing.

    DTIC Science & Technology

    1984-07-01

    D. A. Lowry, 1978: Use of a real-time computer graphics system for diagnosis and forecasting. Preprints, Conf. on Weather Forecasting and Analysis and... Figure 4.2.1. Graph for forecasting the night minimum temperature from observations at 1800-2000 local time. From Zverev (1972... Much weather is produced by organized systems that translate, and forecast gains were made through use of the concepts of steering

  1. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES requires less running time.

  2. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    PubMed

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.

  3. MIST - MINIMUM-STATE METHOD FOR RATIONAL APPROXIMATION OF UNSTEADY AERODYNAMIC FORCE COEFFICIENT MATRICES

    NASA Technical Reports Server (NTRS)

    Karpel, M.

    1994-01-01

    Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectably constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient.
The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1Mb of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.

  4. Shortened Mean Transit Time in CT Perfusion With Singular Value Decomposition Analysis in Acute Cerebral Infarction: Quantitative Evaluation and Comparison With Various CT Perfusion Parameters.

    PubMed

    Murayama, Kazuhiro; Katada, Kazuhiro; Hayakawa, Motoharu; Toyama, Hiroshi

    We aimed to clarify the cause of shortened mean transit time (MTT) in acute ischemic cerebrovascular disease and to examine its relationship with reperfusion. Twenty-three patients with acute ischemic cerebrovascular disease underwent whole-brain computed tomography perfusion (CTP). The maximum MTT (MTTmax), minimum MTT (MTTmin), ratio of minimum to maximum MTT (MTTmin/max), and minimum cerebral blood volume (CBVmin) were measured by automatic region-of-interest analysis. Diffusion-weighted imaging was performed to calculate infarction volume. We compared these CTP parameters between reperfusion and nonreperfusion groups and calculated correlation coefficients between the infarction core volume and the CTP parameters. Significant differences were observed between the reperfusion and nonreperfusion groups (MTTmin/max: P = 0.014; CBVmin ratio: P = 0.038). Regression analysis of the CTP parameters against high-intensity volume on diffusion-weighted images showed negative correlations (CBVmin ratio: r = -0.41; MTTmin/max: r = -0.30; MTTmin ratio: r = -0.27). A region of shortened MTT indicated obstructed blood flow, which was attributed to error in the singular value decomposition method.
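
    The singular value decomposition analysis referred to above deconvolves the tissue time-density curve by the arterial input function; truncating small singular values is the regularization step that can bias the recovered residue function and hence MTT. A minimal sketch (the curves, sampling interval, and truncation threshold below are illustrative assumptions):

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, lam=0.2):
    """Truncated-SVD deconvolution of a tissue curve by the arterial
    input function (AIF); returns the flow-scaled residue CBF*R(t)."""
    n = len(aif)
    # Lower-triangular convolution matrix built from the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, S, Vt = np.linalg.svd(A)
    # Zero small singular values: suppresses noise, but biases estimates
    Sinv = np.where(S > lam * S.max(), 1.0 / S, 0.0)
    return Vt.T @ (Sinv * (U.T @ tissue))

# Noiseless synthetic check: exponential residue with MTT = 5 s, CBF = 0.6
dt, t = 1.0, np.arange(40.0)
aif = np.exp(-t / 4.0)                # illustrative input function
flow_res = 0.6 * np.exp(-t / 5.0)     # CBF * R(t)
tissue = dt * np.convolve(aif, flow_res)[:len(t)]
k = svd_deconvolve(aif, tissue, dt, lam=1e-12)
print(np.allclose(k, flow_res, atol=1e-6))  # True
```

    CBF is then estimated as the maximum of the recovered curve and MTT via the central volume principle (MTT = CBV/CBF); errors in the truncated deconvolution propagate directly into MTT.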

  5. 26 CFR 1.383-2 - Limitations on certain capital losses and excess credits in computing alternative minimum tax...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Limitations on certain capital losses and excess credits in computing alternative minimum tax. [Reserved] 1.383-2 Section 1.383-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Insolvency...

  6. 12 CFR Appendix A to Subpart A of... - Minimum Capital Components for Interest Rate and Foreign Exchange Rate Contracts

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... interest rate and foreign exchange rate contracts are computed on the basis of the credit equivalent amounts of such contracts. Credit equivalent amounts are computed for each of the following off-balance... Equivalent Amounts a. The minimum capital components for interest rate and foreign exchange rate contracts...

  7. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. The problem is incorporated into the framework of an on-line motion-planning algorithm to ensure collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
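
    The deterministic case is easy to demonstrate: with an L1 norm, the minimum distance between two convex polyhedra (here given by their vertices) is a linear program. A minimal sketch with SciPy (the vertex-set representation and the square example are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_distance(V, W):
    """Minimum L1 distance between the convex hulls of point sets V and W
    (columns are vertices), posed as a linear program."""
    d, m = V.shape
    _, n = W.shape
    # Variables: hull weights lam (m), mu (n), per-axis gaps t (d)
    c = np.concatenate([np.zeros(m + n), np.ones(d)])
    # |(V lam - W mu)_k| <= t_k  ->  two one-sided inequalities per axis
    A_ub = np.vstack([np.hstack([V, -W, -np.eye(d)]),
                      np.hstack([-V, W, -np.eye(d)])])
    b_ub = np.zeros(2 * d)
    # Weights on each hull sum to one
    A_eq = np.zeros((2, m + n + d))
    A_eq[0, :m] = 1.0
    A_eq[1, m:m + n] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0, 1.0],
                  bounds=[(0, None)] * (m + n + d), method="highs")
    return res.fun

# Two axis-aligned squares: closest corners (1,1) and (3,2), L1 gap = 3
V = np.array([[0, 1, 1, 0], [0, 0, 1, 1]], float)
W = np.array([[3, 4, 4, 3], [2, 2, 3, 3]], float)
print(round(min_l1_distance(V, W), 6))  # 3.0
```

    The L(infinity) case is analogous, with a single gap variable bounding every axis.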

  8. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

    In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for systems with pre-specified trajectories of the output and the control input, and additionally with both an input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional-plus-integral state-feedback LQDT design for non-square non-minimum-phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.

  9. Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model

    PubMed Central

    Lu, Wei; Song, Jiangning; Akutsu, Tatsuya

    2015-01-01

    Abstract This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbation advance in the near future, it may become important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available, but that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary-mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, known respectively as harmful and beneficial bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions; however, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199
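
    The Boolean-model version of the problem can be stated compactly: the knockout must make the target non-producible in N1 while leaving it producible in N2. A brute-force sketch in Python (the toy reactions are invented for illustration; the paper's ILP formulation replaces this exponential search):

```python
from itertools import combinations

def producible(reactions, sources, target):
    """Boolean producibility: a compound is available if it is a source or
    some kept reaction produces it from already-available substrates."""
    known = set(sources)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if set(subs) <= known and not set(prods) <= known:
                known |= set(prods)
                changed = True
    return target in known

def min_knockout(N1, N2, sources, target):
    """Smallest reaction set whose removal blocks the target in N1 while
    keeping it producible in N2 (exhaustive search over knockouts)."""
    names = sorted({name for name, _ in N1} | {name for name, _ in N2})
    for k in range(len(names) + 1):
        for ko in combinations(names, k):
            keep1 = [rx for name, rx in N1 if name not in ko]
            keep2 = [rx for name, rx in N2 if name not in ko]
            if (not producible(keep1, sources, target)
                    and producible(keep2, sources, target)):
                return set(ko)
    return None

# Toy networks: N1 reaches T via S->A->T or S->T; N2 has an extra route
N1 = [("r1", (["S"], ["A"])), ("r2", (["A"], ["T"])), ("r3", (["S"], ["T"]))]
N2 = [("r3", (["S"], ["T"])), ("r4", (["S"], ["T"]))]
print(sorted(min_knockout(N1, N2, ["S"], "T")))  # ['r1', 'r3']
```

    Blocking both of N1's routes while sparing N2's private route r4 is exactly the "alternative pathway" effect the abstract describes: networks with more redundancy admit more MKMN solutions.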

  10. UNUSUAL TRENDS IN SOLAR P-MODE FREQUENCIES DURING THE CURRENT EXTENDED MINIMUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathy, S. C.; Jain, K.; Hill, F.

    2010-03-10

    We investigate the behavior of the intermediate-degree mode frequencies of the Sun during the current extended minimum phase to explore the time-varying conditions in the solar interior. Using contemporaneous helioseismic data from the Global Oscillation Network Group (GONG) and the Michelson Doppler Imager (MDI), we find that the changes in resonant mode frequencies during the activity minimum period are significantly greater than the changes in solar activity as measured by different proxies. We detect a seismic minimum in MDI p-mode frequency shifts during 2008 July-August, but no such signature is seen in mean shifts computed from GONG frequencies. We also analyze the frequencies of individual oscillation modes from GONG data as a function of latitude and observe a signature of the onset of solar cycle 24 in early 2009. Thus, the intermediate-degree modes do not confirm the onset of cycle 24 during late 2007 as reported from the analysis of the low-degree Global Oscillations at Low Frequency (GOLF) frequencies. Further, both the GONG and MDI frequencies show a surprising anti-correlation between frequencies and activity proxies during the current minimum, in contrast to the behavior during the minimum between cycles 22 and 23.

  11. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model is an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development, provided calibration samples continue to be collected and analyzed. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static: they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
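
    The model-selection step can be sketched: fit the turbidity-only model first, then keep streamflow only if it clearly reduces model error. A minimal illustration with synthetic numbers (the data, units, and the 10 % improvement cutoff are invented for illustration; the actual guidelines use MSPE and significance tests):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: coefficients and residual standard error."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    dof = len(y) - X1.shape[1]
    return beta, np.sqrt(resid @ resid / dof)

# Synthetic calibration samples (illustrative values, not field data)
rng = np.random.default_rng(0)
turb = rng.uniform(10, 500, 60)                        # turbidity, FNU
flow = rng.uniform(1, 50, 60)                          # streamflow, m^3/s
ssc = 1.6 * turb + 2.0 * flow + rng.normal(0, 5, 60)   # SSC, mg/L

_, se_simple = ols(turb, ssc)
_, se_multi = ols(np.column_stack([turb, flow]), ssc)
# Keep streamflow only if it meaningfully lowers the model error
use_multi = se_multi < 0.9 * se_simple
print(use_multi)  # True here, since flow genuinely contributes to SSC
```

    With real data, the streamflow term should also pass a significance test before being retained, as the guidelines require.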

  12. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, together with a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  13. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
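
    The division of labor described above, conjugate-gradient descent for local refinement with simulated-annealing jumps to escape local minima, can be sketched generically (the test surface, cooling schedule, and restart count are illustrative assumptions; SAGRAD's actual procedure uses Møller's scaled conjugate gradient in Fortran 77):

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_minimize(f, x0, restarts=8, temp=2.0, seed=0):
    """Conjugate-gradient descent with annealed random jumps: when CG
    settles into a (possibly local) minimum, jump and re-descend,
    keeping the best point found and cooling the jump size."""
    rng = np.random.default_rng(seed)
    best = minimize(f, x0, method="CG")
    for _ in range(restarts):
        jump = best.x + rng.normal(0.0, temp, size=best.x.shape)
        trial = minimize(f, jump, method="CG")
        if trial.fun < best.fun:
            best = trial
        temp *= 0.7           # cool: smaller jumps as the search narrows
    return best

# A multimodal surface: many local minima, global minimum f = 0 at x = 0
f = lambda x: np.sum(x**2 + 3.0 * (1.0 - np.cos(2.0 * np.pi * x)))
plain = minimize(f, np.array([3.3, -2.7]), method="CG")
hybrid = hybrid_minimize(f, np.array([3.3, -2.7]))
print(hybrid.fun <= plain.fun)  # True: restarts can only improve on CG
```

    The same pattern applies to neural-network weight spaces, where CG alone is easily trapped in flat regions or local minima.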

  14. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
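
    A pivotal quantity in the derivation above is the distribution of the peak-height ratio. For i.i.d. log-normal heights, that ratio is itself log-normal with scale parameter multiplied by the square root of two, which a quick Monte Carlo confirms (the σ value and sample size are illustrative):

```python
import numpy as np

# Ratio of two i.i.d. log-normal peak heights with log-scale sigma:
# log(h1/h2) = log(h1) - log(h2) is normal with std sigma*sqrt(2),
# so the ratio is log-normal with that scale parameter.
rng = np.random.default_rng(1)
sigma = 0.8
h1 = rng.lognormal(0.0, sigma, 200_000)
h2 = rng.lognormal(0.0, sigma, 200_000)
log_ratio = np.log(h1 / h2)
print(abs(log_ratio.std() - sigma * np.sqrt(2)) < 0.02)  # True
```

    The scale parameter of the peak-height distribution therefore propagates directly into the ratio distribution, consistent with the paper's observation that the minimum-resolution distribution depends markedly on it.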

  15. Noise tolerant dendritic lattice associative memories

    NASA Astrophysics Data System (ADS)

    Ritter, Gerhard X.; Schmalz, Mark S.; Hayden, Eric; Tucker, Marc

    2011-09-01

    Linear classifiers based on computation over the real numbers R with the operations of addition and multiplication, denoted (R, +, ×), have been represented extensively in the literature of pattern recognition. However, a different approach to pattern classification involves the use of addition, maximum, and minimum operations over the reals in the algebra (R, +, maximum, minimum). These pattern classifiers, based on lattice algebra, have been shown to exhibit superior information storage capacity, fast training and short convergence times, high pattern classification accuracy, and low computational cost. Such attributes are not always found, for example, in classical neural nets based on the linear inner product. In a special type of lattice associative memory (LAM), called a dendritic LAM or DLAM, it is possible to achieve noise-tolerant pattern classification by varying the design of noise or error acceptance bounds. This paper presents theory and algorithmic approaches for the computation of noise-tolerant lattice associative memories (LAMs) under a variety of input constraints. Of particular interest is the classification of nonergodic data in noise regimes with time-varying statistics. DLAMs, which are a specialization of LAMs derived from concepts of biological neural networks, have been successfully applied to pattern classification from hyperspectral remote sensing data, as well as spatial object recognition from digital imagery. The authors' recent research in the development of DLAMs is overviewed, with experimental results that show utility for a wide variety of pattern classification applications. Performance results are presented in terms of measured computational cost, noise tolerance, classification accuracy, and throughput for a variety of input data and noise levels.

  16. 25 CFR 542.14 - What are the minimum internal control standards for the cage?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for the cage? 542.14 Section 542.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.14 What are the minimum internal control standards for the cage? (a) Computer applications. For...

  17. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  18. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  19. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  20. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
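
    The matrix-free character of the approach (GMRES needs only matrix-vector products, never a factorization) can be sketched with SciPy; the assembled tridiagonal test matrix below stands in for an element-by-element operator that is never explicitly formed:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# GMRES only ever asks the operator for A @ v, so the matrix can stay
# "unassembled": a matvec closure plays the element-by-element role here.
n = 50
A = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
          [-1, 0, 1], format="csr")
op = LinearOperator((n, n), matvec=lambda v: A @ v)

b = np.ones(n)
x, info = gmres(op, b, restart=n)   # full restart: exact in <= n steps
print(info == 0, bool(np.linalg.norm(A @ x - b) < 1e-3))  # True True
```

    Because only matvecs are needed, memory stays proportional to the element data plus the Krylov basis, which is the source of the savings the abstract reports.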

  1. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
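
    The optimal transport step simplifies dramatically in one dimension, which makes the idea easy to show: for equal-size empirical samples the optimal coupling simply matches points in sorted order (a sketch; the paper solves the general transport problem between arbitrary non-Gaussian distributions and wraps the resulting distance in a kernel):

```python
import numpy as np

def w1_empirical(x, y):
    """1-D optimal transport (Wasserstein-1) distance between two
    equal-size empirical samples: match points in sorted order."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# Two shifted samples: the transport cost equals the shift
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)
y = x + 1.5                      # same shape, shifted by 1.5
print(round(w1_empirical(x, y), 6))  # 1.5
```

    A kernel between distributions could then be built as, say, exp(-W1/σ); that specific form is an assumption here, not the paper's exact kernel.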

  2. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid-computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details of the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  3. THE EFFECT OF A DYNAMIC INNER HELIOSHEATH THICKNESS ON COSMIC-RAY MODULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, R.; Ferreira, S. E. S.; Potgieter, M. S., E-mail: rexmanuel@live.com

    2015-02-01

    The time-dependent modulation of galactic cosmic rays in the heliosphere is studied over different polarity cycles by computing 2.5 GV proton intensities using a two-dimensional, time-dependent modulation model. By incorporating recent theoretical advances in the relevant transport parameters in the model, we showed in previous work that this approach gives realistic computed intensities over a solar cycle. New in this work is that a time dependence of the solar wind termination shock (TS) position is implemented in our model to study the effect of a dynamic inner heliosheath thickness (the region between the TS and the heliopause) on the solar modulation of galactic cosmic rays. The study reveals that changes in the inner heliosheath thickness, arising from a time-dependent shock position, do affect cosmic-ray intensities everywhere in the heliosphere over a solar cycle, with the smallest effect in the innermost heliosphere. A time-dependent TS position causes a phase difference between the solar activity periods and the corresponding intensity periods. The maximum intensities in response to a solar minimum activity period are found to depend on the time-dependent TS profile. It is found that changing the width of the inner heliosheath with time over a solar cycle can shift the time at which the maximum or minimum cosmic-ray intensities occur at various distances throughout the heliosphere, but more significantly in the outer heliosphere. The time-dependent extent of the inner heliosheath, as affected by solar activity conditions, is thus an additional time-dependent factor to be considered in the long-term modulation of cosmic rays.

  4. A computer-aided design system geared toward conceptual design in a research environment. [for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Stack, S. H.

    1981-01-01

    A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.

  5. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for a new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
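
    The warm-start idea can be sketched with a simple iterative solver in place of the matrix inverse (the Richardson iteration, step size, steering vector, and synthetic covariances below are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def mv_weights_iterative(R, a, u0=None, iters=10):
    """Minimum-variance weights w = R^-1 a / (a^T R^-1 a) without matrix
    inversion: solve R u = a by Richardson iterations (O(L^2) each),
    optionally warm-started from a neighboring point's solution u0."""
    mu = 1.0 / np.linalg.norm(R, 2)        # step size from spectral norm
    u = np.zeros_like(a) if u0 is None else u0.copy()
    for _ in range(iters):
        u = u + mu * (a - R @ u)
    return u / (a @ u), u                   # normalized weights, raw solve

# Two nearby imaging points: neighboring covariances differ only slightly
rng = np.random.default_rng(0)
L = 16
X = rng.normal(size=(L, 200))
R1 = X @ X.T / 200 + 0.1 * np.eye(L)       # covariance at point 1
P = 0.005 * rng.normal(size=(L, L))
R2 = R1 + P + P.T                          # slightly perturbed neighbor
a = np.ones(L)                             # steering vector (illustrative)

u_exact = np.linalg.solve(R2, a)
_, u_cold = mv_weights_iterative(R2, a)                        # from zero
_, u_warm = mv_weights_iterative(R2, a, u0=np.linalg.solve(R1, a))
cold_err = np.linalg.norm(u_cold - u_exact)
warm_err = np.linalg.norm(u_warm - u_exact)
print(warm_err < cold_err)  # True: the warm start converges faster
```

    Because the warm start begins much closer to the new point's solution, far fewer O(L²) iterations are needed, which is the source of the complexity reduction from O(L³).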

  6. Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster.

    PubMed

    Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S

    2017-05-28

    We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and the speed of empirical potentials. For large-sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.

  7. Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster

    NASA Astrophysics Data System (ADS)

    Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S.

    2017-05-01

    We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.

  8. Economic policy and the double burden of malnutrition: cross-national longitudinal analysis of minimum wage and women's underweight and obesity.

    PubMed

    Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody

    2018-04-01

    To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n = 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0.986; 0.977, 0.995); a decrease that accelerated over time (P-interaction = 0.025). Increasing minimum wage was associated with higher obesity (1.019; 1.008, 1.030), but did not alter the rate of increase in obesity prevalence (P-interaction = 0.8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0.14 percentage points (-0.14; -0.23, -0.05) for underweight and an increase of about 0.1 percentage points (0.12; 0.04, 0.20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.

  9. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  10. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  11. 26 CFR 1.6655-3 - Adjusted seasonal installment method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1... under § 1.6655-2 apply to the computation of taxable income (and resulting tax) for purposes of... applying to alternative minimum taxable income, tentative minimum tax, and alternative minimum tax, the...

  12. 5 CFR 844.303 - Minimum disability annuity.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Minimum disability annuity. 844.303... Annuity § 844.303 Minimum disability annuity. Notwithstanding any other provision of this part, an annuity payable under this part cannot be less than the amount of an annuity computed under 5 U.S.C. 8415...

  13. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  14. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  15. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  16. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  17. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  18. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  19. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  20. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  1. Development of a Nonequilibrium Radiative Heating Prediction Method for Coupled Flowfield Solutions

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiative heating and coupling effects in nonequilibrium flow-fields has been developed. The method resolves atomic lines with a minimum number of spectral points, and treats molecular radiation using the smeared band approximation. To further minimize computational time, the calculation is performed on an optimized spectrum, which is computed for each flow condition to enhance spectral resolution. Additional time savings are obtained by performing the radiation calculation on a subgrid optimally selected for accuracy. Representative results from the new method are compared to previous work to demonstrate that the speedup does not cause a loss of accuracy and is sufficient to make coupled solutions practical. The method is found to be a useful tool for studies of nonequilibrium flows.

  2. Effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy

    1987-01-01

    The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to a slight variation of structural temperature distributions. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.

  3. Computational strategies in the dynamic simulation of constrained flexible MBS

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.

    1993-01-01

    This research focuses on the computational dynamics of flexible constrained multibody systems. At first a recursive mapping formulation of the kinematical expressions in a minimum dimension as well as the matrix representation of the equations of motion are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.

  4. Computer-assisted diagnosis of melanoma.

    PubMed

    Fuller, Collin; Cellura, A Paul; Hibler, Brian P; Burris, Katy

    2016-03-01

    The computer-assisted diagnosis of melanoma is an exciting area of research where imaging techniques are combined with diagnostic algorithms in an attempt to improve detection and outcomes for patients with skin lesions suspicious for malignancy. Once an image has been acquired, it undergoes a processing pathway which includes preprocessing, enhancement, segmentation, feature extraction, feature selection, change detection, and ultimately classification. Practicality for everyday clinical use remains a vital question. A successful model must obtain results that are on par with or outperform experienced dermatologists, keep costs at a minimum, be user-friendly, and be time efficient with high sensitivity and specificity. ©2015 Frontline Medical Communications.
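    The staged pathway described above (preprocessing, enhancement, segmentation, feature extraction, feature selection, classification) can be sketched as a chain of functions. This is an illustrative toy, not any published system: the normalization, the threshold segmentation, the two features, and the decision rule are all hypothetical placeholders.

```python
from functools import reduce

def preprocess(img):
    # Normalize pixel values to [0, 1]; stands in for artifact removal/enhancement.
    lo, hi = min(map(min, img)), max(map(max, img))
    rng = (hi - lo) or 1.0
    return [[(p - lo) / rng for p in row] for row in img]

def segment(img):
    # Crude global threshold: lesion = pixels darker than the image mean.
    mean = sum(map(sum, img)) / (len(img) * len(img[0]))
    return [[1 if p < mean else 0 for p in row] for row in img]

def extract_features(mask):
    # Hypothetical features: lesion area fraction and a border-irregularity
    # proxy (count of horizontal lesion/background transitions).
    area = sum(map(sum, mask)) / (len(mask) * len(mask[0]))
    edges = sum(1 for row in mask for a, b in zip(row, row[1:]) if a != b)
    return {"area": area, "irregularity": edges}

def classify(feats):
    # Toy rule standing in for a trained classifier.
    return "suspicious" if feats["area"] > 0.5 else "benign"

def diagnose(img):
    # Apply the pipeline stages in the order listed in the abstract.
    stages = [preprocess, segment, extract_features, classify]
    return reduce(lambda x, f: f(x), stages, img)
```

    A real system would replace each placeholder with a validated component and report sensitivity and specificity on held-out images.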

  5. Rapid Countermeasure Discovery against Francisella tularensis Based on a Metabolic Network Reconstruction

    DTIC Science & Technology

    2013-05-21

    minimum inhibitory concentrations and mammalian cell cytotoxicities. The most promising compound had a low molecular weight, was non-toxic, and abolished bacterial growth at 13 mM, with putative activity against pantetheine-phosphate adenylyltransferase, an... time period. Metabolic genome-scale models of bacteria have provided a computational framework for in silico simulations to evaluate how metabolic...

  6. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, it could replace the looser bounds currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints.
    The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
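    The maximum-flow subroutine whose cost dominates the bound above can be sketched with the classic Edmonds-Karp algorithm (BFS augmenting paths on the residual graph). This is a generic illustration of that inner primitive, not the specific implementation used for resource-level envelopes.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow on a dense capacity matrix."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break  # no augmenting path left: flow is maximum
        # Find the bottleneck residual capacity along the path.
        aug, v = float("inf"), sink
        while v != source:
            u = parent[v]
            aug = min(aug, capacity[u][v] - flow[u][v])
            v = u
        # Push the bottleneck amount along the path.
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug
    return total
```

    On the auxiliary network of the abstract this routine would be called with 2N nodes plus source and sink; the O(N^1.5) practical cost quoted above refers to such instances.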

  7. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model by computing the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  8. Computing Trimmed, Mean-Camber Surfaces At Minimum Drag

    NASA Technical Reports Server (NTRS)

    Lamar, John E.; Hodges, William T.

    1995-01-01

    VLMD computer program determines subsonic mean-camber surfaces of trimmed noncoplanar planforms with minimum vortex drag at specified lift coefficient. Up to two planforms designed together. Method based on subsonic vortex lattice theory; chord loading specification, ranging from rectangular to triangular, left to user. Program versatile and applied to isolated wings, wing/canard configurations, tandem wings, and wing/winglet configurations. Written in FORTRAN.

  9. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE PAGES

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...

    2017-05-18

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  10. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  11. Estimation of the transmissivity of thin leaky-confined aquifers from single-well pumping tests

    NASA Astrophysics Data System (ADS)

    Worthington, Paul F.

    1981-01-01

    Data from the quasi-equilibrium phases of a step-drawdown test are used to evaluate the coefficient of non-linear head losses subject to the assumption of a constant effective well radius. After applying a well-loss correction to the observed drawdowns of the first step, an approximation method is used to estimate a pseudo-transmissivity of the aquifer from a single value of time-variant drawdown. The pseudo-transmissivities computed for each of a sequence of values of time pass through a minimum when there is least manifestation of casing-storage and leakage effects, phenomena to which pumping-test data of this kind are particularly susceptible. This minimum pseudo-transmissivity, adjusted for partial penetration effects where appropriate, constitutes the best possible estimate of aquifer transmissivity. The ease of application of the overall procedure is illustrated by a practical example.
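    As a hedged illustration of the selection step described above, suppose the single-drawdown estimator is of Cooper-Jacob type (the paper's exact formulation is not reproduced here; all parameter names and values below are illustrative assumptions). A pseudo-transmissivity can then be found by fixed-point iteration from one drawdown value, and the minimum over the time sequence taken as the best estimate:

```python
import math

def cooper_jacob_transmissivity(Q, s, t, r, S, iters=50):
    # Hypothetical single-well estimator assuming the Cooper-Jacob
    # approximation s = (2.3 Q / (4 pi T)) * log10(2.25 T t / (r^2 S)).
    # Fixed-point iteration on T given discharge Q (m^3/s), drawdown s (m),
    # time t (s), effective well radius r (m) and storage coefficient S.
    T = 1e-3  # initial guess, m^2/s
    for _ in range(iters):
        T = (2.3 * Q / (4 * math.pi * s)) * math.log10(2.25 * T * t / (r * r * S))
    return T

def best_transmissivity(records, Q, r, S):
    # Pseudo-transmissivity for each (time, corrected drawdown) pair; the
    # minimum over the sequence is taken as the best estimate, i.e. the point
    # of least casing-storage and leakage distortion.
    return min(cooper_jacob_transmissivity(Q, s, t, r, S) for t, s in records)
```

    In practice the drawdowns fed in would first be corrected for non-linear well losses, as the abstract describes.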

  12. APSIDAL MOTION AND A LIGHT CURVE SOLUTION FOR 13 LMC ECCENTRIC ECLIPSING BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zasche, P.; Wolf, M.; Vraštil, J.

    2015-12-15

    New CCD observations for 13 eccentric eclipsing binaries from the Large Magellanic Cloud were carried out using the Danish 1.54 m telescope located at the La Silla Observatory in Chile. These systems were observed for their times of minimum and 56 new minima were obtained. These are needed for accurate determination of the apsidal motion. Besides that, in total 436 times of minimum were derived from the photometric databases OGLE and MACHO. The O–C diagrams of minimum timings for these B-type binaries were analyzed and the parameters of the apsidal motion were computed. The light curves of these systems were fitted using the program PHOEBE, giving the light curve parameters. We derived for the first time relatively short periods of the apsidal motion ranging from 21 to 107 years. The system OGLE-LMC-ECL-07902 was also analyzed using the spectra and radial velocities, resulting in masses of 6.8 and 4.4 M⊙ for the eclipsing components. For one system (OGLE-LMC-ECL-20112), the third-body hypothesis was also used to describe the residuals after subtraction of the apsidal motion, resulting in a period of about 22 years. For several systems an additional third light was also detected, which makes these systems suspected of triplicity.
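    The O–C analysis underlying such apsidal-motion fits starts from a linear ephemeris: the calculated minimum time is C = T0 + E·P for integer epoch E, and the residual O − C is what the apsidal-motion (or third-body) model is then fitted to. A minimal sketch of the residual computation (variable names are ours, not the paper's):

```python
def o_minus_c(observed_minima, t0, period):
    """O - C residuals against a linear ephemeris.

    observed_minima: times of observed minima (e.g. HJD)
    t0, period: reference epoch and orbital period of the ephemeris
    Returns a list of (epoch, residual) pairs, where epoch E is the
    nearest integer cycle count and residual = O - (t0 + E * period).
    """
    residuals = []
    for t_obs in observed_minima:
        epoch = round((t_obs - t0) / period)
        residuals.append((epoch, t_obs - (t0 + epoch * period)))
    return residuals
```

    For an apsidally precessing eccentric binary, the primary and secondary minima trace two slow, anti-phased oscillations in such a diagram, which is what the apsidal-motion parameters are fitted to.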

  13. Extending Asia Pacific bioinformatics into new realms in the "-omics" era.

    PubMed

    Ranganathan, Shoba; Eisenhaber, Frank; Tong, Joo Chuan; Tan, Tin Wee

    2009-12-03

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation dating back to 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 7-11, 2009 at Biopolis, Singapore. Besides bringing together scientists from the field of bioinformatics in this region, InCoB has actively engaged clinicians and researchers from the area of systems biology, to facilitate greater synergy between these two groups. InCoB2009 followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India), Hong Kong and Taipei (Taiwan), with InCoB2010 scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. The Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and symposia on Clinical Bioinformatics (CBAS), the Singapore Symposium on Computational Biology (SYMBIO) and training tutorials were scheduled prior to the scientific meeting, and provided ample opportunity for in-depth learning and special interest meetings for educators, clinicians and students. We provide a brief overview of the peer-reviewed bioinformatics manuscripts accepted for publication in this supplement, grouped into thematic areas. In order to facilitate scientific reproducibility and accountability, we have, for the first time, introduced minimum information criteria for our publications, including compliance with a Minimum Information about a Bioinformatics Investigation (MIABi). As the regional research expertise in bioinformatics matures, we have delineated a minimum set of bioinformatics skills required for addressing the computational challenges of the "-omics" era.

  14. On Channel-Discontinuity-Constraint Routing in Wireless Networks☆

    PubMed Central

    Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.

    2011-01-01

    Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as “Channel-Discontinuity-Constraint” (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 − 2 sin(θ/2))^−2. We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n2) fixed-size messages, by developing an extension of Edmonds’ algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n2) time improving the previous best algorithm which requires O(n3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
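    As a simplified illustration of the channel-discontinuity constraint, minimum-cost CDC-walks can be found by running Dijkstra over (node, channel-of-last-link) states, forbidding two consecutive links on the same channel. Note this is only a sketch of the constraint, not the paper's matching-based algorithm:

```python
import heapq
from itertools import count

def min_cost_cdc_path(links, source, target):
    """links: (u, v, channel, cost) undirected edges.
    Dijkstra over (node, incoming channel) states; an edge may not reuse
    the channel of the previous link (the CDC). Returns the minimum cost,
    or None if the target is unreachable under the constraint.
    """
    adj = {}
    for u, v, ch, w in links:
        adj.setdefault(u, []).append((v, ch, w))
        adj.setdefault(v, []).append((u, ch, w))
    best = {}
    tie = count()  # tie-breaker so the heap never compares channels
    pq = [(0, next(tie), source, None)]
    while pq:
        cost, _, u, ch = heapq.heappop(pq)
        if u == target:
            return cost
        if best.get((u, ch), float("inf")) < cost:
            continue  # stale queue entry
        for v, ch2, w in adj.get(u, []):
            if ch2 == ch:
                continue  # channel-discontinuity constraint
            if cost + w < best.get((v, ch2), float("inf")):
                best[(v, ch2)] = cost + w
                heapq.heappush(pq, (cost + w, next(tie), v, ch2))
    return None
```

    The state expansion multiplies the graph size by the number of channels, which is why the paper's O(n2) matching-based algorithm and spanner are of interest for large networks.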

  15. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary-analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  16. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  17. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  18. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  19. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  20. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  1. Effect of electromagnetic radiation on the coils used in aneurysm embolization.

    PubMed

    Lv, Xianli; Wu, Zhongxue; Li, Youxiang

    2014-06-01

    This study evaluated the effects of electromagnetic radiation in our daily lives on the coils used in aneurysm embolization. Faraday's electromagnetic induction principle was applied to analyze the effects of electromagnetic radiation on the coils used in aneurysm embolization. To induce a current of 0.5 mA in less than 5 mm platinum coils required to stimulate peripheral nerves, the minimum magnetic field will be 0.86 μT. To induce a current of 0.5 mA in platinum coils by a hair dryer, the minimum aneurysm radius is 2.5 mm (5 mm aneurysm). To induce a current of 0.5 mA in platinum coils by a computer or TV, the minimum aneurysm radius is 8.6 mm (approximately 17 mm aneurysm). The minimum magnetic field is much larger than the flux densities produced by computer and TV, while the minimum aneurysm radius is much larger than most aneurysm sizes at the flux densities produced by computer and TV. At present, the effects of electromagnetic radiation in our daily lives on intracranial coils do not produce a harmful reaction. Patients with coiled aneurysm are advised to avoid using hair dryers. This theory needs to be proved by further detailed complex investigations. Doctors should give patients additional instructions before the procedure, depending on this study.
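    The Faraday's-law estimate behind these thresholds can be sketched for a sinusoidal field threading a circular loop: peak EMF = A · (dB/dt)max = πr² · 2πf·B, and the induced current is EMF/R. All numerical inputs below (field strength, frequency, loop resistance) are illustrative assumptions, not the paper's values:

```python
import math

def induced_current(radius_m, b_peak_tesla, freq_hz, loop_resistance_ohm):
    # Faraday's law for a sinusoidal field B(t) = B_peak * sin(2*pi*f*t)
    # threading a circular loop of the given radius:
    #   peak EMF = (pi r^2) * (2 pi f) * B_peak,  I = EMF / R.
    area = math.pi * radius_m ** 2
    emf_peak = area * 2 * math.pi * freq_hz * b_peak_tesla
    return emf_peak / loop_resistance_ohm

# Hypothetical example: a 2.5 mm radius loop in a 1 uT, 50 Hz field,
# with an assumed 1 milliohm loop resistance.
i_peak = induced_current(0.0025, 1e-6, 50.0, 1e-3)
```

    Because the induced current scales with the loop area (r²), larger aneurysms need weaker fields to reach a given current, which is why the abstract reports a minimum aneurysm radius for each appliance.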

  2. Effect of Electromagnetic Radiation on the Coils Used in Aneurysm Embolization

    PubMed Central

    Lv, Xianli; Wu, Zhongxue; Li, Youxiang

    2014-01-01

    Summary This study evaluated the effects of electromagnetic radiation in our daily lives on the coils used in aneurysm embolization. Faraday’s electromagnetic induction principle was applied to analyze the effects of electromagnetic radiation on the coils used in aneurysm embolization. To induce a current of 0.5 mA in less than 5 mm platinum coils required to stimulate peripheral nerves, the minimum magnetic field will be 0.86 μT. To induce a current of 0.5 mA in platinum coils by a hair dryer, the minimum aneurysm radius is 2.5 mm (5 mm aneurysm). To induce a current of 0.5 mA in platinum coils by a computer or TV, the minimum aneurysm radius is 8.6 mm (approximately 17 mm aneurysm). The minimum magnetic field is much larger than the flux densities produced by computer and TV, while the minimum aneurysm radius is much larger than most aneurysm sizes at the flux densities produced by computer and TV. At present, the effects of electromagnetic radiation in our daily lives on intracranial coils do not produce a harmful reaction. Patients with coiled aneurysm are advised to avoid using hair dryers. This theory needs to be proved by further detailed complex investigations. Doctors should give patients additional instructions before the procedure, depending on this study. PMID:24976203

  3. Hands-on work fine-tunes X-band PIN-diode duplexer

    NASA Astrophysics Data System (ADS)

    Schneider, P.

    1985-06-01

    Computer-aided design (CAD) programs for fabricating PIN-diode duplexers are useful in avoiding time-consuming cut-and-try techniques. Nevertheless, to attain minimum insertion loss, only experimentation yields the optimum microstrip circuitry. A PIN-diode duplexer, consisting of two SPST PIN-diode switches and a pair of 3-dB Lange microstrip couplers, designed for an X-band transmit/receive module exemplifies what is possible when computer-derived designs and experimentation are used together. Differences between the measured and computer-generated figures for insertion loss can be attributed to several factors not included in the CAD program - for example, radiation and connector losses. Mechanical tolerances of the microstrip PC board and variations in the SMA connector-to-microstrip transition contribute to the discrepancy.

  4. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines and it runs on an IBM 370 time-sharing system.

  5. Determination of the Residence Time of Food Particles During Aseptic Sterilization

    NASA Technical Reports Server (NTRS)

    Carl, J. R.; Arndt, G. D.; Nguyen, T. X.

    1994-01-01

    The paper describes a non-invasive method to measure the time an individual particle takes to move through a length of stainless steel pipe. The food product is in two-phase flow (liquids and solids) and passes through a pipe at pressures of approximately 60 psig and temperatures of 270-285 F. The proposed solution is based on the detection of transitory amplitude and/or phase changes in a microwave transmission path caused by the passage of the particles of interest. The particles are enhanced in some way, as discussed later, such that they produce transitory changes distinctive enough not to be mistaken for normal variations in the received signal (caused by the non-homogeneous nature of the medium). Two detectors (transmission paths across the pipe) are required, placed at a known separation. A minimum transit time is measured, from which the maximum velocity can be determined; this gives the minimum residence time. Average velocity and statistical variations can also be computed so that the amount of 'over-cooking' can be determined.
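
    The two-detector velocity argument reduces to two divisions. In this hypothetical Python example, the detector separation, fastest transit time, and pipe length are made-up values.

```python
def min_residence_time(detector_sep_m, min_transit_s, pipe_length_m):
    """The fastest particle sets the maximum velocity (separation over the
    minimum detector-to-detector transit time); the minimum residence time
    is the pipe length divided by that maximum velocity."""
    v_max = detector_sep_m / min_transit_s
    return pipe_length_m / v_max

# Hypothetical numbers: detectors 0.5 m apart, fastest crossing 0.1 s, 30 m pipe.
t = min_residence_time(0.5, 0.1, 30.0)
print(t)  # -> 6.0 seconds
```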

  6. Communication scheme based on evolutionary spatial 2×2 games

    NASA Astrophysics Data System (ADS)

    Ziaukas, Pranas; Ragulskis, Tautvydas; Ragulskis, Minvydas

    2014-06-01

    A visual communication scheme based on evolutionary spatial 2×2 games is proposed in this paper. Self-organizing patterns induced by complex interactions between competing individuals are exploited for hiding and transmitting secret visual information. Properties of the proposed communication scheme are discussed in detail. It is shown that the hiding capacity of the system (the minimum size of the detectable primitives and the minimum distance between two primitives) is sufficient for the effective transmission of digital dichotomous images. It is also demonstrated that the proposed scheme is resilient to time-backwards and plain-image attacks and is highly sensitive to perturbations of the private and public keys. Several computational experiments demonstrate the effectiveness of the proposed communication scheme.

  7. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum-error landing problem under convexified constraints, then applies that solution to a minimum-fuel landing problem under convexified constraints. The result is a minimum-error, minimum-fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.

  8. Numerical Solutions for a Cylindrical Laser Diffuser Flowfield

    DTIC Science & Technology

    1990-06-01

    exhaust conditions with minimum losses to optimize performance of the system. Thus, the handling of the system of shock waves to decelerate the flow...requirement for exhaustive experimental work will result in significant savings of both time and resources. As more advanced computers are developed, the...Mach number (ɚ.5) flows. Recent interest in hypersonic engine inlet performance has resulted in an extension of the methodology to high Mach number

  9. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest-descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
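
    The iterative deconvolution idea can be sketched in one dimension. This Python toy applies projected steepest descent to find a nonnegative dose whose convolution with an assumed point spread function matches a target exposure; the PSF, pattern, and step size are illustrative, not from the paper.

```python
# 1-D toy sketch of iterative dose correction by steepest descent:
# find a nonnegative dose d such that (psf * d) matches the target exposure.

def convolve(signal, kernel):
    """Same-size convolution with a centered kernel, truncated at the edges."""
    n, m = len(signal), len(kernel)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                out[i] += signal[k] * kernel[j]
    return out

psf = [0.1, 0.8, 0.1]                 # assumed normalized point spread function
target = [0, 0, 1, 1, 1, 0, 0]        # desired exposure profile (toy pattern)
dose = list(map(float, target))       # initial guess: dose equals target
for _ in range(200):
    residual = [c - t for c, t in zip(convolve(dose, psf), target)]
    grad = convolve(residual, psf)    # symmetric PSF, so K^T equals K
    # Gradient step, then projection onto physically meaningful doses (>= 0).
    dose = [max(0.0, d - 0.5 * g) for d, g in zip(dose, grad)]

print([round(d, 3) for d in dose])
```

    The corrected dose slightly overshoots inside the feature and drops at its edges, which is exactly the modulation proximity-effect correction is meant to produce.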

  10. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling a robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
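
    The priority-list idea can be illustrated with a much-simplified sketch. The following Python code performs level-based (highest-level-first) list scheduling of a small task graph on p identical processors, ignoring the paper's communication costs and weighted bipartite matching for brevity; the task graph is hypothetical.

```python
import heapq

def schedule(tasks, preds, p):
    """tasks: {name: cost}; preds: {name: set of predecessors}; p processors.
    Returns the makespan of a greedy highest-level-first schedule."""
    succs = {t: set() for t in tasks}
    for t, ps in preds.items():
        for q in ps:
            succs[q].add(t)

    # 'Level' = longest cost path from a task to any exit; used as priority.
    level = {}
    def lv(t):
        if t not in level:
            level[t] = tasks[t] + max((lv(s) for s in succs[t]), default=0)
        return level[t]

    ready = [(-lv(t), t) for t in tasks if not preds[t]]
    heapq.heapify(ready)
    proc_free = [0.0] * p            # when each processor next becomes idle
    finish = {}
    remaining = {t: len(preds[t]) for t in tasks}
    while ready:
        _, t = heapq.heappop(ready)
        i = min(range(p), key=lambda k: proc_free[k])   # earliest-free processor
        start = max(proc_free[i],
                    max((finish[q] for q in preds[t]), default=0.0))
        finish[t] = start + tasks[t]
        proc_free[i] = finish[t]
        for s in succs[t]:
            remaining[s] -= 1
            if remaining[s] == 0:
                heapq.heappush(ready, (-lv(s), s))
    return max(finish.values())

# Hypothetical 4-module graph: c depends on a; d depends on a and b.
tasks = {"a": 2, "b": 3, "c": 2, "d": 1}
preds = {"a": set(), "b": set(), "c": {"a"}, "d": {"a", "b"}}
print(schedule(tasks, preds, p=2))   # -> 4.0
```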

  11. DTWT (Dispersive Tsunami Wave Tool): a new tool for computing the complete dispersion of tsunami travel time.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique

    2017-04-01

    We present a tool for computing the complete arrival times of the dispersed wave-train of a tsunami. The computation uses the exact formulation of tsunami dispersion (without approximation) at any desired period, from one hour or more (gravity-wave propagation) down to 10 s (the highly dispersed mode). The travel times are computed by summing the time needed for the tsunami to cross each elementary block of a bathymetry grid along a path between source and receiver at a given period. In addition, the source dimensions and the focal mechanism are taken into account to adjust the minimum travel time to the different possible points of emission on the source. A possible application of this tool is forecasting the arrival time of late tsunami waves that could produce resonance of some bays and sites at higher frequencies than the gravity mode. The theoretical arrival times are compared to observed ones, to the results obtained by TTT (P. Wessel, 2009), and to those obtained by numerical simulations. References: Wessel, P. (2009). Analysis of observed and predicted tsunami travel times for the Pacific and Indian oceans. Pure Appl. Geophys., 166:301-324.
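
    The block-summation idea can be sketched directly from the exact linear dispersion relation ω² = gk·tanh(kh). In this Python illustration, the wavenumber at each period is found by bisection and the crossing times of the blocks are summed. The uniform-depth path is made up, and phase speed is used for simplicity (a travel-time tool may instead work with group velocity), so this is a sketch of the principle only.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(period_s, depth_m):
    """Phase speed c = omega/k from the exact linear dispersion relation
    omega^2 = g*k*tanh(k*h), with k found by bisection."""
    omega = 2 * math.pi / period_s
    lo, hi = 1e-8, 10.0
    while omega ** 2 > G * hi * math.tanh(hi * depth_m):
        hi *= 2                       # grow bracket until the root is inside
    for _ in range(100):
        k = 0.5 * (lo + hi)
        if G * k * math.tanh(k * depth_m) < omega ** 2:
            lo = k
        else:
            hi = k
    return omega / k

def travel_time(period_s, depths_m, block_width_m):
    """Sum the crossing time of each bathymetry block along the path."""
    return sum(block_width_m / phase_speed(period_s, h) for h in depths_m)

# Illustrative path: 100 blocks of 10 km over a 4000 m deep ocean.
depths = [4000.0] * 100
t_long = travel_time(3600.0, depths, 10_000.0)   # 1 h period: near sqrt(g*h)
t_short = travel_time(60.0, depths, 10_000.0)    # 60 s period: dispersed, slower
print(round(t_long), round(t_short))
```

    The long-period wave travels at nearly the shallow-water speed sqrt(g·h), while the 60 s component is strongly dispersed and arrives much later, which is exactly the kind of late arrival the tool is designed to forecast.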

  12. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
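
    For readers unfamiliar with the underlying transform, here is a minimal sequential (non-parallel, non-FPGA) Python sketch of Hough voting: each edge point votes for the (θ, ρ) line parameters consistent with it, and the accumulator peak identifies the dominant line. The test points are synthetic.

```python
import math

def hough_peak(points, n_theta=180, rho_res=1.0):
    """Return (theta, rho) of the accumulator peak for a point set, using
    the normal form x*cos(theta) + y*sin(theta) = rho."""
    rho_max = math.ceil(max(math.hypot(x, y) for x, y in points))
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int(round((rho + rho_max) / rho_res))   # shift rho to >= 0
            acc[ti][ri] += 1
    ti, ri = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
                 key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * ti / n_theta, ri * rho_res - rho_max

# Collinear synthetic points on the vertical line x = 5.
theta, rho = hough_peak([(5, y) for y in range(20)])
print(round(theta, 3), round(rho, 1))   # -> 0.0 5.0
```

    The paper's contribution is orthogonal to this sketch: it parallelizes the theta loop across FPGA resources (angle-level parallelism) and refines the minimum computational step for accuracy.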

  13. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.

  14. Computer simulations of equilibrium magnetization and microstructure in magnetic fluids

    NASA Astrophysics Data System (ADS)

    Rosa, A. P.; Abade, G. C.; Cunha, F. R.

    2017-09-01

    In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres of equal diameter. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, which replicates a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and from Ewald sums is performed using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function g0(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results for g0(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and of dipole-dipole interactions. Very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures such as dimer and trimer chains, with trimers an order of magnitude less likely to form than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function differs from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions is explored for both boundary conditions.
Results show that at dilute regimes and with moderate dipole-dipole interactions, the standard method of minimum image is both accurate and computationally efficient. Otherwise, lattice sums of magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also quantitatively verified by comparing the magnetization obtained from numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used in order to examine magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation have shown the existence of a critical value of the dipole-dipole interaction parameter. For strength of the interactions below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. On the other hand, for strength of dipole interactions beyond its critical value, the relaxation time increases exponentially with the strength of dipole-dipole interaction. Although we have considered equilibrium conditions, the obtained results have far-reaching implications for the analysis of magnetic suspensions under external flow.
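
    The minimum-image convention compared above is easy to state in code. Here is a minimal Python sketch for a cubic periodic box (illustrative, not the paper's simulation code): each pair interaction uses the nearest periodic image of the neighbor.

```python
def minimum_image(dx, box):
    """Wrap one displacement component into [-box/2, box/2]."""
    return dx - box * round(dx / box)

def pair_distance(p1, p2, box):
    """Distance between two particles under the minimum-image convention."""
    return sum(minimum_image(a - b, box) ** 2 for a, b in zip(p1, p2)) ** 0.5

box = 10.0
# Particles near opposite faces are actually close through the boundary.
print(pair_distance((0.5, 0.0, 0.0), (9.5, 0.0, 0.0), box))   # -> 1.0
```

    This is why minimum image suffices in dilute, weakly interacting regimes: only the nearest image contributes appreciably. When dipolar interactions are strong and long-ranged, contributions from all periodic images matter, which is where the lattice (Ewald) sums come in.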

  15. Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Hoadley, Sherwood Tiffany

    1991-01-01

    The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data weighting options are presented. The first normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. In the second, each tabulated coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error in that coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms, at the expense of less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.

  16. The use of inexpensive computer-based scanning survey technology to perform medical practice satisfaction surveys.

    PubMed

    Shumaker, L; Fetterolf, D E; Suhrie, J

    1998-01-01

    The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs, which allow computer entry of such scanned questionnaire results directly into PC based relational databases, have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys and a variety of other evidence gathering tools have been deployed.

  17. TORC3: Token-ring clearing heuristic for currency circulation

    NASA Astrophysics Data System (ADS)

    Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael

    2012-10-01

    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.

  18. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum-weight boost and buck-boost converter designs is presented. The utility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum-weight design optimization of boost and buck-boost power converters. ALAG-based computer simulation results for these two minimum-weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents such information as the weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.

  19. Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.

    1977-01-01

    Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air gapped magnetic structures are presented. Methods for the analysis and design of full wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.

  20. Comparison of Implicit Schemes for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1995-01-01

    For a computational flow simulation tool to be useful in a design environment, it must be very robust and efficient. To develop such a tool for incompressible flow applications, a number of different implicit schemes are compared for several two-dimensional flow problems in the current study. The schemes include Point-Jacobi relaxation, Gauss-Seidel line relaxation, incomplete lower-upper decomposition, and the generalized minimum residual method preconditioned with each of the three other schemes. The efficiency of the schemes is measured in terms of the computing time required to obtain a steady-state solution for the laminar flow over a backward-facing step, the flow over a NACA 4412 airfoil, and the flow over a three-element airfoil using overset grids. The flow solver used in the study is the INS2D code that solves the incompressible Navier-Stokes equations using the method of artificial compressibility and upwind differencing of the convective terms. The results show that the generalized minimum residual method preconditioned with the incomplete lower-upper factorization outperforms all other methods by at least a factor of 2.
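
    The relative behavior of the simpler relaxation schemes can be reproduced on a toy problem. This pure-Python sketch (not INS2D) compares iteration counts of Point-Jacobi and Gauss-Seidel on a small diagonally dominant tridiagonal system; for such systems Gauss-Seidel is known to need roughly half the Jacobi iterations, which is the same kind of gap the study measures in computing time.

```python
def jacobi(A, b, tol=1e-10, max_it=10_000):
    """Point-Jacobi: every update uses only the previous iterate."""
    n = len(b); x = [0.0] * n
    for it in range(1, max_it + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(u - v) for u, v in zip(x, x_new)) < tol:
            return x_new, it
        x = x_new
    return x, max_it

def gauss_seidel(A, b, tol=1e-10, max_it=10_000):
    """Gauss-Seidel: updates within a sweep are used immediately."""
    n = len(b); x = [0.0] * n
    for it in range(1, max_it + 1):
        diff = 0.0
        for i in range(n):
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            return x, it
    return x, max_it

# 1-D Poisson-like tridiagonal system (toy stand-in for a flow problem).
n = 20
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x_j, it_j = jacobi(A, b)
x_gs, it_gs = gauss_seidel(A, b)
print(it_j, it_gs)   # Gauss-Seidel needs roughly half the Jacobi iterations
```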

  1. Method and apparatus for determining and utilizing a time-expanded decision network

    NASA Technical Reports Server (NTRS)

    de Weck, Olivier (Inventor); Silver, Matthew (Inventor)

    2012-01-01

    A method, apparatus and computer program for determining and utilizing a time-expanded decision network is presented. A set of potential system configurations is defined. Next, switching costs are quantified to create a "static network" that captures the difficulty of switching among these configurations. A time-expanded decision network is provided by expanding the static network in time, including chance and decision nodes. Minimum cost paths through the network are evaluated under plausible operating scenarios. The set of initial design configurations are iteratively modified to exploit high-leverage switches and the process is repeated to convergence. Time-expanded decision networks are applicable, but not limited to, the design of systems, products, services and contracts.
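
    Once the static network is expanded in time, evaluating minimum-cost paths is standard shortest-path work. Below is a hedged Python sketch (not the patented method) with hypothetical configurations, switching costs, and per-period operating costs, solved with Dijkstra's algorithm over (period, configuration) nodes.

```python
import heapq

def min_cost_path(configs, switch_cost, operate_cost, periods, start):
    """switch_cost[a][b]: cost to switch configuration a -> b between periods;
    operate_cost[c][t]: cost to run configuration c in period t.
    Returns the minimum total cost over the horizon starting from 'start'."""
    dist = {(0, start): operate_cost[start][0]}
    pq = [(dist[(0, start)], 0, start)]
    while pq:
        d, t, c = heapq.heappop(pq)
        if d > dist.get((t, c), float("inf")):
            continue                      # stale queue entry
        if t == periods - 1:
            return d                      # first final-period pop is optimal
        for nxt in configs:
            nd = d + switch_cost[c][nxt] + operate_cost[nxt][t + 1]
            if nd < dist.get((t + 1, nxt), float("inf")):
                dist[(t + 1, nxt)] = nd
                heapq.heappush(pq, (nd, t + 1, nxt))

# Hypothetical example: a "small" configuration is cheap early, a "large"
# one cheap late; switching has a one-time cost.
configs = ["small", "large"]
switch = {"small": {"small": 0, "large": 5}, "large": {"small": 2, "large": 0}}
operate = {"small": [1, 1, 6, 6], "large": [3, 3, 2, 2]}
print(min_cost_path(configs, switch, operate, periods=4, start="small"))  # -> 11
```

    The optimal path pays the switching cost once, mid-horizon, which is the "high-leverage switch" the iteration in the abstract is designed to surface.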

  2. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
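
    The 7-day annual minimum series underlying the 7Q10 statistic can be sketched as follows. The flows are made-up values, and a real analysis would then fit a frequency distribution (e.g., log-Pearson Type III) to the annual minima to find the value with a 10-year recurrence interval, which is what the toolbox automates.

```python
def seven_day_annual_min(daily_flows_by_year):
    """For each year, return the lowest 7-day moving average of daily
    discharge. daily_flows_by_year: {year: [daily discharge values]}."""
    out = {}
    for year, q in daily_flows_by_year.items():
        means = [sum(q[i:i + 7]) / 7 for i in range(len(q) - 6)]
        out[year] = min(means)
    return out

# Made-up daily discharge records (truncated years, for illustration only).
flows = {2020: [10, 9, 8, 7, 7, 7, 7, 7, 8, 9, 12],
         2021: [5, 5, 5, 5, 5, 5, 5, 20]}
print(seven_day_annual_min(flows))
```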

  3. Sequence-dependent rotation axis changes and interaction torque use in overarm throwing.

    PubMed

    Hansen, Clint; Rezzoug, Nasser; Gorce, Philippe; Venture, Gentiane; Isableu, Brice

    2016-01-01

    We examined the role of rotation axes during an overarm throwing task. Participants were asked to throw a ball at a target at maximal velocity. The purpose of this study was to examine whether the minimum inertia axis would be exploited during the throwing phases, a time when internal-external rotations of the shoulder are particularly important. A motion capture system was used to evaluate performance and to compute the potential axes of rotation (the minimum inertia axis, the shoulder-centre of mass axis and the shoulder-elbow axis). More specifically, we investigated whether a velocity-dependent change in rotation axes can be observed in the different throwing phases and whether the control obeys the principle of minimum inertia resistance. Our results showed that the limb's rotation axis mainly coincides with the minimum inertia axis during the cocking phase and with the shoulder-elbow axis during the acceleration phase. Besides these rotation axis changes, the use of interaction torque is also sequence-dependent. The sequence-dependent rotation axis changes, together with the use of interaction torque during the acceleration phase, could be a key factor in the production of hand velocity at ball release.

  4. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating daily temperature as the average of the minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).
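
    The direction of the bias is easy to demonstrate with a synthetic diurnal cycle. In this Python illustration (made-up temperatures, not station data), an asymmetric cycle with a sharp afternoon peak makes (Tmin+Tmax)/2 overestimate the true hourly mean.

```python
import math

hours = range(24)
# Synthetic asymmetric diurnal cycle: a sharp warm afternoon peak (15:00)
# over a flat cool base; values are illustrative only.
temps = [10 + 8 * math.exp(-((h - 15) ** 2) / 8) for h in hours]

hourly_mean = sum(temps) / 24                    # reference daily mean
minmax_mean = (min(temps) + max(temps)) / 2      # conventional (Tmin+Tmax)/2
print(round(hourly_mean, 2), round(minmax_mean, 2))
```

    Because the peak is brief, the max pulls (Tmin+Tmax)/2 well above the hourly mean; a symmetric sinusoidal cycle would show no bias, which is why the size of the effect is station- and season-dependent in the study.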

  5. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time-evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time up to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as minimizing computed errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for evaluating future flight vehicle designs in terms of vibratory loads.

  6. Simulating the Gradually Deteriorating Performance of an RTG

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry

    2008-01-01

    Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
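
    The dominant degradation term, decay of the radioisotope heat source, can be sketched with the standard exponential-decay law. The Pu-238 half-life is a known constant, but the source size and the fixed conversion efficiency below are illustrative assumptions (Degra itself also models the gradual degradation of the thermoelectric material, which is omitted here).

```python
import math

T_HALF_PU238_YR = 87.7   # half-life of Pu-238, years

def thermal_watts(q0_watts, years):
    """Thermal power of the decaying source: Q(t) = Q0 * exp(-ln2 * t / T)."""
    return q0_watts * math.exp(-math.log(2) * years / T_HALF_PU238_YR)

def electrical_watts(q0_watts, years, efficiency=0.066):
    # Real thermocouple efficiency also degrades over time; held fixed here.
    return efficiency * thermal_watts(q0_watts, years)

# An assumed 4400 W(thermal) source, roughly GPHS-RTG class, over 14 years.
for yr in (0, 7, 14):
    print(yr, round(electrical_watts(4400.0, yr), 1))
```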

  7. Experimentally validated computational modeling of organic binder burnout from green ceramic compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.

    The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.

  8. A neural computational model for animal's time-to-collision estimation.

    PubMed

    Wang, Ling; Yao, Dezhong

    2013-04-17

The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretic formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its rate of change, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simpler computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronally implemented estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.

  9. Subscale Development of Advanced ABM Graphite/Epoxy Composite Structure

    DTIC Science & Technology

    1978-01-01

laminate analysis computer code (Reference 5). The output of this code yields lamina stresses and strains, equivalent elastic and shear moduli for the...was not accounted for. Therefore the net effect was that the analysis tended to yield conservative results. For design purposes, this conservative...extracted using a Soxhlet Extraction apparatus, recycling the solvent at least 4 to 10 times every hour for a minimum of 6 hours. (4) All samples are

  10. Discrete Methods and their Applications

    DTIC Science & Technology

    1993-02-03

problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute

  11. Efficient Symbolic Task Planning for Multiple Mobile Robots

    DTIC Science & Technology

    2016-12-13

Efficient Symbolic Task Planning for Multiple Mobile Robots Yuqian Jiang December 13, 2016 Abstract Symbolic task planning enables a robot to make...high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot ...time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of

  12. Assessing Airflow Sensitivity to Healthy and Diseased Lung Conditions in a Computational Fluid Dynamics Model Validated In Vitro.

    PubMed

    Sul, Bora; Oppito, Zachary; Jayasekera, Shehan; Vanger, Brian; Zeller, Amy; Morris, Michael; Ruppert, Kai; Altes, Talissa; Rakesh, Vineet; Day, Steven; Robinson, Risa; Reifman, Jaques; Wallqvist, Anders

    2018-05-01

    Computational models are useful for understanding respiratory physiology. Crucial to such models are the boundary conditions specifying the flow conditions at truncated airway branches (terminal flow rates). However, most studies make assumptions about these values, which are difficult to obtain in vivo. We developed a computational fluid dynamics (CFD) model of airflows for steady expiration to investigate how terminal flows affect airflow patterns in respiratory airways. First, we measured in vitro airflow patterns in a physical airway model, using particle image velocimetry (PIV). The measured and computed airflow patterns agreed well, validating our CFD model. Next, we used the lobar flow fractions from a healthy or chronic obstructive pulmonary disease (COPD) subject as constraints to derive different terminal flow rates (i.e., three healthy and one COPD) and computed the corresponding airflow patterns in the same geometry. To assess airflow sensitivity to the boundary conditions, we used the correlation coefficient of the shape similarity (R) and the root-mean-square of the velocity magnitude difference (Drms) between two velocity contours. Airflow patterns in the central airways were similar across healthy conditions (minimum R, 0.80) despite variations in terminal flow rates but markedly different for COPD (minimum R, 0.26; maximum Drms, ten times that of healthy cases). In contrast, those in the upper airway were similar for all cases. Our findings quantify how variability in terminal and lobar flows contributes to airflow patterns in respiratory airways. They highlight the importance of using lobar flow fractions to examine physiologically relevant airflow characteristics.
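    The two sensitivity metrics named in the abstract, a shape-similarity correlation coefficient (R) and a root-mean-square velocity-magnitude difference (Drms), can be sketched for two sampled velocity contours. This is a generic illustration over 1-D samples, not the study's actual post-processing code; function names are hypothetical.

```python
import math

def rms_difference(u, v):
    # Root-mean-square of the pointwise velocity-magnitude difference (Drms analogue)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)) / len(u))

def shape_correlation(u, v):
    # Pearson correlation coefficient between two sampled contours (R analogue)
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)
```

    Note the complementary roles: R is insensitive to a uniform scaling of one contour (it measures shape), while Drms captures the absolute magnitude mismatch, which is why the study reports both.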

  13. Scaling laws for oxygen transport across the space-filling system of respiratory membranes in the human lung

    NASA Astrophysics Data System (ADS)

    Hou, Chen

    Space-filling fractal surfaces play a fundamental role in how organisms function at various levels and in how structure determines function at different levels. In this thesis, we develop a quantitative theory of oxygen transport to and across the surface of the highly branched, space-filling system of alveoli, the fundamental gas exchange unit (acinar airways), in the human lung. Oxygen transport in the acinar airways is by diffusion, and we treat the two steps---diffusion through the branched airways, and transfer across the alveolar membranes---as a stationary diffusion-reaction problem, taking into account that there may be steep concentration gradients between the entrance and remote alveoli (screening). We develop a renormalization treatment of this screening effect and derive an analytic formula for the oxygen current across the cumulative alveolar membrane surface, modeled as a fractal, space-filling surface. The formula predicts the current from a minimum of morphological data of the acinus and appropriate values of the transport parameters, through a number of power laws (scaling laws). We find that the lung at rest operates near the borderline between partial screening and no screening; that it switches to no screening under exercise; and that the computed currents agree with measured values within experimental uncertainties. From an analysis of the computed current as a function of membrane permeability, we find that the space-filling structure of the gas exchanger is simultaneously optimal with respect to five criteria. The exchanger (i) generates a maximum oxygen current at minimum permeability; (ii) 'wastes' a minimum of surface area; (iii) maintains a minimum residence time of oxygen in the acinar airways; (iv) has a maximum fault tolerance to loss of permeability; and (v) generates a maximum current increase when switching from rest to exercise.

  14. Minimum Requirements for Accurate and Efficient Real-Time On-Chip Spike Sorting

    PubMed Central

    Navajas, Joaquin; Barsakcioglu, Deren Y.; Eftekhar, Amir; Jackson, Andrew; Constandinou, Timothy G.; Quiroga, Rodrigo Quian

    2014-01-01

    Background Extracellular recordings are performed by inserting electrodes in the brain, relaying the signals to external power-demanding devices, where spikes are detected and sorted in order to identify the firing activity of different putative neurons. A main caveat of these recordings is the necessity of wires passing through the scalp and skin in order to connect intracortical electrodes to external amplifiers. The aim of this paper is to evaluate the feasibility of an implantable platform (i.e. a chip) with the capability to wirelessly transmit the neural signals and perform real-time on-site spike sorting. New Method We computationally modelled a two-stage implementation for online, robust, and efficient spike sorting. In the first stage, spikes are detected on-chip and streamed to an external computer where mean templates are created and sent back to the chip. In the second stage, spikes are sorted in real-time through template matching. Results We evaluated this procedure using realistic simulations of extracellular recordings and describe a set of specifications that optimise performance while keeping to a minimum the signal requirements and the complexity of the calculations. Comparison with Existing Methods A key bottleneck for the development of long-term BMIs is to find an inexpensive method for real-time spike sorting. Here, we simulated a solution to this problem that uses both offline and online processing of the data. Conclusions Hardware implementations of this method therefore enable low-power long-term wireless transmission of multiple site extracellular recordings, with application to wireless BMIs or closed-loop stimulation designs. PMID:24769170

  15. Man/computer communication in a space environment

    NASA Technical Reports Server (NTRS)

    Hodges, B. C.; Montoya, G.

    1973-01-01

The present work reports on a study of the technology required to advance the state of the art in man/machine communications. The study involved the development and demonstration of both hardware and software to effectively implement man/computer interactive channels of communication. While tactile and visual man/computer communications equipment are standard methods of interaction with machines, man's speech is a natural medium for inquiry and control. As part of this study, a word recognition unit was developed capable of recognizing a minimum of one hundred different words or sentences in any one of the currently used conversational languages. The study has proven that efficiency in communication between man and computer can be achieved when the vocabulary to be used is structured in a manner compatible with the rigid communication requirements of the machine while at the same time responsive to the informational needs of the man.

  16. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored and consist of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly using satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
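    For context, an identifying code of a graph is a vertex subset C such that every vertex's closed neighborhood intersects C in a nonempty set that is unique to that vertex. The brute-force sketch below illustrates the base-case computation the abstract describes (and why it becomes intractable on a typical computer: the search is exponential). It is a generic illustration, not any of the paper's three methods; function names are hypothetical.

```python
from itertools import combinations

def is_identifying_code(adj, code):
    # Every vertex must receive a nonempty, unique signature N[v] ∩ C.
    seen = set()
    for v in adj:
        sig = frozenset((set(adj[v]) | {v}) & code)
        if not sig or sig in seen:
            return False
        seen.add(sig)
    return True

def minimum_identifying_code(adj):
    # Exhaustive search in order of increasing size; first hit is minimum.
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_identifying_code(adj, set(cand)):
                return set(cand)
    return None  # e.g. graphs with twin vertices admit no identifying code
```

    On the 4-vertex path 0-1-2-3, no 2-vertex code distinguishes all vertices, and the search returns a code of size 3.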

  17. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE

  18. A comparison of time-optimal interception trajectories for the F-8 and F-15

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Pettengill, James B.

    1990-01-01

The simulation results of a real time control algorithm for onboard computation of time-optimal intercept trajectories for the F-8 and F-15 aircraft are given. Due to the inherent aerodynamic and propulsion differences in the aircraft, there are major differences in their optimal trajectories. The most significant difference between the two aircraft is their flight envelopes. The F-8's optimal cruise velocity is thrust-limited, while the F-15's optimal cruise velocity is at the intersection of the Mach and dynamic-pressure constraint boundaries. This inherent difference necessitated the development of a proportional thrust controller for use as the F-15 approaches its optimal cruise energy. Documented here is the application of singular perturbation theory to the trajectory optimization problem, along with a summary of the control algorithms. Numerical results for the two aircraft are compared to illustrate the performance of the minimum time algorithm, and to compute the resulting flight paths.

  19. Placement of clock gates in time-of-flight optoelectronic circuits

    NASA Astrophysics Data System (ADS)

    Feehrer, John R.; Jordan, Harry F.

    1995-12-01

    Time-of-flight synchronized optoelectronic circuits capitalize on the highly controllable delays of optical waveguides. Circuits have no latches; synchronization is achieved by adjustment of the lengths of waveguides that connect circuit elements. Clock gating and pulse stretching are used to restore timing and power. A functional circuit requires that every feedback loop contain at least one clock gate to prevent cumulative timing drift and power loss. A designer specifies an ideal circuit, which contains no or very few clock gates. To make the circuit functional, we must identify locations in which to place clock gates. Because clock gates are expensive, add area, and increase delay, a minimal set of locations is desired. We cast this problem in graph-theoretical form as the minimum feedback edge set problem and solve it by using an adaptation of an algorithm proposed in 1966 [IEEE Trans. Circuit Theory CT-13, 399 (1966)]. We discuss a computer-aided-design implementation of the algorithm that reduces computational complexity and demonstrate it on a set of circuits.
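    The minimum feedback edge set problem the abstract casts (choose a minimum set of edges meeting every directed cycle, i.e. clock-gate locations) can be illustrated by brute force on a small digraph: remove candidate edge sets of increasing size until the remainder is acyclic. This is a generic exponential-time sketch for intuition, not the adapted 1966 algorithm the authors use; names are hypothetical.

```python
from collections import defaultdict, deque
from itertools import combinations

def is_acyclic(nodes, edges):
    # Kahn's algorithm: a digraph is acyclic iff every node can be topologically ordered.
    indeg = {v: 0 for v in nodes}
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    ordered = 0
    while queue:
        u = queue.popleft()
        ordered += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return ordered == len(nodes)

def minimum_feedback_edge_set(nodes, edges):
    # Try removal sets in increasing size; the first success is minimum.
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            rest = [e for e in edges if e not in removed]
            if is_acyclic(nodes, rest):
                return list(removed)
```

    In the circuit setting, the returned edges are exactly the waveguide segments where a clock gate must be inserted so that every feedback loop is broken.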

  20. Numerical simulation of mushrooms during freezing using the FEM and an enthalpy: Kirchhoff formulation

    NASA Astrophysics Data System (ADS)

    Santos, M. V.; Lespinard, A. R.

    2011-12-01

The shelf life of mushrooms is very limited since they are susceptible to physical and microbial attack; therefore they are usually blanched and immediately frozen for commercial purposes. The aim of this work was to develop a numerical model using the finite element technique to predict freezing times of mushrooms considering the actual shape of the product. The original heat transfer equation was reformulated using a combined enthalpy-Kirchhoff formulation, and an in-house computational program using Matlab 6.5 (MathWorks, Natick, Massachusetts) was developed, given the difficulties encountered when simulating this non-linear problem in commercial software. Digital images were used to generate the irregular contour and the domain discretization. The numerical predictions agreed with the experimental time-temperature curves during freezing of mushrooms (maximum absolute error <3.2°C), obtaining accurate results and minimum computer processing times. The codes were then applied to determine required processing times for different operating conditions (external fluid temperatures and surface heat transfer coefficients).

  1. Using Multiple Endmember Spectral Mixture Analysis of MODIS Data for Computing the Fire Potential Index in Southern California

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Roberts, D. A.

    2007-12-01

The Fire Potential Index (FPI) is currently the only operationally used wildfire susceptibility index in the United States that incorporates remote sensing data in addition to meteorological information. Its remote sensing component utilizes relative greenness (RG) derived from an NDVI time series as a proxy for computing the ratio of live to dead vegetation. This study investigates the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) as a more direct and physically reasonable way of computing the live ratio and applying it to the computation of the FPI. A time series of 16-day reflectance composites of Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to perform the analysis. Endmember selection for green vegetation (GV), non-photosynthetic vegetation (NPV) and soil was performed in two stages. First, a subset of suitable endmembers was selected from an extensive library of reference and image spectra for each class using Endmember Average Root Mean Square Error (EAR), Minimum Average Spectral Angle (MASA) and a count-based technique. Second, the most appropriate endmembers for the specific data set were selected from the subset by running a series of 2-endmember models on representative images and choosing the ones that modeled the majority of pixels. The final set of endmembers was used for running MESMA on southern California MODIS composites from 2000 to 2006. 3- and 4-endmember models were considered. The best model was chosen on a per-pixel basis according to the minimum root mean square error of the models at each level of complexity. Endmember fractions were normalized by the shade endmember to generate realistic fractions of GV and NPV. In order to validate the MESMA-derived GV fractions, they were compared against live ratio estimates from RG. A significant spatial and temporal relationship between both measures was found, indicating that GV fraction has the potential to substitute RG in computing the FPI.
To further test this hypothesis the live ratio estimates obtained from MESMA were used to compute daily FPI maps for southern California from 2001 to 2006. A validation with historical wildfire data from the MODIS Active Fire product was carried out over the same time period using logistic regression. Initial results show that MESMA-derived GV fraction can be used successfully for generating FPI maps of southern California.

  2. A Microworld Approach to the Formalization of Musical Knowledge.

    ERIC Educational Resources Information Center

    Honing, Henkjan

    1993-01-01

    Discusses the importance of applying computational modeling and artificial intelligence techniques to music cognition and computer music research. Recommends three uses of microworlds to trim computational theories to their bare minimum, allowing for better and easier comparison. (CFR)

  3. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.

  4. On actuator placement for robust time-optimal control of uncertain flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Liu, Qiang

    1992-01-01

    The problem of computing open-loop, on-off jet firing logic for flexible spacecraft in the face of plant modeling uncertainty is investigated. The primary control objective is to achieve a fast maneuvering time with a minimum of structural vibrations during and/or after a maneuver. This paper is also concerned with the problem of selecting a proper pair of jets for practical trade-offs among the maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated. A three-mass-spring model of flexible spacecraft with a rigid-body mode and two flexible modes is used to illustrate the concept.

  5. Aerospace Ground Equipment for model 4080 sequence programmer. A standard computer terminal is adapted to provide convenient operator to device interface

    NASA Technical Reports Server (NTRS)

    Nissley, L. E.

    1979-01-01

    The Aerospace Ground Equipment (AGE) provides an interface between a human operator and a complete spaceborne sequence timing device with a memory storage program. The AGE provides a means for composing, editing, syntax checking, and storing timing device programs. The AGE is implemented with a standard Hewlett-Packard 2649A terminal system and a minimum of special hardware. The terminal's dual tape interface is used to store timing device programs and to read in special AGE operating system software. To compose a new program for the timing device the keyboard is used to fill in a form displayed on the screen.

  6. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method at a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
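    The rounding idea at the heart of such algorithms is easiest to see in one dimension: each successive phase difference is shifted by the multiple of 2π that brings it into (-π, π], which removes the wrap-induced jumps. This 1-D sketch is for intuition only; it is not the paper's 2-D least-squares method, and the function name is hypothetical.

```python
import math

def unwrap_1d(wrapped):
    # Add the multiple of 2*pi that keeps each successive jump inside (-pi, pi].
    out = [wrapped[0]]
    for p in wrapped[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # rounding step removes the 2*pi jumps
        out.append(out[-1] + d)
    return out
```

    For example, wrapping a steadily increasing phase ramp into (-π, π] and feeding it through unwrap_1d recovers the original ramp, provided true sample-to-sample changes stay below π.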

  7. Dynamic Control of Adsorption Sensitivity for Photo-EMF-Based Ammonia Gas Sensors Using a Wireless Network

    PubMed Central

    Vashpanov, Yuriy; Choo, Hyunseung; Kim, Dongsoo Stephen

    2011-01-01

    This paper proposes an adsorption sensitivity control method that uses a wireless network and illumination light intensity in a photo-electromagnetic field (EMF)-based gas sensor for measurements in real time of a wide range of ammonia concentrations. The minimum measurement error for a range of ammonia concentration from 3 to 800 ppm occurs when the gas concentration magnitude corresponds with the optimal intensity of the illumination light. A simulation with LabView-engineered modules for automatic control of a new intelligent computer system was conducted to improve measurement precision over a wide range of gas concentrations. This gas sensor computer system with wireless network technology could be useful in the chemical industry for automatic detection and measurement of hazardous ammonia gas levels in real time. PMID:22346680

  8. The development of hurricane Inez, 1966, as shown by satellite nighttime radiometric and daytime television coverage

    NASA Technical Reports Server (NTRS)

    Allison, L. J.

    1972-01-01

A complete documentation of Nimbus 2 High Resolution Infrared Radiometer (HRIR) data and ESSA-1 and ESSA-3 television photographs is presented for the lifetime of Hurricane Inez, 1966. Ten computer-produced radiation charts were analyzed in order to delineate the three-dimensional cloud structure during the formative, mature, and dissipating stages of this tropical cyclone. Time sections were drawn throughout the storm's life cycle to relate the warm-core development and upper-level outflow of the storm with their respective cloud canopies, as shown by the radiation data. Aerial reconnaissance weather reports, radar photographs, and conventional weather analyses were used to complement the satellite data. A computer program was utilized to accept Nimbus 2 HRIR equivalent blackbody temperatures within historical maximum and minimum sea surface temperature limits over the tropical Atlantic Ocean.

  9. Flow convergence caused by a salinity minimum in a tidal channel

    USGS Publications Warehouse

    Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey

    2006-01-01

    Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. 
(3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.

  10. Minimum average 7-day, 10-year flows in the Hudson River basin, New York, with release-flow data on Rondout and Ashokan reservoirs

    USGS Publications Warehouse

    Archer, Roger J.

    1978-01-01

Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlation of discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and a regional regression formula. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
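    The building block of the 7-day, 10-year (7Q10) statistic is the minimum 7-day average flow for a year of daily discharge records; the 10-year recurrence value is then obtained by fitting the annual series of these minima with a log-Pearson Type III distribution (the frequency fit is omitted here). The sketch below computes only the rolling minimum; the function name is hypothetical.

```python
def minimum_7day_average(daily_flows):
    # Lowest mean discharge over any 7 consecutive days of record.
    return min(sum(daily_flows[i:i + 7]) / 7.0
               for i in range(len(daily_flows) - 6))
```

    Applied to each year of record at a gaging station, this yields the annual minimum 7-day flows whose 10-year quantile is the 7Q10.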

  11. Suicide and meteorological factors in São Paulo, Brazil, 1996-2011: a time series analysis.

    PubMed

    Bando, Daniel H; Teng, Chei T; Volpe, Fernando M; Masi, Eduardo de; Pereira, Luiz A; Braga, Alfésio L

    2017-01-01

    Considering the scarcity of reports from intertropical latitudes and the Southern Hemisphere, we aimed to examine the association between meteorological factors and suicide in São Paulo. Weekly suicide records stratified by sex were gathered. Weekly averages for minimum, mean, and maximum temperature (°C), insolation (hours), irradiation (MJ/m2), relative humidity (%), atmospheric pressure (mmHg), and rainfall (mm) were computed. The time structures of explanatory variables were modeled by polynomial distributed lag applied to the generalized additive model. The model controlled for long-term trends and selected meteorological factors. The total number of suicides was 6,600 (5,073 for men), an average of 6.7 suicides per week (8.7 for men and 2.0 for women). For overall suicides and among men, effects were predominantly acute and statistically significant only at lag 0. Weekly average minimum temperature had the greatest effect on suicide; there was a 2.28% increase (95%CI 0.90-3.69) in total suicides and a 2.37% increase (95%CI 0.82-3.96) among male suicides with each 1 °C increase. This study suggests that an increase in weekly average minimum temperature has a short-term effect on suicide in São Paulo.

  12. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  13. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  14. Adapting Teaching Strategies To Encompass New Technologies.

    ERIC Educational Resources Information Center

    Oravec, Jo Ann

    2001-01-01

    The explosion of special-purpose computing devices--Internet appliances, handheld computers, wireless Internet, networked household appliances--challenges business educators attempting to provide computer literacy education. At a minimum, they should address connectivity, expanded applications, and social and public policy implications of these…

  15. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  16. Preliminary Studies of Interacting Binaries From NURO Observations : V963 Cygni and GSC 1419 0091

    NASA Astrophysics Data System (ADS)

    Samec, R. G.; Jones, S. M.; Scott, T.; Branning, J.; Miller, J.; Faulkner, D. R.; Hawkins, N. C.

    2005-12-01

    We present preliminary analyses of V963 Cygni and GSC 1419 0091 based on observations taken at the National Undergraduate Research Observatory (NURO). Our CCD observations were taken 07-12 March 2005 and 19-25 July 2004 by DRF, RGS, and NCH with the Lowell Observatory 31-inch reflector. Standard UBVRI filters were used. Preliminary light curve analyses and updated periodicity studies are presented for these variables. V963 Cyg (GSC 2656 1995, α (2000) = 19h 44m 04.92s, δ (2000) = +31 41 50.17) is a detached binary discovered by Wachmann (Ast Abh Ham St VI, #1, 1961). The eclipse depths are nearly equal, 0.78 and 0.67 magnitudes in V in the primary and secondary eclipses, respectively, causing observers to mistakenly classify it as an Algol-type system. Thus the two stars are similar in temperature and the period has to be doubled. The curves appear fairly symmetrical with a depressed section following the primary eclipse in R and I about 0.2 phase units wide. In BVRI, 100 to 130 observations were taken along with 75 in U. We determined three new times of minimum light, two secondary eclipses, HJD Min II = 2453207.76857±0.00029d and 2453211.9540±0.0032d, and one primary eclipse HJD Min I = 2453209.86073±0.00095d. A corrected period and an improved ephemeris were computed using available times of minimum light: HJD Min I = 2453209.8616(±0.0011)d + 1.39466792(±0.00000019)*E. GSC 1419 0091 (Brh V132) [α (2000) = 10h 11m 59.152s, δ (2000) = +16 52 30.28] is an overcontact binary discovered by Klaus Bernhard (BAV, http://www.var-mo.de/star/brh_v132.htm). We took approximately 60-65 observations in each of B, V, R, and I. We determined four new times of minimum light: HJD Min I = 2453437.8293(±0.0003) and 2453441.8291(±0.0019), and HJD Min II = 2453437.6973(±0.0012) and 2453442.76317(±0.0005). We computed an improved ephemeris from all available times of minimum and low light: HJD Min I = 2452754.4733(±0.0030)d + 0.2667251*E(±0.0000011).
The light curves show shallow eclipse amplitudes of 0.46 and 0.43 mags in B and V, respectively, and a time of constant light in the secondary eclipse of 31 m. We wish to thank the NURO for their allocation of observing time, as well as NASA and the American Astronomical Society for their support in paying for travel and publication expenses.
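
A linear ephemeris like the one quoted for V963 Cyg predicts minima by T = T0 + P * E; the "observed minus computed" (O-C) residual at each timing is the usual check on the period. A minimal sketch using the values from the text:

```python
# Linear ephemeris of V963 Cyg from the text: HJD of Min I and period (days).
T0, P = 2453209.8616, 1.39466792

def predicted_minimum(E):
    """HJD of the E-th primary minimum: T = T0 + P * E."""
    return T0 + P * E

def o_minus_c(t_obs):
    """Nearest integer epoch and the observed-minus-computed residual."""
    E = round((t_obs - T0) / P)
    return E, t_obs - predicted_minimum(E)

# The observed primary minimum quoted above lands within a millidays of E = 0.
E, oc = o_minus_c(2453209.86073)
```

Secondary eclipses fall near half-integer epochs, which is how the 1.5-cycle offset of the Min II timings shows up in this scheme.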

  17. Automatic measurement; Mesures automatiques (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringeard, C.

    1974-11-28

    By its ability to link up operations sequentially and store the data collected, the computer can introduce a statistical approach into the evaluation of a result. To benefit fully from the advantages of automation, a special effort was made to reduce the programming time to a minimum and to simplify link-ups between the existing system and instruments from different sources. The practical solution adopted by the test laboratory of the C.E.A. Centralized Administration Group (GEC) is given.

  18. Thruput Analysis of AFLC CYBER 73 Computers.

    DTIC Science & Technology

    1981-12-01

    Ref 2:14). This decision permitted a fast conversion effort with minimum programmer/analyst experience (Ref 34). Recently, as the conversion effort...converted (Ref 1:2). Moreover, many of the large data-file and machine-time-consuming systems were not included in the earlier...by LMT personnel revealed that during certain periods, i.e., 0000-0800, the machine is normally reserved for the large resource-consuming programs

  19. Implementation of GAMMON - An efficient load balancing strategy for a local computer system

    NASA Technical Reports Server (NTRS)

    Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.

    1989-01-01

    GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
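
The max-to-min idea behind GAMMON can be sketched in a toy form (the code and names below are our illustration, not the paper's implementation): each balancing round finds the most and least loaded hosts, as if via a single broadcast query, and migrates a unit of load between them.

```python
def balance_round(loads):
    """One max-to-min round: migrate a unit of load from the most
    loaded host to the least loaded one, if the move actually helps."""
    src = max(range(len(loads)), key=loads.__getitem__)
    dst = min(range(len(loads)), key=loads.__getitem__)
    if loads[src] - loads[dst] > 1:
        loads[src] -= 1
        loads[dst] += 1
    return loads

loads = [9, 2, 5, 1]        # task counts on four hosts
for _ in range(5):
    loads = balance_round(loads)
```

After a few rounds the load spread collapses to within one task; the paper's contribution is doing the max/min search with constant average overhead on a broadcast network, not the migration rule itself.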

  20. ADDER CIRCUIT

    DOEpatents

    Jacobsohn, D.H.; Merrill, L.C.

    1959-01-20

    An improved parallel addition unit is described which is especially adapted for use in electronic digital computers and characterized by propagation of the carry signal through each of a plurality of denominationally ordered stages within a minimum time interval. In its broadest aspects, the invention incorporates a fast multistage parallel digital adder including a plurality of adder circuits, carry-propagation circuit means in all but the most significant digit stage, means for conditioning each carry-propagation circuit during the time period in which information is placed into the adder circuits, and means coupling carry-generation portions of the adder circuit to the carry propagating means.
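
The fast-carry idea can be illustrated with the standard generate/propagate formulation (a logic-level sketch of the technique, not the patented circuit): each stage computes whether it generates or propagates a carry while operands are being loaded, so the carry chain is already conditioned.

```python
def lookahead_add(a_bits, b_bits):
    """Add two little-endian bit lists using generate/propagate carries."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # stage generates a carry
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # stage propagates a carry
    carry, carries = 0, []
    for gi, pi in zip(g, p):
        carries.append(carry)
        carry = gi | (pi & carry)                 # c_{i+1} = g_i OR (p_i AND c_i)
    # Sum bit per stage is p_i XOR c_i; append the final carry-out.
    return [pi ^ ci for pi, ci in zip(p, carries)] + [carry]

# 5 + 7 = 12, in 4-bit little-endian form.
result = lookahead_add([1, 0, 1, 0], [1, 1, 1, 0])
```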

  1. Film annotation system for a space experiment

    NASA Technical Reports Server (NTRS)

    Browne, W. R.; Johnson, S. S.

    1989-01-01

    This microprocessor system was designed to control and annotate a Nikon 35 mm camera for the purpose of obtaining photographs and data at predefined time intervals. The single STD BUSS interface card was designed in such a way as to allow it to be used either in a stand-alone application with minimum features or installed in a STD BUSS computer allowing for maximum features. This control system also allows the exposure of twenty-eight alphanumeric characters across the bottom of each photograph. The data contains such information as camera identification, frame count, user-defined text, and time to 0.01 second.

  2. BVRI Photometric Study of the High Mass Ratio, Detached, Pre-contact W UMa Binary GQ Cancri

    NASA Astrophysics Data System (ADS)

    Samec, R. G.; Olson, A.; Caton, D.; Faulkner, D. R.

    2017-12-01

    CCD BVRcIc light curves of GQ Cancri were observed in April 2013 using the SARA North 0.9-meter Telescope at Kitt Peak National Observatory in Arizona in remote mode. It is a high-amplitude (~0.9 magnitude in V) K0 V type eclipsing binary (T1 ~ 5250 K) with a photometrically determined mass ratio of M2 / M1 = 0.80. Its spectral color type classifies it as a pre-contact W UMa binary (PCWB). The Wilson-Devinney Mode 2 solutions show that the system has a detached binary configuration with fill-outs of 94% and 98% for the primary and secondary components, respectively. As expected, the light curve is asymmetric due to spot activity. Three times of minimum light were calculated, for two primary eclipses and one secondary eclipse, from our present observations. In total, some 26 times of minimum light covering nearly 20 years of observation were used to determine linear and quadratic ephemerides. It is noted that the light curve solution remained in a detached state for every iteration of the computer runs. The components are very similar, with a computed temperature difference of only 4 K, and the flux of the primary component accounts for 53-55% of the system's light in B, V, Rc, and Ic. A 12-degree radius high-latitude white spot (faculae) was iterated on the primary component.
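
Fitting linear and quadratic ephemerides to a set of timings, as done here with 26 times of minimum, is a plain least-squares problem in epoch number; a quadratic term measures a slow period change. A hedged sketch with synthetic timings (not the GQ Cnc data):

```python
import numpy as np

# Synthetic times of minimum: epoch numbers and timings in HJD - 2450000.
E = np.arange(26) * 40.0                 # epoch (cycle) numbers
T0, P, Q = 0.125, 0.41, 1.5e-9           # reference minimum, period (d), quadratic term
t_obs = T0 + P * E + Q * E**2            # a period change appears as Q != 0

lin = np.polyfit(E, t_obs, 1)            # linear ephemeris: coefficients [P, T0]
quad = np.polyfit(E, t_obs, 2)           # quadratic ephemeris: [Q, P, T0]
```

Comparing the residuals of the two fits is how one decides whether the quadratic (period-change) term is warranted.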

  3. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  4. 20 CFR 229.45 - Employee benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

  5. 20 CFR 229.45 - Employee benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

  6. 20 CFR 229.45 - Employee benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

  7. 20 CFR 229.45 - Employee benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

  8. 20 CFR 229.45 - Employee benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

  9. Pyrrole multimers and pyrrole-acetylene hydrogen bonded complexes studied in N2 and para-H2 matrixes using matrix isolation infrared spectroscopy and ab initio computations

    NASA Astrophysics Data System (ADS)

    Sarkar, Shubhra; Ramanathan, N.; Gopi, R.; Sundararajan, K.

    2017-12-01

    Hydrogen-bonded interactions of pyrrole multimers and acetylene-pyrrole complexes were studied in N2 and p-H2 matrixes. DFT computations showed that a T-shaped geometry for the pyrrole dimer and cyclic structures for the trimer and tetramer were the most stable, stabilized by N-H⋯π interactions. The experimental vibrational wavenumbers observed in N2 and p-H2 matrixes for the pyrrole multimers were correlated with the computed wavenumbers. Computations performed at the MP2/aug-cc-pVDZ level of theory showed that C2H2 and C4H5N form 1:1 hydrogen-bonded complexes stabilized by C-H⋯π interaction (Complex A), N-H⋯π interaction (Complex B) and π⋯π interaction (Complex C), where the former complex is the global minimum and the latter two complexes are the first and second local minima, respectively. Experimentally, 1:1 C2H2-C4H5N complexes A (global minimum) and B (first local minimum) were identified from the shifts in the N-H stretching, N-H bending, and C-H bending regions of pyrrole and the C-H asymmetric stretching and bending regions of C2H2 in N2 and p-H2 matrixes. Computations were also performed for the higher complexes and found two minima corresponding to the 1:2 C2H2-C4H5N complexes and three minima for the 2:1 C2H2-C4H5N complexes. Experimentally, the global minimum 1:2 and 2:1 C2H2-C4H5N complexes were identified in N2 and p-H2 matrixes.

  10. Tchebichef moment transform on image dithering for mobile applications

    NASA Astrophysics Data System (ADS)

    Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah

    2012-04-01

    Currently, mobile image applications spend considerable computation to display images. A true color raw image contains billions of colors and consumes high computational power in most mobile image applications. At the same time, mobile devices are expected to be equipped with only modest processing power and minimal storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower-quality image displays. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). TMT integrates a simple mathematical framework using matrices. TMT coefficients consist of real rational numbers. Image dithering based on TMT has the potential to provide better efficiency and simplicity. The preliminary experiment shows a promising result in terms of reconstruction error and image visual texture.
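
For N = 2 the orthonormal discrete Tchebichef polynomials reduce to t0 = (1, 1)/√2 and t1 = (-1, 1)/√2, so the 2x2 TMT is a tiny orthogonal matrix. This sketch (ours, not the paper's dithering pipeline) shows the forward and inverse moment transform of one 2x2 image block:

```python
import numpy as np

# 2x2 orthonormal Tchebichef basis: rows are t0 and t1 sampled at x = 0, 1.
T = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2.0)

def tmt_forward(block):
    """2x2 Tchebichef moments of a 2x2 pixel block."""
    return T @ block @ T.T

def tmt_inverse(moments):
    """Exact reconstruction; T is orthogonal, so the inverse is T.T."""
    return T.T @ moments @ T

block = np.array([[120.0, 130.0], [140.0, 150.0]])
moments = tmt_forward(block)
restored = tmt_inverse(moments)
```

The (0,0) moment carries the block average (scaled by 2 here), which is the value a dithering scheme would quantize most carefully.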

  11. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
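
The bit-width question explored here can be made concrete with generic Q-format rounding (this is a small illustration of fixed-point quantization error, not the authors' filter code):

```python
def to_fixed(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def max_quantization_error(values, frac_bits):
    """Worst-case rounding error over a set of values."""
    return max(abs(v - to_fixed(v, frac_bits)) for v in values)

coeffs = [0.70711, -0.33333, 0.12500, 0.99997]  # example filter coefficients
err_q15 = max_quantization_error(coeffs, 15)    # Q1.15: error at most half an LSB
err_q7 = max_quantization_error(coeffs, 7)      # Q1.7: coarser grid, larger error
```

Sweeping `frac_bits` while measuring output error against a floating-point reference is the essence of the bit-width exploration the paper performs.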

  12. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, the tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  13. Radiative Transfer and Satellite Remote Sensing of Cirrus Clouds Using FIRE-2-IFO Data

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Under the support of the NASA grant, we have developed a new geometric-optics model (GOM2) for the calculation of the single-scattering and polarization properties for arbitrarily oriented hexagonal ice crystals. From comparisons with the results computed by the finite difference time domain (FDTD) method, we show that the novel geometric-optics method can be applied to the computation of the extinction cross section and single-scattering albedo for ice crystals with size parameters along the minimum dimension as small as approximately 6. We demonstrate that the present model converges to the conventional ray tracing method for large size parameters and produces single-scattering results close to those computed by the FDTD method for size parameters along the minimum dimension smaller than approximately 20. We demonstrate that neither the conventional geometric optics method nor the Lorenz-Mie theory can be used to approximate the scattering, absorption, and polarization features for hexagonal ice crystals with size parameters from approximately 5 to 20. On the satellite remote sensing algorithm development and validation, we have developed a numerical scheme to identify multilayer cirrus cloud systems using AVHRR data. We have applied this scheme to the satellite data collected over the FIRE-2-IFO area during nine overpasses within seven observation dates. Determination of the threshold values used in the detection scheme is based on statistical analyses of these satellite data.
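
The regime thresholds quoted above (~6 for the novel geometric-optics model, ~20 for agreement with FDTD) are expressed in terms of the size parameter. A minimal helper makes the comparison concrete; the x = 2πr/λ convention with r equal to half the minimum dimension is our assumption, not stated in the abstract:

```python
import math

def size_parameter(dimension_um, wavelength_um):
    """Size parameter x = 2*pi*r / lambda, with r = dimension / 2."""
    return math.pi * dimension_um / wavelength_um

x = size_parameter(2.1, 0.55)   # a 2.1-um minimum dimension in visible light
needs_fdtd = x < 6              # below ~6, even the novel model breaks down
```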

  14. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  15. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  16. 25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

  17. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet lower overall computational time as compared to using a time mesh having equidistant spacing.
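
The refinement loop described above can be sketched as follows (our own minimal version with a toy error model, not the authors' implementation): cells whose local error exceeds the threshold are bisected, and the sweep repeats until every cell passes.

```python
def refine_mesh(mesh, local_error, tol, max_passes=10):
    """Refine a sorted time mesh until local_error(a, b) <= tol on every cell."""
    for _ in range(max_passes):
        new_mesh, refined = [mesh[0]], False
        for a, b in zip(mesh, mesh[1:]):
            if local_error(a, b) > tol:
                new_mesh.append(0.5 * (a + b))   # bisect the offending cell
                refined = True
            new_mesh.append(b)
        mesh = new_mesh
        if not refined:                          # every cell passed: done
            break
    return mesh

# Toy error model: error grows with cell width and with nonlinearity near t = 0.
err = lambda a, b: (b - a) ** 2 * (1.0 + 10.0 / (1.0 + 100.0 * a * a))
mesh = refine_mesh([i / 4 for i in range(5)], err, tol=1e-3)
```

The resulting mesh is non-equidistant: it ends up much finer near t = 0, where the toy error model is largest, which is exactly the behavior the paper exploits.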

  18. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality when compared with the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparison baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computation cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
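
Minimum entropy deconvolution, the criterion the MEDSS filter optimizes, has a classic fixed-point form (Wiggins' iteration). The plain-numpy sketch below illustrates that criterion on synthetic impulsive data; it is our own generic MED, not the authors' MEDSS implementation:

```python
import numpy as np

def kurtosis(y):
    """Normalized fourth moment; MED's spikiness objective."""
    return np.mean(y**4) / np.mean(y**2) ** 2

def med_filter(x, order=8, iters=20):
    """Wiggins-style MED: find an FIR filter maximizing output kurtosis."""
    n = len(x)
    X = np.column_stack([x[order - 1 - k : n - k] for k in range(order)])
    R = X.T @ X                        # input autocorrelation matrix
    f = np.zeros(order); f[0] = 1.0    # start from a pass-through filter
    for _ in range(iters):
        y = X @ f
        b = X.T @ (y**3)               # cross-correlation of input with y^3
        f = np.linalg.solve(R, b)      # fixed-point update
        f /= np.linalg.norm(f)
    return f, X @ f

# Sparse spikes smeared by a minimum-phase decay, plus a little noise.
rng = np.random.default_rng(1)
spikes = np.zeros(500)
spikes[rng.integers(0, 500, 8)] = rng.normal(0, 5, 8)
kernel = np.array([1.0, 0.5, 0.25, 0.125])
blurred = np.convolve(spikes, kernel)[:500] + rng.normal(0, 0.05, 500)
f, recovered = med_filter(blurred)
```

On data like this the MED output is typically much spikier (higher kurtosis) than the input; the paper's MEDSS filter embeds the same objective inside the sinusoidal synthesis model at lower computational cost than ARMED.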

  20. One dimensional P wave velocity structure of the crust beneath west Java and accurate hypocentre locations from local earthquake inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supardiyono; Santosa, Bagus Jaya; Physics Department, Faculty of Mathematics and Natural Sciences, Sepuluh Nopember Institute of Technology, Surabaya

    A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, rms 0.56 s and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.

  1. Quasi-elastic light scattering: Signal storage, correlation, and spectrum analysis under control of an 8-bit microprocessor

    NASA Astrophysics Data System (ADS)

    Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter

    1987-03-01

    The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a pulse rate roughly three times higher for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as is possible with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
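
What a software correlator does with the stored primary data can be sketched directly: compute the normalized intensity autocorrelation g2(τ) from the sampled photon counts. A pure-numpy illustration on synthetic counts (not the instrument's code):

```python
import numpy as np

def g2(counts, max_lag):
    """g2(tau) = <n(t) n(t+tau)> / <n>^2 for integer lags 0..max_lag."""
    n = np.asarray(counts, dtype=float)
    mean_sq = n.mean() ** 2
    return np.array([np.mean(n[: len(n) - k] * n[k:]) / mean_sq
                     for k in range(max_lag + 1)])

# Synthetic run of 49 000 samples: correlated intensity -> Poisson counts.
rng = np.random.default_rng(2)
intensity = 5.0 + np.convolve(rng.normal(0, 1, 49000),
                              np.exp(-np.arange(50) / 10))[:49000]
counts = rng.poisson(np.clip(intensity, 0, None))
curve = g2(counts, 40)
```

The curve starts above 1 (shot noise plus intensity fluctuations) and decays toward 1 at lags long compared with the correlation time, which is the shape such an experiment fits to extract diffusion constants.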

  2. 29 CFR 541.604 - Minimum guarantee plus extras.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DEFINING AND DELIMITING THE EXEMPTIONS FOR EXECUTIVE, ADMINISTRATIVE, PROFESSIONAL, COMPUTER AND OUTSIDE SALES EMPLOYEES Salary Requirements § 541.604 Minimum guarantee plus extras. (a) An employer may provide... commission on sales. An exempt employee also may receive a percentage of the sales or profits of the employer...

  3. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum transfer rate of 4800 bits/sec to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite-element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  4. AEGIS: a wildfire prevention and management information system

    NASA Astrophysics Data System (ADS)

    Kalabokidis, Kostas; Ager, Alan; Finney, Mark; Athanasis, Nikos; Palaiologou, Palaiologos; Vasilakos, Christos

    2016-03-01

    We describe a Web-GIS wildfire prevention and management platform (AEGIS) developed as an integrated and easy-to-use decision support tool to manage wildland fire hazards in Greece (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control and coordination of firefighting forces by providing online access to information that is essential for wildfire management. The system uses a number of spatial and non-spatial data sources to support key system functionalities. Land use/land cover maps were produced by combining field inventory data with high-resolution multispectral satellite images (RapidEye). These data support wildfire simulation tools that allow the users to examine potential fire behavior and hazard with the Minimum Travel Time fire spread algorithm. End-users provide a minimum number of inputs such as fire duration, ignition point and weather information to conduct a fire simulation. AEGIS offers three types of simulations, i.e., single-fire propagation, point-scale calculation of potential fire behavior, and burn probability analysis, similar to the FlamMap fire behavior modeling software. Artificial neural networks (ANNs) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods and network structures. The combination of ANNs and expected burned area maps is used to generate an integrated output map of fire hazard prediction. The system also incorporates weather information obtained from remote automatic weather stations and weather forecast maps. The system and associated computation algorithms leverage parallel processing techniques (i.e., High Performance Computing and Cloud Computing) that ensure the computational power required for real-time application. All AEGIS functionalities are accessible to authorized end-users through a web-based graphical user interface.
An innovative smartphone application, AEGIS App, also provides mobile access to the web-based version of the system.

  5. Functional characteristics of the calcium modulated proteins seen from an evolutionary perspective

    NASA Technical Reports Server (NTRS)

    Kretsinger, R. H.; Nakayama, S.; Moncrief, N. D.

    1991-01-01

    We have constructed dendrograms relating 173 EF-hand proteins of known amino acid sequence. We aligned all of these proteins by their EF-hand domains, omitting interdomain regions. Initial dendrograms were computed by minimum mutation distance methods. Using these as starting points, we determined the best dendrogram by the method of maximum parsimony, scored by minimum mutation distance. We identified 14 distinct subfamilies as well as 6 unique proteins that are perhaps the sole representatives of other subfamilies. This information is given in tabular form. Within subfamilies one can easily align interdomain regions. The resulting dendrograms are very similar to those computed using domains only. Dendrograms constructed using pairs of domains show general congruence. However, there are enough exceptions to caution against an overly simple scheme in which one pair of gene duplications leads from one domain precursor to a four domain prototype from which all other forms evolved. The ability to bind calcium was lost and acquired several times during evolution. The distribution of introns does not conform to the dendrogram based on amino acid sequences. The rates of evolution appear to be much slower within subfamilies, especially within calmodulin, than prior to the definition of the subfamilies.
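
    The minimum-mutation scoring used to rank candidate dendrograms can be illustrated with Fitch's small-parsimony algorithm. A minimal sketch follows; the toy tree and the residues at one aligned EF-hand position are illustrative assumptions, not data from the paper:

```python
def fitch(tree, leaf_states):
    """Return (candidate_states, minimum_mutation_count) for one site.
    tree: a leaf name (str) or a (left, right) tuple of subtrees."""
    if isinstance(tree, str):
        return {leaf_states[tree]}, 0
    left_set, left_cost = fitch(tree[0], leaf_states)
    right_set, right_cost = fitch(tree[1], leaf_states)
    common = left_set & right_set
    if common:                        # no mutation needed at this node
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

# Hypothetical residues at one aligned position for four EF-hand proteins:
states = {"calmodulin": "D", "troponinC": "D", "parvalbumin": "N", "S100": "D"}
tree = (("calmodulin", "troponinC"), ("parvalbumin", "S100"))
_, score = fitch(tree, states)        # score == 1: one mutation suffices
```

    Summing this score over all aligned sites gives the minimum mutation distance used to compare alternative tree topologies.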

  6. Simulation of the bimetal cast in the case of milling rolls

    NASA Astrophysics Data System (ADS)

    Mihut, G.; Popa, E.

    2015-06-01

    This paper aims primarily to obtain a numerical simulation model that is generally valid and applicable to the particular cases of bimetal casting. With this model, the optimization of the flow conditions of the liquid alloy, of the distribution of the temperature field, and of the liquid phase and contraction during solidification can be studied on a computer, at minimum cost (covering only the software and the computing equipment) and in a very short time.

  7. Modal analysis of circular Bragg fibers with arbitrary index profiles

    NASA Astrophysics Data System (ADS)

    Horikis, Theodoros P.; Kath, William L.

    2006-12-01

    A finite-difference approach based upon the immersed interface method is used to analyze the mode structure of Bragg fibers with arbitrary index profiles. The method allows general propagation constants and eigenmodes to be calculated to a high degree of accuracy, while computation times are kept to a minimum by exploiting sparse matrix algebra. The method is well suited to handle complicated structures comprised of a large number of thin layers with high-index contrast and simultaneously determines multiple eigenmodes without modification.
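
    The sparse-matrix eigenmode computation described above can be illustrated in one dimension. The sketch below uses a slab analogue with a hypothetical step-index profile and a plain second-order finite difference; the paper's immersed-interface treatment of layer boundaries is not reproduced:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D slab analogue of the fiber mode problem (hypothetical profile).
n_pts = 400
h = 1.0 / (n_pts + 1)
x = np.linspace(h, 1 - h, n_pts)
n_index = np.where(np.abs(x - 0.5) < 0.1, 1.5, 1.0)   # step-index "core"
k0 = 30.0                                             # free-space wavenumber

# Mode operator d^2/dx^2 + k0^2 n(x)^2 with Dirichlet walls;
# eigenvalues approximate beta^2 (squared propagation constants).
lap = diags([1, -2, 1], [-1, 0, 1], shape=(n_pts, n_pts)) / h**2
A = lap + diags(k0**2 * n_index**2)

# Sparse Lanczos solve for the few largest eigenvalues (guided modes):
beta2, modes = eigsh(A, k=3, which='LA')
```

    Guided modes satisfy k0^2 n_clad^2 < beta^2 < k0^2 n_core^2, and only a handful of extreme eigenpairs are needed, which is what keeps the sparse solve cheap.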

  8. Channel and feature selection in multifunction myoelectric control.

    PubMed

    Khushaba, Rami N; Al-Jumaily, Adel

    2007-01-01

    Real-time control of devices based on myoelectric signals (MES) is a challenging research problem. This paper presents a new approach to reduce the computational cost of real-time systems driven by myoelectric signals (MES, a.k.a. electromyography, EMG). The new approach evaluates the significance of feature/channel selection for MES pattern recognition. Particle Swarm Optimization (PSO), an evolutionary computational technique, is employed to search the feature/channel space for important subsets. These subsets are then evaluated using a multilayer perceptron trained with back-propagation (BPNN). Practical results are presented from tests on datasets of MES signals from six subjects, measured noninvasively using surface electrodes. The results show that minimum error rates can be achieved by selecting the correct combination of features/channels, thus providing a feasible system for practical implementation in the rehabilitation of patients.
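
    The subset search can be sketched with a binary PSO in the style of Kennedy and Eberhart; a sigmoid of the velocity gives the probability of selecting each channel. The toy fitness below is a stand-in for the BPNN classification error, not the paper's classifier, and all constants are illustrative:

```python
import random, math

def binary_pso(fitness, n_bits, n_particles=12, iters=40, seed=1):
    """Minimise `fitness` over bit masks with a binary PSO."""
    rng = random.Random(seed)
    pos = [[rng.random() < 0.5 for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid of velocity -> probability the bit is set
                pos[i][d] = rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy stand-in for classification error: channels 0, 2 and 5 are informative.
informative = [True, False, True, False, False, True]
def toy_error(mask):
    missed = sum(1 for m, inf in zip(mask, informative) if inf and not m)
    extra = sum(1 for m, inf in zip(mask, informative) if m and not inf)
    return missed + 0.1 * extra          # prefer small channel subsets

best_mask, best_err = binary_pso(toy_error, n_bits=6)
```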

  9. Groundwater-level trends in the U.S. glacial aquifer system, 1964-2013

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Dudley, Robert W.; Nielsen, Martha G.; Renard, Benjamin; Qi, Sharon L.

    2017-01-01

    The glacial aquifer system in the United States is a major source of water supply but previous work on historical groundwater trends across the system is lacking. Trends in annual minimum, mean, and maximum groundwater levels for 205 monitoring wells were analyzed across three regions of the system (East, Central, West Central) for four time periods: 1964-2013, 1974-2013, 1984-2013, and 1994-2013. Trends were computed separately for wells in the glacial aquifer system with low potential for human influence on groundwater levels and ones with high potential influence from activities such as groundwater pumping. Generally there were more wells with significantly increasing groundwater levels (levels closer to ground surface) than wells with significantly decreasing levels. The highest numbers of significant increases for all four time periods were with annual minimum and/or mean levels. There were many more wells with significant increases from 1964 to 2013 than from more recent periods, consistent with low precipitation in the 1960s. Overall there were low numbers of wells with significantly decreasing trends regardless of time period considered; the highest number of these was generally for annual minimum groundwater levels at wells with likely human influence. There were substantial differences in the number of wells with significant groundwater-level trends over time, depending on whether the historical time series are assumed to be independent, have short-term persistence, or have long-term persistence. Mean annual groundwater levels have significant lag-one-year autocorrelation at 26.0% of wells in the East region, 65.4% of wells in the Central region, and 100% of wells in the West Central region. Annual precipitation across the glacial aquifer system, on the other hand, has significant autocorrelation at only 5.5% of stations, about the percentage expected due to chance.
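
    The lag-one-year autocorrelation screening is straightforward to reproduce. A minimal sketch; the large-sample cutoff |r1| > z/sqrt(n) is the common approximation, assumed here rather than taken from the authors' exact procedure:

```python
def lag1_autocorr(levels):
    """Sample lag-1 autocorrelation of an annual series."""
    n = len(levels)
    mean = sum(levels) / n
    num = sum((levels[i] - mean) * (levels[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in levels)
    return num / den

def significant_lag1(levels, z=1.96):
    """Two-sided large-sample test: |r1| > z / sqrt(n)."""
    return abs(lag1_autocorr(levels)) > z / len(levels) ** 0.5

r1 = lag1_autocorr([1.0, 2.0, 3.0, 4.0, 5.0])   # 0.4 for this short ramp
```

    Series that fail this test are better treated with trend methods that allow short- or long-term persistence, which is why the well counts differ so much across the three persistence assumptions.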

  10. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.

  11. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique used forms a part of the solution to the motor racing objective of minimising lap time. A new approach of formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm which can be formed as a computationally efficient convex quadratic programming problem.

  12. Three-dimensional laser microvision.

    PubMed

    Shimotahira, H; Iizuka, K; Chu, S C; Wah, C; Costen, F; Yoshikuni, Y

    2001-04-10

    A three-dimensional (3-D) optical imaging system offering high resolution in all three dimensions, requiring minimum manipulation and capable of real-time operation, is presented. The system derives its capabilities from use of the superstructure grating laser source in the implementation of a laser step frequency radar for depth information acquisition. A synthetic aperture radar technique was also used to further enhance its lateral resolution as well as extend the depth of focus. High-speed operation was made possible by a dual computer system consisting of a host and a remote microcomputer supported by a dual-channel Small Computer System Interface parallel data transfer system. The system is capable of operating near real time. The 3-D display of a tunneling diode, a microwave integrated circuit, and a see-through image taken by the system operating near real time are included. The depth resolution is 40 µm; the lateral resolution with the synthetic aperture approach is a fraction of a micrometer, and without it is approximately 10 µm.

  13. Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders.

    PubMed

    Balasubramanian, P; Yamashita, S

    2016-01-01

    This article presents two area/latency optimized gate level asynchronous full adder designs which correspond to early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to the four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, the relative-timing assumption becomes necessary and the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., forward latency is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, thus resulting in the optimal cycle time. With respect to different 32-bit RCA implementations, and in comparison with the optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective reductions in latency by 67.8, 12.3 and 6.1 %, while the other proposed early output full adder achieves corresponding reductions in area by 32.6, 24.6 and 6.9 %, with practically no power penalty. Further, the proposed early output full adders based asynchronous RCAs enable minimum reductions in cycle time by 83.4, 15, and 8.8 % when considering carry-propagation over the entire RCA width of 32-bits, and maximum reductions in cycle time by 97.5, 27.4, and 22.4 % for the consideration of a typical carry chain length of 4 full adder stages, when compared to the least of the cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.
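
    The early-output property of the carry path can be shown behaviourally: with dual-rail encoding, the carry-out is known as soon as the addend bits agree (generate/kill), and only the propagate case must wait for the carry-in. This is a software sketch of the behaviour, not the paper's gate-level design:

```python
NULL = (0, 0)                       # dual-rail spacer (no data token)

def encode(bit):                    # one-hot dual-rail encoding
    return (1, 0) if bit else (0, 1)

def valid(x):                       # data token present?
    return x != NULL

def early_carry(a, b, cin):
    """Carry-out of an early-output full adder (behavioural model)."""
    if valid(a) and valid(b) and a == b:
        return a                    # generate/kill: cout known without cin
    if valid(a) and valid(b) and valid(cin):
        return cin                  # propagate: cout = cin
    return NULL                     # otherwise hold the spacer

# cout resolves early when a == b, even while cin is still a spacer:
assert early_carry(encode(1), encode(1), NULL) == encode(1)
assert early_carry(encode(1), encode(0), NULL) == NULL
```

    Because most carry chains contain an early generate or kill, the relative-timed RCA rarely waits for a full-length carry ripple, which is the source of the cycle-time reductions reported above.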

  14. Computer-aided design of high-frequency transistor amplifiers.

    NASA Technical Reports Server (NTRS)

    Hsieh, C.-C.; Chan, S.-P.

    1972-01-01

    A systematic step-by-step computer-aided procedure for designing high-frequency transistor amplifiers is described. The technique makes it possible to determine the optimum source impedance which gives a minimum noise figure.

  15. Doppler measurements of the ionosphere on the occasion of the Apollo-Soyuz test project. Part 1: Computer simulation of ionospheric-induced Doppler shifts

    NASA Technical Reports Server (NTRS)

    Grossi, M. D.; Gay, R. H.

    1975-01-01

    A computer simulation of the ionospheric experiment of the Apollo-Soyuz Test Project (ASTP) was performed. ASTP is the first example of USA/USSR cooperation in space and is scheduled for summer 1975. The experiment consists of performing dual-frequency Doppler measurements (at 162 and 324 MHz) between the Apollo Command Service Module (CSM) and the ASTP Docking Module (DM), both orbiting at 221-km height and at a relative distance of 300 km. The computer simulation showed that, with the Doppler measurement resolution of approximately 3 mHz provided by the instrumentation (in 10-sec integration time), ionospheric-induced Doppler shifts will be measurable accurately at all times, with some rare exceptions occurring when the radio path crosses regions of minimum ionospheric density. The computer simulation evaluated the ability of the experiment to measure changes of columnar electron content between CSM and DM (from which horizontal gradients of electron density at 221-km height can be obtained) and to measure variations in DM-to-ground columnar content (from which an averaged columnar content and the electron density at the DM can be deduced, under some simplifying assumptions).

  16. Analytical model of contamination during the drying of cylinders of jamonable muscle

    NASA Astrophysics Data System (ADS)

    Montoya Arroyave, Isabel

    2014-05-01

    For a cylinder of jamonable muscle of radius R and length much greater than R, considering that the internal resistance to water transfer is much greater than the external resistance and that the internal resistance is a certain function of the distance to the axis, the pointwise moisture distribution in the cylinder is computed analytically in terms of Bessel functions. During the drying and salting process the cylinder is liable to contamination by bacteria and protozoa from the environment. An analytical model of contamination is presented using the diffusion equation with sources and sinks, which is solved by the method of the Laplace transform, the Bromwich integral, the residue theorem and some special functions such as the Bessel and Heun functions. The critical time intervals of drying and salting are computed in order to obtain the minimum possible contamination. It is assumed that both the external moisture and the contaminants decrease exponentially with time. Contaminant profiles are plotted and some possible techniques of contaminant detection are discussed. All computations are executed using computer algebra, specifically Maple. The results are relevant for the food industry, and some future research lines are suggested.

  17. The Solar System Large Planets' influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. A later reconstruction of solar activity confirms the grand minima Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that sooner or later a new Maunder- or Dalton-type period will bring a colder climate on Earth. The cause of these minimum periods is not well understood. An expected new Maunder-type period is based on the properties of solar variability: if the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a Solar Barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. 
From this deterministic model we may expect a new Maunder type sunspot minimum period from about 2018 to 2055. The deterministic model of a TSI ACRIM data series from 1700 computes a new Maunder type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton to Maunder type minimum irradiation period from 2047 to 2068.

  18. Minimum time and fuel flight profiles for an F-15 airplane with a Highly Integrated Digital Electronic Control (HIDEC) system

    NASA Technical Reports Server (NTRS)

    Haering, E. A., Jr.; Burcham, F. W., Jr.

    1984-01-01

    A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim, was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.

  19. Establishing Proficiency Standards for High School Graduation.

    ERIC Educational Resources Information Center

    Herron, Marshall D.

    The Oregon State Board of Education has rejected the use of cut-off scores on a proficiency test to establish minimum performance standards for high school graduation. Instead, each school district is required to specify--by local board adoption--minimum competencies in reading, writing, listening, speaking, analyzing, and computing. These…

  20. 20 CFR 229.48 - Family maximum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a) Family... month on one person's earnings record is limited. This limited amount is called the family maximum. The...

  1. Computer Model for Sizing Rapid Transit Tunnel Diameters

    DOT National Transportation Integrated Search

    1976-01-01

    A computer program was developed to assist the determination of minimum tunnel diameters for electrified rapid transit systems. Inputs include vehicle shape, walkway location, clearances, and track geometrics. The program written in FORTRAN IV calcul...

  2. Productivity associated with visual status of computer users.

    PubMed

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of productivity for time to completion varied from a minimum of 2.5% upwards to 28.7% with 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
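
    The quoted cost-benefit ratio follows directly from the figures stated in the abstract; a one-line check of the arithmetic:

```python
salary = 25_000            # annual salary in dollars
productivity_gain = 0.025  # conservative 2.5% estimate
eyewear_cost = 268         # total cost of the visual correction

benefit = salary * productivity_gain   # 625.0 dollars per year
ratio = benefit / eyewear_cost
print(round(ratio, 1))                 # 2.3, matching the reported ratio
```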

  3. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on the topic of quantum computation provides us with some quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms - especially Grover's Search algorithm - in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis work where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high dimensional space to real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
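
    The speedup Grover's search offers over classical spectral matching can be checked numerically: after k iterations the success probability is sin^2((2k+1)θ) with θ = asin(1/√N), so roughly (π/4)√N iterations suffice instead of N/2 classical comparisons. A small sketch (the database size is an arbitrary example):

```python
import math

def grover_success_probability(n_items, k):
    """Probability of measuring the marked item after k Grover iterations."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return math.sin((2 * k + 1) * theta) ** 2

n = 1024                                   # e.g. candidate reference spectra
k_opt = round(math.pi / (4 * math.asin(1.0 / math.sqrt(n))) - 0.5)
p = grover_success_probability(n, k_opt)   # near 1 after ~(pi/4)*sqrt(n) steps
```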

  4. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
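
    The proposed estimator, drawing spline coefficients from their estimated covariance and reading off the curve minimum for each draw, can be sketched as follows. The quadratic example basis, coefficients and covariance are illustrative assumptions standing in for a fitted spline:

```python
import numpy as np

def mmt_bootstrap_ci(temps, coef, cov, basis, n_boot=2000, seed=0):
    """Approximate parametric-bootstrap 95% CI for the minimum-mortality
    temperature of a fitted curve. basis: design matrix, rows match temps."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(coef, cov, size=n_boot)
    curves = draws @ basis.T                 # one curve per coefficient draw
    mmts = temps[np.argmin(curves, axis=1)]  # minimiser of each curve
    return np.percentile(mmts, [2.5, 97.5])

# Illustrative quadratic "spline": 1 - 0.1 t + 0.0025 t^2 has its minimum at t = 20.
temps = np.linspace(0.0, 30.0, 301)
basis = np.column_stack([np.ones_like(temps), temps, temps ** 2])
coef = np.array([1.0, -0.1, 0.0025])
cov = np.diag([1e-4, 1e-6, 1e-8])
lo, hi = mmt_bootstrap_ci(temps, coef, cov, basis)
```

    The spread of the bootstrap minimisers gives the SE, and the percentile interval gives the CI used to compare minimum mortality temperatures across cities.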

  5. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Arrays (FPGAs) devices to provide high performance execution and flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor SNN's activity. Our contribution intends to provide a tool that allows prototyping SNNs faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

    This paper presents an approach for co-state initialization which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining the optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, adjoint-control transformation and a standard genetic algorithm. The results indicate that by using our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory search in the preliminary phase of mission designs.

  7. Energy management of three-dimensional minimum-time intercept. [for aircraft flight optimization

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Cliff, E. M.; Visser, H. G.

    1985-01-01

    A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well known techniques: singular perturbations and neighboring-optimal guidance. Use of singular-perturbation ideas is made in terms of the assumed trajectory-family structure. A heading/energy family of prestored point-mass-model state-Euler solutions is used as the baseline in this scheme. The next step is to generate a near-optimal guidance law that will transfer the aircraft to the vicinity of this reference family. The control commands fed to the autopilot (bank angle and load factor) consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. Some numerical results comparing open-loop optimal and approximate feedback solutions are presented.
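
    The feedback law described, reference controls plus gain-weighted deviations in altitude and path angle, is a small linear correction. A sketch with placeholder gain values and units (the real gains are precalculated and stored as functions of energy and heading):

```python
def nog_command(u_ref, x_ref, x, gains):
    """Neighboring-optimal guidance sketch: commanded bank angle and load
    factor are the stored reference controls plus gain-weighted deviations
    of altitude (h) and path angle (gamma) from the reference trajectory."""
    dh = x["h"] - x_ref["h"]
    dgamma = x["gamma"] - x_ref["gamma"]
    return {
        "bank": u_ref["bank"] + gains["bank_h"] * dh + gains["bank_gamma"] * dgamma,
        "load": u_ref["load"] + gains["load_h"] * dh + gains["load_gamma"] * dgamma,
    }

# On the reference trajectory the correction vanishes:
u = nog_command({"bank": 30.0, "load": 2.0},
                {"h": 9000.0, "gamma": 0.1},
                {"h": 9000.0, "gamma": 0.1},
                {"bank_h": -0.001, "bank_gamma": 40.0,
                 "load_h": -0.0005, "load_gamma": 15.0})
```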

  8. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, is described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
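
    The steepest-descent idea can be sketched with a penalty formulation: dose falls exponentially with thickness while weight grows linearly, so the descent trades layer thicknesses until the dose constraint binds. All coefficients below are hypothetical placeholders, not DOPEX inputs, and a single dose constraint stands in for the code's several principal directions:

```python
import math

rho = [1.0, 2.5]            # hypothetical layer weights per unit thickness
mu = [0.8, 1.6]             # hypothetical dose attenuation coefficients
d0, d_limit = 100.0, 1.0    # unshielded dose and the dose constraint

def dose(t):
    return d0 * math.exp(-sum(m * x for m, x in zip(mu, t)))

def weight(t):
    return sum(r * x for r, x in zip(rho, t))

def objective(t, penalty=50.0):
    return weight(t) + penalty * max(0.0, dose(t) - d_limit) ** 2

def steepest_descent(t, step=1e-3, iters=30000, eps=1e-6):
    for _ in range(iters):
        grad = []
        for j in range(len(t)):
            tp = list(t)
            tp[j] += eps
            grad.append((objective(tp) - objective(t)) / eps)
        t = [max(0.0, x - step * g) for x, g in zip(t, grad)]
    return t

t_opt = steepest_descent([3.0, 3.0])   # shifts thickness toward the lighter layer
```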

  9. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n) such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and in O(mn) time. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation.

  10. Predictive minimum description length principle approach to inferring gene regulatory networks.

    PubMed

    Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping

    2011-01-01

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
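
    The MI step of such information-theoretic inference is compact to state. A minimal sketch on discretised expression levels; the toy vectors and the fixed 0.5-bit cutoff are illustrative only, since choosing that threshold without a tuning parameter is exactly what the paper's PMDL method addresses:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Mutual information in bits between two equally long discrete sequences."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Discretised expression of three hypothetical genes across 8 samples:
g1 = [0, 0, 1, 1, 0, 1, 1, 0]
g2 = [0, 0, 1, 1, 0, 1, 1, 0]      # tracks g1 exactly -> MI = 1 bit
g3 = [0, 1, 0, 1, 0, 1, 0, 1]      # independent of g1 -> MI = 0
edge = mutual_information(g1, g2) > 0.5   # naive fixed-threshold decision
```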

  11. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  12. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  13. A Computational Model for Predicting Gas Breakdown

    NASA Astrophysics Data System (ADS)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with respect to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  14. Majorana-Based Fermionic Quantum Computation.

    PubMed

    O'Brien, T E; Rożek, P; Akhmerov, A R

    2018-06-01

    Because Majorana zero modes store quantum information nonlocally, they are protected from noise, and have been proposed as a building block for a quantum computer. We show how to use the same protection from noise to implement universal fermionic quantum computation. Our architecture requires only two Majorana modes to encode a fermionic quantum degree of freedom, compared to alternative implementations which require a minimum of four Majorana modes for a spin quantum degree of freedom. The fermionic degrees of freedom support both unitary coupled cluster variational quantum eigensolver and quantum phase estimation algorithms, proposed for quantum chemistry simulations. Because we avoid the Jordan-Wigner transformation, our scheme has a lower overhead for implementing both of these algorithms, allowing for simulation of the Trotterized Hubbard Hamiltonian in O(1) time per unitary step. We finally demonstrate magic state distillation in our fermionic architecture, giving a universal set of topologically protected fermionic quantum gates.

  15. The economics of data acquisition computers for ST and MST radars

    NASA Technical Reports Server (NTRS)

    Watkins, B. J.

    1983-01-01

    Some low cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum performance ST radar, whereas a multiprocessor and/or a multiarray processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades will still be possible at minimal expense. This modularity also aids in lowering the cost of software development because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.

  16. Majorana-Based Fermionic Quantum Computation

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Rożek, P.; Akhmerov, A. R.

    2018-06-01

    Because Majorana zero modes store quantum information nonlocally, they are protected from noise, and have been proposed as a building block for a quantum computer. We show how to use the same protection from noise to implement universal fermionic quantum computation. Our architecture requires only two Majorana modes to encode a fermionic quantum degree of freedom, compared to alternative implementations which require a minimum of four Majorana modes for a spin quantum degree of freedom. The fermionic degrees of freedom support both unitary coupled cluster variational quantum eigensolver and quantum phase estimation algorithms, proposed for quantum chemistry simulations. Because we avoid the Jordan-Wigner transformation, our scheme has a lower overhead for implementing both of these algorithms, allowing for simulation of the Trotterized Hubbard Hamiltonian in O(1) time per unitary step. We finally demonstrate magic state distillation in our fermionic architecture, giving a universal set of topologically protected fermionic quantum gates.

  17. Compliance with WHO IYCF Indicators and Dietary Intake Adequacy in a Sample of Malaysian Infants Aged 6–23 Months

    PubMed Central

    Khor, Geok Lin; Tan, Sue Yee; Tan, Kok Leong; Chan, Pauline S.; Amarra, Maria Sofia V.

    2016-01-01

    Background: The 2010 World Health Organisation (WHO) Infant and Young Child Feeding (IYCF) indicators are useful for monitoring feeding practices. Methods: A total sample of 300 subjects aged 6 to 23 months was recruited from urban suburbs of Kuala Lumpur and Putrajaya. Compliance with each IYCF indicator was computed according to WHO recommendations. Dietary intake based on two-day weighed food records was obtained from a sub-group (N = 119) of the total sample. The mean adequacy ratio (MAR) value was computed as an overall measure of dietary intake adequacy. Contributions of core IYCF indicators to MAR were determined by multinomial logistic regression. Results: Generally, the subjects showed high compliance for (i) timely introduction of complementary foods at 6 to 8 months (97.9%); (ii) minimum meal frequency among non-breastfed children aged 6 to 23 months (95.2%); (iii) consumption of iron-rich foods at 6 to 23 months (92.3%); and minimum dietary diversity (78.0%). While relatively high proportions achieved the recommended intake levels for protein (87.4%) and iron (71.4%), lower proportions attained the recommendations for calcium (56.3%) and energy (56.3%). The intake of micronutrients was generally poor. The minimum dietary diversity had the greatest contribution to MAR (95% CI: 3.09, 39.87) (p = 0.000) among the core IYCF indicators. Conclusion: Malaysian urban infants and toddlers showed moderate to high compliance with WHO IYCF indicators. The robustness of the analytical approach in this study in quantifying contributions of IYCF indicators to MAR should be further investigated. PMID:27916932

  18. Compliance with WHO IYCF Indicators and Dietary Intake Adequacy in a Sample of Malaysian Infants Aged 6-23 Months.

    PubMed

    Khor, Geok Lin; Tan, Sue Yee; Tan, Kok Leong; Chan, Pauline S; Amarra, Maria Sofia V

    2016-12-01

    The 2010 World Health Organisation (WHO) Infant and Young Child Feeding (IYCF) indicators are useful for monitoring feeding practices. A total sample of 300 subjects aged 6 to 23 months was recruited from urban suburbs of Kuala Lumpur and Putrajaya. Compliance with each IYCF indicator was computed according to WHO recommendations. Dietary intake based on two-day weighed food records was obtained from a sub-group (N = 119) of the total sample. The mean adequacy ratio (MAR) value was computed as an overall measure of dietary intake adequacy. Contributions of core IYCF indicators to MAR were determined by multinomial logistic regression. Generally, the subjects showed high compliance for (i) timely introduction of complementary foods at 6 to 8 months (97.9%); (ii) minimum meal frequency among non-breastfed children aged 6 to 23 months (95.2%); (iii) consumption of iron-rich foods at 6 to 23 months (92.3%); and minimum dietary diversity (78.0%). While relatively high proportions achieved the recommended intake levels for protein (87.4%) and iron (71.4%), lower proportions attained the recommendations for calcium (56.3%) and energy (56.3%). The intake of micronutrients was generally poor. The minimum dietary diversity had the greatest contribution to MAR (95% CI: 3.09, 39.87) (p = 0.000) among the core IYCF indicators. Malaysian urban infants and toddlers showed moderate to high compliance with WHO IYCF indicators. The robustness of the analytical approach in this study in quantifying contributions of IYCF indicators to MAR should be further investigated.

  19. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum-modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum-modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
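
    CCOMP itself is a published package; the sketch below only illustrates the general strategy of candidate detection by coarse sampling followed by local refinement, applied to a toy scalar "determinant" with a known root at z = i (the function names, grid resolution, and shrinking coordinate search are this sketch's own, not CCOMP's):

```python
def det(z):
    # Toy determinant with roots at z = +/-i; CCOMP would instead evaluate
    # the minimum-modulus eigenvalue of the actual system matrix at z.
    return z * z + 1

def find_root(re_range, im_range, steps=200, refine=60):
    """Coarse grid scan for a candidate minimum of |det(z)| inside the
    prescribed rectangle, followed by a crude shrinking coordinate search
    standing in for a bound-constrained minimizer."""
    candidates = (complex(re_range[0] + (re_range[1] - re_range[0]) * i / steps,
                          im_range[0] + (im_range[1] - im_range[0]) * j / steps)
                  for i in range(steps + 1) for j in range(steps + 1))
    best = min(candidates, key=lambda z: abs(det(z)))
    step = (re_range[1] - re_range[0]) / steps
    for _ in range(refine):
        best = min((best + d for d in (0, step, -step, 1j * step, -1j * step)),
                   key=lambda z: abs(det(z)))
        step *= 0.7
    return best

print(find_root((-2, 2), (0, 2)))  # close to 1j
```

    A production implementation would replace the local search with a proper bound-constrained minimizer and verify each minimum is an actual zero, as CCOMP's third sub-algorithm does.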

  20. Application of quadratic optimization to supersonic inlet control.

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1972-01-01

    This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake (inlet) of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time-invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, the required computation time is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequencies that were not significantly larger than those of the nonquadratic solution.
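
    The minimum-mean-square design described above is a linear-quadratic problem. As a generic illustration only (a scalar toy system, not the paper's inlet model), the steady-state feedback gain can be obtained by iterating the discrete-time Riccati recursion:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Steady-state gain k for u = -k*x minimizing E[sum(q*x^2 + r*u^2)]
    for the scalar discrete-time system x[t+1] = a*x[t] + b*u[t].
    Illustrative sketch only."""
    p = q
    for _ in range(iters):
        k = a * p * b / (r + b * p * b)   # gain from current cost-to-go
        p = q + a * p * (a - b * k)       # Riccati update of cost-to-go
    return k

# Integrator with unit weights: the optimal gain is 1/phi ~ 0.618
print(dlqr_scalar(1.0, 1.0, 1.0, 1.0))
```

    For the integrator example the fixed point satisfies p^2 = 1 + p, giving p equal to the golden ratio and k = 1/p, a standard sanity check for scalar LQR code.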

  1. Feedback laws for fuel minimization for transport aircraft

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Gracey, C.

    1984-01-01

    One of the long-range goals of the Theoretical Mechanics Branch is to solve real-time trajectory optimization problems on board an aircraft. This is a generic problem with application to all aspects of aviation, from general aviation through commercial to military. While the overall interest is in the generic problem, specific problems are examined to achieve concrete results. The problem is to develop control laws that generate approximately optimal trajectories with respect to criteria such as minimum time, minimum fuel, or some combination of the two. These laws must be simple enough to be implemented on a computer flown on board an aircraft, which implies a major simplification of the two-point boundary value problem generated by a standard trajectory optimization problem. In addition, the control laws must allow for changes in end conditions during the flight and changes in weather along a planned flight path. Therefore, a feedback control law that generates commands based on the current state, rather than a precomputed open-loop control law, is desired. This requirement, along with the need for order reduction, argues for the application of singular perturbation techniques.

  2. 20 CFR 229.53 - Reduction for social security benefits on employee's wage record.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...

  3. 20 CFR 229.49 - Adjustment of benefits under family maximum for change in family group.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... for change in family group. 229.49 Section 229.49 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.49 Adjustment of benefits under family maximum for change in family group. (a...

  4. 20 CFR 229.53 - Reduction for social security benefits on employee's wage record.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...

  5. 20 CFR 229.53 - Reduction for social security benefits on employee's wage record.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...

  6. 20 CFR 229.53 - Reduction for social security benefits on employee's wage record.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...

  7. 20 CFR 229.53 - Reduction for social security benefits on employee's wage record.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...

  8. Exploiting Identical Generators in Unit Commitment

    DOE PAGES

    Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul

    2017-12-14

    Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.

  9. Exploiting Identical Generators in Unit Commitment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul

    Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.

  10. Optimizing noise control strategy in a forging workshop.

    PubMed

    Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal

    2014-01-01

    In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and picked out for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into 4 sections that specify an appropriate abatement method for each operator and machine, minimum allowance time for high-risk operators, required damping material for enclosures, and minimum total cost of these treatments. The validity of input data in addition to proper settings in the optimization model ensures the final solution is practical and economically reasonable.
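
    The paper's MATLAB model and cost data are not public; the following is only a generic real-coded genetic-algorithm skeleton of the kind such a program builds on (tournament selection, uniform crossover, Gaussian mutation, elitism), demonstrated on a toy cost function rather than the noise-control objective:

```python
import random

def ga_minimise(cost, bounds, pop_size=40, gens=120, p_mut=0.2, seed=1):
    """Minimize cost(x) over box constraints with a minimal genetic algorithm."""
    rng = random.Random(seed)

    def random_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def tournament(pop):
        # pick the cheapest of 3 random individuals
        return min(rng.sample(pop, 3), key=cost)

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for _ in range(pop_size):
            a, b = tournament(population), tournament(population)
            # uniform crossover: each gene comes from either parent
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # Gaussian mutation, clamped to the bounds
            child = [min(max(v + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                     if rng.random() < p_mut else v
                     for v, (lo, hi) in zip(child, bounds)]
            offspring.append(child)
        elite = min(population, key=cost)  # elitism: keep the best so far
        population = sorted(offspring + [elite], key=cost)[:pop_size]
    return min(population, key=cost)

best = ga_minimise(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

    In the noise-control setting, a chromosome would instead encode the abatement choice per machine and operator, with the cost function summing treatment costs subject to exposure limits.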

  11. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems.

    PubMed

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V

    2016-01-01

    Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than that of one shape. The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.

  12. How to Write a Reproducible Paper

    NASA Astrophysics Data System (ADS)

    Irving, D. B.

    2016-12-01

    The geosciences have undergone a computational revolution in recent decades, to the point where almost all modern research relies heavily on software and code. Despite this profound change in the research methods employed by geoscientists, the reporting of computational results has changed very little in academic journals. This lag has led to something of a reproducibility crisis, whereby it is impossible to replicate and verify most of today's published computational results. While it is tempting to decry the slow response of journals and funding agencies in the face of this crisis, there are very few examples of reproducible research upon which to base new communication standards. In an attempt to address this deficiency, this presentation will describe a procedure for reporting computational results that was employed in a recent Journal of Climate paper. The procedure was developed to be consistent with recommended computational best practices and seeks to minimize the time burden on authors, which has been identified as the most important barrier to publishing code. It should provide a starting point for geoscientists looking to publish reproducible research, and could be adopted by journals as a formal minimum communication standard.

  13. Guidance of a Solar Sail Spacecraft to the Sun-Earth L_2 Point.

    NASA Astrophysics Data System (ADS)

    Hur, Sun Hae

    The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. 
In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.

  14. 17 CFR 1.18 - Records for and relating to financial reporting and monthly computation by futures commission...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... financial reporting and monthly computation by futures commission merchants and introducing brokers. 1.18... UNDER THE COMMODITY EXCHANGE ACT Minimum Financial and Related Reporting Requirements § 1.18 Records for and relating to financial reporting and monthly computation by futures commission merchants and...

  15. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) The random number generator shall be linked to the computer system and shall directly relay the... information shall be generated by the computer system. (2) This documentation shall be restricted to... to the computer system shall be adequately restricted (i.e., passwords are changed at least quarterly...

  16. 29 CFR 783.43 - Computation of seaman's minimum wage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS... STANDARDS ACT TO EMPLOYEES EMPLOYED AS SEAMEN Computation of Wages and Hours § 783.43 Computation of seaman... all hours on duty in such period at the hourly rate prescribed for employees newly covered by the Act...

  17. 17 CFR 1.18 - Records for and relating to financial reporting and monthly computation by futures commission...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... financial reporting and monthly computation by futures commission merchants and introducing brokers. 1.18... UNDER THE COMMODITY EXCHANGE ACT Minimum Financial and Related Reporting Requirements § 1.18 Records for and relating to financial reporting and monthly computation by futures commission merchants and...

  18. The use of computers in a materials science laboratory

    NASA Technical Reports Server (NTRS)

    Neville, J. P.

    1990-01-01

    The objective is to make available a method of easily recording the microstructure of a sample by means of a computer. The method requires a minimum investment and little or no instruction on the operation of a computer. An outline of the setup involving a black and white TV camera, a digitizer control box, a metallurgical microscope and a computer screen, printer, and keyboard is shown.

  19. V571 Lyr is a Multiple System (Abstract)

    NASA Astrophysics Data System (ADS)

    Billings, G.

    2016-12-01

    (Abstract only) V571 Lyr (GSC 3116-1047) was discovered by the ROTSE survey to be an EA-type eclipsing binary with 1.25-day period. Primary and secondary eclipses are very similar, with depth V = 0.58 magnitude. In 2000, the then-active AAVSO "EB Team" started observing it, to refine the period estimate. A few eclipses were readily found, and a revised period computed. Subsequent eclipses diverged from the revised linear ephemeris by more than the expected amount of error, so observations were continued. Now, more than 100 time-of-minimum observations, over 15 years, clearly show that V571 Lyr is a triple system, with a third-body orbital period of 5.013 ± 0.008 years, and eccentricity of 0.74 ± 0.03. Our orbit fit also yields a period for the close pair, of 1.252 596 66(6) days. After removing the third-body light-time effect, the eclipse-time residuals still show larger than expected scatter, and possibly non-randomness, perhaps due to significant starspots and/or additional bodies in the system. The color of the system is B-V = 0.52 ± 0.01, corresponding to spectral type F7V, and we obtained a spectrum that we classify as F7V ± 2. The mass function computed from the fitted third-body orbit yields a minimum mass of 1.0 ± 0.1 Msolar, corresponding to a spectral range of F9V to G5V for the third star. We assume the two stars of the close pair are very similar, so the remaining light in eclipses (59%) is consistent with total eclipses and 3rd light from a star slightly dimmer than each of the pair.
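
    For reference, the minimum-mass estimate quoted above comes from the standard binary mass function, which relates the fitted light-time orbit (third-body period P_3 and projected semi-major axis a_{12} sin i of the close pair about the barycenter) to the third-body mass; in its usual form (stated here from standard theory, not from the abstract itself):

```latex
f(m_3) \;=\; \frac{(m_3 \sin i)^3}{(m_1 + m_2 + m_3)^2}
       \;=\; \frac{4\pi^2}{G}\,\frac{(a_{12}\sin i)^3}{P_3^{\,2}}
```

    Setting sin i = 1 and adopting masses for the close pair yields the minimum mass of the third star, here 1.0 ± 0.1 solar masses.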

  20. Evenly spaced Detrended Fluctuation Analysis: Selecting the number of points for the diffusion plot

    NASA Astrophysics Data System (ADS)

    Liddy, Joshua J.; Haddad, Jeffrey M.

    2018-02-01

    Detrended Fluctuation Analysis (DFA) has become a widely-used tool to examine the correlation structure of a time series and has provided insights into neuromuscular health and disease states. As the popularity of utilizing DFA in the human behavioral sciences has grown, understanding its limitations and how to properly determine parameters is becoming increasingly important. DFA examines the correlation structure of variability in a time series by computing α, the slope of the log SD vs. log n diffusion plot. When using the traditional DFA algorithm, the timescales, n, are often selected as the set of integers between a minimum and maximum length based on the number of data points in the time series. This produces non-uniformly distributed values of n in logarithmic scale, which influences the estimation of α due to a disproportionate weighting of the long-timescale regions of the diffusion plot. Recently, the evenly spaced DFA and evenly spaced average DFA algorithms were introduced. Both algorithms compute α by selecting k points for the diffusion plot based on the minimum and maximum timescales of interest and improve the consistency of α estimates for simulated fractional Gaussian noise and fractional Brownian motion time series. Two issues that remain unaddressed are (1) how to select k and (2) whether the evenly spaced DFA algorithms show similar benefits when assessing human behavioral data. We manipulated k and examined its effects on the accuracy, consistency, and confidence limits of α in simulated and experimental time series. We demonstrate that the accuracy and consistency of α are relatively unaffected by the selection of k. However, the confidence limits of α narrow as k increases, dramatically reducing measurement uncertainty for single trials. We provide guidelines for selecting k and discuss potential uses of the evenly spaced DFA algorithms when assessing human behavioral data.
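
    The selection step the abstract describes, k timescales evenly spaced in log scale between the minimum and maximum window sizes, can be sketched as follows (the function name and the rounding-to-unique-integers choice are this sketch's, not necessarily the published algorithm's):

```python
def evenly_spaced_scales(n_min, n_max, k):
    """k window sizes evenly spaced in log scale on [n_min, n_max],
    rounded to unique integers, for the log SD vs. log n diffusion plot."""
    ratio = (n_max / n_min) ** (1.0 / (k - 1))  # constant log-spacing factor
    return sorted({round(n_min * ratio ** i) for i in range(k)})

print(evenly_spaced_scales(4, 64, 5))  # [4, 8, 16, 32, 64]
```

    Compare the traditional selection, every integer from n_min to n_max, which crowds the long-timescale end of the log-log plot and motivates the evenly spaced variants.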

  1. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been extensively used to simulate a wide variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for simulating electromagnetic problems that have very fine structures. To deal with this problem, the hybrid implicit-explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction with fine structures to avoid the constraint that a fine spatial mesh places on the time step size. This method therefore has much higher computational efficiency than the FDTD method, and is extremely useful for problems that have fine structures in one direction. In this paper, the basic formulations, time-stability condition, and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary, and periodic boundary, are described, and some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
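
The CFL limit that motivates HIE-FDTD can be illustrated numerically. This is a hedged sketch: the relaxed bound for the implicit direction follows the commonly cited weakly-conditional-stability result for HIE-FDTD, and the cell sizes in the usage note are hypothetical:

```python
import math

C0 = 299792458.0  # speed of light in vacuum, m/s

def fdtd_max_dt(dx, dy, dz, c=C0):
    """CFL limit for the standard 3-D Yee FDTD grid: the smallest cell controls dt."""
    return 1.0 / (c * math.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))

def hie_fdtd_max_dt(dx, dy, c=C0):
    """HIE-FDTD with implicit updates along z: dt no longer depends on the fine dz."""
    return 1.0 / (c * math.sqrt(1/dx**2 + 1/dy**2))
```

With dx = dy = 1 mm and a fine dz = 1 µm, the explicit limit collapses to a few femtoseconds, while the hybrid limit stays at the picosecond scale set by the coarse transverse cells.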

  2. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere-differentiable functions, but not benchmarked against actual signals. Therefore, they can produce opposite results for extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision-making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  3. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere-differentiable functions, but not benchmarked against actual signals. Therefore, they can produce opposite results for extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision-making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  4. A Computer Analysis of Library Postcards. (CALP)

    ERIC Educational Resources Information Center

    Stevens, Norman D.

    1974-01-01

    A description of a sophisticated application of computer techniques to the analysis of a collection of picture postcards of library buildings in an attempt to establish the minimum architectural requirements needed to distinguish one style of library building from another. (Author)

  5. Computer program optimizes design of nuclear radiation shields

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1971-01-01

    Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.

  6. 20 CFR 229.54 - Reduction for social security benefit paid to employee on another person's earnings record.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...

  7. 20 CFR 229.54 - Reduction for social security benefit paid to employee on another person's earnings record.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...

  8. 20 CFR 229.54 - Reduction for social security benefit paid to employee on another person's earnings record.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for social security benefit paid to... BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.54 Reduction for social security benefit paid to employee on...

  9. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 1: Development of a concise stratospheric model

    NASA Technical Reports Server (NTRS)

    Rundel, R. D.; Butler, D. M.; Stolarski, R. S.

    1977-01-01

    A concise model has been developed to analyze uncertainties in stratospheric perturbations, yet uses a minimum of computer time and is complete enough to represent the results of more complex models. The steady state model applies iteration to achieve coupling between interacting species. The species are determined from diffusion equations with appropriate sources and sinks. Diurnal effects due to chlorine nitrate formation are accounted for by analytic approximation. The model has been used to evaluate steady state perturbations due to injections of chlorine and NO(X).

  10. Radon detection system, design, test and performance

    NASA Astrophysics Data System (ADS)

    Balcázar, M.; Chávez, A.; Piña-Villalpando, G.; Navarrete, M.

    1999-02-01

    A portable radon detection system (α-Inin) has been designed and constructed for use in adverse environmental conditions where humidity, temperature, and chemical vapors are present. The minimum integration time is 15 min, over measurement periods of up to 41 days. A 12 V battery and a photovoltaic module give the α-Inin autonomy in field measurements. Data are collected by means of a laptop computer, on which data processing and α-Inin programming are carried out. The α-Inin's performance was tested in a controlled radon chamber, simultaneously with a commercial α-Meter.

  11. Detection of biogenic CO production above vascular cell cultures using a near-room-temperature QC-DFB laser

    NASA Technical Reports Server (NTRS)

    Kosterev, A. A.; Tittel, F. K.; Durante, W.; Allen, M.; Kohler, R.; Gmachl, C.; Capasso, F.; Sivco, D. L.; Cho, A. Y.

    2002-01-01

    We report the first application of pulsed, near-room-temperature quantum cascade laser technology to the continuous detection of biogenic CO production rates above viable cultures of vascular smooth muscle cells. A computer-controlled sequence of measurements over a 9-h period was obtained, resulting in a minimum detectable CO production of 20 ppb in a 1-m optical path above a standard cell-culture flask. Data-processing procedures for real-time monitoring of both biogenic and ambient atmospheric CO concentrations are described.

  12. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.

  13. A comparison of the wavelet and short-time fourier transforms for Doppler spectral analysis.

    PubMed

    Zhang, Yufeng; Guo, Zhenyu; Wang, Weilian; He, Side; Lee, Ting; Loew, Murray

    2003-09-01

    Doppler spectrum analysis provides a non-invasive means to measure blood flow velocity and to diagnose arterial occlusive disease. The time-frequency representation of the Doppler blood flow signal is normally computed using the short-time Fourier transform (STFT). This transform requires stationarity of the signal during a finite time interval, and thus imposes some constraints on the representation estimate. In addition, the STFT has a fixed time-frequency window, making it inaccurate for analyzing signals having relatively wide bandwidths that change rapidly with time. In the present study, the wavelet transform (WT), which has a flexible time-frequency window, was investigated to determine its advantages and limitations for the analysis of the Doppler blood flow signal. Representations computed using the WT with a modified Morlet wavelet were compared with the theoretical representation and with those computed using the STFT with a Gaussian window. The time and frequency resolutions of the two approaches were compared. Three indices, the normalized root-mean-squared errors of the minimum, maximum, and mean frequency waveforms, were used to evaluate the performance of the WT. Results showed that the WT can not only be used as an alternative signal processing tool to the STFT for Doppler blood flow signals, but can also generate a time-frequency representation with better resolution than the STFT. In addition, the WT method can provide both satisfactory mean frequencies and maximum frequencies. This technique is expected to be useful for the analysis of Doppler blood flow signals to quantify arterial stenoses.
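
As a rough illustration of the mean-frequency index evaluated above, the spectral centroid per frame of a Gaussian-windowed STFT can be sketched as follows. This is a minimal sketch; the window length, hop, and Gaussian width are assumptions, not the authors' parameters:

```python
import numpy as np

def stft_mean_frequency(x, fs, win_len, hop):
    """Mean (spectral-centroid) frequency per frame of a Gaussian-windowed STFT."""
    n_frames = (len(x) - win_len) // hop + 1
    t = np.arange(win_len)
    win = np.exp(-0.5 * ((t - win_len / 2) / (win_len / 6)) ** 2)  # Gaussian window
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    means = []
    for i in range(n_frames):
        frame = x[i * hop:i * hop + win_len] * win
        p = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum of the frame
        means.append(np.sum(freqs * p) / np.sum(p))  # centroid = mean frequency
    return np.array(means)
```

For a pure tone the centroid waveform should sit near the tone frequency; a wavelet-based estimate would instead adapt its time-frequency window with frequency.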

  14. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...

  15. Developing Digital Immigrants' Computer Literacy: The Case of Unemployed Women

    ERIC Educational Resources Information Center

    Ktoridou, Despo; Eteokleous-Grigoriou, Nikleia

    2011-01-01

    Purpose: The purpose of this study is to evaluate the effectiveness of a 40-hour computer course for beginners provided to a group of unemployed women learners with no/minimum computer literacy skills who can be characterized as digital immigrants. The aim of the study is to identify participants' perceptions and experiences regarding technology,…

  16. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
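
Az above is the area under the ROC curve; a minimal nonparametric estimate of it (the Mann-Whitney statistic over observer scores for signal-present and signal-absent trials) can be sketched as follows. The scores in the test are illustrative, not from the study:

```python
def auc_mann_whitney(signal_scores, noise_scores):
    """Nonparametric Az: P(signal score > noise score), counting ties as 1/2."""
    wins = sum((s > n) + 0.5 * (s == n)
               for s in signal_scores
               for n in noise_scores)
    return wins / (len(signal_scores) * len(noise_scores))
```

Perfect separation gives Az = 1.0 and pure guessing gives Az = 0.5, which is why narrowing the confidence limits on Az matters for single-configuration comparisons.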

  17. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography

    PubMed Central

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    Abstract. Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model’s template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO. PMID:27284547

  18. A Proposal for Production Data Collection on a Hybrid Production Line in Cooperation with MES

    NASA Astrophysics Data System (ADS)

    Znamenák, Jaroslav; Križanová, Gabriela; Iringová, Miriam; Važan, Pavel

    2016-12-01

    Due to the increasingly competitive environment in the manufacturing sector, many industries need a computer-integrated engineering management system. The Manufacturing Execution System (MES) is a computer system designed for manufacturing products with high quality, low cost, and minimum lead time. MES is a type of middleware providing the information required to optimize production from the launch of a product order to its completion. There are many studies dealing with the advantages of using MES, but little research has been conducted on how to implement MES effectively. One solution to this issue is the use of key performance indicators (KPIs), which are important to many strategic philosophies and practices for improving the production process. This paper describes a proposal for analyzing manufacturing system parameters with the use of KPIs.

  19. Effects of pressure drop and superficial velocity on the bubbling fluidized bed incinerator.

    PubMed

    Wang, Feng-Jehng; Chen, Suming; Lei, Perng-Kwei; Wu, Chung-Hsing

    2007-12-01

    Since performance and operational conditions, such as superficial velocity, pressure drop, particle voidage, and terminal velocity, are difficult to measure on an incinerator, this study used computational fluid dynamics (CFD) to determine numerical solutions. The effects of pressure drop and superficial velocity on a bubbling fluidized bed incinerator (BFBI) were evaluated. Analytical results indicated that the simulation models were able to effectively predict the relationship between superficial velocity and pressure drop over bed height in the BFBI. Second, the BFBI models were simplified to simulate scale-up beds without excessive computation time. Moreover, simulation and experimental results showed that the minimum fluidization velocity of the BFBI must be controlled within 0.188-3.684 m/s, and that pressure drop was mainly caused by bed particles.

  20. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and in O(mn) time. We extend the application of DNA molecular operations and simultaneity to reduce the complexity of the computation. PMID:26512650

  1. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    PubMed

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  2. Exact solutions for the collaborative pickup and delivery problem.

    PubMed

    Gansterer, Margaretha; Hartl, Richard F; Salzmann, Philipp E H

    2018-01-01

    In this study we investigate the decision problem of a central authority in pickup-and-delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workloads, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.

  3. Efficient volume computation for three-dimensional hexahedral cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dukowicz, J.K.

    1988-02-01

    Currently, algorithms for computing the volume of hexahedral cells with ''ruled'' surfaces require a minimum of 122 FLOPs (floating point operations) per cell. A new algorithm is described which reduces the operation count to 57 FLOPs per cell. copyright 1988 Academic Press, Inc.
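
The abstract does not give Dukowicz's 57-FLOP formula itself. As a reference implementation for comparison, a hexahedron can be decomposed into six tetrahedra sharing the main diagonal; this is exact for planar faces (ruled faces need the paper's formula), and the vertex ordering assumed here (v0-v3 the bottom face, v4-v7 the corresponding top vertices) is a convention, not from the source:

```python
def tet_volume(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d): det([b-a, c-a, d-a]) / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return det / 6.0

def hex_volume(v):
    """Volume of hexahedron v[0..7], split into 6 tets around diagonal v0-v6."""
    tets = [(0, 1, 2, 6), (0, 2, 3, 6), (0, 3, 7, 6),
            (0, 7, 4, 6), (0, 4, 5, 6), (0, 5, 1, 6)]
    return abs(sum(tet_volume(v[a], v[b], v[c], v[d]) for a, b, c, d in tets))
```

Each tetrahedron costs one 3x3 determinant, which is why cell-volume operation counts are measured in FLOPs per cell.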

  4. Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed

    NASA Astrophysics Data System (ADS)

    Carrasco, V. M. S.; Vaquero, J. M.

    2016-11-01

    We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.

  5. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1976-01-01

    An example sensitivity study is presented to demonstrate how a digital autopilot designer could decide on the minimum sampling rate for computer specification. It consists of comparing the simulated step response of an existing analog autopilot and its associated aircraft dynamics to the digital version operating at various sampling frequencies, and specifying a sampling frequency that results in an acceptable change in relative stability. In general, the zero-order hold introduces phase lag, which will increase overshoot and settling time. It should be noted that this solution is for substituting a digital autopilot for a continuous autopilot. A complete redesign could yield results which more closely resemble the continuous results or which conform better to the original design goals.

  6. Feedforward/feedback control synthesis for performance and robustness

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Liu, Qiang

    1990-01-01

    Both feedforward and feedback control approaches for uncertain dynamical systems are investigated. The control design objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant modeling uncertainty. Preshaping of an ideal, time-optimal control input using a 'tapped-delay' filter is shown to provide a rapid maneuver with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. The proposed feedforward/feedback control approach is robust for a certain class of uncertain dynamical systems, since the control input command computed for a given desired output does not depend on the plant parameters.

  7. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement given the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations in a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  8. GPU-accelerated low-latency real-time searches for gravitational waves from compact binary coalescence

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Du, Zhihui; Chung, Shin Kee; Hooper, Shaun; Blair, David; Wen, Linqing

    2012-12-01

    We present a graphics processing unit (GPU)-accelerated time-domain low-latency algorithm to search for gravitational waves (GWs) from coalescing binaries of compact objects based on the summed parallel infinite impulse response (SPIIR) filtering technique. The aim is to facilitate fast detection of GWs with a minimum delay to allow prompt electromagnetic follow-up observations. To maximize the GPU acceleration, we apply an efficient batched parallel computing model that significantly reduces the number of synchronizations in SPIIR and optimizes the usage of the memory and hardware resource. Our code is tested on the CUDA ‘Fermi’ architecture in a GTX 480 graphics card and its performance is compared with a single core of Intel Core i7 920 (2.67 GHz). A 58-fold speedup is achieved while giving results in close agreement with the CPU implementation. Our result indicates that it is possible to conduct a full search for GWs from compact binary coalescence in real time with only one desktop computer equipped with a Fermi GPU card for the initial LIGO detectors which in the past required more than 100 CPUs.

  9. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
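
The FCFS baseline that Constrained Position Shifting perturbs can be sketched with a single uniform separation. This is a deliberate simplification: the FAA minima actually depend on the leader/follower weight-class pair, and the DP in the paper additionally searches over position shifts, precedence constraints, and time windows:

```python
def fcfs_landing_times(etas, sep):
    """Earliest landing times in First-Come-First-Served order.

    etas: estimated times of arrival; sep: a single (simplified) minimum
    inter-arrival separation applied between consecutive landings.
    """
    times = []
    t = float("-inf")
    for eta in sorted(etas):
        t = max(eta, t + sep)  # land at ETA, or sep after the previous landing
        times.append(t)
    return times
```

The completion time of this schedule (the last entry) is the throughput baseline that a CPS search, allowed to swap aircraft within a few positions, tries to beat.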

  10. HLYWD: a program for post-processing data files to generate selected plots or time-lapse graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The program HLYWD is a post-processor of output files generated by large plasma simulation computations or of data files containing a time sequence of plasma diagnostics. It is intended to be used in a production mode for either type of application; i.e., it allows one to generate, along with the graphics sequence, segments containing a title, credits to those who performed the work, text to describe the graphics, and acknowledgement of the funding agency. The current version is designed to generate 3D plots and allows one to select the type of display (linear or semi-log scales), the normalization of function values for display purposes, the viewing perspective, and an option to allow continuous rotation of surfaces. This program was developed with the intention of being relatively easy to use, reasonably flexible, and requiring a minimum investment of the user's time. It uses the TV80 library of graphics software and ORDERLIB system software on the CDC 7600 at the National Magnetic Fusion Energy Computing Center at Lawrence Livermore Laboratory in California.

  11. 42 CFR 84.83 - Timers; elapsed time indicators; remaining service life indicators; minimum requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Timers; elapsed time indicators; remaining service life indicators; minimum requirements. 84.83 Section 84.83 Public Health PUBLIC HEALTH SERVICE... indicators; remaining service life indicators; minimum requirements. (a) Elapsed time indicators shall be...

  12. 42 CFR 84.83 - Timers; elapsed time indicators; remaining service life indicators; minimum requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Timers; elapsed time indicators; remaining service life indicators; minimum requirements. 84.83 Section 84.83 Public Health PUBLIC HEALTH SERVICE... indicators; remaining service life indicators; minimum requirements. (a) Elapsed time indicators shall be...

  13. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.

    PubMed

    Fayyaz S, S Kiavash; Liu, Xiaoyue Cathy; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with constrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time of day (e.g., every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time of day (e.g., 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g., all times of day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks to allow transit agencies and researchers to perform high-resolution transit performance analysis.
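
The abstract does not spell out the algorithm; as an illustration of the kind of OD travel-time query involved, a minimal earliest-arrival scan over a timetable's connection list sorted by departure time (in the spirit of timetable-based transit routing) can be sketched as follows. The stop names and times are made up:

```python
def earliest_arrival(connections, origin, destination, depart_time):
    """Earliest-arrival scan over timetable connections sorted by departure time.

    connections: list of (dep_stop, arr_stop, dep_time, arr_time) tuples.
    Returns the earliest arrival time at destination, or inf if unreachable.
    """
    best = {origin: depart_time}  # earliest known arrival time per stop
    for dep_stop, arr_stop, dep_t, arr_t in connections:
        # A connection is usable if we can reach its departure stop in time,
        # and it is kept only if it improves the arrival at its arrival stop.
        if best.get(dep_stop, float("inf")) <= dep_t and \
           arr_t < best.get(arr_stop, float("inf")):
            best[arr_stop] = arr_t
    return best.get(destination, float("inf"))
```

Because one linear pass answers a query for a given departure time, sweeping the departure time minute by minute yields the time-of-day accessibility profile the paper targets.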

  14. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis

    PubMed Central

    Fayyaz S., S. Kiavash; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George’s transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. PMID:28981544

  15. Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System

    NASA Astrophysics Data System (ADS)

    Isik, Can

    An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. 
Also included in the dissertation are the description of the computer simulation of Pilot within the hierarchy of IMAS control and the simulated experiments that demonstrate the theoretical work.

  16. Numerical optimization techniques for bound circulation distribution for minimum induced drag of Nonplanar wings: Computer program documentation

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.; Ku, T. J.

    1981-01-01

    A two dimensional advanced panel far-field potential flow model of the undistorted, interacting wakes of multiple lifting surfaces was developed which allows the determination of the spanwise bound circulation distribution required for minimum induced drag. This model was implemented in a FORTRAN computer program, the use of which is documented in this report. The nonplanar wakes are broken up into variable sized, flat panels, as chosen by the user. The wake vortex sheet strength is assumed to vary linearly over each of these panels, resulting in a quadratic variation of bound circulation. Panels are infinite in the streamwise direction. The theory is briefly summarized herein; sample results are given for multiple, nonplanar, lifting surfaces, and the use of the computer program is detailed in the appendixes.

  17. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize resource utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and compute its average value. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
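    The allocation loop summarized above can be sketched as follows. This is a hedged reading of the abstract: the ETC-matrix input, the ready-time bookkeeping, and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the Sort-Mid heuristic as described in the abstract: for each
# unscheduled task, sort and average its completion times across machines;
# pick the task with the largest average; assign it to the machine offering
# its minimum completion time; repeat until all tasks are placed.

def sort_mid(etc):
    """etc[t][m] = execution time of task t on machine m.
    Returns (task -> machine assignment, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # time each machine becomes free
    assignment = {}
    unscheduled = set(range(n_tasks))
    while unscheduled:
        # completion time of task t on machine m = ready[m] + etc[t][m]
        def avg_completion(t):
            return sum(sorted(ready[m] + etc[t][m]
                              for m in range(n_machines))) / n_machines
        # task with the maximum average completion time is scheduled first
        t = max(unscheduled, key=avg_completion)
        # allocate it to the machine with the minimum completion time
        m = min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
        ready[m] += etc[t][m]
        assignment[t] = m
        unscheduled.remove(t)
    return assignment, max(ready)
```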

  18. Tetrahedron Formation Control

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.

    2003-01-01

    Spacecraft flying in tetrahedron formations are excellent instrument platforms for electromagnetic and plasma studies. A minimum of four spacecraft - to establish a volume - is required to study some of the key regions of a planetary magnetic field. The usefulness of the measurements recorded is strongly affected by the tetrahedron orbital evolution. This paper considers the preliminary development of a general optimization procedure for tetrahedron formation control. The maneuvers are assumed to be impulsive and a multi-stage optimization method is employed. The stages include targeting to a fixed tetrahedron orientation, rotating and translating the tetrahedron and/or varying the initial and final times. The number of impulsive maneuvers can also be varied. As the impulse locations and times change, new arcs are computed using a differential corrections scheme that varies the impulse magnitudes and directions. The result is a continuous trajectory with velocity discontinuities. The velocity discontinuities are then used to formulate the cost function. Direct optimization techniques are employed. The procedure is applied to the Magnetospheric Multiscale Mission (MMS) to compute preliminary formation control fuel requirements.

  19. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    PubMed

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in AMBER and GROMACS packages now become available in addition to CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  20. Real time target allocation in cooperative unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kudleppanavar, Ganesh

    The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
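    A minimal sketch of the nearest-neighbor, energy-aware allocation idea described above; the energy model, names, and tie-breaking are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: a new target is assigned to the vehicle whose last assigned
# target (or current position) is closest, provided the vehicle retains
# enough stored energy to reach it. The linear distance-to-energy model is
# an assumption for illustration only.
import math

def assign_target(uavs, target, energy_per_unit=1.0):
    """uavs: list of dicts with 'pos' (x, y) and 'energy'. Mutates the
    chosen UAV in place; returns its index, or None if none can reach."""
    best_i, best_d = None, math.inf
    for i, u in enumerate(uavs):
        d = math.dist(u["pos"], target)        # Euclidean distance
        if d < best_d and u["energy"] >= d * energy_per_unit:
            best_i, best_d = i, d
    if best_i is not None:
        uavs[best_i]["pos"] = target           # vehicle proceeds to target
        uavs[best_i]["energy"] -= best_d * energy_per_unit
    return best_i
```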

  1. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize resource utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and compute its average value. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937

  2. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an on-going effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach, by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.

  3. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs

    PubMed Central

    Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-01-01

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm3, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness. PMID:29772670

  4. Einstein@Home search for periodic gravitational waves in early S5 LIGO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, B. P.; Abbott, R.; Adhikari, R.

    This paper reports on an all-sky search for periodic gravitational waves from sources such as deformed isolated rapidly spinning neutron stars. The analysis uses 840 hours of data from 66 days of the fifth LIGO science run (S5). The data were searched for quasimonochromatic waves with frequencies f in the range from 50 to 1500 Hz, with a linear frequency drift ḟ (measured at the solar system barycenter) in the range -f/τ

  5. Lower Ionosphere Sensitivity to Solar X-ray Flares Over a Complete Solar Cycle Evaluated From VLF Signal Measurements

    NASA Astrophysics Data System (ADS)

    Macotela, Edith L.; Raulin, Jean-Pierre; Manninen, Jyrki; Correia, Emília; Turunen, Tauno; Magalhães, Antonio

    2017-12-01

    The daytime lower ionosphere behaves as a solar X-ray flare detector, which can be monitored using very low frequency (VLF) radio waves that propagate inside the Earth-ionosphere waveguide. In this paper, we infer the lower ionosphere sensitivity variation over a complete solar cycle by using the minimum X-ray fluence (FXmin) necessary to produce a disturbance of the quiescent ionospheric conductivity. FXmin is the photon energy flux integrated over the time interval from the start of a solar X-ray flare to the beginning of the ionospheric disturbance recorded as amplitude deviation of the VLF signal. FXmin is computed for ionospheric disturbances that occurred in the time interval of December-January from 2007 to 2016 (solar cycle 24). The computation of FXmin uses the X-ray flux in the wavelength band below 0.2 nm and the amplitude of VLF signals transmitted from France (HWU), Turkey (TBB), and U.S. (NAA), which were recorded in Brazil, Finland, and Peru. The main result of this study is that the long-term variation of FXmin is correlated with the level of solar activity, having FXmin values in the range (1–12) × 10⁻⁷ J/m². Our result suggests that FXmin is anticorrelated with the lower ionosphere sensitivity, confirming that the long-term variation of the ionospheric sensitivity is anticorrelated with the level of solar activity. This result is important to identify the minimum X-ray fluence that an external source of ionization must overcome in order to produce a measurable ionospheric disturbance during daytime.
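    The fluence described above is a time integral of the X-ray flux from flare start to disturbance onset. A hedged numerical sketch follows; the trapezoid rule and all names are illustrative assumptions, not the study's processing code.

```python
# Sketch of the fluence computation: integrate the sampled X-ray flux
# over the interval from flare start (t_flare) to the onset of the VLF
# amplitude disturbance (t_onset), using the trapezoid rule.

def xray_fluence(times, flux, t_flare, t_onset):
    """times [s] and flux [W/m^2] are equal-length sampled series.
    Returns fluence [J/m^2] over t_flare <= t <= t_onset."""
    total = 0.0
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        if t0 >= t_flare and t1 <= t_onset:
            # trapezoid: mean flux of the segment times its duration
            total += 0.5 * (flux[i] + flux[i + 1]) * (t1 - t0)
    return total
```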

  6. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-file Systems

    PubMed Central

    Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Naik, Saraswathi V

    2016-01-01

    ABSTRACT Background: Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. Study design: This is an experimental, in vitro study comparing the two groups. Materials and methods: A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. Results: A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than one shape. Conclusion: The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49. PMID:27274155

  7. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  8. Modification of Prim’s algorithm on complete broadcasting graph

    NASA Astrophysics Data System (ADS)

    Dairina; Arif, Salmawaty; Munzir, Said; Halfiani, Vera; Ramli, Marwan

    2017-09-01

    Broadcasting is the dissemination of information from one object to another through communication between two objects in a network. Broadcasting among n objects can be accomplished with n − 1 communications in a minimum of ⌈log₂ n⌉ time units. In this paper, weighted graph broadcasting is considered, and the minimum weight of a complete broadcasting graph is determined. A broadcasting graph is said to be complete if every pair of vertices is connected. Thus, determining the minimum weight of a complete broadcasting graph is equivalent to determining the minimum spanning tree of a complete graph. Kruskal's and Prim's algorithms are used to determine the minimum weight of a complete broadcasting graph regardless of the minimum time unit ⌈log₂ n⌉, and a modified Prim's algorithm is developed for problems constrained by the minimum time unit ⌈log₂ n⌉. As an example case, the training-of-trainers problem is solved using these algorithms.
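    The equivalence noted above reduces the minimum-weight complete broadcasting graph to the minimum spanning tree of a complete graph. A standard Prim's-algorithm sketch is given below; the paper's time-unit-constrained modification of Prim's algorithm is not reproduced here.

```python
# Prim's algorithm for the minimum spanning tree of a complete weighted
# graph, using a lazy-deletion priority queue of candidate edges.
import heapq

def prim_mst(weights):
    """weights[i][j] = weight of edge (i, j) in a complete graph on
    n vertices. Returns (total MST weight, list of tree edges)."""
    n = len(weights)
    in_tree = [False] * n
    edges, total = [], 0.0
    heap = [(0.0, 0, 0)]                 # (weight, to-vertex, from-vertex)
    while heap:
        w, v, u = heapq.heappop(heap)
        if in_tree[v]:
            continue                     # stale entry; vertex already added
        in_tree[v] = True
        if v != u:                       # skip the artificial start entry
            total += w
            edges.append((u, v))
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (weights[v][x], x, v))
    return total, edges
```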

  9. Investigation of effective impact parameters in electron-ion temperature relaxation via Particle-Particle Coulombic molecular dynamics

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2017-09-01

    Aiming at a high simulation accuracy, a Particle-Particle (PP) Coulombic molecular dynamics model is implemented to study the electron-ion temperature relaxation. In this model, Coulomb's law is directly applied in a bounded system with two cutoffs at both short and long length scales. By increasing the range between the two cutoffs, it is found that the relaxation rate deviates from the BPS theory and approaches the LS theory and the GMS theory. Also, the effective minimum and maximum impact parameters (bmin* and bmax*) are obtained. For the simulated plasma condition, bmin* is about 6.352 times smaller than the Landau length (bC), and bmax* is about 2 times larger than the Debye length (λD), where bC and λD are used in the LS theory. Surprisingly, the effective relaxation time obtained from the PP model is very close to the LS theory and the GMS theory, even though the effective Coulomb logarithm is two times greater than the one used in the LS theory. In addition, this work shows that the PP model (commonly regarded as computationally expensive) is becoming practicable via GPU parallel computing techniques.

  10. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method.

    PubMed

    Leyde, Brian P; Klein, Sanford A; Nellis, Gregory F; Skye, Harrison

    2017-03-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model.

  11. Computational Methodology for Absolute Calibration Curves for Microfluidic Optical Analyses

    PubMed Central

    Chang, Chia-Pin; Nagel, David J.; Zaghloul, Mona E.

    2010-01-01

    Optical fluorescence and absorption are two of the primary techniques used for analytical microfluidics. We provide a thorough yet tractable method for computing the performance of diverse optical micro-analytical systems. Sample sizes range from nano- to many micro-liters and concentrations from nano- to milli-molar. Equations are provided to trace quantitatively the flow of the fundamental entities, namely photons and electrons, and the conversion of energy from the source, through optical components, samples and spectral-selective components, to the detectors and beyond. The equations permit facile computations of calibration curves that relate the concentrations or numbers of molecules measured to the absolute signals from the system. This methodology provides the basis for both detailed understanding and improved design of microfluidic optical analytical systems. It saves prototype turn-around time, and is much simpler and faster to use than ray tracing programs. Over two thousand spreadsheet computations were performed during this study. We found that some design variations produce higher signal levels and, for constant noise levels, lower minimum detection limits. Improvements of more than a factor of 1,000 were realized. PMID:22163573

  12. 20 CFR 226.3 - Other regulations related to this part.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES General § 226.3 Other regulations related to this... primary insurance amounts (PIA's) used in computing the employee, spouse and divorced spouse annuity rates... increased under the social security overall minimum. The creditable service and compensation used in...

  13. Management of health care expenditure by soft computing methodology

    NASA Astrophysics Data System (ADS)

    Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad

    2017-01-01

    In this study, health care expenditure was managed by a soft computing methodology. The main goal was to predict the gross domestic product (GDP) according to several factors of health care expenditure. Soft computing methodologies were applied since GDP prediction is a very complex task. The performances of the proposed predictors were confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy compared to the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities in order to avoid local minima.

  14. Decree-Law No. 154/88, 29 April 1988.

    PubMed

    1988-01-01

    This Decree-Law rationalizes and updates maternity and paternity leave regulations introduced in Portuguese Law No. 4/84 of 5 April 1984 and Decree-Law No. 136/85 of 3 May 1985. The following are the major changes made by the Decree-Law: 1) the establishment of a six-month waiting period as an eligibility requirement, irrespective of the professional category of the recipient; 2) retroactive computation of the eligibility period from the time of the initial payment of the social security tax associated with other welfare benefits; 3) increase in the amount of the benefit for children requiring parental assistance during an illness to 65% of an index expressing a proportion of past contributions by individual recipients; 4) establishment of minimum benefits to be granted to casual workers or to workers with low incomes. These minimum benefits equal 50% of the benefit for which full-time workers in any given occupational category are eligible. In addition, the Decree-Law provides that workers on leave of absence with pay and current recipients of unemployment benefits are not eligible to receive maternity and paternity benefits dealt with by the Decree-Law. full text

  15. Evaluation of temperature differences for paired stations of the U.S. Climate Reference Network

    USGS Publications Warehouse

    Gallo, K.P.

    2005-01-01

    Adjustments to data observed at pairs of climate stations have been recommended to remove the biases introduced by differences between the stations in time of observation, temperature instrumentation, latitude, and elevation. A new network of climate stations, located in rural settings, permits comparisons of temperatures for several pairs of stations without two of the biases (time of observation and instrumentation). The daily, monthly, and annual minimum, maximum, and mean temperatures were compared for five pairs of stations included in the U.S. Climate Reference Network. Significant differences were found between the paired stations in the annual minimum, maximum, and mean temperatures for all five pairs of stations. Adjustments for latitude and elevation differences contributed to greater differences in mean annual temperature for four of the five stations. Lapse rates computed from the mean annual temperature differences between station pairs differed from a constant value, whether or not latitude adjustments were made to the data. The results suggest that microclimate influences on temperatures observed at nearby (horizontally and vertically) stations are potentially much greater than influences that might be due to latitude or elevation differences between the stations. © 2005 American Meteorological Society.
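    The lapse-rate comparison mentioned above can be illustrated with a simple pairwise estimate; the units, sign convention, and names here are assumptions for illustration, not the study's exact procedure.

```python
# Hedged sketch: estimate a lapse rate (temperature decrease per km of
# elevation gain) from the mean annual temperatures and elevations of two
# stations. The widely quoted environmental lapse rate is about 6.5 degC/km.

def pairwise_lapse_rate(t_low_c, t_high_c, z_low_m, z_high_m):
    """Temperatures in degC, elevations in m; returns degC per km."""
    return (t_low_c - t_high_c) / ((z_high_m - z_low_m) / 1000.0)
```

    For example, a station pair differing by 6.5 degC across 1000 m of elevation yields the standard 6.5 degC/km rate.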

  16. Trp zipper folding kinetics by molecular dynamics and temperature-jump spectroscopy

    PubMed Central

    Snow, Christopher D.; Qiu, Linlin; Du, Deguo; Gai, Feng; Hagen, Stephen J.; Pande, Vijay S.

    2004-01-01

    We studied the microsecond folding dynamics of three β hairpins (Trp zippers 1–3, TZ1–TZ3) by using temperature-jump fluorescence and atomistic molecular dynamics in implicit solvent. In addition, we studied TZ2 by using time-resolved IR spectroscopy. By using distributed computing, we obtained an aggregate simulation time of 22 ms. The simulations included 150, 212, and 48 folding events at room temperature for TZ1, TZ2, and TZ3, respectively. The all-atom optimized potentials for liquid simulations (OPLSaa) potential set predicted TZ1 and TZ2 properties well; the estimated folding rates agreed with the experimentally determined folding rates and native conformations were the global potential-energy minimum. The simulations also predicted reasonable unfolding activation enthalpies. This work, directly comparing large simulated folding ensembles with multiple spectroscopic probes, revealed both the surprising predictive ability of current models as well as their shortcomings. Specifically, for TZ1–TZ3, OPLS for united atom models had a nonnative free-energy minimum, and the folding rate for OPLSaa TZ3 was sensitive to the initial conformation. Finally, we characterized the transition state; all TZs fold by means of similar, native-like transition-state conformations. PMID:15020773

  17. Trp zipper folding kinetics by molecular dynamics and temperature-jump spectroscopy

    NASA Astrophysics Data System (ADS)

    Snow, Christopher D.; Qiu, Linlin; Du, Deguo; Gai, Feng; Hagen, Stephen J.; Pande, Vijay S.

    2004-03-01

    We studied the microsecond folding dynamics of three β hairpins (Trp zippers 1-3, TZ1-TZ3) by using temperature-jump fluorescence and atomistic molecular dynamics in implicit solvent. In addition, we studied TZ2 by using time-resolved IR spectroscopy. By using distributed computing, we obtained an aggregate simulation time of 22 ms. The simulations included 150, 212, and 48 folding events at room temperature for TZ1, TZ2, and TZ3, respectively. The all-atom optimized potentials for liquid simulations (OPLSaa) potential set predicted TZ1 and TZ2 properties well; the estimated folding rates agreed with the experimentally determined folding rates and native conformations were the global potential-energy minimum. The simulations also predicted reasonable unfolding activation enthalpies. This work, directly comparing large simulated folding ensembles with multiple spectroscopic probes, revealed both the surprising predictive ability of current models as well as their shortcomings. Specifically, for TZ1-TZ3, OPLS for united atom models had a nonnative free-energy minimum, and the folding rate for OPLSaa TZ3 was sensitive to the initial conformation. Finally, we characterized the transition state; all TZs fold by means of similar, native-like transition-state conformations.

  18. Inertial Range Turbulence of Fast and Slow Solar Wind at 0.72 AU and Solar Minimum

    NASA Astrophysics Data System (ADS)

    Teodorescu, Eliza; Echim, Marius; Munteanu, Costel; Zhang, Tielong; Bruno, Roberto; Kovacs, Peter

    2015-05-01

    We investigate Venus Express observations of magnetic field fluctuations performed systematically in the solar wind at 0.72 Astronomical Units (AU), between 2007 and 2009, during the deep minimum of solar cycle 24. The power spectral densities (PSDs) of the magnetic field components have been computed for time intervals that satisfy the data-integrity criteria and have been grouped according to the type of wind, fast or slow, defined as speeds larger or smaller, respectively, than 450 km s-1. The PSDs show higher levels of power for the fast wind than for the slow. The spectral slopes estimated for all PSDs in the frequency range 0.005-0.1 Hz exhibit a normal distribution. The average slope for the trace of the spectral matrix is -1.60 for fast solar wind and -1.65 for slow wind. Compared to the corresponding average slopes at 1 AU, the PSDs are shallower at 0.72 AU under slow-wind conditions, suggesting a steepening of the solar wind spectra between Venus and Earth. No significant time-variation trend is observed in the spectral behavior of either the slow or the fast wind.
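Spectral slopes of the kind reported here are commonly obtained by a least-squares fit of log PSD versus log frequency, since a power-law spectrum appears as a straight line in log-log coordinates. A minimal illustrative sketch of that fit (not the authors' pipeline; the synthetic power-law spectrum below is an assumption for demonstration):

```python
import math

def spectral_slope(freqs, psd):
    """Least-squares slope of log10(PSD) vs log10(frequency).

    A power-law spectrum PSD ~ f**alpha is a straight line with
    slope alpha in log-log coordinates."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in psd]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic Kolmogorov-like spectrum with slope -5/3 over 0.005-0.1 Hz
freqs = [0.005 + i * 0.001 for i in range(96)]
psd = [f ** (-5.0 / 3.0) for f in freqs]
print(round(spectral_slope(freqs, psd), 3))
```

For exact power-law data the fitted slope recovers the exponent; on real spacecraft data the fit is restricted to the inertial range, as done in the study.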

  19. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  20. Realization of planning design of mechanical manufacturing system by Petri net simulation model

    NASA Astrophysics Data System (ADS)

    Wu, Yanfang; Wan, Xin; Shi, Weixiang

    1991-09-01

    Planning design works out a comprehensive, long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced, and problems of production scheduling and their decision rules for computer simulation are presented; the realization of each production-scheduling decision rule in the Petri net model is discussed. Second, conflict-resolution rules for conflict problems that arise while running the Petri net are given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of the MMS, is realized. Finally, the simulation program is applied to a simulation example, so that a planning-design scheme for the MMS can be evaluated effectively.
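The "principle of minimum event time advance" means the simulation clock always jumps directly to the earliest pending event rather than ticking in fixed increments. A minimal next-event sketch in Python (illustrative only, not the authors' Petri-net simulator; the event labels are hypothetical):

```python
import heapq

def simulate(events, horizon):
    """Next-event time advance: the clock always jumps to the
    minimum event time in the pending-event list."""
    log = []
    heap = list(events)                      # (time, label) pairs
    heapq.heapify(heap)
    while heap and heap[0][0] <= horizon:
        clock, label = heapq.heappop(heap)   # minimum event time
        log.append((clock, label))
        # a real Petri-net model would fire the enabled transition
        # here and schedule its successor events
    return log

trace = simulate([(4.0, "tool change"), (1.5, "part arrives"),
                  (2.5, "machining done")], horizon=10.0)
print(trace)
```

Events are processed strictly in time order regardless of the order in which they were scheduled, which is what makes the dynamic simulation of the MMS consistent.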

  1. Theoretical characterization of the minimum energy path for hydrogen atom addition to N2 - Implications for the unimolecular lifetime of HN2

    NASA Technical Reports Server (NTRS)

    Walch, Stephen P.; Duchovic, Ronald J.; Rohlfing, Celeste Mcmichael

    1989-01-01

    Results are reported from CASSCF externally contracted CI ab initio computations of the minimum-energy path for the addition of H to N2. The theoretical basis and numerical implementation of the computations are outlined, and the results are presented in extensive tables and graphs and characterized in detail. The zero-point-corrected barrier for HN2 dissociation is estimated as 8.5 kcal/mol, and the lifetime of the lowest-lying quasi-bound vibrational state of HN2 is found to be between 88 psec and 5.8 nsec (making experimental observation of this species very difficult).

  2. Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Arnold, William; Ramsey, Brian D.

    2009-01-01

    This paper establishes a relationship between the polishing process parameters and the generation of mid-spatial-frequency error. Design considerations for the polishing lap, and optimization of the process parameters (speeds, stroke, etc.) to keep the residual mid-spatial-frequency error to a minimum, are also presented.

  3. Theoretical Foundation of the RelTime Method for Estimating Divergence Times from Variable Evolutionary Rates

    PubMed Central

    Tamura, Koichiro; Tao, Qiqing; Kumar, Sudhir

    2018-01-01

    Abstract RelTime estimates divergence times by relaxing the assumption of a strict molecular clock in a phylogeny. It shows excellent performance in estimating divergence times for both simulated and empirical molecular sequence data sets in which evolutionary rates varied extensively throughout the tree. RelTime is computationally efficient and scales well with increasing size of data sets. Until now, however, RelTime has not had a formal mathematical foundation. Here, we show that the basis of the RelTime approach is a relative rate framework (RRF) that combines comparisons of evolutionary rates in sister lineages with the principle of minimum rate change between evolutionary lineages and their respective descendants. We present analytical solutions for estimating relative lineage rates and divergence times under RRF. We also discuss the relationship of RRF with other approaches, including the Bayesian framework. We conclude that RelTime will be useful for phylogenies with branch lengths derived not only from molecular data, but also morphological and biochemical traits. PMID:29893954

  4. Parametric study of minimum reactor mass in energy-storage dc-to-dc converters

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1981-01-01

    Closed-form analytical solutions for the design equations of a minimum-mass reactor for a two-winding voltage-or-current step-up converter are derived. A quantitative relationship between the three parameters - minimum total reactor mass, maximum output power, and switching frequency - is extracted from these analytical solutions. The validity of the closed-form solution is verified by a numerical minimization procedure. A computer-aided design procedure using commercially available toroidal cores and magnet wires is also used to examine how the results from practical designs follow the predictions of the analytical solutions.

  5. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  6. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  7. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... area of each piece of processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the...

  8. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...

  9. 40 CFR 63.5400 - How do I measure the quantity of leather processed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... leather processed? 63.5400 Section 63.5400 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... processed or shipped leather with a computer scanning system accurate to 0.1 square feet. The computer scanning system must be initially calibrated for minimum accuracy to the manufacturer's specifications. For...

  10. MERIT: A man/computer data management and enhancement system for upper air nowcasting/forecasting in the United States. [Minimum Energy Routes using Interactive Techniques (MERIT)

    NASA Technical Reports Server (NTRS)

    Steinberg, R.

    1984-01-01

    It is suggested that the very short range forecast problem for aviation is one of data management rather than model development, and the possibility of improving the aviation forecast using current technology is underlined. The MERIT concept of modeling technology and advanced man/computer interactive data management and enhancement techniques to provide a tailored, accurate, and timely forecast for aviation is outlined. MERIT includes utilization of the Lagrangian approach; extensive use of automated aircraft reports to complement the present data base and provide the most current observations; and the concept that a 2- to 12-hour forecast provided every 3 hours can meet the domestic needs of aviation instead of the present 18- and 24-hour forecasts provided every 12 hours.

  11. Program For Evaluation Of Reliability Of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, N.; Janosik, L. A.; Gyekenyesi, J. P.; Powers, Lynn M.

    1996-01-01

    CARES/LIFE predicts probability of failure of monolithic ceramic component as function of service time. Assesses risk that component fractures prematurely as result of subcritical crack growth (SCG). Effect of proof testing of components prior to service also considered. Coupled to such commercially available finite-element programs as ANSYS, ABAQUS, MARC, MSC/NASTRAN, and COSMOS/M. Also retains all capabilities of previous CARES code, which includes estimation of fast-fracture component reliability and Weibull parameters from inert strength (without SCG contributing to failure) specimen data. Estimates parameters that characterize SCG from specimen data as well. Written in ANSI FORTRAN 77 to be machine-independent. Program runs on any computer in which sufficient addressable memory (at least 8MB) and FORTRAN 77 compiler available. For IBM-compatible personal computer with minimum 640K memory, limited program available (CARES/PC, COSMIC number LEW-15248).

  12. Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate

    NASA Astrophysics Data System (ADS)

    Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng

    2005-01-01

    In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE) which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies of a multi-component BEC. Numerical experience shows that the GSI converges much faster than JI and converges globally within 10-20 steps.
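The structural difference between the two proposed iterations is that a Jacobi-type sweep updates every component from the previous sweep's values, while a Gauss-Seidel-type sweep immediately reuses freshly updated components. A generic sketch of that update pattern on a small linear system (the paper applies the idea to a nonlinear algebraic eigenvalue problem, so this only illustrates the iteration structure; the matrix below is an assumption):

```python
def gauss_seidel(A, b, x, sweeps):
    """Gauss-Seidel iteration: each component update immediately
    uses the freshly updated components (Jacobi would use only
    values from the previous sweep)."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system 4x + y = 6, x + 3y = 7; solution (1, 2)
x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0], sweeps=50)
print(x)
```

Reusing fresh components is also why GSI typically converges in fewer sweeps than JI, consistent with the 10-20 steps reported in the abstract.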

  13. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum-Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full-matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.

  14. Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): Standardised Reporting for Model Reproducibility, Interoperability, and Data Sharing

    PubMed Central

    Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P

    2011-01-01

    Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496

  15. Real-time individualization of the unified model of performance.

    PubMed

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
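The filter-based individualization described here follows the standard Kalman predict-correct cycle: the parameter estimate is allowed to drift (predict), then pulled toward each new performance measurement (correct). A scalar sketch of that cycle (purely illustrative; the UMP state and measurement models are more elaborate, and all numeric values below are made up):

```python
def kalman_update(theta, P, z, H, R, Q):
    """One scalar Kalman step for a random-walk parameter model:
    predict (variance grows by Q), then correct with the new
    measurement z = H*theta + noise of variance R."""
    P = P + Q                        # predict: parameter may drift
    K = P * H / (H * H * P + R)      # Kalman gain
    theta = theta + K * (z - H * theta)
    P = (1.0 - K * H) * P
    return theta, P

# Toy run: true parameter 2.0, repeated noiseless measurements
theta, P = 0.0, 1.0
for _ in range(20):
    theta, P = kalman_update(theta, P, z=2.0, H=1.0, R=0.01, Q=0.0)
print(theta, P)
```

As more measurements arrive the estimate converges and its variance P shrinks, mirroring how the individualized UMP parameters approach the post-hoc fit as PVT data accumulate.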

  16. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves, and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and the development of a solution methodology to compute an input schedule that yields the minimum total time violation of the delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute the input schedule for the pipeline problem. This algorithm runs in no more than O(T·E) time. The dissertation also extends the study to examine operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the single-source version is introduced; it also runs in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the problems tested were more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.

  17. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory-based filtering). As empirical support for the simulation results, the analysis is also carried out for a time series of S&P 500 stock prices.
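For reference, the unconstrained global minimum-variance weights for a covariance matrix C are w = C⁻¹1 / (1ᵀC⁻¹1); the study's point is that plugging conditional (GARCH) covariances into this kind of formula beats using unconditional ones. A two-asset sketch in pure Python (the covariance numbers are invented for illustration):

```python
def min_variance_weights_2(cov):
    """Global minimum-variance weights w = C^{-1} 1 / (1' C^{-1} 1),
    written out explicitly for a 2x2 covariance matrix C."""
    (a, b), (_, d) = cov             # C = [[a, b], [b, d]]
    det = a * d - b * b
    u = ((d - b) / det, (a - b) / det)   # C^{-1} 1, row sums of the inverse
    s = u[0] + u[1]
    return (u[0] / s, u[1] / s)

# Asset 1: variance 0.04; asset 2: variance 0.09; covariance 0.01
w = min_variance_weights_2([[0.04, 0.01], [0.01, 0.09]])
print(w)  # heavier weight on the lower-variance asset
```

With these numbers the first (lower-variance) asset receives weight 8/11; noise in the estimated covariance matrix perturbs exactly this inversion, which is why filtering the correlation matrix helps.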

  18. Low-thrust trajectory analysis for the geosynchronous mission

    NASA Technical Reports Server (NTRS)

    Jasper, T. P.

    1973-01-01

    Methodology employed in development of a computer program designed to analyze optimal low-thrust trajectories is described, and application of the program to a Solar Electric Propulsion Stage (SEPS) geosynchronous mission is discussed. To avoid the zero inclination and eccentricity singularities which plague many small-force perturbation techniques, a special set of state variables (equinoctial) is used. Adjoint equations are derived for the minimum time problem and are also free from the singularities. Solutions to the state and adjoint equations are obtained by both orbit averaging and precision numerical integration; an evaluation of these approaches is made.

  19. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to fall into a local minimum. The optimized network is used both for time-series prediction and for a same-month forecast, yielding two predictive values. These two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified experimentally using energy data from three buildings in Hefei.

  20. An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.

    PubMed

    Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir

    2013-01-01

    DNA sequence alignment is a cardinal process in computational biology but is also computationally expensive when performed on traditional computing platforms such as the CPU. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands out as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous implementations of DNA sequence alignment did not take into consideration the price of the device on which optimization was performed. This paper presents optimizations over a previous FPGA implementation that increase the overall speed-up achieved while also lowering the cost of the platform being optimized. The optimizations are: (1) the array of processing elements runs on a change in input value rather than on a clock, eliminating the need for tight clock synchronization; (2) the implementation is unrestrained by the size of the sequences to be aligned; (3) the waiting time required to load the sequences onto the FPGA is reduced to the minimum possible; and (4) an efficient method is devised to store the output matrix that makes it possible to save the diagonal elements for use in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan3 FPGA, this implementation achieved a 20 times performance improvement in terms of CUPS over a GPP implementation.
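FPGA aligners of this kind typically accelerate the Smith-Waterman dynamic-programming recurrence, whose anti-diagonal structure maps naturally onto an array of processing elements. A plain-Python sketch of the scoring recurrence (the scoring values are illustrative assumptions, and none of the paper's hardware detail is reproduced):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local-alignment score:
    H[i][j] = max(0, diagonal + substitution, up + gap, left + gap);
    the best cell anywhere in the matrix is the alignment score."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + sub,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "AACGTT"))  # a perfect 4-base local match scores 8
```

Each cell depends only on its left, upper, and diagonal neighbors, so cells along an anti-diagonal are independent; that is the parallelism a systolic array of processing elements exploits.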

  1. Fast GPU-based computation of spatial multigrid multiframe LMEM for PET.

    PubMed

    Nassiri, Moulay Ali; Carrier, Jean-François; Després, Philippe

    2015-09-01

    Significant efforts were invested during the last decade to accelerate PET list-mode reconstructions, notably with GPU devices. However, the computation time per event is still relatively long, and the list-mode efficiency on the GPU is well below the histogram-mode efficiency. Since list-mode data are not arranged in any regular pattern, costly accesses to the GPU global memory can hardly be optimized and geometrical symmetries cannot be used. To overcome obstacles that limit the acceleration of reconstruction from list-mode on the GPU, a multigrid and multiframe approach of an expectation-maximization algorithm was developed. The reconstruction process is started during data acquisition, and calculations are executed concurrently on the GPU and the CPU, while the system matrix is computed on-the-fly. A new convergence criterion also was introduced, which is computationally more efficient on the GPU. The implementation was tested on a Tesla C2050 GPU device for a Gemini GXL PET system geometry. The results show that the proposed algorithm (multigrid and multiframe list-mode expectation-maximization, MGMF-LMEM) converges to the same solution as the LMEM algorithm more than three times faster. The execution time of the MGMF-LMEM algorithm was 1.1 s per million events on the Tesla C2050 hardware used, for a reconstructed space of 188 x 188 x 57 voxels of 2 x 2 x 3.15 mm3. For 17- and 22-mm simulated hot lesions, the MGMF-LMEM algorithm led on the first iteration to contrast recovery coefficients (CRC) of more than 75% of the maximum CRC while achieving a minimum in the relative mean square error. Therefore, the MGMF-LMEM algorithm can be used as a one-pass method to perform real-time reconstructions for low-count acquisitions, as in list-mode gated studies. The computation time for one iteration and 60 million events was approximately 66 s.

  2. Influence of numerical dissipation in computing supersonic vortex-dominated flows

    NASA Technical Reports Server (NTRS)

    Kandil, O. A.; Chuang, A.

    1986-01-01

    Steady supersonic vortex-dominated flows are solved using the unsteady Euler equations for conical and three-dimensional flows around sharp- and round-edged delta wings. The computational method is a finite-volume scheme which uses a four-stage Runge-Kutta time stepping with explicit second- and fourth-order dissipation terms. The grid is generated by a modified Joukowski transformation. The steady flow solution is obtained through time-stepping with initial conditions corresponding to the freestream conditions, and the bow shock is captured as a part of the solution. The scheme is applied to flat-plate and elliptic-section wings with a leading edge sweep of 70 deg at an angle of attack of 10 deg and a freestream Mach number of 2.0. Three grid sizes of 29 x 39, 65 x 65 and 100 x 100 have been used. The results for sharp-edged wings show that they are consistent with all grid sizes and variation of the artificial viscosity coefficients. The results for round-edged wings show that separated and attached flow solutions can be obtained by varying the artificial viscosity coefficients. They also show that the solutions are independent of the way time stepping is done: local time-stepping and global minimum time-stepping produce the same solutions.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Hsin, Po-Shen; Niu, Yuezhen, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r01222031@ntu.edu.tw, E-mail: yuezhenniu@gmail.com

    We investigate the entropy evolution in the early universe by computing the change of the entanglement entropy in Friedmann-Robertson-Walker quantum cosmology in the presence of a particle horizon. The matter is modeled by a Chaplygin gas so as to provide a smooth interpolation between the inflationary and radiation epochs, rendering the evolution of entropy from early time to late time trackable. We found that soon after the onset of the inflation, the total entanglement entropy rapidly decreases to a minimum. It then rises monotonically in the remainder of the inflation epoch as well as the radiation epoch. Our result is in qualitative agreement with the area law of Ryu and Takayanagi, including the logarithmic correction. We comment on the possible implication of our finding for the cosmological entropy problem.

  4. Reaeration capacity of the Rock River between Lake Koshkonong, Wisconsin and Rockton, Illinois

    USGS Publications Warehouse

    Grant, R. Stephen

    1978-01-01

    The reaeration capacity of the Rock River from Lake Koshkonong, Wisconsin, to Rockton, Illinois, was determined using the energy-dissipation model. The model was calibrated using data from radioactive-tracer measurements in the study reach. Reaeration coefficients (K2) were computed for the annual minimum 7-day mean discharge that occurs on the average of once in 10 years (Q7,10). A time-of-travel model was developed using river discharge, slope, and velocity data from three dye studies. The model was used to estimate traveltime for the Q7,10 for use in the energy-dissipation model. During one radiotracer study, 17 mile per hour winds apparently increased the reaeration coefficient about 40 times. (Woodard-USGS)
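The energy-dissipation model referenced here relates reaeration to the energy lost by the flowing water; one common form estimates the coefficient as K2 = c · (Δh / t), an escape coefficient times the elevation drop over the reach traveltime. A sketch under assumed numbers (the coefficient value and units below are illustrative, not taken from this study):

```python
def k2_energy_dissipation(escape_coeff, elevation_drop_ft, traveltime_days):
    """Energy-dissipation reaeration estimate: K2 = c * (drop / traveltime).
    The escape coefficient and units here are illustrative assumptions."""
    return escape_coeff * elevation_drop_ft / traveltime_days

# Hypothetical reach: 10 ft of drop traversed in 2 days
k2 = k2_energy_dissipation(0.06, elevation_drop_ft=10.0, traveltime_days=2.0)
print(k2)  # about 0.3 per day under these assumed units
```

This dependence on traveltime is why the dye-study time-of-travel model was needed: K2 at the Q7,10 low flow cannot be computed without an estimate of t at that discharge.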

  5. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task-priority scheduling list was built, and each task was assigned to the processor with the minimum cumulative earliest finish time (EFT). The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm is simple, feasible, and fast-converging, has strong optimization ability, and can be applied to task scheduling optimization for other heterogeneous and distributed environments.
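The processor-selection step described above (assign each task to the processor giving the minimum earliest finish time) can be sketched independently of the CQPSO search. A toy list-scheduling sketch (task names and per-processor costs are invented, and precedence constraints are omitted for brevity):

```python
def eft_schedule(tasks, n_procs):
    """Assign tasks, in priority-list order, to the processor that
    yields the earliest finish time (EFT). `tasks` is a list of
    (name, per-processor execution times); DAG precedence is
    ignored in this toy sketch."""
    free_at = [0.0] * n_procs           # when each processor is next free
    placement = {}
    for name, costs in tasks:
        # finish time if the task ran on each processor
        finish = [free_at[p] + costs[p] for p in range(n_procs)]
        p = min(range(n_procs), key=lambda i: finish[i])
        free_at[p] = finish[p]
        placement[name] = p
    return placement, max(free_at)      # assignment and makespan

tasks = [("t1", [3.0, 5.0]), ("t2", [4.0, 2.0]), ("t3", [2.0, 6.0])]
plan, makespan = eft_schedule(tasks, n_procs=2)
print(plan, makespan)
```

Heterogeneity is captured by the per-processor cost vectors: t2 is cheaper on processor 1, so greedy EFT selection routes it there even though processor 0 is also available.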

  6. Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models

    PubMed Central

    Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.

    2016-01-01

    IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. 
LEVEL OF EVIDENCE NA. PMID:27124818
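    The logistic-regression step in the study above can be sketched as follows. The acceptance fractions below are invented placeholders (the raw surgeon responses are not given here), and the fit is done by ordinary least squares on the log-odds scale for simplicity, so the recovered threshold differs from the study's 0.096 MPa estimate.

```python
# Sketch: fit a logistic curve p(E) = 1/(1+exp(-(a+b*E))) through observed
# acceptance fractions, then read off the modulus at which p = 0.5.
# The yes_frac values are invented for illustration.
import math

E = [0.042, 0.086, 0.098, 0.252, 0.302]     # Young moduli of the 5 models, MPa
yes_frac = [0.05, 0.40, 0.55, 0.95, 0.99]   # invented "adequate support" rates

# Linear regression on the logit (log-odds) scale: logit(p) = a + b*E
logit = [math.log(p / (1 - p)) for p in yes_frac]
n = len(E)
mx, my = sum(E) / n, sum(logit) / n
b = sum((e - mx) * (l - my) for e, l in zip(E, logit)) / sum(
    (e - mx) ** 2 for e in E
)
a = my - b * mx

threshold = -a / b   # modulus at which the fitted probability is 0.5
```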

  7. Value of epicardial potential maps in localizing pre-excitation sites for radiofrequency ablation. A simulation study

    NASA Astrophysics Data System (ADS)

    Hren, Rok

    1998-06-01

    Using computer simulations, we systematically investigated the limitations of an inverse solution that employs the potential distribution on the epicardial surface as an equivalent source model for localizing pre-excitation sites in Wolff-Parkinson-White syndrome. A model of the human ventricular myocardium that features an anatomically accurate geometry, an intramural rotating anisotropy and a computational implementation of the excitation process based on electrotonic interactions among cells was used to simulate body surface potential maps (BSPMs) for 35 pre-excitation sites positioned along the atrioventricular ring. Two individualized torso models were used to account for variations in torso boundaries. Epicardial potential maps (EPMs) were computed using the L-curve inverse solution. The measure of localization accuracy was the distance between the position of the minimum in the inverse EPMs and the actual site of pre-excitation in the ventricular model. When the volume conductor properties and lead positions of the torso were precisely known and measurement noise was added to the simulated BSPMs, the minimum in the inverse EPMs at 12 ms after onset was on average within cm of the pre-excitation site. When the standard torso model was used to localize the sites of onset of the pre-excitation sequence initiated in individualized male and female torso models, the mean distance between the minimum and the pre-excitation site was cm for the male torso and cm for the female torso. The findings of our study indicate that the location of the minimum in EPMs computed using the inverse solution can offer a non-invasive means for pre-interventional planning of the ablative treatment.

  8. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network.

    PubMed

    Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
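    The core reduction described above, collapsing a dense connectivity matrix to a minimum spanning tree so only N-1 of the strongest links remain, can be sketched with Prim's algorithm. The toy 4-region correlation matrix below is invented; the paper applies this to high-order functional connectivity networks before subgraph mining.

```python
# Reduce a dense functional-connectivity matrix to its minimum spanning tree.
# Edge weight = 1 - correlation, so the MST keeps the strongest connections.
corr = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.7],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.7, 0.8, 1.0],
]
n = len(corr)

in_tree = {0}        # start Prim's algorithm from node 0
mst_edges = []
while len(in_tree) < n:
    # cheapest edge from inside the tree to any node outside it
    w, i, j = min(
        (1 - corr[i][j], i, j)
        for i in in_tree for j in range(n) if j not in in_tree
    )
    mst_edges.append((i, j))
    in_tree.add(j)
```

    The result always has exactly N-1 edges, which is what keeps the network scale tractable for the subsequent graph analysis.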

  9. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network

    PubMed Central

    Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%. PMID:29387141

  10. Combining tabular, rule-based, and procedural knowledge in computer-based guidelines for childhood immunization.

    PubMed

    Miller, P L; Frawley, S J; Sayward, F G; Yasnoff, W A; Duncan, L; Fleming, D W

    1997-06-01

    IMM/Serve is a computer program which implements the clinical guidelines for childhood immunization. IMM/Serve accepts as input a child's immunization history. It then indicates which vaccinations are due and which vaccinations should be scheduled next. The clinical guidelines for immunization are quite complex and are modified quite frequently. As a result, it is important that IMM/Serve's knowledge be represented in a format that facilitates the maintenance of that knowledge as the field evolves over time. To achieve this goal, IMM/Serve uses four representations for different parts of its knowledge base: (1) Immunization forecasting parameters that specify the minimum ages and wait-intervals for each dose are stored in tabular form. (2) The clinical logic that determines which set of forecasting parameters applies for a particular patient in each vaccine series is represented using if-then rules. (3) The temporal logic that combines dates, ages, and intervals to calculate recommended dates, is expressed procedurally. (4) The screening logic that checks each previous dose for validity is performed using a decision table that combines minimum ages and wait intervals with a small amount of clinical logic. A knowledge maintenance tool, IMM/Def, has been developed to help maintain the rule-based logic. The paper describes the design of IMM/Serve and the rationale and role of the different forms of knowledge used.
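    The mix of tabular parameters (representation 1), if-then clinical logic (2) and procedural temporal logic (3) described above can be sketched as follows. The vaccine series name, dose parameters and dates below are invented placeholders, not the actual IMM/Serve clinical tables.

```python
# Sketch of the knowledge-representation split: minimum ages and
# wait-intervals live in a table, while date arithmetic is procedural.
from datetime import date, timedelta

# Tabular forecasting parameters:
# (series, dose) -> (minimum age in days, minimum wait since prior dose)
PARAMS = {("DTP", 1): (42, 0), ("DTP", 2): (70, 28)}   # invented values

def next_due(series, dose_number, birth, prior_dose=None):
    """Procedural temporal logic: combine age and wait-interval constraints."""
    min_age, min_wait = PARAMS[(series, dose_number)]
    due = birth + timedelta(days=min_age)
    if prior_dose is not None:
        # rule: the later of the two constraints governs
        due = max(due, prior_dose + timedelta(days=min_wait))
    return due

due = next_due("DTP", 2, date(2024, 1, 1), prior_dose=date(2024, 3, 1))
```

    Keeping the numbers in a table rather than in code is what lets the knowledge base be updated as the immunization guidelines change, without touching the temporal logic.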

  11. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.

  12. The Unified Levelling Network of Sarawak and its Adjustment

    NASA Astrophysics Data System (ADS)

    Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.

    2016-09-01

    The height reference network of Sarawak has seen major improvement over the past two decades. The most significant improvement was the establishment of an extended precise levelling network, which is now able to connect all three major datum points (Pulau Lakei, Original Miri and Bintulu) by following the major accessible routes across Sarawak. This means the levelling network in Sarawak has now been inter-connected and unified. Having such a unified network makes it possible, for the first time, to perform a single common least squares adjustment. The least squares adjustment of this unified levelling network was attempted in order to compute the heights of all bench marks established in the entire levelling network. The adjustment was done using the MoreFix levelling adjustment package developed at FGHT UTM. The computational procedure adopted is linear parametric adjustment with minimum constraint. Since Sarawak has three separate datums, three separate adjustments were implemented, utilizing the Pulau Lakei, Original Miri and Bintulu datums respectively. Results of the MoreFix unified adjustment agreed very well with the adjustment repeated using Starnet. Further, the results were compared with the solution given by Jupem and were in good agreement as well. The height differences analysed were within 10 mm for the case of minimum constraint at the Pulau Lakei datum, with much better agreement in the case of the Original Miri datum.

  13. Selection of the optimum font type and size interface for on screen continuous reading by young adults: an ergonomic approach.

    PubMed

    Banerjee, Jayeeta; Bhattacharyya, Moushum

    2011-12-01

    There is a rapid shift of media from printed paper to computer screens, and this transition is modifying how we read and understand text. Reading efficiency depends on how ergonomically the visual information is presented, and font type and size characteristics have been shown to affect reading. A detailed investigation of the effect of font type and size on reading on computer screens was carried out using subjective, objective and physiological evaluation methods on young adults. A group of young participants volunteered for this study. Two types of fonts were used: serif fonts (Times New Roman, Georgia, Courier New) and sans serif fonts (Verdana, Arial, Tahoma). All fonts were presented in 10, 12 and 14 point sizes, in a 6 × 3 (font type × size) design matrix. Participants read 18 passages of approximately the same length and reading level on a computer monitor. Reading time, ranking and overall mental workload were measured, and eye movements were recorded by a binocular eye movement recorder. Reading time was minimum for Courier New 14 point. The participants' ranking was highest and mental workload was least for Verdana 14 point. The pupil diameter, fixation duration and gaze duration were least for Courier New 14 point. The present study recommends using 14 point fonts for reading on a computer screen: Courier New for fast reading, and Verdana for on-screen presentation. The outcome of this study will serve as a guideline for PC users, software developers, web page designers and the computer industry as a whole.

  14. Solar Wind Turbulence and Intermittency at 0.72 AU - Statistical Approach

    NASA Astrophysics Data System (ADS)

    Teodorescu, E.; Echim, M.; Munteanu, C.; Zhang, T.; Barabash, S. V.; Budnik, E.; Fedorov, A.

    2014-12-01

    Through this analysis we characterize the turbulent magnetic fluctuations measured by the Venus Express magnetometer (VEX-MAG) in the solar wind during the last solar cycle minimum, at a distance of 0.72 AU from the Sun. We analyze data recorded between 2007 and 2009 with time resolutions of 1 Hz and 32 Hz. In correlation with plasma data from the ASPERA instrument (Analyser of Space Plasma and Energetic Atoms), we identify 550 time intervals, at 1 Hz resolution, when VEX is in the solar wind and which satisfy selection criteria defined on the amount and continuity of the data; 118 of these intervals correspond to fast solar wind. We compute the power spectral densities (PSDs) for Bx, By, Bz, B, B², B∥ and B⊥. We perform a statistical analysis of the spectral indices computed for each of the PSDs and find a dependence of the spectral index on the solar wind velocity, as well as a slight difference in power content between the parallel and perpendicular components of the magnetic field. We also estimate the scale invariance of the fluctuations by computing the probability distribution functions (PDFs) for the Bx, By, Bz, B and B² time series, and discuss the implications for intermittent turbulence. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS - UEFISCDI, project number PN-II-ID-PCE-2012-4-0418.
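    The PSD-and-spectral-index step described above can be sketched with a periodogram and a log-log slope fit. The synthetic Brownian-noise signal below (whose PSD falls off roughly as f⁻²) is invented; real VEX-MAG time series would replace it, and a physically motivated fitting band would be chosen.

```python
# Compute a one-sided periodogram of a time series and estimate its
# spectral index as the slope of a log-log least-squares fit.
import numpy as np

rng = np.random.default_rng(1)
n, dt = 4096, 1.0                          # 1 Hz sampling, as in the 1 Hz data
signal = np.cumsum(rng.standard_normal(n))  # Brownian walk: PSD ~ f^-2

freq = np.fft.rfftfreq(n, dt)[1:]                      # drop zero frequency
psd = (np.abs(np.fft.rfft(signal)[1:]) ** 2) * 2 * dt / n

band = freq <= 0.1                         # fit only the low-frequency band
slope, _ = np.polyfit(np.log(freq[band]), np.log(psd[band]), 1)
```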

  15. Range indices of geomagnetic activity

    USGS Publications Warehouse

    Stuart, W.F.; Green, A.W.

    1988-01-01

    The simplest index of geomagnetic activity is the range, in nT, from the maximum to the minimum value of the field in a given time interval. The hourly range R was recommended by IAGA for use at observatories at latitudes greater than 65°, but was superseded by AE. The most widely used geomagnetic index, K, is based on the range of activity in a 3 h interval corrected for the regular daily variation. In order to take advantage of real-time data processing, now available at many observatories, it is proposed to introduce a 1 h range index and also a 3 h range index. Both will be computed hourly, i.e. each will have a series of 24 values per day, the 3 h values overlapping. The new data will be available as the range (R) of activity in nT and also as a logarithmic index (I) of the range; the exponent relating index to range in nT is based closely on the scale used for computing K values. The new ranges and range indices are available, from June 1987, to users in real time and can be accessed by telephone connection or computer network. Their first year of production is regarded as a trial period during which their value to the scientific and commercial communities will be assessed, together with their potential as indicators of regional and global disturbances, and in which trials will be conducted into ways of eliminating excessive bias at quiet times due to the rate of change of the daily variation field. © 1988.
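    The range index and its logarithmic companion can be sketched in a few lines. The field samples and the quasi-logarithmic threshold table below are invented for illustration; they are not the IAGA K scale, whose thresholds also vary with observatory latitude.

```python
# Hourly range index sketch: R = max - min of the field over the window,
# and I = the largest index whose lower bound R reaches.
samples = [21050.0, 21043.5, 21061.2, 21039.8, 21055.1]  # nT, one-hour window

R = max(samples) - min(samples)   # range of activity in nT

# Illustrative lower bounds (nT) for index values 0..4 on a
# quasi-logarithmic scale (each roughly doubling, K-like but invented).
bounds = [0, 5, 10, 20, 40]
I = max(i for i, b in enumerate(bounds) if R >= b)
```

    A correction for the regular daily variation, which the abstract notes is applied before computing K, is omitted here.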

  16. Minimum-Time Consensus-Based Approach for Power System Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di; Sun, Yannan

    2016-02-01

    This paper presents minimum-time consensus-based distributed algorithms for power system applications such as load shedding and economic dispatch. The proposed algorithms solve these problems in a minimum number of time steps, rather than asymptotically as in most existing studies. Moreover, they are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.
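    For contrast with the paper's minimum-time approach, the baseline it improves on is standard asymptotic average consensus, where each node repeatedly averages with its neighbors and only converges in the limit. The 4-node ring topology, initial values, and step size below are invented for illustration.

```python
# Asymptotic average consensus on a ring: each node nudges its value
# toward its neighbors'; all values converge to the network average.
x = [10.0, 2.0, 6.0, 4.0]    # initial local values, average = 5.5
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

for _ in range(50):          # many iterations: convergence is only asymptotic
    x = [
        x[i] + 0.25 * sum(x[j] - x[i] for j in neighbors[i])
        for i in range(4)
    ]
```

    A minimum-time scheme instead recovers the exact average from a finite number of each node's successive local values, which is the advance the abstract claims.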

  17. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...

  18. Resolving Properties of Polymers and Nanoparticle Assembly through Coarse-Grained Computational Studies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary S.

    2017-09-01

    Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve these properties over large time and length scales, it is imperative to develop coarse-grained models which retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models, we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.

  19. Method and computer product to increase accuracy of time-based software verification for sensor networks

    DOEpatents

    Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA

    2011-01-25

    A recursive verification protocol that reduces the time variance due to network delays, by putting the subject node at most one hop from the verifier node, provides an efficient way to test wireless sensor nodes. Since the software signatures are time based, recursive testing gives a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its neighbor, and the process continues until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
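    The hop-by-hop chain in the patent abstract can be sketched as a traversal in which each newly verified node becomes the verifier for its own neighbor, and a failure halts the downstream chain. The linear topology and pass/fail outcomes below are invented for illustration.

```python
# One-hop recursive verification sketch: V verifies A, A verifies B, etc.
# A failed node blocks everything downstream of it.
neighbors = {"V": ["A"], "A": ["B"], "B": ["C"], "C": []}  # invented chain
passes = {"A": True, "B": False, "C": True}                # invented outcomes

verified, halted = [], []
frontier = ["V"]                     # the main verifier starts the chain
while frontier:
    node = frontier.pop()
    for nxt in neighbors[node]:
        if passes[nxt]:
            verified.append(nxt)
            frontier.append(nxt)     # the neighbor becomes the next verifier
        else:
            halted.append(nxt)       # downstream verification stops here
```

    Node C is never reached even though it would pass, which is why the protocol then searches for an alternative path around the failed node.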

  20. Mitigation of time-varying distortions in Nyquist-WDM systems using machine learning

    NASA Astrophysics Data System (ADS)

    Granada Torres, Jhon J.; Varughese, Siddharth; Thomas, Varghese A.; Chiuchiarelli, Andrea; Ralph, Stephen E.; Cárdenas Soto, Ana M.; Guerrero González, Neil

    2017-11-01

    We propose a machine-learning-based nonsymmetrical demodulation technique relying on clustering to mitigate time-varying distortions derived from several impairments, such as IQ imbalance, bias drift, phase noise and interchannel interference. Experimental results show that these impairments cause centroid movements in the received constellations, seen in time windows of 10k symbols in controlled scenarios. In our demodulation technique, the k-means algorithm iteratively identifies the cluster centroids in the constellation of the received symbols in short time windows, optimizing the decision thresholds for a minimum BER. We experimentally verified the effectiveness of this computationally efficient technique in multicarrier 16QAM Nyquist-WDM systems over 270 km links. Our nonsymmetrical demodulation technique outperforms conventional QAM demodulation, reducing the OSNR requirement by up to ∼0.8 dB at a BER of 1 × 10⁻² for signals affected by interchannel interference.
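    The centroid-tracking mechanism can be sketched as one k-means assignment-and-update step on a window of received complex symbols. The toy 4-point (QPSK-like) constellation, drift, and noise level below are invented; the paper applies this to 16QAM over short windows.

```python
# One k-means update on received symbols: assign each symbol to its nearest
# current centroid, then move each centroid to the mean of its cluster.
import numpy as np

ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # initial centroids
rng = np.random.default_rng(0)
drift = 0.2 + 0.1j                                    # slow centroid movement

# A window of 200 received symbols: ideal points + drift + complex noise
rx = np.repeat(ideal, 50) + drift + 0.05 * (
    rng.standard_normal(200) + 1j * rng.standard_normal(200)
)

# Assignment step: nearest current centroid for each received symbol
labels = np.argmin(np.abs(rx[:, None] - ideal[None, :]), axis=1)
# Update step: each centroid moves to the mean of its assigned symbols
updated = np.array([rx[labels == k].mean() for k in range(4)])
```

    The updated centroids land on the drifted constellation points, so decision boundaries follow the distortion window by window instead of staying fixed at the ideal grid.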

  1. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.

    1973-01-01

    A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.

  2. Turbulence Generation Using Localized Sources of Energy: Direct Numerical Simulations and the Effects of Thermal Non-Equilibrium

    NASA Astrophysics Data System (ADS)

    Maqui, Agustin Francisco

    Turbulence in high-speed flows is an important problem in aerospace applications, yet extremely difficult from a theoretical, computational and experimental perspective. A main reason for the lack of complete understanding is the difficulty of generating turbulence in the lab at a range of speeds which can also include hypersonic effects such as thermal non-equilibrium. This work studies the feasibility of a new approach to generate turbulence based on laser-induced photo-excitation/dissociation of seeded molecules. A large database of incompressible and compressible direct numerical simulations (DNS) has been generated to systematically study the development and evolution of the flow towards realistic turbulence. Governing parameters and the conditions necessary for the establishment of turbulence, as well as the length and time scales associated with such process, are identified. For both the compressible and incompressible experiments a minimum Reynolds number is found to be needed for the flow to evolve towards fully developed turbulence. Additionally, for incompressible cases a minimum time scale is required, while for compressible cases a minimum distance from the grid and limit on the maximum temperature introduced are required. Through an extensive analysis of single and two point statistics, as well as spectral dynamics, the primary mechanisms leading to turbulence are shown. As commonly done in compressible turbulence, dilatational and solenoidal components are separated to understand the effect of acoustics on the development of turbulence. Finally, a large database of forced isotropic turbulence has been generated to study the effect of internal degrees of freedom on the evolution of turbulence.

  3. 25 CFR 542.9 - What are the minimum internal control standards for card games?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...

  4. 25 CFR 542.9 - What are the minimum internal control standards for card games?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...

  5. 25 CFR 542.12 - What are the minimum internal control standards for table games?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...

  6. 25 CFR 542.9 - What are the minimum internal control standards for card games?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...

  7. 25 CFR 542.12 - What are the minimum internal control standards for table games?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...

  8. 25 CFR 542.12 - What are the minimum internal control standards for table games?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...

  9. 25 CFR 542.9 - What are the minimum internal control standards for card games?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...

  10. 25 CFR 542.9 - What are the minimum internal control standards for card games?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... games? 542.9 Section 542.9 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... card games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the card game drop and the count thereof shall comply...

  11. 25 CFR 542.12 - What are the minimum internal control standards for table games?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...

  12. 25 CFR 542.12 - What are the minimum internal control standards for table games?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... games? 542.12 Section 542.12 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN... table games? (a) Computer applications. For any computer applications utilized, alternate documentation... and count. The procedures for the collection of the table game drop and the count thereof shall comply...

  13. 12 CFR 226.6 - Account-opening disclosures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... compute the finance charge, the range of balances to which it is applicable,11 and the corresponding... required to adjust the range of balances disclosure to reflect the balance below which only a minimum... balance on which the finance charge may be computed. (iv) An explanation of how the amount of any finance...

  14. Mathematics Objectives and Measurement Specifications 1986-1990. Exit Level. Texas Educational Assessment of Minimum Skills (TEAMS).

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Div. of Educational Assessment.

    This document lists the objectives for the Texas educational assessment program in mathematics. Eighteen objectives for exit level mathematics are listed, by category: number concepts (4); computation (3); applied computation (5); statistical concepts (3); geometric concepts (2); and algebraic concepts (1). Then general specifications are listed…

  15. Computer optimization of cutting yield from multiple ripped boards

    Treesearch

    A.R. Stern; K.A. McDonald

    1978-01-01

    RIPYLD is a computer program that optimizes the cutting yield from multiple-ripped boards. Decisions are based on automatically collected defect information, cutting bill requirements, and sawing variables. The yield of clear cuttings from a board is calculated for every possible permutation of specified rip widths and both the maximum and minimum percent yield...

  16. 46 CFR 42.25-20 - Computation for freeboard.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...

  17. 46 CFR 42.25-20 - Computation for freeboard.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...

  18. 46 CFR 42.25-20 - Computation for freeboard.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...

  19. 46 CFR 42.25-20 - Computation for freeboard.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...

  20. 46 CFR 42.25-20 - Computation for freeboard.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... freeboard. (a) The minimum summer freeboards must be computed in accordance with §§ 42.20-5 (a) and (b), 42... the summer timber freeboard one thirty-sixth of the molded summer timber draft. (c) The winter North...(d)(1). (d) The tropical timber freeboard shall be obtained by deducting from the summer timber...

  1. Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes

    NASA Technical Reports Server (NTRS)

    Heiser, P. C.; Ricks, L. O.

    1968-01-01

    Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.

  2. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce computational complexity via the UESA (Unimodal Error Surface Assumption), under which the matching error increases monotonically as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have exploited the fact that global minimum points in real-world video sequences are centered at the position of zero motion. These BMAs, however, are easily trapped in local minima, especially for large motion, resulting in poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks: we move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulations show that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but a higher PSNR. Moreover, it achieves nearly the PSNR of FS (Full Search) at half the computational load, even for large motion.
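The diamond-search machinery underlying this class of algorithms can be sketched as follows. This is a minimal illustration of plain DS with MAE block matching, not the authors' FADS variant (the adaptive shift of the search origin from neighboring motion vectors is omitted), and all function and variable names are ours:

```python
import numpy as np

# Plain diamond search (DS) for one block: walk the large diamond search
# pattern (LDSP) until the minimum MAE sits at its center, then refine once
# with the small diamond search pattern (SDSP).

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def mae(cur, ref, bx, by, dx, dy, bs):
    """Mean absolute error between the current block at (bx, by) and the
    reference block displaced by the candidate motion vector (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > h or y + bs > w:
        return np.inf          # candidate outside the reference frame
    return float(np.mean(np.abs(cur[bx:bx+bs, by:by+bs].astype(float)
                                - ref[x:x+bs, y:y+bs].astype(float))))

def diamond_search(cur, ref, bx, by, bs=8):
    cx, cy = 0, 0              # running motion-vector estimate
    while True:
        costs = [mae(cur, ref, bx, by, cx + dx, cy + dy, bs) for dx, dy in LDSP]
        best = int(np.argmin(costs))
        if best == 0:          # minimum at the diamond center: stop coarse stage
            break
        cx += LDSP[best][0]
        cy += LDSP[best][1]
    costs = [mae(cur, ref, bx, by, cx + dx, cy + dy, bs) for dx, dy in SDSP]
    best = int(np.argmin(costs))
    return cx + SDSP[best][0], cy + SDSP[best][1]
```

As we read the abstract, FADS would additionally seed `(cx, cy)` from the motion vectors of already-processed neighboring blocks before the loop starts.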

  3. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
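The minimum action idea can be illustrated on a toy one-dimensional double-well system. This sketch minimizes a discretized Freidlin-Wentzell action by plain gradient descent with fixed endpoints; it is not the authors' geophysical setup or their particular optimization algorithm:

```python
import numpy as np

# Discretize a path x_0..x_{n-1} between the attractors x = -1 and x = +1 of
# the double-well potential V(x) = (x^2 - 1)^2 / 4, and minimize the action
#   S = 1/2 * sum_i ((x_{i+1} - x_i)/dt - b(x_i))^2 * dt
# for the drift b(x) = -V'(x) = x - x^3, keeping the endpoints fixed.

def b(x):
    return x - x**3

def action(path, dt):
    xdot = np.diff(path) / dt
    return 0.5 * np.sum((xdot - b(path[:-1])) ** 2) * dt

def minimize_action(n=30, T=8.0, iters=300, lr=0.05, eps=1e-6):
    dt = T / (n - 1)
    path = np.linspace(-1.0, 1.0, n)      # straight-line initial guess
    for _ in range(iters):
        grad = np.zeros(n)
        for i in range(1, n - 1):         # interior points only; ends fixed
            p = path.copy(); p[i] += eps
            m = path.copy(); m[i] -= eps
            grad[i] = (action(p, dt) - action(m, dt)) / (2 * eps)
        path -= lr * grad                 # plain gradient descent
    return path, action(path, dt)
```

A production minimum action method would use an analytic gradient and a quasi-Newton or conjugate-gradient optimizer rather than finite differences and fixed-step descent.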

  4. Directed translocation of a flexible polymer through a cone-shaped nano-channel

    NASA Astrophysics Data System (ADS)

    Nikoofard, Narges; Khalilian, Hamidreza; Fazli, Hossein

    2013-08-01

Translocation of a flexible polymer through a cone-shaped channel is studied theoretically and with computer simulations. Our simulations show that the shape of the channel causes the polymer translocation to be a driven process. The effective driving force of entropic origin acting on the polymer is calculated theoretically as a function of the length and the apex angle of the channel. It is found that the translocation time is a non-monotonic function of the apex angle of the channel: as the apex angle increases from zero, the translocation time shows a minimum and then a maximum. It is also found that, regardless of the value of the apex angle, the translocation time is a uniformly decreasing function of the channel length. The results of the theory and the simulation are in good qualitative agreement.

  5. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

The position dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at seven flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
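For reference, the root-mean-square coefficient of variation quoted above is conventionally computed as follows; this is the standard formula, assumed here rather than taken from the paper:

```python
import math

# RMSCV% across joints: for each joint, the coefficient of variation (CV) of
# its repeated measurements; then the root mean square of those CVs, in %.

def rmscv_percent(repeated_measurements):
    """repeated_measurements: list of lists, one inner list per joint,
    each holding that joint's repeated measurements (e.g. joint space width)."""
    cvs_sq = []
    for reps in repeated_measurements:
        n = len(reps)
        mean = sum(reps) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in reps) / (n - 1))
        cvs_sq.append((sd / mean) ** 2)
    return 100.0 * math.sqrt(sum(cvs_sq) / len(cvs_sq))
```

With this convention, a reproducibility threshold of "< 5%" means the RMS of the per-joint CVs stays below 0.05.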

  6. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^-0.2 sec for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^-0.2 sec for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
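The fitted expressions quoted in the abstract are straightforward to evaluate, for instance:

```python
# Optimum linear averaging times for the maximum 1/3-octave band levels,
# using the expressions quoted in the abstract: T_oi = 4.88 * f_i**-0.2 s
# (Titan IV PLF) and T_oi = 7.10 * f_i**-0.2 s (Shuttle PLB).

def t_opt_titan(f_hz):
    return 4.88 * f_hz ** -0.2

def t_opt_shuttle(f_hz):
    return 7.10 * f_hz ** -0.2

# e.g. at the 100 Hz band center:
# t_opt_titan(100.0)   ~ 1.94 s
# t_opt_shuttle(100.0) ~ 2.83 s
```

The weak f^-0.2 dependence is why the abstract concludes that a precise choice of averaging time is not critical: the optimum drifts by only about a factor of 2 across three decades of frequency.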

  7. Strengthened MILP formulation for certain gas turbine unit commitment problems

    DOE PAGES

    Pan, Kai; Guan, Yongpei; Watson, Jean-Paul; ...

    2015-05-22

In this study, we derive a strengthened MILP formulation for certain gas turbine unit commitment problems, in which the ramping rates are no smaller than the minimum generation amounts. Gas turbines of this type can usually start up faster and ramp more quickly than traditional coal-fired power plants, and their number has recently increased significantly due to affordable gas prices and their scheduling flexibility in accommodating intermittent renewable energy generation. In this study, several new families of strong valid inequalities are developed to help reduce the computational time to solve these types of problems, and validity and facet-defining proofs are provided for certain inequalities. Finally, numerical experiments on a modified IEEE 118-bus system and on power system data based on recent studies verify the effectiveness of applying our formulation to model and solve this type of gas turbine unit commitment problem, including reducing the computational time to obtain an optimal solution, or obtaining a much smaller optimality gap than default CPLEX when the time limit is reached with no optimal solution obtained.

  8. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  9. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs.

    PubMed

    Abbas, Adel Taha; Pimenov, Danil Yurievich; Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-05-16

Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (T_m) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, T_m, and C in relation to cutting speed v_c, depth of cut a_p, and feed per revolution f_r. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values v_c, a_p, and f_r. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, T_m = 0.358 min/cm³, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed v_c = 250 m/min, cutting depth a_p = 1.0 mm, and feed per revolution f_r = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.

  10. Effect of suspension kinematic on 14 DOF vehicle model

    NASA Astrophysics Data System (ADS)

    Wongpattananukul, T.; Chantharasenawong, C.

    2017-12-01

Computer simulations play a major role in modern science and engineering, reducing the time and resources consumed by new studies and designs. Vehicle simulations have been studied extensively to obtain a vehicle model for minimum lap time solutions. Simulation accuracy depends on the ability of these models to represent real phenomena. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time; however, suspension kinematics are always neglected in these models. Suspension kinematics are defined as wheel movements with respect to the vehicle body, and tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline. Experimental data are collected from Formula Student-style car test runs as baseline data for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn there is an accumulated trajectory error in the baseline model compared to the model with suspension kinematics, while in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affect the trajectory simulation of the vehicle; an optimal control scheme that uses the baseline model will therefore be inaccurate.

  11. Biomarker selection and classification of "-omics" data using a two-step bayes classification framework.

    PubMed

    Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades

    2013-01-01

Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal of personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features, the top-ranked of which most likely contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism (SNP) array, and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework equaled and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
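The two-step idea (rank with a Bayes estimator, then prune bottom-ranked features) can be sketched as follows. A plain Gaussian Naive Bayes stands in for the paper's Hidden Naive Bayes, which has no common off-the-shelf implementation, and accuracy is measured in-sample for brevity; the paper's evaluation would use held-out data:

```python
import numpy as np

def gnb_fit(X, y):
    """Per-class mean, variance, and prior for Gaussian Naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(y))
    return params

def gnb_predict(params, X):
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(ll + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]

def accuracy(X, y, feats):
    params = gnb_fit(X[:, feats], y)
    return np.mean(gnb_predict(params, X[:, feats]) == y)

def two_step_select(X, y):
    # Step 1: rank features by single-feature classification accuracy.
    order = sorted(range(X.shape[1]), key=lambda j: -accuracy(X, y, [j]))
    # Step 2: drop bottom-ranked features one at a time; keep the smallest
    # feature set whose accuracy does not fall below the best seen so far.
    best_feats, best_acc = order, accuracy(X, y, order)
    for k in range(len(order) - 1, 0, -1):
        acc = accuracy(X, y, order[:k])
        if acc >= best_acc:
            best_feats, best_acc = order[:k], acc
    return best_feats, best_acc
```

On synthetic data with one informative feature among noise, the pruning step shrinks the marker set toward that feature while preserving accuracy.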

  12. C-semiring Frameworks for Minimum Spanning Tree Problems

    NASA Astrophysics Data System (ADS)

    Bistarelli, Stefano; Santini, Francesco

In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, each of which must be a specific instantiation of a c-semiring. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges.
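A small instance of the idea: Kruskal's algorithm parameterized by the cost-combination operator, so that the usual weighted criterion (combine with +, tree cost is the sum of edges) and the bottleneck criterion (combine with max, tree cost is the worst edge) share one procedure. This is an illustrative sketch, not the paper's general framework: edge costs here are totally ordered numbers, whereas c-semirings also admit partially ordered costs.

```python
def kruskal(n, edges, combine):
    """n: number of vertices; edges: iterable of (cost, u, v).
    Returns (tree_cost, tree_edges) under the given combine operator."""
    parent = list(range(n))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    cost, tree = None, []
    for c, u, v in sorted(edges):    # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
            cost = c if cost is None else combine(cost, c)
    return cost, tree
```

Both instantiations share the ascending edge order because the sum-minimizing spanning tree also minimizes the maximum edge; a full c-semiring treatment would derive the ordering from the semiring itself.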

  13. Determination of Vertical Borehole and Geological Formation Properties using the Crossed Contour Method

    PubMed Central

    Leyde, Brian P.; Klein, Sanford A; Nellis, Gregory F.; Skye, Harrison

    2017-01-01

    This paper presents a new method called the Crossed Contour Method for determining the effective properties (borehole radius and ground thermal conductivity) of a vertical ground-coupled heat exchanger. The borehole radius is used as a proxy for the overall borehole thermal resistance. The method has been applied to both simulated and experimental borehole Thermal Response Test (TRT) data using the Duct Storage vertical ground heat exchanger model implemented in the TRansient SYstems Simulation software (TRNSYS). The Crossed Contour Method generates a parametric grid of simulated TRT data for different combinations of borehole radius and ground thermal conductivity in a series of time windows. The error between the average of the simulated and experimental bore field inlet and outlet temperatures is calculated for each set of borehole properties within each time window. Using these data, contours of the minimum error are constructed in the parameter space of borehole radius and ground thermal conductivity. When all of the minimum error contours for each time window are superimposed, the point where the contours cross (intersect) identifies the effective borehole properties for the model that most closely represents the experimental data in every time window and thus over the entire length of the experimental data set. The computed borehole properties are compared with results from existing model inversion methods including the Ground Property Measurement (GPM) software developed by Oak Ridge National Laboratory, and the Line Source Model. PMID:28785125

  14. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

Through introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm to the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed and the corresponding formulae for programming are deduced. To further improve computational efficiency, an iterative method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to minimum wavelength is given. The accuracy of the proposed method is validated by a reflection-coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated with plasma are calculated by the proposed method and the simulation results are analyzed.

  15. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.

  16. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
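In the same exhaustive spirit, the minimum distance of a small binary cyclic code can be computed by enumerating every nonzero codeword from the generator polynomial; the UEP separation-vector computation itself is not reproduced here, and the enumeration below is only practical for short codes like those in the paper:

```python
from itertools import product

def min_distance(n, gen):
    """Minimum Hamming distance of the length-n binary cyclic code with
    generator polynomial gen (coefficients low degree first, e.g.
    [1, 1, 0, 1] for g(x) = 1 + x + x^3, the (7,4) cyclic Hamming code).
    For a linear code this equals the minimum nonzero codeword weight."""
    k = n - (len(gen) - 1)           # message length
    best = n + 1
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                 # skip the zero codeword
        cw = [0] * n
        for i, m in enumerate(msg):  # codeword = message poly * g(x) mod 2
            if m:
                for j, g in enumerate(gen):
                    cw[i + j] ^= g
        best = min(best, sum(cw))
    return best
```

The enumeration is over all 2^k - 1 nonzero messages, so odd lengths up to 65 (k up to several dozen) are exactly the regime where such exhaustive computations remain feasible with care.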

  17. Energy star. (Latest citations from the Computer database). Published Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as a vendor of green computers. (Contains a minimum of 81 citations and includes a subject term index and title list.)

  18. Energy star. (Latest citations from the Computer database). Published Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The bibliography contains citations concerning a collaborative effort between the Environmental Protection Agency (EPA) and private industry to reduce electrical power consumed by personal computers and related peripherals. Manufacturers complying with EPA guidelines are officially recognized by award of a special Energy Star logo, and are referred to in official documents as a vendor of green computers. (Contains a minimum of 234 citations and includes a subject term index and title list.)

  19. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
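The model-space Gauss-Newton update at the core of such schemes can be illustrated on a toy nonlinear least-squares problem. This is a bare sketch: the real inversion adds the minimum-structure regularization term and computes sensitivity-vector products implicitly via pseudo-forward problems instead of forming the Jacobian:

```python
import numpy as np

# Gauss-Newton for the toy model y = a * exp(b * x): linearize the residual
# around the current parameters, solve the linearized least-squares problem
# for the step (here via lstsq on the explicit Jacobian), and take the step.

def gauss_newton(x, y, p0, iters=20):
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        a, b = p
        f = a * np.exp(b * x)
        r = y - f                                      # data residual
        J = np.column_stack([np.exp(b * x),            # df/da
                             a * x * np.exp(b * x)])   # df/db
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
    return p
```

In the minimum-structure setting, `r` would be augmented with a model-roughness penalty and each column of `J` replaced by matrix-free products, which is what keeps the memory cost manageable for 3-D MT grids.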

  20. AEGIS: a wildfire prevention and management information system

    NASA Astrophysics Data System (ADS)

    Kalabokidis, K.; Ager, A.; Finney, M.; Athanasis, N.; Palaiologou, P.; Vasilakos, C.

    2015-10-01

A Web-GIS wildfire prevention and management platform (AEGIS) was developed as an integrated and easy-to-use decision support tool (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control and coordination of firefighting forces by providing access to information that is essential for wildfire management. Databases were created with spatial and non-spatial data to support key system functionalities. Updated land use/land cover maps were produced by combining field inventory data with high resolution multispectral satellite images (RapidEye) to be used as inputs in fire propagation modeling with the Minimum Travel Time algorithm. End users provide a minimum number of inputs, such as fire duration, ignition point and weather information, to conduct a fire simulation. AEGIS offers three types of simulation, similar to the FlamMap fire behavior modeling software: single-fire propagation, conditional burn probabilities, and landscape-level simulations. Artificial neural networks (ANN) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods and network structures. The combination of ANNs and expected burned area maps produced an integrated output map for fire danger prediction. The system also incorporates weather measurements from remote automatic weather stations and weather forecast maps. The structure of the algorithms relies on parallel processing techniques (i.e., high-performance computing and cloud computing) that ensure computational power and speed. All AEGIS functionalities are accessible to authorized end users through a web-based graphical user interface. An innovative mobile application, AEGIS App, acts as a complementary tool to the web-based version of the system.

  1. Ascent velocity and dynamics of the Fiumicino mud eruption, Rome, Italy

    NASA Astrophysics Data System (ADS)

    Vona, A.; Giordano, G.; De Benedetti, A. A.; D'Ambrosio, R.; Romano, C.; Manga, M.

    2015-08-01

    In August 2013 drilling triggered the eruption of mud near the international airport of Fiumicino (Rome, Italy). We monitored the evolution of the eruption and collected samples for laboratory characterization of physicochemical and rheological properties. Over time, muds show a progressive dilution with water; the rheology is typical of pseudoplastic fluids, with a small yield stress that decreases as mud density decreases. The eruption, while not naturally triggered, shares several similarities with natural mud volcanoes, including mud componentry, grain-size distribution, gas discharge, and mud rheology. We use the size of large ballistic fragments ejected from the vent along with mud rheology to compute a minimum ascent velocity of the mud. Computed values are consistent with in situ measurements of gas phase velocities, confirming that the stratigraphic record of mud eruptions can be quantitatively used to infer eruption history and ascent rates and hence to assess (or reassess) mud eruption hazards.
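A rough version of the ballistic-fragment argument: for the mud to carry a clast out of the vent, the mud must rise at least at the clast's settling (terminal) velocity. The standard sphere terminal-velocity formula is used below; the densities, drag coefficient and clast size are illustrative guesses, not values from the paper:

```python
import math

# Terminal fall speed of a sphere of diameter d in a fluid: balance drag,
# (1/2) * cd * rho_f * A * v^2, against submerged weight, (rho_p - rho_f) * g * V,
# which for a sphere (V/A = 2d/3) gives
#   v = sqrt(4 * g * d * (rho_p - rho_f) / (3 * cd * rho_f)).

def settling_velocity(d, rho_clast, rho_mud, cd=1.0, g=9.81):
    """Minimum mud ascent velocity (m/s) needed to eject a spherical clast
    of diameter d (m) and density rho_clast (kg/m^3) from mud of density
    rho_mud (kg/m^3); cd is the drag coefficient."""
    return math.sqrt(4.0 * g * d * (rho_clast - rho_mud) / (3.0 * cd * rho_mud))

# e.g. a 5 cm clast (2600 kg/m^3) in mud of 1400 kg/m^3 -> about 0.75 m/s
```

The mud's yield stress and shear-thinning rheology would modify the drag term; this Newtonian estimate only bounds the order of magnitude.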

  2. Apollo lunar descent guidance

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1974-01-01

Apollo lunar-descent guidance transfers the Lunar Module from a near-circular orbit to touchdown, traversing a 17 deg central angle and a 15 km altitude in 11 min. A group of interactive programs in an onboard computer guides the descent, controlling attitude and the descent propulsion system throttle. A ground-based program pre-computes guidance targets. The concepts involved in this guidance are described. Explicit and implicit guidance are discussed, guidance equations are derived, and the earlier Apollo explicit equation is shown to be an inferior special case of the later implicit equation. Interactive guidance, by which the two-man crew selects a landing site in favorable terrain and directs the trajectory there, is discussed. Interactive terminal-descent guidance enables the crew to control the essentially vertical descent rate in order to land in minimum time with a safe contact speed. The attitude maneuver routine uses concepts that make gimbal lock inherently impossible.

  3. GEMPAK: An arbitrary aircraft geometry generator

    NASA Technical Reports Server (NTRS)

    Stack, S. H.; Edwards, C. L. W.; Small, W. J.

    1977-01-01

    A computer program, GEMPAK, has been developed to aid in the generation of detailed configuration geometry. The program was written to allow the user as much flexibility as possible in his choices of configurations and the detail of description desired and at the same time keep input requirements and program turnaround and cost to a minimum. The program consists of routines that generate fuselage and planar-surface (winglike) geometry and a routine that will determine the true intersection of all components with the fuselage. This paper describes the methods by which the various geometries are generated and provides input description with sample input and output. Also included are descriptions of the primary program variables and functions performed by the various routines. The FORTRAN program GEMPAK has been used extensively in conjunction with interfaces to several aerodynamic and plotting computer programs and has proven to be an effective aid in the preliminary design phase of aircraft configurations.

  4. Running SINDA '85/FLUINT interactive on the VAX

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris

    1992-01-01

    Computer software tools in engineering are typically run in three modes: batch, demand, and interactive. The first two are the most popular in the SINDA world. The third is less common, probably because users lack access to the command procedure files for running SINDA '85, or are unfamiliar with the SINDA '85 execution process (pre-processing, processing, compilation, linking, execution, and all of the file assignments, creations, deletions, and de-assignments). The interactive mode is what makes thermal analysis with SINDA '85 a real-time design tool. This paper presents a command procedure (the minimum modifications required to an existing demand command procedure) sufficient to run SINDA '85 on the VAX in interactive mode. To exercise the procedure, a sample problem is presented that demonstrates the mode, along with additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.

  5. Evaluation of a Computer-Based Training Program for Enhancing Arithmetic Skills and Spatial Number Representation in Primary School Children.

    PubMed

    Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael

    2016-01-01

    Calcularis is a computer-based training program that focuses on basic numerical skills, spatial representation of numbers, and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprised three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. The training comprised a minimum of 24 sessions of 20 min each within a period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a greater benefit in subtraction and number line estimation, with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.

  6. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In this paper, a novel filtering design based on exploring the pixel neighborhood with digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in a hybrid spatial-color space. An optimal path of minimum total cost, leading from the window's boundary to its center, is then determined. The cost of this optimal path serves as a measure of the similarity of the central pixel to the samples in the local processing window. If a pixel is an outlier, all paths starting from the window's boundary have high costs, and the minimum cost is also high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed from the minimum cost assigned to each image pixel: first, the costs of the optimal paths are used to build a smoothed image; second, the minimum cost of the central pixel is used to construct the weights of a soft-switching scheme. Experiments on a set of standard color images reveal that the proposed algorithm is superior to state-of-the-art filtering techniques in terms of objective restoration quality measures, especially at high noise contamination ratios. Owing to its low computational complexity, the proposed filter can be applied to real-time image denoising and to the enhancement of video streams.
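
    The minimum-cost-path idea can be sketched with a plain Dijkstra search. This is an illustrative reduction (grayscale intensities, 4-connectivity, absolute-difference transition costs), not the paper's hybrid spatial-color cost:

```python
import heapq

def min_path_cost(window):
    """Minimum total transition cost from any boundary pixel of a square
    window to its center, via Dijkstra's algorithm. Transition cost is the
    absolute intensity difference between 4-adjacent pixels (a simplified
    stand-in for the paper's hybrid spatial-color cost)."""
    n = len(window)
    c = n // 2
    done = {}
    pq = []
    # Paths start on the window boundary with zero accumulated cost.
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                heapq.heappush(pq, (0.0, i, j))
    while pq:
        d, i, j = heapq.heappop(pq)
        if (i, j) in done:
            continue
        done[(i, j)] = d
        if (i, j) == (c, c):
            return d
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and (ni, nj) not in done:
                step = abs(window[ni][nj] - window[i][j])
                heapq.heappush(pq, (d + step, ni, nj))
    return float("inf")

# A 3x3 window whose center is an outlier: every path into it is costly.
outlier = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
smooth  = [[10, 10, 10], [10, 11, 10], [10, 10, 10]]
print(min_path_cost(outlier))  # 190.0 -> likely impulse noise
print(min_path_cost(smooth))   # 1.0   -> consistent with neighborhood
```

    A high minimum cost flags the center pixel as an outlier, which is exactly the cue the soft-switching scheme weighs against the smoothed estimate.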

  7. Fast prediction of RNA-RNA interaction using heuristic algorithm.

    PubMed

    Montaseri, Soheila

    2015-01-01

    Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, one RNA molecule prohibits the translation of another by establishing stable interactions with it. Several algorithms have been developed to predict the structure of RNA-RNA interactions, but high computational time is a common challenge in most of them. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). The algorithm uses a few dot matrices to find the secondary structure of each RNA and the binding sites between the two RNAs. Furthermore, a parallel version of the method is presented, and its concurrency and parallelism on a multicore chip are described. The proposed algorithm has been tested on datasets including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli. The method shows high validity and efficiency, and runs in low computational time in comparison to other approaches.
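
    The dot-matrix step can be illustrated with a minimal sketch that marks complementary base pairs between two RNAs and reads off the longest antiparallel run as a candidate binding site. The real algorithm scores duplexes by free energy; this toy version only counts complementarity:

```python
def complementary(x, y):
    # Watson-Crick pairs plus the G-U wobble pair.
    return {x, y} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def dot_matrix(a, b):
    """Binary dot matrix: M[i][j] = 1 when base a[i] can pair with b[j]."""
    return [[int(complementary(x, y)) for y in b] for x in a]

def longest_duplex(a, b):
    """Longest run of consecutive pairings a[i]..a[i+k] with b[j]..b[j-k]
    (antiparallel strands), read off the dot matrix as an antidiagonal run.
    Returns (length, start_in_a, start_in_b)."""
    m = dot_matrix(a, b)
    best = (0, 0, 0)
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j - k >= 0 and m[i + k][j - k]:
                k += 1
            if k > best[0]:
                best = (k, i, j)
    return best

# b is the reverse complement of a, so they form a full-length duplex.
a = "GGCACGUCG"
b = "CGACGUGCC"
print(longest_duplex(a, b))  # (9, 0, 8)
```

    An MFE-based method would score each such run with stacking energies and keep the lowest-energy duplex instead of the longest one.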

  8. Municipal solid waste transportation optimisation with vehicle routing approach: case study of Pontianak City, West Kalimantan

    NASA Astrophysics Data System (ADS)

    Kamal, M. A.; Youlla, D.

    2018-03-01

    Municipal solid waste (MSW) transportation in Pontianak City is an issue that needs to be tackled by the relevant agencies. The MSW transportation service currently requires very high resources, especially in vehicle usage, and increasing the number of fleets has not raised service levels while garbage volume grows every year along with the population. In this research, a vehicle routing optimization approach was used to find optimal, cost-efficient vehicle routes for transporting garbage from several Temporary Garbage Dumps (TGD) to the Final Garbage Dump (FGD). One complication of MSW transportation is that some TGDs exceed the vehicle capacity and must be visited more than once. The optimal computation results suggest that the municipal authorities need only 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. However, the computation time required to find the optimal route and minimal cost is considerable, driven by the number of constraints and the number of integer-valued decision variables.
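
    The splitting of an over-capacity TGD into repeated visits can be illustrated with a toy greedy heuristic. Coordinates and loads here are hypothetical, and the study itself solves an integer-programming VRP formulation, not this nearest-neighbor sketch:

```python
from math import dist

def plan_trips(depot, dumps, capacity):
    """Greedy sketch of capacity-constrained waste collection: a dump (TGD)
    whose load exceeds the vehicle capacity is split into several visits,
    then each trip serves the nearest pending visits until the vehicle is
    full and returns to the depot (FGD). Illustrative only."""
    # Split oversized demands into capacity-sized visits.
    visits = []
    for pos, load in dumps:
        while load > capacity:
            visits.append((pos, capacity))
            load -= capacity
        visits.append((pos, load))
    trips = []
    while visits:
        here, room, route, length = depot, capacity, [], 0.0
        while True:
            feasible = [v for v in visits if v[1] <= room]
            if not feasible:
                break
            nxt = min(feasible, key=lambda v: dist(here, v[0]))
            visits.remove(nxt)
            length += dist(here, nxt[0])
            room -= nxt[1]
            route.append(nxt)
            here = nxt[0]
        trips.append((route, length + dist(here, depot)))
    return trips

depot = (0.0, 0.0)
dumps = [((1.0, 0.0), 4), ((2.0, 0.0), 9), ((0.0, 3.0), 2)]  # loads in m^3
trips = plan_trips(depot, dumps, capacity=6)
print(len(trips), "trips")  # 3 trips: the 9 m^3 dump forces a split visit
```

    An exact formulation would instead minimize total route cost over binary arc variables subject to capacity constraints, which is where the integer decision variables and long solve times come from.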

  9. Determination of the structure of subsurface layers by means of coaxial time-of-flight scattering and recoiling spectrometry (TOF-SARS)

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Teplov, S. V.; Rabalais, J. W.

    1994-05-01

    It is demonstrated that both surface and subsurface structural information can be obtained from Si{100}-(2 × 1) and Si{100}-(1 × 1)-H by coupling coaxial time-of-flight scattering and recoiling spectrometry (TOF-SARS) with three-dimensional trajectory simulations. Experimentally, backscattering intensity versus incident α angle scans at a scattering angle of ~180° have been measured for 2 keV He+ incident on both the (2 × 1) and (1 × 1)-H surfaces. Computationally, an efficient three-dimensional version of the Monte Carlo computer code RECAD has been developed and applied to simulation of the TOF-SARS results. An R (reliability) factor has been introduced for quantitative evaluation of the agreement between experimental and simulated scans. For the case of 2 keV He+ scattering from Si{100}, scattering features can be observed and delineated from as many as 14 atomic layers (~18 Å) below the surface. The intradimer spacing D is determined as 2.2 Å from the minimum in the R-factor versus D plot.

  10. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel, independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  11. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate, and effective area source height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated: a narrow plume hypothesis is used, and the area source squares are used as given rather than breaking all sources down into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
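
    The Gaussian steady-state point-source model underlying RAM can be sketched as follows. The dispersion parameters here are hypothetical constants, whereas RAM derives them from stability class and downwind distance:

```python
from math import exp, pi

def plume_concentration(q, u, y, z, h, sig_y, sig_z):
    """Steady-state Gaussian plume concentration (g/m^3) at crosswind
    offset y and height z, for a point source of strength q (g/s), wind
    speed u (m/s), effective stack height h (m), and dispersion parameters
    sig_y, sig_z (m). Ground reflection is modeled with an image source.
    Illustrative of the model class RAM implements; RAM itself adds
    stability-dependent sigma curves, area sources, and time averaging."""
    lateral = exp(-y**2 / (2 * sig_y**2))
    vertical = (exp(-(z - h)**2 / (2 * sig_z**2)) +
                exp(-(z + h)**2 / (2 * sig_z**2)))
    return q / (2 * pi * u * sig_y * sig_z) * lateral * vertical

# Ground-level centerline concentration 1 km downwind (hypothetical sigmas).
c = plume_concentration(q=100.0, u=5.0, y=0.0, z=0.0, h=50.0,
                        sig_y=80.0, sig_z=40.0)
print(f"{c * 1e6:.1f} ug/m^3")  # 910.8 ug/m^3
```

    The narrow-plume shortcut mentioned above avoids evaluating this kernel over every element of an area source, which is one of the ways RAM keeps computation time to a minimum.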

  12. Computer-aided position planning of miniplates to treat facial bone defects

    PubMed Central

    Wallner, Jürgen; Gall, Markus; Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Reinbacher, Knut; Schmalstieg, Dieter

    2017-01-01

    In this contribution, a software system for computer-aided position planning of miniplates to treat facial bone defects is proposed. The intra-operatively used bone plates have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization. However, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models on the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures and always with the possibility of adjusting both the direction and the position of the osteosynthesis material used. Using the proposed software, surgeons are able to pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in STL format, the format commonly used for 3D printing. Using this technology, surgeons can print the virtually planned implant or create an individually designed bending tool. This method yields osteosynthesis materials adapted to the surrounding anatomy and requires only a minimum of money and time. PMID:28817607


  14. PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Gorti, Sarma B; Peter, William H

    A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS. Since powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms compute the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a smaller variation than the von Mises stress: for the disk and cylinder samples, the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
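
    The elastic-predictor/plastic-corrector structure described above can be illustrated with the textbook 1D return-mapping algorithm for von Mises plasticity with linear isotropic hardening. The paper's powder model uses pressure-dependent yield functions, so this is a simplified stand-in, not the authors' UMAT:

```python
def radial_return(strain_inc, stress, alpha, E=200e3, H=10e3, sy0=250.0):
    """One step of the classic return-mapping algorithm for 1D
    rate-independent plasticity with linear isotropic hardening
    (units MPa). Returns (stress, alpha, consistent tangent modulus)."""
    # Elastic predictor: assume the whole increment is elastic.
    trial = stress + E * strain_inc
    f = abs(trial) - (sy0 + H * alpha)   # trial yield function
    if f <= 0.0:
        return trial, alpha, E           # elastic step; tangent is E
    # Plastic corrector: solve f = 0 for the plastic multiplier.
    dgamma = f / (E + H)
    sign = 1.0 if trial > 0 else -1.0
    stress = trial - E * dgamma * sign   # return to the yield surface
    alpha += dgamma                      # update hardening variable
    tangent = E * H / (E + H)            # consistent (algorithmic) tangent
    return stress, alpha, tangent

# Load past yield: E = 200 GPa, sy0 = 250 MPa, apply 0.2% strain.
s, a, ct = radial_return(0.002, 0.0, 0.0)
print(round(s, 2), round(a, 6), round(ct, 1))  # 257.14 0.000714 9523.8
```

    The consistent tangent returned here is what a Newton-Raphson global solve (as ABAQUS requires) needs for quadratic convergence, which is why the paper derives it alongside the return map.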

  15. Validation of the Kp Geomagnetic Index Forecast at CCMC

    NASA Astrophysics Data System (ADS)

    Frechette, B. P.; Mays, M. L.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we validated the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. The validation was done by computing the Kp error for each forecast type (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast, skill scores were computed, and the results were compared to the perfect score and the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
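
    The scalar error statistics listed above are standard verification measures; a minimal sketch with toy Kp values (not the study's data):

```python
from math import sqrt
from statistics import mean

def verify(forecast, observed):
    """Compute the scalar verification metrics named in the study for one
    forecast series against observations."""
    err = [f - o for f, o in zip(forecast, observed)]
    me = mean(err)                                # mean error (bias)
    mae = mean(abs(e) for e in err)               # mean absolute error
    rmse = sqrt(mean(e * e for e in err))         # root mean square error
    mult_bias = mean(forecast) / mean(observed)   # multiplicative bias
    # Pearson correlation coefficient.
    fm, om = mean(forecast), mean(observed)
    cov = mean((f - fm) * (o - om) for f, o in zip(forecast, observed))
    sf = sqrt(mean((f - fm) ** 2 for f in forecast))
    so = sqrt(mean((o - om) ** 2 for o in observed))
    return me, mae, rmse, mult_bias, cov / (sf * so)

kp_pred = [2, 3, 5, 4, 1, 2]   # hypothetical predicted Kp per period
kp_obs  = [2, 4, 4, 5, 1, 2]   # hypothetical observed Kp
me, mae, rmse, mb, r = verify(kp_pred, kp_obs)
print(f"ME={me:.2f} MAE={mae:.2f} RMSE={rmse:.2f} bias={mb:.2f} r={r:.2f}")
```

    Skill scores then compare such metrics against a reference forecast (here, persistence) to judge whether the prediction adds value.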

  16. Writing with Computers in ESL Classroom: Enhancing ESL Learners' Motivation, Confidence and Writing Proficiency

    ERIC Educational Resources Information Center

    Hadi, Marham Jupri

    2013-01-01

    The researcher's observations of his ESL class indicate the main issues concerning writing skills: learners' low motivation to write, minimal interaction in writing, and poor writing skills. These limitations have left the learners less confident about writing in English. This article discusses how computers can be used for the purpose of increasing…

  17. 29 CFR Appendix B to Part 510 - Nonmanufacturing Industries Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... 7374 1 Computer processing and data preparation and processing services. 7379 1 Computer related... industries (except those in major groups 01, 02, 08, and 09, pertaining to agriculture) for which data were... incorporated by reference in these regulations (§ 510.21). The data in this appendix are presented by major...

  18. 29 CFR Appendix B to Part 510 - Nonmanufacturing Industries Eligible for Minimum Wage Phase-In

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... 7374 1 Computer processing and data preparation and processing services. 7379 1 Computer related... industries (except those in major groups 01, 02, 08, and 09, pertaining to agriculture) for which data were... incorporated by reference in these regulations (§ 510.21). The data in this appendix are presented by major...

  19. 40 CFR 63.1257 - Test methods and compliance procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...

  20. 40 CFR 63.1257 - Test methods and compliance procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
