Sample records for brute force simulations

  1. Analysis of brute-force break-ins of a palmprint authentication system.

    PubMed

    Kong, Adams W K; Zhang, David; Kamel, Mohamed

    2006-10-01

    Biometric authentication systems are widely applied because they offer inherent advantages over classical knowledge-based and token-based personal-identification approaches. This has led to the development of products using palmprints as biometric traits and their use in several real applications. However, as biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before biometric systems are massively deployed in security systems. This correspondence proposes a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system. To validate the proposed model, we have conducted a simulation. Its results demonstrate that the proposed model can accurately estimate the probability. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks.
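    The break-in probability such an analysis estimates can be illustrated with a minimal Monte Carlo sketch in Python. This is not the paper's projected multinomial model; the per-attempt false-acceptance probability p and the attempt budget n below are hypothetical numbers chosen for illustration.

      import numpy as np

      # Each simulated campaign makes n independent forged-template attempts,
      # each accepted with probability p; count campaigns with >= 1 success.
      rng = np.random.default_rng(0)
      p, n, trials = 1e-4, 5000, 100_000
      hits = rng.binomial(n, p, size=trials)             # successes per campaign
      print("simulated  P(break-in):", (hits > 0).mean())
      print("analytical P(break-in):", 1 - (1 - p) ** n)  # = 1 - (1-p)^n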

  2. Evaluation of simulation alternatives for the brute-force ray-tracing approach used in backlight design

    NASA Astrophysics Data System (ADS)

    Desnijder, Karel; Hanselaer, Peter; Meuret, Youri

    2016-04-01

    A key requirement for obtaining a uniform luminance in a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. Such a scatter pattern is usually generated with an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically done with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Owing to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, could also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, the approach realises no clear computational advantage over ray tracers. However, the view of light propagation in a light guide used in the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for simulating backlights is also evaluated against a typical brute-force ray-tracing approach.

  3. Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung

    2016-07-01

    Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the traditional attack methods, it is still in active use. Stealth attacks that feign regular access may also occur, and these are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. We then analyze the durations of the SSH brute-force attacks detected by these methods. The results of an analysis of about 10,000 attack source IP addresses show that the behavior of abnormal users mounting SSH brute-force attacks follows the heavy-tailed distribution typical of human dynamics.
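    The detection step described above reduces, at its simplest, to grouping failed-login events by source IP and measuring each flagged attack's duration. A hedged sketch follows; the (timestamp, ip) tuples and the threshold are hypothetical stand-ins for a server's unsuccessful-access log.

      from collections import defaultdict

      # (seconds, source ip) records of failed SSH logins -- illustrative only
      failures = [(0.0, "10.0.0.5"), (1.2, "10.0.0.5"), (2.1, "10.0.0.5"),
                  (5.0, "192.168.1.9"), (900.0, "10.0.0.5")]
      THRESHOLD = 3          # failures before an IP is flagged as brute force

      by_ip = defaultdict(list)
      for t, ip in failures:
          by_ip[ip].append(t)

      for ip, times in sorted(by_ip.items()):
          if len(times) >= THRESHOLD:
              duration = max(times) - min(times)   # attack duration in seconds
              print(f"{ip}: {len(times)} failures over {duration:.1f} s")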

  4. Computer Program Development Specification for IDAMST Operational Flight Program Application, Software Type B5. Addendum 1.

    DTIC Science & Technology

    1976-07-30

    [Table-of-contents excerpt] 3.1.1 Interface Requirements; 3.1.1.1 Interface Block Diagram; 3.1.1.2 Detailed Interface Definition; 3.1.1.2.1 Subsystems; 3.1.1.2.2 Controls & Displays; … 3.2.3.2 Navigation Brute Force; 3.2.3.3 Cargo Brute Force; 3.2.3.4 Sensor Brute Force; 3.2.3.5 Controls/Displays Brute Force; 3.2.3.6 … [Body excerpt] …MIL-STD-1553 Multiplex Data Bus, with the avionic subsystems, flight control system, the controls/displays, engine sensors, and airframe sensors.

  5. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and the attacker has access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes), or color images are used as a source.

  6. Nonconservative dynamics in long atomic wires

    NASA Astrophysics Data System (ADS)

    Cunningham, Brian; Todorov, Tchavdar N.; Dundas, Daniel

    2014-09-01

    The effect of nonconservative current-induced forces on the ions in a defect-free metallic nanowire is investigated using both steady-state calculations and dynamical simulations. Nonconservative forces were found to have a major influence on the ion dynamics in these systems, but their role in increasing the kinetic energy of the ions decreases with increasing system length. The results illustrate the importance of nonconservative effects in short nanowires and the scaling of these effects with system size. The dependence on bias and ion mass can be understood with the help of a simple pen-and-paper model. This material highlights the benefit of simple preliminary steady-state calculations in anticipating aspects of brute-force dynamical simulations, and provides rule-of-thumb criteria for the design of stable quantum wires.

  7. Near-Neighbor Algorithms for Processing Bearing Data

    DTIC Science & Technology

    1989-05-10

    …near-neighbor algorithms need not be universally more cost-effective than brute force methods. While the data access time of near-neighbor techniques scales better with the number of objects N than brute force, the cost of setting up the data structure could scale worse than … for the near neighbors NN21(i). Depending on the particular NN algorithm, the cost of accessing near neighbors for each ai ∈ S1 scales as either N…
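    For contrast with the near-neighbor data structures the report analyzes, the brute-force baseline is worth seeing explicitly: it needs no setup at all, but every query costs time linear in N. A minimal NumPy sketch:

      import numpy as np

      def brute_force_nn(points, query):
          # Index of the point nearest to query; cost grows linearly with N,
          # and squared distances suffice for the comparison (no sqrt needed).
          d2 = ((points - query) ** 2).sum(axis=1)
          return int(np.argmin(d2))

      pts = np.random.default_rng(1).random((1000, 2))
      print(brute_force_nn(pts, np.array([0.5, 0.5])))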

  8. How to Run FAST Simulations.

    PubMed

    Zimmerman, M I; Bowman, G R

    2016-01-01

    Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.

  9. Quaternion normalization in additive EKF for spacecraft attitude determination. [Extended Kalman Filters

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

    This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter to spacecraft attitude determination based on vector measurements. Three new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all four schemes.
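    For reference, the "brute force" normalization scheme mentioned here is, as usually understood, a plain rescaling of the estimated quaternion to unit norm after each filter update; the paper's alternative schemes instead fold normalization into the filter. A one-function sketch:

      import numpy as np

      def normalize_quaternion(q, eps=1e-12):
          # Rescale q to unit norm so it again represents a valid rotation.
          n = np.linalg.norm(q)
          if n < eps:
              raise ValueError("degenerate quaternion")
          return q / n

      q_est = np.array([0.9, 0.1, -0.2, 0.4])   # hypothetical EKF output
      print(normalize_quaternion(q_est))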

  10. Vulnerability Analysis of the MAVLink Protocol for Command and Control of Unmanned Aircraft

    DTIC Science & Technology

    2013-03-27

    …the cheapest computers currently on the market (the $35 Raspberry Pi [New13, Upt13]) to distribute the workload, a determined attacker would incur a … (cost of brute force) for 6,318 Raspberry Pi systems at $82 per 3DR-enabled Raspberry Pi [3DR13, New13] to brute-force all 3,790,800 … [New13] Newark. Order the Raspberry Pi, November 2013. Last accessed: 19 February 2014. URL: http://www.newark.com/jsp/search

  11. Selectivity trend of gas separation through nanoporous graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hongjun; Chen, Zhongfang; Dai, Sheng

    2014-01-29

    We demonstrate using molecular dynamics (MD) simulations that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulations is H2 > CO2 >> N2 > Ar > CH4, which generally follows the trend in the kinetic diameters. Moreover, this trend is also confirmed by the fluxes based on the computed free energy barriers for gas permeation using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.

  12. The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images

    NASA Astrophysics Data System (ADS)

    Fedotova, S.; Seredin, O.; Kushnir, O.

    2017-05-01

    In this paper, we investigate an exact method of finding the symmetry axis of a binary image, based on brute-force search among all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of pixels into which the image is divided by a given axis. The brute-force search reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires considerable time to process each image. As a first step of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications demand computational efficiency that allows the symmetry axis to be found in real or quasi-real time. So, for fast shape-symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested in (Fedotova, 2016). That method takes as its initial axis the axis obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, in 70 percent of cases the axis found coincides exactly with the ground-truth one, and in the remaining cases it is very close to it.
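    The symmetry measure described above is easy to state in code: reflect the foreground pixel set about a candidate axis and score the overlap with the Jaccard index |A ∩ B| / |A ∪ B|. The sketch below restricts itself to vertical axes x = c for brevity; the brute-force search would evaluate every candidate axis and keep the best-scoring one.

      def jaccard_symmetry(pixels, c):
          # pixels: set of (x, y) foreground coordinates; candidate axis: x = c
          reflected = {(2 * c - x, y) for (x, y) in pixels}
          return len(pixels & reflected) / len(pixels | reflected)

      shape = {(1, 0), (2, 0), (3, 0), (1, 1), (3, 1)}  # symmetric about x = 2
      print(jaccard_symmetry(shape, 2))                 # -> 1.0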

  13. Homogeneous nucleation in supersaturated vapors of methane, ethane, and carbon dioxide predicted by brute force molecular dynamics.

    PubMed

    Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans

    2008-04-28

    Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10^6 particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10^30 m^-3 s^-1 accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK), which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.

  14. Virtual ellipsometry on layered micro-facet surfaces.

    PubMed

    Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas

    2017-09-18

    Microfacet-based BRDF models are a common tool to describe light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology, and present a polarisation-aware brute force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to try to make these approximations more accurate, and to better understand the behaviour of existing analytical models. However, these brute force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.

  15. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments that allow an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced by full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.

  16. Permeation profiles of Antibiotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez Bautista, Cesar Augusto; Gnanakaran, Sandrasegaram

    The presentation describes the motivation: combating inherent bacterial resistance; drug development mainly uses brute force rather than rational design; current experimental approaches lack molecular detail.

  17. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application can perform both single and cyclic simulations with a specified step for changing the values of selected process variables. The paper shows the architecture of the application. The application supports the selection of the best device-control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
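    All three optimization back-ends named in the abstract have counterparts in SciPy, which makes the idea easy to prototype. The sketch below minimizes a stand-in quadratic objective (imagine pumping cost as a function of two control settings); the real application would evaluate the EPANET model instead.

      import numpy as np
      from scipy import optimize

      def cost(x):                 # hypothetical stand-in for an EPANET run
          return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

      ranges = [(0.0, 1.0), (0.0, 1.0)]
      x_bf = optimize.brute(cost, ranges, Ns=21, finish=None)    # brute force
      r_slsqp = optimize.minimize(cost, [0.5, 0.5], method="SLSQP",
                                  bounds=ranges)
      r_powell = optimize.minimize(cost, [0.5, 0.5], method="Powell",
                                   bounds=ranges)
      print(x_bf, r_slsqp.x, r_powell.x)   # all should land near (0.3, 0.7)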

  18. An N-body Integrator for Planetary Rings

    NASA Astrophysics Data System (ADS)

    Hahn, Joseph M.

    2011-04-01

    A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows the streamlines to be treated as straight wires of constant linear density. Consequently, the gravity due to these streamlines is a simple function of a particle's radial distance to all streamlines. And because particles respond to smooth gravitating streamlines, rather than to discrete particles, this method eliminates the stirring that ordinarily occurs in brute force N-body calculations. Note also that the ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges execute in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.

  19. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute force FFT described in this note. Furthermore, the brute force FFT places virtually no restriction on the reflector geometry.
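    The note's central point is that I(u,v) can be evaluated directly as a two-dimensional FFT of the sampled aperture field, with the reflector rim handled simply by zeroing samples outside it. A minimal NumPy sketch with a uniformly illuminated circular aperture (an illustrative stand-in, not the paper's reflector model):

      import numpy as np

      N, D = 256, 1.0                         # grid points, aperture diameter
      x = np.linspace(-D, D, N)
      X, Y = np.meshgrid(x, x)
      # Aperture field: 1 inside an arbitrary boundary, 0 outside it.
      field = (X**2 + Y**2 <= (D / 2) ** 2).astype(float)

      # Brute-force evaluation: I(u,v) ~ 2-D FFT of the sampled aperture field.
      I_uv = np.fft.fftshift(np.fft.fft2(field))
      pattern_dB = 20 * np.log10(np.abs(I_uv) / np.abs(I_uv).max() + 1e-12)
      print(pattern_dB.shape)                 # (256, 256) secondary pattern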

  20. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute force FFT described in this note. Furthermore, the brute force FFT places virtually no restriction on the reflector geometry.

  1. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    NASA Astrophysics Data System (ADS)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species, and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting their differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.

  2. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D. D.

    2018-05-01

    Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity of reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the object to be scanned is simplified into discretized wall segments, any candidate viewpoint can be evaluated by a score table representing the segments visible from it under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search over the entire workspace. The experimental environments in this paper were simulated from two buildings on the University of Calgary campus. Compared with the "brute force" strategy in terms of solution quality and runtime, the proposed strategy is shown to provide a scanning network of comparable quality with more than a 70 % time saving.
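    Stripped of the hierarchy, the underlying placement problem is an instance of set cover: pick a minimum set of viewpoints whose visible wall segments jointly cover all segments. Below is a hedged greedy sketch (not the paper's hierarchical algorithm), with each candidate viewpoint given as the set of segment ids it can see.

      def greedy_viewpoints(visibility):
          # visibility: dict viewpoint -> set of visible segment ids.
          # Greedily pick the viewpoint covering the most uncovered segments.
          uncovered = set().union(*visibility.values())
          chosen = []
          while uncovered:
              best = max(visibility,
                         key=lambda v: len(visibility[v] & uncovered))
              chosen.append(best)
              uncovered -= visibility[best]
          return chosen

      vis = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {2, 6}}
      print(greedy_viewpoints(vis))     # ['A', 'C'] covers segments 1-6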

  3. Shipboard Fluid System Diagnostics Using Non-Intrusive Load Monitoring

    DTIC Science & Technology

    2007-06-01

    [MATLAB excerpt]
    DPP = brute.s(3).data; tDPP = brute.s(3).time;
    FL = brute.s(4).data; tFL = brute.s(4).time;
    RM = brute.s(5).data; tRM = brute.s(5).time;
    DPF = brute.s…
    …'%s', max(tP1), files(n).name)); ylabel('Power'); axis tight; grid on;
    subplot(4,1,2); plot(tDPP, DPP, tDPF, DPF); ylabel('DP Gauges'); axis…

  4. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  5. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
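    The gain from the Bayesian viewpoint can be sketched with a Beta-binomial toy calculation: if n independent 50-year propagations show zero impacts, a uniform prior gives a Beta(1, n+1) posterior on the impact probability p, and one can ask how large n must be before P(p < 10^-4) reaches a chosen confidence. The numbers are illustrative only, not the mission analysis of the paper (which further exploits the stage's demonstrated reliability to shrink the required run count).

      from scipy.stats import beta

      requirement, confidence = 1e-4, 0.99
      n = 0
      # P(p < requirement | 0 impacts in n runs) = Beta(1, n+1) CDF at 1e-4
      while beta.cdf(requirement, 1, n + 1) < confidence:
          n += 1000
      print(f"~{n} zero-impact runs give P(p < {requirement}) >= {confidence}")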

  6. Single realization stochastic FDTD for weak scattering waves in biological random media.

    PubMed

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2013-02-01

    This paper introduces an iterative scheme to overcome the unresolved issues in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength-scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results.

  7. Single realization stochastic FDTD for weak scattering waves in biological random media

    PubMed Central

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2015-01-01

    This paper introduces an iterative scheme to overcome the unresolved issues in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength-scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random-medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results. PMID:27158153

  8. Grover Search and the No-Signaling Principle

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bouland, Adam; Jordan, Stephen P.

    2016-09-01

    Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.

  9. Crystal nucleation and metastable bcc phase in charged colloids: A molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Ji, Xinqiang; Sun, Zhiwei; Ouyang, Wenze; Xu, Shenghua

    2018-05-01

    The dynamic process of homogeneous nucleation in charged colloids is investigated by brute-force molecular dynamics simulation. To check whether the liquid-solid transition passes through a metastable bcc phase, simulations are performed at state points that definitely lie in the phase region of thermodynamically stable fcc. The simulation results confirm that, in all of these cases, the preordered precursors, acting as the seeds of nucleation, always have predominant bcc symmetry, consistent with Ostwald's step rule and the Alexander-McTague mechanism. However, the polymorph selection is not straightforward, because the crystal structures formed are often not determined by the symmetry of the intermediate precursors but have different characters at different state points. The region of state points where bcc crystal structures of sufficient size form during crystallization is narrow, which gives a reasonable explanation of why the metastable bcc phase in charged colloidal suspensions is rarely detected in macroscopic experiments.

  10. Simulation of linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.

    1993-01-01

    A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time- and frequency-domain simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open- and closed-loop frequency- and time-domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier: not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model-reduction step to be eliminated.
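    The flavor of such structure-exploiting routines can be shown in a few lines: after one eigendecomposition A = V diag(lam) V^-1 of the state matrix, the frequency response H(jw) = C (jwI - A)^-1 B becomes a sum of simple poles evaluated for all frequencies at once, instead of a brute-force linear solve per frequency. A NumPy sketch on a random stable system (illustrative; the original routines were Pro-Matlab/FORTRAN):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 50
      A = -np.diag(rng.random(n) + 0.1) + 0.01 * rng.standard_normal((n, n))
      B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))

      lam, V = np.linalg.eig(A)              # one-time modal decomposition
      Bm, Cm = np.linalg.solve(V, B), C @ V  # V^-1 B and C V

      w = np.logspace(-2, 2, 400)            # every frequency in one shot
      H = (Cm[0] * Bm[:, 0] / (1j * w[:, None] - lam[None, :])).sum(axis=1)
      print(H.shape)                         # (400,) response samples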

  11. A Newton-Krylov solver for fast spin-up of online ocean tracers

    NASA Astrophysics Data System (ADS)

    Lindsay, Keith

    2017-01-01

    We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
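    A toy version of the solver's idea fits in a few lines with SciPy's newton_krylov: the spun-up state x* is a fixed point of the map Phi that advances a tracer one model year, i.e. a root of F(x) = Phi(x) - x. The one-year map below is a hypothetical linear relaxation toward a target field; the real solver wraps an online ocean model instead.

      import numpy as np
      from scipy.optimize import newton_krylov

      target = np.linspace(0.0, 1.0, 100)      # stand-in equilibrium field

      def one_year(x):                         # placeholder for a model year
          return x + 0.05 * (target - x)

      residual = lambda x: one_year(x) - x     # zero at the spun-up state
      x_star = newton_krylov(residual, np.zeros_like(target), f_tol=1e-10)
      print(abs(x_star - target).max())        # ~0: fixed point found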

  12. Reconstructing the evolution of first-row transition metal minerals by GeoDeepDive

    NASA Astrophysics Data System (ADS)

    Liu, C.; Peters, S. E.; Ross, I.; Golden, J. J.; Downs, R. T.; Hazen, R. M.

    2016-12-01

    Terrestrial mineralogy evolves as a consequence of a range of physical, chemical, and biological processes [1]. The evolution of first-row transition metal minerals could mirror the evolution of Earth's oxidation state and life, since these elements are mostly redox-sensitive and/or play critical roles in biology. The fundamental building blocks for reconstructing mineral evolution are mineral species, locality, and age data, which are typically dispersed in sentences in scientific and technical publications. These data can be tracked down in a brute-force way, i.e., by human retrieval, reading, and recording of all relevant literature. Alternatively, they can be extracted automatically by GeoDeepDive. In GeoDeepDive, scientific and technical articles from publishers including Elsevier, Wiley, USGS, SEPM, GSA, and Canada Science Publishing have been parsed into a Javascript database with NLP tags. Sentences containing mineral names, locations, and ages can be recognized and extracted by user-developed applications. In a preliminary search for cobalt mineral ages, we successfully extracted 678 citations with >1000 mentions of cobalt minerals, their locations, and ages. The extracted results are in agreement with brute-force search results. What is more, GeoDeepDive provides 40 additional data points that were not recovered by the brute-force approach. The extracted mineral locality-age data suggest that the evolution of Co minerals is controlled by global supercontinent cycles, i.e., more Co minerals form during episodes of supercontinent assembly. The mineral evolution of other first-row transition elements is being investigated through GeoDeepDive. References: [1] Hazen et al. (2008) Mineral evolution. American Mineralogist, 93, 1693-1720.

  13. Finding All Solutions to the Magic Hexagram

    ERIC Educational Resources Information Center

    Holland, Jason; Karabegov, Alexander

    2008-01-01

    In this article, a systematic approach is given for solving a magic star puzzle that usually is accomplished by trial and error or "brute force." A connection is made to the symmetries of a cube, thus the name Magic Hexahedron.

  14. Probabilistic sampling of protein conformations: new hope for brute force?

    PubMed

    Feldman, Howard J; Hogue, Christopher W V

    2002-01-01

    Protein structure prediction from sequence alone by "brute force" random methods is a computationally expensive problem. Estimates have suggested that it could take all the computers in the world longer than the age of the universe to compute the structure of a single 200-residue protein. Here we investigate the use of a faster version of our FOLDTRAJ probabilistic all-atom protein-structure-sampling algorithm. We have improved the method so that it is now over twenty times faster than originally reported, and capable of rapidly sampling conformational space without lattices. It uses geometrical constraints and a Lennard-Jones-type potential for self-avoidance. We have also implemented a novel method to add secondary-structure prediction information so that sampled structures have protein-like amounts of secondary structure. In sets of 100,000 probabilistic conformers generated for 1VII, 1ENH, and 1PMC, the structures with smallest Cα RMSD from native are 3.95, 5.12, and 5.95 Å, respectively. Expanding this test to a set of 17 distinct protein folds, we find that all-helical structures are "hit" by brute force more frequently than beta or mixed structures. For small helical proteins or very small non-helical ones, this approach should produce a "hit" close enough to detect with a good scoring function in a pool of several million conformers. By fitting the distribution of RMSDs from the native state of each of the 17 sets of conformers to the extreme value distribution, we are able to estimate the size of conformational space for each. With a 0.5 Å RMSD cutoff, the number of conformers is roughly 2^N, where N is the number of residues in the protein. This is smaller than previous estimates, indicating an average of only two possible conformations per residue when sterics are accounted for. Our method reduces the effective number of conformations available at each residue by probabilistic bias, without requiring any particular discretization of residue conformational space, and is the fastest method of its kind. With computer speeds doubling every 18 months and parallel and distributed computing becoming more practical, the brute force approach to protein structure prediction may yet have some hope in the near future. Copyright 2001 Wiley-Liss, Inc.

  15. Exhaustively sampling peptide adsorption with metadynamics.

    PubMed

    Deighan, Michael; Pfaendtner, Jim

    2013-06-25

    Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption including: binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.

  16. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  17. Toward Determining ATPase Mechanism in ABC Transporters: Development of the Reaction Path–Force Matching QM/MM Method

    PubMed Central

    Zhou, Y.; Ojeda-May, P.; Nagaraju, M.; Pu, J.

    2016-01-01

    Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on QM/MM potential. A case study shows that brute force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path–force matching (RP–FM) has been developed. In RP–FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP–FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639

  18. Uncovering molecular processes in crystal nucleation and growth by using molecular simulation.

    PubMed

    Anwar, Jamshed; Zahn, Dirk

    2011-02-25

    Exploring nucleation processes by molecular simulation provides a mechanistic understanding at the atomic level and also enables kinetic and thermodynamic quantities to be estimated. However, whilst the potential for modeling crystal nucleation and growth processes is immense, there are specific technical challenges to modeling. In general, rare events, such as nucleation cannot be simulated using a direct "brute force" molecular dynamics approach. The limited time and length scales that are accessible by conventional molecular dynamics simulations have inspired a number of advances to tackle problems that were considered outside the scope of molecular simulation. While general insights and features could be explored from efficient generic models, new methods paved the way to realistic crystal nucleation scenarios. The association of single ions in solvent environments, the mechanisms of motif formation, ripening reactions, and the self-organization of nanocrystals can now be investigated at the molecular level. The analysis of interactions with growth-controlling additives gives a new understanding of functionalized nanocrystals and the precipitation of composite materials. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Bandwidth variable transceivers with artificial neural network-aided provisioning and capacity improvement capabilities in meshed optical networks with cascaded ROADM filtering

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Zhuge, Qunbi; Qiu, Meng; Xiang, Meng; Zhang, Fangyuan; Wu, Baojian; Qiu, Kun; Plant, David V.

    2018-02-01

    We investigate the capacity improvement achieved by bandwidth variable transceivers (BVT) in meshed optical networks with cascaded ROADM filtering at fixed channel spacing, and then propose an artificial neural network (ANN)-aided provisioning scheme to select the optimal symbol rate and modulation format for the BVTs in this scenario. Compared with a fixed-symbol-rate transceiver with standard QAMs, it is shown by both experiments and simulations that BVTs can increase the average capacity by more than 17%. The ANN-aided BVT provisioning method uses parameters monitored from a coherent receiver and employs a trained ANN to transform these parameters into the desired configuration. It is verified by simulation that a BVT with the proposed provisioning method can approach the upper limit of the system capacity obtained by brute-force search under various degrees of flexibility.

  20. Chemical reaction mechanisms in solution from brute force computational Arrhenius plots.

    PubMed

    Kazemi, Masoud; Åqvist, Johan

    2015-06-01

    Decomposition of activation free energies of chemical reactions into enthalpic and entropic components can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies.
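    The extraction of activation parameters from such computational Arrhenius data reduces to a linear fit: with the activation free energy dG(T) computed at several temperatures, dG = dH - T*dS gives dH as the intercept and -dS as the slope. A worked sketch on synthetic numbers (illustrative only, not the paper's data):

      import numpy as np

      T = np.array([290.0, 300.0, 310.0, 320.0])   # temperatures, K
      dG = 90.0 - T * 0.045                        # kJ/mol, synthetic dG(T)

      slope, intercept = np.polyfit(T, dG, 1)      # linear fit dG = dH - T*dS
      dH, dS = intercept, -slope                   # kJ/mol and kJ/(mol K)
      print(f"dH = {dH:.1f} kJ/mol, dS = {dS * 1000:.1f} J/(mol K)")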

  1. Chemical reaction mechanisms in solution from brute force computational Arrhenius plots

    PubMed Central

    Kazemi, Masoud; Åqvist, Johan

    2015-01-01

    Decomposition of activation free energies of chemical reactions into enthalpic and entropic components can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies. PMID:26028237

  2. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. They cover the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
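    The UQ recipe in the slides (brute-force Monte Carlo sampling of uncertain inputs, then the 95% data range about the median of the output) is a percentile computation. A hedged sketch with a toy stand-in for the simulation code; the input distributions and model are hypothetical, not the PAGOSA setup.

      import numpy as np

      rng = np.random.default_rng(42)

      def simulator(det_velocity, density):    # hypothetical stand-in model
          return 0.8 * det_velocity + 2.0 * density + rng.normal(0.0, 0.05)

      # Brute-force MC: sample uncertain inputs, collect the output ensemble.
      out = np.array([simulator(rng.normal(8.0, 0.1), rng.normal(1.7, 0.02))
                      for _ in range(10_000)])
      lo, med, hi = np.percentile(out, [2.5, 50.0, 97.5])
      print(f"median {med:.3f}, 95% data range [{lo:.3f}, {hi:.3f}]")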

  3. Free energy surface of an intrinsically disordered protein: comparison between temperature replica exchange molecular dynamics and bias-exchange metadynamics.

    PubMed

    Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain

    2015-06-09

    Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.

  4. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  5. Quaternion normalization in additive EKF for spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

    This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for attitude determination of any three-dimensional body whenever it is based on vector measurements, uses an additive EKF for estimation, and uses the quaternion to specify the attitude.

  6. Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients

    NASA Astrophysics Data System (ADS)

    Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.

    2017-09-01

    We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.

  7. Brute-Force Approach for Mass Spectrometry-Based Variant Peptide Identification in Proteogenomics without Personalized Genomic Data

    NASA Astrophysics Data System (ADS)

    Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.

    2018-02-01

    In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such a variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. We also propose a de novo based scoring scheme for assessing identified point mutations. In the scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.

  8. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test their performance in devices. Large-scale computer simulation can greatly speed this search process, but the problem remains challenging enough that brute-force application of massive computing power is not sufficient to identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists collaborated actively, so that experimental feedback regularly shaped and updated the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation, to avoid wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, the effort navigated a large area of molecular space and identified hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.

  9. Studies on a Spatialized Audio Interface for Sonar

    DTIC Science & Technology

    2011-10-03

    addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be...than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of a

  10. Step to improve neural cryptography against flipping attacks.

    PubMed

    Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold

    2004-12-01

    Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing key exchange protocol over public channel. However, the neural cryptography schemes presented so far are not the securest under regular flipping attack (RFA) and are completely insecure under majority flipping attack (MFA). We propose a scheme by splitting the mutual information and the training process to improve the security of neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of brute force attack (BFA) and the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial with L. Moreover, we analyze the security under an advanced flipping attack.

  11. Vector Potential Generation for Numerical Relativity Simulations

    NASA Astrophysics Data System (ADS)

    Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian

    2017-01-01

    Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
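
    As a sketch of the consistency requirement B = curl(A), the following computes a centered-difference curl on a uniform, unstaggered grid; the staggered-grid bookkeeping and the generation algorithms of the actual code are beyond this illustration, and all names here are our own.

    ```python
    import numpy as np

    def curl(Ax, Ay, Az, dx):
        """Centered-difference curl on a uniform grid, arrays indexed [x, y, z]."""
        return (np.gradient(Az, dx, axis=1) - np.gradient(Ay, dx, axis=2),
                np.gradient(Ax, dx, axis=2) - np.gradient(Az, dx, axis=0),
                np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dx, axis=1))

    n, dx = 16, 0.1
    x, y, z = np.meshgrid(*(np.arange(n) * dx,) * 3, indexing='ij')
    # A = (-y/2, x/2, 0) should reproduce the uniform field B = (0, 0, 1)
    Bx, By, Bz = curl(-0.5 * y, 0.5 * x, np.zeros_like(x), dx)
    ```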

  12. Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.

    PubMed

    Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai

    2014-07-02

    We develop here the methodology for dramatically accelerating the ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost concept (BB), which we validate here for describing hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is, the detailed sequences of reactive intermediates and their subsequent transformation to others) for H2 oxidation obtained from aARRDyn agree well with the brute force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ∼538 s, whereas the computational cost was just 1289 ps, a speed increase of ∼0.42 trillion (10¹²) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit at the quantum mechanics (QM) level, leading to the ReaxFF-OH2014 force field that was used in the simulations.

  13. Examining single-source secondary impacts estimated from brute-force, decoupled direct method, and advanced plume treatment approaches

    EPA Science Inventory

    In regulatory assessments, there is a need for reliable estimates of the impacts of precursor emissions from individual sources on secondary PM2.5 (particulate matter with aerodynamic diameter less than 2.5 microns) and ozone. Three potential methods for estimating th...

  14. The End of Flat Earth Economics & the Transition to Renewable Resource Societies.

    ERIC Educational Resources Information Center

    Henderson, Hazel

    1978-01-01

    A post-industrial revolution is predicted for the future, with an accompanying shift of focus from simple, brute force technologies, based on cheap, accessible resources and energy, to a second generation of more subtle, refined technologies grounded in a much deeper understanding of biological and ecological realities. (Author/BB)

  15. Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology

    PubMed Central

    Palaparthi, Anil; Riede, Tobias

    2017-01-01

    Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
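
    The brute-force baseline against which the MOO results were validated can be illustrated with a naive Pareto filter over sampled candidates; the two objective errors below are hypothetical stand-ins for the fundamental frequency and sound pressure level targets.

    ```python
    import numpy as np

    def pareto_front(costs):
        """Indices of non-dominated points (all objectives minimized),
        found by brute-force pairwise comparison."""
        return [i for i in range(len(costs))
                if not any(np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
                           for j in range(len(costs)) if j != i)]

    rng = np.random.default_rng(0)
    errors = rng.random((200, 2))   # |f0 - target|, |SPL - target|: hypothetical
    front = pareto_front(errors)    # candidate morphologies worth keeping
    ```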

  16. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  17. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N²), while full hierarchy searches can be as expensive as O(N³), making analysis highly inefficient for multiple data sets with N ≳ 10⁴. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
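
    For contrast with the O(N log N) approach, the brute-force O(N²) pair search is only a few lines: a pair is mutually bound when its two-body orbital energy is negative. This sketch (our own, in SI units) ignores hierarchies and softening.

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant, SI units

    def bound_pairs(m, r, v):
        """O(N^2) search for mutually bound pairs: kinetic energy in the
        two-body frame plus gravitational potential energy is negative."""
        pairs = []
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                mu = m[i] * m[j] / (m[i] + m[j])        # reduced mass
                dv2 = np.sum((v[i] - v[j]) ** 2)
                dr = np.linalg.norm(r[i] - r[j])
                if 0.5 * mu * dv2 - G * m[i] * m[j] / dr < 0.0:
                    pairs.append((i, j))
        return pairs
    ```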

  18. Nuclear spin imaging with hyperpolarized nuclei created by brute force method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayoshi; Kunimatsu, Takayuki; Fujiwara, Mamoru; Kohri, Hideki; Ohta, Takeshi; Utsuro, Masahiko; Yosoi, Masaru; Ono, Satoshi; Fukuda, Kohji; Takamatsu, Kunihiko; Ueda, Kunihiro; Didelez, Jean-P.; Frossati, Giorgio; de Waard, Arlette

    2011-05-01

    We have been developing a polarized HD target for particle physics at SPring-8 under the leadership of the RCNP, Osaka University, for the past 5 years. Nuclear polarization is created by means of the brute force method, which uses a high magnetic field (~17 T) and a low temperature (~10 mK). As one of the promising applications of the brute force method to the life sciences, we started a new project, "NSI" (Nuclear Spin Imaging), where hyperpolarized nuclei are used for MRI (Magnetic Resonance Imaging). The candidate nuclei with spin ½ħ are 3He, 13C, 15N, 19F, 29Si, and 31P, which are important elements in the composition of biomolecules. Since the NMR signals from these isotopes are enhanced by orders of magnitude, the spatial resolution of the imaging would be much improved compared to conventional MRI. Another advantage of hyperpolarized MRI is that it is essentially free of radiation, whereas the problems of radiation exposure caused by X-ray CT or PET (Positron Emission Tomography) cannot be neglected. In fact, the risk of cancer for Japanese patients due to radiation exposure from these diagnoses is exceptionally high among the advanced countries. As the first step of the NSI project, we are developing a system to produce hyperpolarized 3He gas for the diagnosis of serious lung diseases, for example, COPD (Chronic Obstructive Pulmonary Disease). The system employs the same 3He/4He dilution refrigerator and superconducting solenoidal coil as those used for the polarized HD target, with some modification allowing 3He Pomeranchuk cooling followed by rapid melting of the polarized solid 3He to avoid depolarization. In this report, the present and future steps of our project are outlined with some of the latest experimental results.

  19. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
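
    A sketch of the quantization error issue on hypothetical data: the box count depends on where the grid is placed, and the brute-force remedy takes the minimum count over many random translations (the paper replaces this scan with a more efficient pattern search over grid position and orientation).

    ```python
    import numpy as np

    def box_count(points, s, offset):
        """Number of occupied boxes of side s for one grid placement."""
        idx = np.floor((points - offset) / s).astype(int)
        return len({tuple(i) for i in idx})

    def min_count(points, s, n_offsets=20, seed=0):
        """Brute-force QE reduction: minimum count over random translations."""
        rng = np.random.default_rng(seed)
        return min(box_count(points, s, rng.uniform(0, s, 3))
                   for _ in range(n_offsets))

    pts = np.random.default_rng(1).random((1000, 3))     # stand-in digitization
    scales = np.array([0.5, 0.25, 0.125, 0.0625])
    counts = [min_count(pts, s) for s in scales]
    fd = np.polyfit(np.log(1 / scales), np.log(counts), 1)[0]  # slope = FD estimate
    ```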

  20. Social Epistemology, the Reason of "Reason" and the Curriculum Studies

    ERIC Educational Resources Information Center

    Popkewitz, Thomas S.

    2014-01-01

    Notwithstanding the current topoi of the Knowledge Society, a particular "fact" of modernity is that power is exercised less through brute force and more through systems of reason that order and classify what is known and acted on. This article explored the system of reason that orders and classifies what is talked about, thought and…

  1. Managing conflicts in systems development.

    PubMed

    Barnett, E

    1997-05-01

    Conflict in systems development is nothing new. It can vary in intensity, but there will always be two possible outcomes--one constructive and the other destructive. The common approach to conflict management is to draw the battle lines and apply brute force. However, there are other ways to deal with conflict that are more effective and more people oriented.

  2. Code White: A Signed Code Protection Mechanism for Smartphones

    DTIC Science & Technology

    2010-09-01

    analogous to computer security is the use of antivirus (AV) software. AV software is a brute force approach to security. The software ...these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not...

  3. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation.

    PubMed

    Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D

    2018-02-22

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).

  5. The United States and India in the Post-Soviet World: Proceedings of the Indo-U.S. Strategic Symposium

    DTIC Science & Technology

    1993-04-23

    mechanisms that take into account this new reality. TERRORISM Lastly is the question of terrorism. There can be no two opinions on this most heinous crime ...the notion of an empire "essentially based on force" that had to be maintained, if necessary, "by brute force" see Suhash Chakravarty, The Raj Syndrome ...over power to the National League for Democracy (NLD) led by Aung San Suu Kyi, the daughter of Burma's independence leader, Aung San. Since then, the

  6. Unsteady flow sensing and optimal sensor placement using machine learning

    NASA Astrophysics Data System (ADS)

    Semaan, Richard

    2016-11-01

    Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be utilized beyond its typical usage in fluid mechanics, estimating the flow state, to also determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of the flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
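
    A minimal sketch of the feature-ranking flavor of this idea using scikit-learn: train a random forest to classify the flow state from candidate sensor signals, then rank sensor locations by impurity-based feature importance. The data here are synthetic stand-ins for the uRANS snapshots, and the paper's actual pipeline differs in detail.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 50))                    # snapshots x candidate sensors
    state = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)    # hypothetical separation label

    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, state)
    ranking = np.argsort(forest.feature_importances_)[::-1]
    print("most informative sensor locations:", ranking[:5])  # should recover 7, 23
    ```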

  7. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time-consuming; therefore, algorithms for constructing full haplotype patterns from small available data through computational methods, the tag SNP selection problem, are convenient and attractive. This problem is proven to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find a reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
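
    A toy version of such a GA, with hypothetical data and a simplified fitness (fraction of SNP sites in strong linkage, r² ≥ 0.8, with at least one selected tag); the paper's encoding, operators, and fitness function differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.integers(0, 2, size=(60, 40))    # hypothetical haplotypes x SNP sites
    R2 = np.corrcoef(H.T) ** 2               # pairwise r^2 between sites
    N_SNP, K = R2.shape[0], 8                # choose K tag SNPs

    def fitness(tags):
        """Fraction of sites whose best r^2 with some tag reaches 0.8."""
        return np.mean(R2[:, tags].max(axis=1) >= 0.8)

    def mutate(tags):
        out = tags.copy()
        out[rng.integers(K)] = rng.integers(N_SNP)
        return out if len(set(out)) == K else tags   # keep tags distinct

    pop = [rng.choice(N_SNP, K, replace=False) for _ in range(30)]
    for _ in range(100):                     # simple (mu + lambda) evolution loop
        pop = sorted(pop + [mutate(p) for p in pop], key=fitness, reverse=True)[:30]
    best_tags = pop[0]
    ```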

  8. Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.

    2017-03-01

    We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations for two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.

  9. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome, in a heuristic way, the current brute force parameter optimization method. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between the samples, the most commonly used kernel parameter, captures few features of the target; this motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of depicting edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel parameter optimization. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.

  10. A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao

    2018-04-01

    A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, a 384-bit keystream of hash values is obtained by applying SHA-3 to the plaintext. The sensitivity of the SHA-3 algorithm and of the chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and the corresponding chaotic value to encrypt each pixel. Finally, the structure of alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
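
    The SHA-3 step can be sketched with Python's standard library; the mapping of the 384-bit digest to chaotic initial conditions below is our illustrative assumption, not the paper's exact construction.

    ```python
    import hashlib

    def chaotic_seeds(plaintext: bytes):
        """Hash the plaintext with SHA3-384 and split the 48-byte digest into
        six 64-bit words, mapped to (0, 1) as initial conditions/parameters."""
        digest = hashlib.sha3_384(plaintext).digest()
        words = [int.from_bytes(digest[i:i + 8], 'big') for i in range(0, 48, 8)]
        return [w / 2**64 for w in words]

    seeds = chaotic_seeds(b'raw image bytes here')
    # Plaintext sensitivity: any single-bit change yields entirely new seeds,
    # which is what gives the one-time pad effect described in the abstract.
    ```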

  11. Neural-network quantum state tomography

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe

    2018-05-01

    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].

  12. Optical image encryption system using nonlinear approach based on biometric authentication

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Sinha, Aloka

    2017-07-01

    A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for the decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside the face biometric-based phase mask key using the base changing rule of logarithms for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and the validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against brute force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance, with desired advantages in comparison to the linear cryptosystem.

  13. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.

  14. Ordering effects of conjugate thermal fields in simulations of molecular liquids: Carbon dioxide and water

    NASA Astrophysics Data System (ADS)

    Dittmar, Harro R.; Kusalik, Peter G.

    2016-10-01

    As shown previously, it is possible to apply configurational and kinetic thermostats simultaneously in order to induce a steady thermal flux in molecular dynamics simulations of many-particle systems. This flux appears to promote motion along potential gradients and can be utilized to enhance the sampling of ordered arrangements, i.e., it can facilitate the formation of a critical nucleus. Here we demonstrate that the same approach can be applied to molecular systems, and report a significant enhancement of the homogeneous crystal nucleation of a carbon dioxide (EPM2 model) system. Quantitative ordering effects and reduction of the particle mobilities were observed in water (TIP4P-2005 model) and carbon dioxide systems. The enhancement of the crystal nucleation of carbon dioxide was achieved with relatively small conjugate thermal fields. The effect is many orders of magnitude larger at milder supercooling, where the forward flux sampling method was employed, than at a lower temperature that enabled brute force simulations of nucleation events. The behaviour exhibited implies that the effective free energy barrier of nucleation must have been reduced by the conjugate thermal field, in line with our interpretation of previous results for atomic systems.

  15. A Site Density Functional Theory for Water: Application to Solvation of Amino Acid Side Chains.

    PubMed

    Liu, Yu; Zhao, Shuangliang; Wu, Jianzhong

    2013-04-09

    We report a site density functional theory (SDFT) based on the conventional atomistic models of water and the universality ansatz of the bridge functional. The excess Helmholtz energy functional is formulated in terms of a quadratic expansion with respect to the local density deviation from that of a uniform system and a universal functional for all higher-order terms approximated by that of a reference hard-sphere system. With the atomistic pair direct correlation functions of the uniform system calculated from MD simulation and an analytical expression for the bridge functional from the modified fundamental measure theory, the SDFT can be used to predict the structure and thermodynamic properties of water under inhomogeneous conditions with a computational cost negligible in comparison to that of brute-force simulations. The numerical performance of the SDFT has been demonstrated with predictions of the solvation free energies of 15 molecular analogs of amino acid side chains in water represented by the SPC/E, SPC, and TIP3P models. For the TIP3P model, a comparison of the theoretical predictions with MD simulation and experimental data shows agreement within 0.64 and 1.09 kcal/mol on average, respectively.

  16. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO

  17. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
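
    For orientation, the underlying estimation task can be sketched as a brute-force scan over candidate orders k with the Hill estimator of α; the paper's contribution, the simulation-based rule for selecting k, is not reproduced here, and the data below are synthetic.

    ```python
    import numpy as np

    def hill_alpha(x, k):
        """Hill estimator of the tail exponent from the k largest observations."""
        xs = np.sort(x)[::-1]
        return 1.0 / np.mean(np.log(xs[:k]) - np.log(xs[k]))

    rng = np.random.default_rng(0)
    x = rng.pareto(1.5, 5000) + 1.0   # heavy-tailed sample, true alpha = 1.5
    alphas = {k: hill_alpha(x, k) for k in range(20, 1000, 20)}  # order scan
    ```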

  18. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    COPs. Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction. Contents cover constraint satisfaction problems, constraint optimization problems, and constraint optimization algorithms, including brute-force search, constraint propagation, depth-first search, and local search.

  19. Strategic Studies Quarterly. Volume 9, Number 2. Summer 2015

    DTIC Science & Technology

    2015-01-01

    disrupting financial markets. Among other indicators, China's already deployed and future Type 094 Jin-class nuclear ballistic missile submarines (SSBN...on agility instead of brute force reinforces traditional Chinese military thinking. Since Sun Tzu, the acme of skill has been winning without... mechanical (both political and technical) nature of digital developments. Given this, the nature of system constraints under a different future

  20. Portable Language-Independent Adaptive Translation from OCR. Phase 1

    DTIC Science & Technology

    2009-04-01

    including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allows...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation

  1. Challenges in the development of very high resolution Earth System Models for climate science

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun

    2017-04-01

    The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms, at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in the development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high-resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute force long climate simulations.

  2. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare-event probabilities in biochemical systems. Obtaining these probabilities using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
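
    A minimal subset simulation sketch on a toy rare event (a Gaussian sum, not a biochemical network; the mapping from the stochastic simulation algorithm to standard normals is omitted): the rare-event probability is accumulated as a product of conditional level probabilities, each level repopulated with a modified Metropolis chain. All names and tuning values are our own.

    ```python
    import numpy as np

    def subset_simulation(g, dim, threshold, p0=0.1, n=1000, max_levels=20, seed=0):
        """Estimate P(g(Z) > threshold) for Z ~ N(0, I) as a product of
        conditional probabilities, one per intermediate level."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n, dim))
        y = np.array([g(zi) for zi in z])
        prob = 1.0
        for _ in range(max_levels):
            order = np.argsort(y)[::-1]              # largest g(z) first
            n_seed = int(p0 * n)
            level = y[order[n_seed - 1]]             # intermediate threshold
            if level >= threshold:                   # final level reached
                return prob * np.mean(y > threshold)
            prob *= p0
            z, y = list(z[order[:n_seed]]), list(y[order[:n_seed]])
            i = 0
            while len(z) < n:                        # modified Metropolis refill
                cand = z[i] + 0.5 * rng.standard_normal(dim)
                accept = (rng.random() < np.exp(0.5 * (z[i] @ z[i] - cand @ cand))
                          and g(cand) > level)       # stay in the level set
                z.append(cand if accept else z[i])
                y.append(g(cand) if accept else y[i])
                i += 1
            z, y = np.array(z), np.array(y)
        return prob

    # Toy rare event: P(sum of 10 iid N(0,1) > 12), roughly 7e-5
    p_hat = subset_simulation(lambda v: v.sum(), dim=10, threshold=12.0)
    ```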

  3. CAD/CAM Helps Build Better Bots: High-Tech Design and Manufacture Draws Engineering-Oriented Students

    ERIC Educational Resources Information Center

    Van Name, Barry

    2012-01-01

    There is a battlefield where no quarter is given, no mercy shown, but not a single drop of blood is spilled. It is an arena that witnesses the bringing together of high-tech design and manufacture with the outpouring of brute force, under the remotely accessed command of some of today's brightest students. This is the world of battling robots, or…

  4. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

    with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = Σ_m Σ_n (A_mn...correlation based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force...by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale

  5. B* Probability Based Search

    DTIC Science & Technology

    1994-06-27

    success. The key ideas behind the algorithm are: 1. Stopping when one alternative is clearly better than all the others, and 2. Focusing the search on...search algorithm has been implemented on the chess machine Hitech. En route we have developed effective techniques for: dealing with independence of...report describes the implementation, and the results of tests including games played against brute-force programs. The data indicate that B* Hitech is a

  6. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
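
    The selection criterion has a direct brute-force rendering: score each candidate experiment by the Shannon entropy of its predicted outcome distribution under posterior model samples, then take the maximizer. The model, grid, and priors below are hypothetical; nested entropy sampling replaces this exhaustive scan.

    ```python
    import numpy as np

    def shannon_entropy(samples, bins=20):
        """Entropy of a predicted outcome distribution, via a histogram."""
        p, _ = np.histogram(samples, bins=bins)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

    # Hypothetical posterior samples for a model y = a * sin(b * x);
    # candidate experiments are measurement locations x.
    rng = np.random.default_rng(0)
    a, b = rng.normal(1.0, 0.3, 200), rng.normal(2.0, 0.5, 200)
    candidates = np.linspace(0.0, 3.0, 61)

    # Brute-force scan over the experiment space
    best_x = max(candidates, key=lambda x: shannon_entropy(a * np.sin(b * x)))
    ```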

  7. Efficient critical design load case identification for floating offshore wind turbines with a reduced nonlinear model

    NASA Astrophysics Data System (ADS)

    Matha, Denis; Sandner, Frank; Schlipf, David

    2014-12-01

    Design verification of wind turbines is performed by simulating the design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Because this requires a large number of load simulations, a method is presented here to reduce the computational effort for DLC simulations significantly by introducing a reduced nonlinear model and simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it very computationally efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations, which can then be analysed with a higher-fidelity tool, speeding up the design process. Results from these reduced-model DLC simulations are presented and compared to higher-fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate that good agreement between the reduced and advanced models is achieved, allowing less critical DLC simulations to be efficiently excluded and the most critical subset of cases to be identified for a given design. Additionally, the model is applicable to brute force optimization of floater control system parameters.

  8. Selectivity trend of gas separation through nanoporous graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hongjun; Chen, Zhongfang; Dai, Sheng

    2015-04-15

    By means of molecular dynamics (MD) simulations, we demonstrate that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulation is H₂ > CO₂ ≫ N₂ > Ar > CH₄, which generally follows the trend in the kinetic diameters. This trend is also confirmed from the fluxes based on the computed free energy barriers for gas permeation, using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO₂/N₂ mixtures further demonstrate the separation capability of nanoporous graphene. Highlights: classical MD simulations show the flux trend H₂ > CO₂ ≫ N₂ > Ar > CH₄ for permeation through porous graphene, in excellent agreement with a recent experiment; free energy calculations yield permeation barriers for those gases; selectivities for several gas pairs are estimated from the free-energy barriers and the kinetic theory of gases.

  9. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time-consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable, and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
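
    The post-processing idea reduces to reweighting each recorded per-particle tally by the ratio of new to old source densities evaluated at that particle's source variables; a sketch with a hypothetical one-dimensional (energy-only) source follows, with all names and spectra invented for illustration.

    ```python
    import numpy as np

    def reweighted_tally(tallies, source_E, pdf_old, pdf_new):
        """Re-estimate a tally under a new source spectrum by weighting each
        recorded particle by pdf_new/pdf_old at its sampled source energy."""
        w = pdf_new(source_E) / pdf_old(source_E)
        return np.mean(tallies * w)

    rng = np.random.default_rng(0)
    E = rng.uniform(0.0, 2.5, 100_000)          # energies from a flat spectrum
    tally = np.exp(-(E - 1.0) ** 2)             # stand-in per-particle response

    flat = lambda e: np.full_like(e, 1.0 / 2.5)
    peaked = lambda e: np.exp(-e) / (1.0 - np.exp(-2.5))  # normalized on [0, 2.5]
    dose_new = reweighted_tally(tally, E, flat, peaked)   # no new transport run
    ```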

  10. Dynamics of neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  13. Free Energy Computations by Minimization of Kullback-Leibler Divergence: An Efficient Adaptive Biasing Potential Method for Sparse Representations

    DTIC Science & Technology

    2011-10-14

    landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy...experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on

  14. Influence of temperature fluctuations on infrared limb radiance: a new simulation code

    NASA Astrophysics Data System (ADS)

    Rialland, Valérie; Chervet, Patrick

    2006-08-01

    Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performance is limited by the inhomogeneous background in the sensor field of view, which strongly impacts the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.

  15. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplifications involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.

  16. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP 430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional integral derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
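
    As a rough illustration of the look-up-table idea, the sketch below (in Python, the thesis's stated simulation language) selects PID gains from a precomputed table keyed by the magnitude of the tracking error. The table entries, band edges, and time step are hypothetical placeholders, not values from the thesis; in a brute-force design, the entries would be filled by exhaustively simulating candidate gain triples and keeping the lowest-energy ones.

        import numpy as np

        # Hypothetical gain table: error-magnitude bands -> (Kp, Ki, Kd).
        GAIN_TABLE = {
            (0.0, 0.5): (2.0, 0.10, 0.50),
            (0.5, 2.0): (3.5, 0.20, 0.80),
            (2.0, np.inf): (5.0, 0.30, 1.20),
        }

        def lookup_gains(error):
            # Pick the gain triple whose band contains |error|.
            for (lo, hi), gains in GAIN_TABLE.items():
                if lo <= abs(error) < hi:
                    return gains
            return GAIN_TABLE[(2.0, np.inf)]

        def pid_step(error, state, dt=0.01):
            # One controller update; state carries the integral and last error.
            kp, ki, kd = lookup_gains(error)
            integral, prev_error = state
            integral += error * dt
            derivative = (error - prev_error) / dt
            u = kp * error + ki * integral + kd * derivative
            return u, (integral, error)

        state = (0.0, 0.0)
        u, state = pid_step(1.2, state)   # e.g. an altitude error of 1.2 m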

  17. Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca

    2017-06-01

    Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely on brute force Monte Carlo (MC) sampling alone; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
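
    The multilevel MC idea can be sketched in a few lines: the estimator telescopes the expectation across resolution levels, E[Q_L] = E[Q_0] + sum over l of E[Q_l - Q_{l-1}], so most samples go to a cheap coarse model and only a few to the expensive level differences. The toy model function, level count, and sample allocation below are assumptions of the sketch, not the PSAAP II solver.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(xi, level):
            # Stand-in for a CFD quantity of interest at grid level `level`;
            # the discretization bias shrinks as the level increases (toy model).
            h = 2.0 ** -level
            return np.sin(xi) + h * np.cos(3.0 * xi)

        def mlmc_estimate(levels, samples_per_level):
            # Telescoping sum; each level difference uses common random inputs,
            # so its variance (and required sample count) is small.
            total = 0.0
            for level, n in zip(levels, samples_per_level):
                xi = rng.normal(size=n)
                if level == 0:
                    total += np.mean(simulate(xi, 0))
                else:
                    total += np.mean(simulate(xi, level) - simulate(xi, level - 1))
            return total

        # Many cheap coarse samples, few expensive fine ones.
        print(mlmc_estimate([0, 1, 2, 3], [4000, 1000, 250, 60]))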

  18. Artificial consciousness and the consciousness-attention dissociation.

    PubMed

    Haladjian, Harry Haroutioun; Montemayor, Carlos

    2016-10-01

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

  19. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute-force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared-memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.

  20. From Coercion to Brute Force: Exploring the Evolution and Consequences of the Responsibility to Protect

    DTIC Science & Technology

    2016-05-26

  1. Brute force absorption contrast microtomography

    NASA Astrophysics Data System (ADS)

    Davis, Graham R.; Mills, David

    2014-09-01

    In laboratory X-ray microtomography (XMT) systems, the signal-to-noise ratio (SNR) is typically determined by the X-ray exposure due to the low flux associated with microfocus X-ray tubes. As the exposure time is increased, the SNR improves up to a point where other sources of variability dominate, such as differences in the sensitivities of adjacent X-ray detector elements. Linear time-delay integration (TDI) readout averages out detector sensitivities in the critical horizontal direction, and equiangular TDI also averages out the X-ray field. This allows the SNR to be increased further with increasing exposure. This has been used in dentistry to great effect, allowing subtle variations in dentine mineralisation to be visualised in three dimensions. It has also been used to detect ink in ancient parchments that are too damaged to physically unroll. If sufficient contrast between the ink and parchment exists, it is possible to virtually unroll the tomographic image of the scroll in order that the text can be read. Following on from this work, a feasibility test was carried out to determine if it might be possible to recover images from decaying film reels. A successful attempt was made to re-create a short film sequence from a rolled length of 16 mm film using XMT. However, the "brute force" method of scaling this up to allow an entire film reel to be imaged presents a significant challenge.

  2. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
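
    Operationally, the surrogate idea reduces to: run the expensive code a handful of times, fit a cheap approximation once, then query it as often as the risk analysis requires. The polynomial below is a stand-in for whatever surrogate family a RISMC analysis would actually use; the toy function, sample count, and degree are assumptions of the sketch.

        import numpy as np

        def expensive_code(x):
            # Stand-in for a simulation run that takes hours/days in practice.
            return np.exp(-x) * np.sin(5.0 * x)

        # A few training runs of the real code...
        x_train = np.linspace(0.0, 1.0, 12)
        y_train = expensive_code(x_train)

        # ...fit a cheap surrogate once...
        coeffs = np.polyfit(x_train, y_train, deg=6)

        # ...then evaluate it millions of times at negligible cost.
        x_query = np.random.default_rng(2).uniform(0.0, 1.0, size=1_000_000)
        y_surrogate = np.polyval(coeffs, x_query)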

  3. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a "brute force" approach.
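
    For contrast, the naive plug-in ("brute force") estimator that such procedures improve upon is straightforward: bin the spike trains into binary words, estimate word probabilities by counting, and take the Shannon entropy. The word length and toy data below are arbitrary; the plug-in estimate is biased downward for limited samples, which is exactly the sampling problem the paper addresses.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(10)
        spikes = rng.random((200, 1000)) < 0.1   # trials x time bins, toy data

        def word_entropy(binary, word_len=8):
            # Plug-in entropy (bits) of non-overlapping spike words.
            words = [tuple(row[i:i + word_len])
                     for row in binary
                     for i in range(0, binary.shape[1] - word_len + 1, word_len)]
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        print(word_entropy(spikes))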

  4. Making Classical Ground State Spin Computing Fault-Tolerant

    DTIC Science & Technology

    2010-06-24

    ...approaches to perebor (brute-force searches) algorithms," IEEE Annals of the History of Computing, 6, 384–400 (1984). [24] D. Bacon and S. T. Flammia, "Adiabatic gate teleportation," Phys. Rev. Lett., 103, 120504 (2009). [25] D. Bacon and S. T. Flammia, "Adiabatic cluster state quantum computing," ...

  5. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods, with a focus on using interactive design tools to create better starting points to streamline the optimization process.
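
    Both methods named above can be shown in miniature: a coarse brute-force grid scan supplies the starting point, which the Downhill Simplex method (Nelder-Mead, here via SciPy) then refines. The two-parameter merit function is a made-up stand-in for a real illumination merit function such as luminance non-uniformity.

        import numpy as np
        from scipy.optimize import minimize

        def merit(p):
            # Stand-in merit function of two design parameters.
            x, y = p
            return (x - 0.3) ** 2 + (y + 0.7) ** 2 + 0.1 * np.sin(8.0 * x)

        # Brute force: evaluate the merit function on a coarse grid...
        grid = np.linspace(-1.0, 1.0, 41)
        best = min((merit((x, y)), (x, y)) for x in grid for y in grid)
        start = best[1]

        # ...then refine the best grid point with Downhill Simplex.
        result = minimize(merit, start, method="Nelder-Mead")
        print(start, "->", result.x)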

  6. Fast equilibration protocol for million atom systems of highly entangled linear polyethylene chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sliozberg, Yelena R.; TKC Global, Inc., Aberdeen Proving Ground, Maryland 21005; Kröger, Martin

    Equilibrated systems of entangled polymer melts cannot be produced using direct brute force equilibration due to the slow reptation dynamics exhibited by high molecular weight chains. Instead, these dense systems are produced using computational techniques such as Monte Carlo-Molecular Dynamics hybrid algorithms, though the use of soft potentials has also shown promise, mainly for coarse-grained polymeric systems. Through the use of soft potentials, the melt can be equilibrated via molecular dynamics at intermediate and long length scales prior to switching to a Lennard-Jones potential. We will outline two different equilibration protocols, which use various degrees of information to produce the starting configurations. In one protocol, we use only the equilibrium bond angle, bond length, and target density during the construction of the simulation cell, where the information is obtained from available experimental data and extracted from the force field without performing any prior simulation. In the second protocol, we moreover utilize the equilibrium radial distribution function and dihedral angle distribution. This information can be obtained from experimental data or from a simulation of short unentangled chains. Both methods can be used to prepare equilibrated and highly entangled systems, but the second protocol is much more computationally efficient. These systems can be strictly monodisperse or optionally polydisperse depending on the starting chain distribution. Our protocols, which utilize a soft-core harmonic potential, will be applied for the first time to equilibrate a million particle system of polyethylene chains consisting of 1000 united atoms at various temperatures. Calculations of structural and entanglement properties demonstrate that this method can be used as an alternative towards the generation of entangled equilibrium structures.

  7. Galaxy two-point covariance matrix estimation for next generation surveys

    NASA Astrophysics Data System (ADS)

    Howlett, Cullan; Percival, Will J.

    2017-12-01

    We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
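
    The brute-force baseline being compared against is the standard ensemble estimate: measure the power spectrum in each mock, then form the sample covariance across the ensemble. The sketch below uses random stand-in spectra; the Hartlap factor for debiasing the inverted covariance is a common convention included here as an assumption, not something stated in the abstract.

        import numpy as np

        rng = np.random.default_rng(3)
        n_mocks, n_kbins = 500, 30
        # Rows = power spectrum measurements from the mock simulations.
        P_mocks = rng.normal(loc=1.0, scale=0.1, size=(n_mocks, n_kbins))

        # Brute-force sample covariance over the mock ensemble.
        diff = P_mocks - P_mocks.mean(axis=0)
        cov = diff.T @ diff / (n_mocks - 1)

        # Hartlap correction to debias the inverse (precision) matrix.
        hartlap = (n_mocks - n_kbins - 2) / (n_mocks - 1)
        precision = hartlap * np.linalg.inv(cov)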

  8. Top-down constraints of regional emissions for KORUS-AQ 2016 field campaign

    NASA Astrophysics Data System (ADS)

    Bae, M.; Yoo, C.; Kim, H. C.; Kim, B. U.; Kim, S.

    2017-12-01

    Accurate estimations of emission rates from local and international sources are essential in regional air quality simulations, especially in assessing the relative contributions from international emission sources. While bottom-up constructions of emission inventories provide detailed information on specific emission types, they are limited in covering regions with rapid changes in anthropogenic emissions (e.g. China) or regions without enough socioeconomic information (e.g. North Korea). We utilized space-borne monitoring of major pollutant precursors to construct realistic emission inputs for chemistry transport models during the KORUS-AQ 2016 field campaign. The base simulation was conducted using the WRF, SMOKE, and CMAQ modeling framework with the CREATE 2015 (Asian countries) and CAPSS 2013 (South Korea) emissions inventories. NOx, SO2, and VOC model emissions were adjusted using the ratios between modeled and observed NO2, SO2, and HCHO column densities and the model's emission-to-density conversion ratio. A brute-force perturbation method was used to separate contributions from North Korea, China, and South Korea for flight pathways during the field campaign. The Backward-Tracking Model Analyzer (BMA), based on the NOAA HYSPLIT trajectory and dispersion model, is also utilized to track histories of chemical processes and emission source apportionment. CMAQ simulations were conducted over East Asia (27 km) and over South and North Korea (9 km) during the KORUS-AQ campaign (1 May to 10 June 2016).

  9. TEAM: efficient two-locus epistasis tests in human genome-wide association study.

    PubMed

    Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei

    2010-06-15

    As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable over single locus study since many diseases are known to be complex traits. A brute force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotype may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate controlling. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order-of-magnitude speedup over the brute force approach.
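
    The contingency-table test at the heart of the brute-force scan is easy to sketch: for each SNP pair, cross-tabulate the nine joint genotype classes against case/control status and apply a chi-square test. The data below are random stand-ins, and TEAM's incremental tree-based updates are deliberately omitted; this is the quadratic baseline the paper accelerates.

        import numpy as np
        from itertools import combinations
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(4)
        n_individuals, n_snps = 500, 50
        genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))  # 0/1/2 copies
        phenotype = rng.integers(0, 2, size=n_individuals)            # case/control

        def pair_test(i, j):
            # Chi-square test on the (up to) 9x2 genotype-pair vs phenotype table.
            cell = genotypes[:, i] * 3 + genotypes[:, j]
            table = np.zeros((9, 2))
            for c, ph in zip(cell, phenotype):
                table[c, ph] += 1
            table = table[table.sum(axis=1) > 0]   # drop empty genotype classes
            return chi2_contingency(table)[1]      # p-value

        # Brute-force scan over all SNP pairs.
        pvals = {(i, j): pair_test(i, j)
                 for i, j in combinations(range(n_snps), 2)}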

  10. Human problem solving performance in a fault diagnosis task

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1978-01-01

    It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.

  11. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
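
    The brute-force Monte-Carlo integration can be illustrated with a toy Gaussian model standing in for the LBA likelihood: the marginal likelihood is the likelihood averaged over draws from the prior, evaluated with log-sum-exp for numerical stability. This is a sketch of the integration scheme only, not the authors' GPU implementation.

        import numpy as np

        rng = np.random.default_rng(5)
        data = rng.normal(loc=0.4, size=50)

        def log_likelihood(mu, x):
            return -0.5 * np.sum((x - mu) ** 2) - 0.5 * len(x) * np.log(2 * np.pi)

        def log_marginal(x, prior_draws):
            # Monte-Carlo integration of the likelihood over the prior.
            log_l = np.array([log_likelihood(mu, x) for mu in prior_draws])
            m = log_l.max()                      # log-sum-exp for stability
            return m + np.log(np.mean(np.exp(log_l - m)))

        # Model 1: mu ~ N(0, 1);  Model 0: mu fixed at 0.
        log_m1 = log_marginal(data, rng.normal(0.0, 1.0, 100_000))
        log_m0 = log_likelihood(0.0, data)
        print("log Bayes factor (M1 vs M0):", log_m1 - log_m0)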

  12. The Trailwatcher: A Collection of Colonel Mike Malone’s Writings

    DTIC Science & Technology

    1982-06-21

  13. KSC00pp1574

    NASA Image and Video Library

    2000-09-21

    Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer work stations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  14. On the Minimal Accuracy Required for Simulating Self-gravitating Systems by Means of Direct N-body Methods

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-01

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.

  15. Automatic Design of Digital Synthetic Gene Circuits

    PubMed Central

    Marchisio, Mario A.; Stelling, Jörg

    2011-01-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700

  16. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
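
    The abstract does not spell out the exact local characteristics used, so the sketch below substitutes a generic Ising-type Gibbs sampler: starting from binary background noise, each sweep resamples every pixel from its conditional distribution given its four neighbours, evolving the field toward a correlated texture. The field size, coupling strength, and sweep count are hypothetical.

        import numpy as np

        rng = np.random.default_rng(6)
        H, W, beta = 60, 160, 0.8                  # field size, coupling strength
        field = rng.choice([-1, 1], size=(H, W))   # initial background noise

        def gibbs_sweep(f):
            # Resample each pixel from its conditional given the 4 neighbours.
            for i in range(H):
                for j in range(W):
                    s = (f[(i - 1) % H, j] + f[(i + 1) % H, j]
                         + f[i, (j - 1) % W] + f[i, (j + 1) % W])
                    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                    f[i, j] = 1 if rng.random() < p_plus else -1

        for _ in range(20):   # evolve the noise toward a correlated texture
            gibbs_sweep(field)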

  17. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  18. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, Abrar; Farrell, TomTom; Diamond, Kevin-Ro

    2016-08-15

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape; all of these measurements were obtained on CT images. A brute-force procedure was first performed for a training dataset of 20 patients using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
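
    The overlap metric used throughout, DSC = 2|A ∩ B| / (|A| + |B|), is a one-liner, reproduced here for reference with toy masks standing in for contours on a CT slice.

        import numpy as np

        def dice(mask_a, mask_b):
            # Dice Similarity Coefficient between two binary masks.
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        # Toy 64x64 masks; in the study these would be CTV or femoral head contours.
        a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True
        b = np.zeros((64, 64), dtype=bool); b[24:44, 22:42] = True
        print(round(dice(a, b), 3))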

  19. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
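
    For scale, the quadratic-time baseline that the tree-based algorithms improve on is simple: histogram every pairwise distance. The bin width, cutoff, and random point data below are arbitrary choices for the sketch.

        import numpy as np

        rng = np.random.default_rng(7)
        points = rng.uniform(0.0, 10.0, size=(2000, 3))   # e.g. atom positions

        def sdh_brute_force(pts, bin_width=0.5, r_max=18.0):
            # O(n^2) SDH: histogram all point-to-point distances.
            edges = np.arange(0.0, r_max + bin_width, bin_width)
            hist = np.zeros(len(edges) - 1, dtype=np.int64)
            for i in range(len(pts) - 1):
                d = np.linalg.norm(pts[i + 1:] - pts[i], axis=1)
                hist += np.histogram(d, bins=edges)[0]
            return edges, hist

        edges, hist = sdh_brute_force(points)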

  1. Security enhanced BioEncoding for protecting iris codes

    NASA Astrophysics Data System (ADS)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

    Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against more smart attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attacks. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  2. Image matching algorithms for breech face marks and firing pins in a database of spent cartridge cases of firearms.

    PubMed

    Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F

    2001-06-01

    Several systems exist on the market for collecting spent ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to another cartridge case. The research in this paper is focused on the different methods of feature selection and pattern recognition that can be used for optimizing the results of image matching. The images are acquired by side light for the breech face marks and by ring light for the firing pin impression. A standard way of digitizing these images is used. For the side light and ring light images this means that the user has to position the cartridge case in the same position according to a protocol. The positioning is important for the side light, since the image that is obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, it appears that the user positions the cartridge case with +/-10 degrees accuracy. We tested our algorithms using 49 cartridge cases of 19 different firearms, where the examiner determined that they were shot with the same firearm. For testing, these images were mixed with a database consisting of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among the matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we have implemented a "brute force" way of registration. The images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position, because shadows and highlights are compared in intensity. Since the angle of incidence of the light will give a different intensity profile, this method is not optimal. For this reason a preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force. Matching the contents of the images is then less sensitive to the variation of the lighting. The problem with the brute-force method, however, is that comparing the 49 cartridge cases among themselves takes over one month of computing time on a 333 MHz Pentium II computer. For this reason a faster approach was implemented: correlation in log polar coordinates. This gave results similar to the brute-force calculation, but was computed in 24 h for the complete database of 4900 images. A fast pre-selection method based on signatures derived from the Kanade-Lucas-Tomasi (KLT) equation is also carried out. The positions of the points computed with this method are compared. In this way, 11 of the 49 images were in the top position in combination with the third scale of the à trous transform. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized to compare in seconds if the locations of the points are stored in files.
    For further improvement, it is useful to let the user select the areas of the cartridge case that carry the relevant marks. This is necessary if the cartridge case is damaged and bears other marks that are not from the firearm.

  3. Impact-Actuated Digging Tool for Lunar Excavation

    NASA Technical Reports Server (NTRS)

    Wilson, Jak; Chu, Philip; Craft, Jack; Zacny, Kris; Santoro, Chris

    2013-01-01

    NASA's plans for a lunar outpost require extensive excavation. The Lunar Surface Systems Project Office projects that thousands of tons of lunar soil will need to be moved. Conventional excavators dig through soil by brute force, and depend upon their substantial weight to react to the forces generated. This approach will not be feasible on the Moon for two reasons: (1) gravity is 1/6th that on Earth, which means that a kg on the Moon will supply 1/6 the down force that it does on Earth, and (2) transportation costs (at the time of this reporting) of $50K to $100K per kg make massive excavators economically unattractive. A percussive excavation system was developed for use in vacuum or near-vacuum environments. It reduces the down force needed for excavation by an order of magnitude by using percussion to assist in soil penetration and digging. The novelty of this excavator is that it incorporates a percussive mechanism suited to sustained operation in a vacuum environment. A percussive digger breadboard was designed, built, and successfully tested under both ambient and vacuum conditions. The breadboard was run in vacuum to more than twice the lifetime of the Apollo Lunar Surface Drill, throughout which the mechanism performed and held up well. The percussive digger was demonstrated to reduce the force necessary for digging in lunar soil simulant by an order of magnitude, providing reductions as high as 45:1. This is an enabling technology for lunar site preparation and ISRU (In Situ Resource Utilization) mining activities. At transportation costs of $50K to $100K per kg, reducing digging forces by an order of magnitude translates into billions of dollars saved by not launching heavier systems to accomplish excavation tasks necessary to the establishment of a lunar outpost. Applications on the lunar surface include excavation for habitats, construction of roads, landing pads, berms, foundations, habitat shielding, and ISRU.

  4. Password Cracking Using Sony Playstations

    NASA Astrophysics Data System (ADS)

    Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet

    Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.

  5. DynaGuard: Armoring Canary-Based Protections against Brute-Force Attacks

    DTIC Science & Technology

    2015-12-11

    [Figure residue: slowdown measured over SPEC CPU2006 benchmarks (456.hmmer, 458.sjeng, 462.libquantum, 464.h264ref, 471.omnetpp, 473.astar, 483.xalancbmk) and server applications (Apache, Nginx, PostgreSQL, SQLite, MySQL).]

  6. The new Mobile Command Center at KSC is important addition to emergency preparedness

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed "The Brute," it also features computer work stations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  7. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
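
    The brute-force reference can be sketched by enumerating every visiting order of the intermediate points. The sketch below substitutes Manhattan distance for the exact grid-network shortest path, which is only valid on an obstacle-free grid; that simplification, and the sample coordinates, are assumptions of the illustration.

        import numpy as np
        from itertools import permutations

        def manhattan(p, q):
            # Grid distance; a stand-in for a true grid shortest-path query.
            return abs(p[0] - q[0]) + abs(p[1] - q[1])

        def brute_force_tour(start, goal, intermediates):
            # Try every visiting order of the unordered intermediate points.
            best_cost, best_order = np.inf, None
            for order in permutations(intermediates):
                stops = [start, *order, goal]
                cost = sum(manhattan(a, b) for a, b in zip(stops, stops[1:]))
                if cost < best_cost:
                    best_cost, best_order = cost, order
            return best_cost, best_order

        print(brute_force_tour((0, 0), (9, 9), [(2, 7), (5, 1), (8, 4)]))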

  8. Arm retraction dynamics of entangled star polymers: A forward flux sampling method study

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Likhtman, Alexei E.; Wang, Zuowei

    2017-07-01

    The study of dynamics and rheology of well-entangled branched polymers remains a challenge for computer simulations due to the exponentially growing terminal relaxation times of these polymers with increasing molecular weights. We present an efficient simulation algorithm for studying the arm retraction dynamics of entangled star polymers by combining the coarse-grained slip-spring (SS) model with the forward flux sampling (FFS) method. This algorithm is first applied to simulate symmetric star polymers in the absence of constraint release (CR). The reaction coordinate for the FFS method is determined by finding good agreement of the simulation results on the terminal relaxation times of mildly entangled stars with those obtained from direct shooting SS model simulations with the relative difference between them less than 5%. The FFS simulations are then carried out for strongly entangled stars with arm lengths up to 16 entanglements that are far beyond the accessibility of brute force simulations in the non-CR condition. Apart from the terminal relaxation times, the same method can also be applied to generate the relaxation spectra of all entanglements along the arms which are desired for the development of quantitative theories of entangled branched polymers. Furthermore, we propose a numerical route to construct the experimentally measurable relaxation correlation functions by effectively linking the data stored at each interface during the FFS runs. The obtained star arm end-to-end vector relaxation functions Φ (t ) and the stress relaxation function G(t) are found to be in reasonably good agreement with standard SS simulation results in the terminal regime. Finally, we demonstrate that this simulation method can be conveniently extended to study the arm-retraction problem in entangled star polymer melts with CR by modifying the definition of the reaction coordinate, while the computational efficiency will depend on the particular slip-spring or slip-link model employed.
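
    The staged structure of forward flux sampling can be conveyed with a toy one-dimensional biased walk in which the reaction coordinate is simply the walker's position: the rare-event probability factorizes into a product of interface-to-interface crossing probabilities. Interface placement, bias, and trial counts are arbitrary, and in this one-dimensional toy the stored crossing "configurations" degenerate to the interface position itself.

        import numpy as np

        rng = np.random.default_rng(8)
        interfaces = [2, 4, 6, 8, 10]   # ordered reaction-coordinate values
        basin = 0                       # boundary of the initial ("A") basin

        def step(x):
            # Downhill-biased walk, so reaching the last interface is rare.
            return x + (1 if rng.random() < 0.4 else -1)

        def shoot(x, target):
            # One trial: run until it reaches `target` or falls back to the basin.
            while basin < x < target:
                x = step(x)
            return x == target

        # Multiply the conditional crossing probabilities stage by stage.
        M, p_total, start = 5000, 1.0, basin + 1
        for lam in interfaces:
            successes = sum(shoot(start, lam) for _ in range(M))
            p_total *= successes / M
            start = lam        # crossings at this interface seed the next stage

        print("FFS estimate:", p_total)
        direct = sum(shoot(basin + 1, interfaces[-1]) for _ in range(M)) / M
        print("brute-force estimate with the same per-stage budget:", direct)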

  9. Evaluation of CMAQ and CAMx Ensemble Air Quality Forecasts during the 2015 MAPS-Seoul Field Campaign

    NASA Astrophysics Data System (ADS)

    Kim, E.; Kim, S.; Bae, C.; Kim, H. C.; Kim, B. U.

    2015-12-01

    The performance of air quality forecasts during the 2015 MAPS-Seoul Field Campaign was evaluated. A forecast system was operated to support the campaign's daily aircraft route decisions for airborne measurements of long-range transported plumes. We utilized two real-time ensemble systems, based on the Weather Research and Forecasting (WRF)-Sparse Matrix Operator Kernel Emissions (SMOKE)-Comprehensive Air quality Model with extensions (CAMx) modeling framework and the WRF-SMOKE-Community Multiscale Air Quality (CMAQ) framework, over northeastern Asia to simulate PM10 concentrations. The Global Forecast System (GFS) from the National Centers for Environmental Prediction (NCEP) was used to provide meteorological inputs for the forecasts. For an additional set of retrospective simulations, the ERA-Interim Reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) was also utilized to assess forecast uncertainties arising from the meteorological data used. The Model Inter-Comparison Study for Asia (MICS-Asia) and National Institute of Environment Research (NIER) Clean Air Policy Support System (CAPSS) emission inventories are used for foreign and domestic emissions, respectively. In this study, we evaluate the CMAQ and CAMx model performance during the campaign by comparing the results to the airborne and surface measurements. Contributions of foreign and domestic emissions are estimated using a brute-force method. Analyses of model performance and emissions will be utilized to improve air quality forecasts for the upcoming KORUS-AQ field campaign planned in 2016.

  10. Efficiently mapping structure-property relationships of gas adsorption in porous materials: application to Xe adsorption.

    PubMed

    Kaija, A R; Wilmer, C E

    2017-09-08

    Designing better porous materials for gas storage or separations applications frequently leverages known structure-property relationships. Reliable structure-property relationships, however, only reveal themselves when adsorption data on many porous materials are aggregated and compared. Gathering enough data experimentally is prohibitively time consuming, and even approaches based on large-scale computer simulations face challenges. Brute force computational screening approaches that do not efficiently sample the space of porous materials may be ineffective when the number of possible materials is too large. Here we describe a general and efficient computational method for mapping structure-property spaces of porous materials that can be useful for adsorption related applications. We describe an algorithm that generates random porous "pseudomaterials", for which we calculate structural characteristics (e.g., surface area, pore size and void fraction) and also gas adsorption properties via molecular simulations. Here we chose to focus on void fraction and Xe adsorption at 1 bar, 5 bar, and 10 bar. The algorithm then identifies pseudomaterials with rare combinations of void fraction and Xe adsorption and mutates them to generate new pseudomaterials, thereby selectively adding data only to those parts of the structure-property map that are the least explored. Use of this method can help guide the design of new porous materials for gas storage and separations applications in the future.

  11. Molecular Dynamics Simulations of Protein-Ligand Complexes in Near Physiological Conditions

    NASA Astrophysics Data System (ADS)

    Wambo, Thierry Oscar

    Proteins are important molecules because of their key functions. However, under certain circumstances, the function of these proteins needs to be regulated to keep us healthy. Ligands are small molecules often used to modulate the function of proteins. The binding affinity is a quantitative measure of how strongly the ligand will modulate the function of the protein: a strong binding affinity will highly impact the performance of the protein. It is therefore critical to have appropriate techniques to accurately compute the binding affinity. The most difficult task in computer simulations is how to efficiently sample the space spanned by the ligand during the binding process. In this work, we have developed schemes to compute the binding affinity of a ligand to a protein, and of a metal ion to a protein. Application of these techniques to several complexes yields results in agreement with experimental values. These methods are a brute-force approach and make no assumption other than that the complexes are governed by the force field used. Specifically, we computed the free energy of binding between (1) human carbonic anhydrase II and the drug acetazolamide (hcaII-AZM), (2) human carbonic anhydrase II and the zinc ion (hcaII-Zinc), and (3) beta-lactoglobulin and five fatty acids (BLG-FAs). We found the following free energies of binding, in kcal/mol: -12.96 ± 2.44 (-15.74) for the hcaII-Zinc complex, -5.76 ± 0.76 (-5.57) for BLG-OCA, -4.44 ± 1.08 (-5.22) for BLG-DKA, -6.89 ± 1.25 (-7.24) for BLG-DAO, -8.57 ± 0.82 (-8.14) for BLG-MYR, -8.99 ± 0.87 (-8.72) for BLG-PLM, and -11.87 ± 1.8 (-10.8) for hcaII-AZM. The values inside the parentheses are experimental results. The simulations and quantitative analysis of each system provide interesting insights into the interactions between each entity and help us to better understand the dynamics of these systems.

  12. "The Et Tu Brute Complex" Compulsive Self Betrayal

    ERIC Educational Resources Information Center

    Antus, Robert Lawrence

    2006-01-01

    In this article, the author discusses "The Et Tu Brute Complex." More specifically, this phenomenon occurs when a person, instead of supporting and befriending himself, orally condemns himself in front of other people and becomes his own worst enemy. This is a form of compulsive self-hatred. Most often, the victim of this complex is unaware of the…

  13. Artificial immune system algorithm in VLSI circuit configuration

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization, anomaly detection, and pattern recognition problems. This paper discusses the implementation and performance of the artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in the VLSI circuit configuration. In this paper, we compared the artificial immune system algorithm (HNN-3SATAIS) with the brute-force algorithm incorporated with the Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results depict that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.

  14. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    NASA Astrophysics Data System (ADS)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

    In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure as compared to intensity images because of non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attacks, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.
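
    The double-random-phase core of such a scheme is compact enough to sketch. The version below (Python with numpy, standing in for the paper's MATLAB) applies independent uniform random masks rather than Lorenz-generated ones and omits the amplitude mask, so it is only a schematic of the 4f encryption/decryption pipeline.

        import numpy as np

        rng = np.random.default_rng(9)

        def drpe_encrypt(img, phi1, phi2):
            # Random phase mask in the spatial plane, second mask in Fourier plane.
            field = img * np.exp(1j * phi1)
            spectrum = np.fft.fft2(field) * np.exp(1j * phi2)
            return np.fft.ifft2(spectrum)

        def drpe_decrypt(cipher, phi1, phi2):
            # Undo the two masks in reverse order.
            spectrum = np.fft.fft2(cipher) * np.exp(-1j * phi2)
            return np.fft.ifft2(spectrum) * np.exp(-1j * phi1)

        img = rng.random((128, 128))              # stand-in grayscale image
        phi1 = 2 * np.pi * rng.random(img.shape)  # keys; in the paper these would
        phi2 = 2 * np.pi * rng.random(img.shape)  # be derived from the Lorenz system

        cipher = drpe_encrypt(img, phi1, phi2)
        assert np.allclose(drpe_decrypt(cipher, phi1, phi2), img)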

  15. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
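
    The accuracy criterion above is phrased in terms of energy conservation, which a direct N-body code monitors by recomputing the total energy as the integration proceeds. A minimal sketch follows (illustrative units, no softening):

        import numpy as np

        def total_energy(m, pos, vel, G=1.0):
            """Kinetic plus pairwise potential energy of an N-body system;
            the relative drift of this quantity is the standard accuracy
            diagnostic for direct integrations."""
            kinetic = 0.5 * np.sum(m * np.sum(vel**2, axis=1))
            potential = 0.0
            for i in range(len(m)):
                for j in range(i + 1, len(m)):
                    potential -= G * m[i] * m[j] / np.linalg.norm(pos[i] - pos[j])
            return kinetic + potential

        # relative energy error over an integration, given initial energy e0:
        # err = abs((total_energy(m, pos, vel) - e0) / e0)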

  16. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
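
    The "naive brute-force approach based on nested grids" amounts to evaluating the goodness-of-fit on a coarse parameter grid and recursively refining around the best cell. A rough Python sketch follows; the chi-squared callback, grid resolution, and shrink factor are assumptions for illustration, not GBKFIT's implementation.

        import numpy as np

        def nested_grid_fit(chi2, bounds, n=11, levels=4, shrink=0.25):
            """Brute-force fit: evaluate chi2 on an n-points-per-axis grid
            inside `bounds`, then recentre a shrunken grid on the best point
            and repeat. `chi2` takes a parameter vector; `bounds` is a list
            of (lo, hi) pairs."""
            for _ in range(levels):
                axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
                grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
                pts = grid.reshape(-1, len(bounds))
                best = pts[np.argmin([chi2(p) for p in pts])]
                bounds = [(b - shrink * (hi - lo), b + shrink * (hi - lo))
                          for b, (lo, hi) in zip(best, bounds)]
            return best

        # e.g. recover two rotation-curve parameters of a toy model:
        # best = nested_grid_fit(lambda p: np.sum((v_obs - model(r, *p))**2),
        #                        [(50, 350), (0.1, 10.0)])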

  17. Accelerated Time-Domain Modeling of Electromagnetic Pulse Excitation of Finite-Length Dissipative Conductors over a Ground Plane via Function Fitting and Recursive Convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh

    In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
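
    The enabling identity is that convolution with an exponential kernel admits a one-step recursion, so the cost per time step stays constant instead of growing with the history length. A minimal Python sketch follows; the discrete-time form and sampling convention are assumptions, not the report's exact formulation.

        import numpy as np

        def recursive_convolution(x, amps, rates, dt):
            """Convolve x[n] with h(t) = sum_k amps[k]*exp(-rates[k]*t),
            sampled at dt, using the recursion
                y_k[n] = exp(-rates[k]*dt) * y_k[n-1] + amps[k] * x[n],
            so only the previous state (not the full history) is stored."""
            amps = np.asarray(amps, dtype=complex)
            decay = np.exp(-np.asarray(rates, dtype=complex) * dt)
            state = np.zeros(len(amps), dtype=complex)
            y = np.empty(len(x), dtype=complex)
            for n, xn in enumerate(x):
                state = decay * state + amps * xn
                y[n] = state.sum()
            return y * dt   # approximates the continuous-time integral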

  18. Allosteric effects of gold nanoparticles on human serum albumin.

    PubMed

    Shao, Qing; Hall, Carol K

    2017-01-07

    The ability of nanoparticles to alter protein structure and dynamics plays an important role in their medical and biological applications. We investigate allosteric effects of gold nanoparticles on human serum albumin protein using molecular simulations. The extent to which bound nanoparticles influence the structure and dynamics of residues distant from the binding site is analyzed. The root mean square deviation, root mean square fluctuation and variation in the secondary structure of individual residues on a human serum albumin protein are calculated for four protein-gold nanoparticle binding complexes. The complexes are identified in a brute-force search process using an implicit-solvent coarse-grained model for proteins and nanoparticles. They are then converted to atomic resolution and their structural and dynamic properties are investigated using explicit-solvent atomistic molecular dynamics simulations. The results show that even though the albumin protein remains in a folded structure, the presence of a gold nanoparticle can cause more than 50% of the residues to decrease their flexibility significantly, and approximately 10% of the residues to change their secondary structure. These affected residues are distributed on the whole protein, even on regions that are distant from the nanoparticle. We analyze the changes in structure and flexibility of amino acid residues on a variety of binding sites on albumin and confirm that nanoparticles could allosterically affect the ability of albumin to bind fatty acids, thyroxin and metals. Our simulations suggest that allosteric effects must be considered when designing and deploying nanoparticles in medical and biological applications that depend on protein-nanoparticle interactions.
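
    The flexibility comparison described above reduces to per-residue root mean square fluctuations computed from aligned trajectories with and without the bound nanoparticle. A minimal numpy sketch follows; the array shapes and helper names are hypothetical, and the 50% threshold simply mirrors the figure quoted in the abstract.

        import numpy as np

        def rmsf(traj):
            """Per-residue RMSF from an aligned trajectory with shape
            (n_frames, n_residues, 3): the RMS deviation of each residue
            from its time-averaged position."""
            mean = traj.mean(axis=0)                       # average structure
            return np.sqrt(((traj - mean)**2).sum(axis=2).mean(axis=0))

        # flex_drop = (rmsf(traj_free) - rmsf(traj_bound)) / rmsf(traj_free)
        # affected = np.where(flex_drop > 0.5)[0]  # residues losing >50% flexibility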

  19. The new Mobile Command Center at KSC is an important addition to emergency preparedness

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed "The Brute," serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer workstations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  20. A nonperturbative approximation for the moderate Reynolds number Navier–Stokes equations

    PubMed Central

    Roper, Marcus; Brenner, Michael P.

    2009-01-01

    The nonlinearity of the Navier–Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier–Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes. PMID:19211800

  1. A nonperturbative approximation for the moderate Reynolds number Navier-Stokes equations.

    PubMed

    Roper, Marcus; Brenner, Michael P

    2009-03-03

    The nonlinearity of the Navier-Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier-Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes.

  2. KSC-00pp1572

    NASA Image and Video Library

    2000-09-21

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed “The Brute,” serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer workstations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  3. KSC00pp1572

    NASA Image and Video Library

    2000-09-21

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed “The Brute,” serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer workstations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  4. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that, while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousand atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well. The robustness of the algorithm with respect to the only free parameter it involves is ascertained. The method is then applied to perform tensile tests on gold nanopillars at strain rates as low as 100/s, bringing out the perils of high strain-rate molecular dynamics calculations. We also calculate the temperature and stress dependence of the activation free energy for surface nucleation of dislocations in pristine gold nanopillars under realistic loads. While maintaining fully atomistic resolution, we reach the fraction-of-a-second time scale regime. It is found that the activation free energy depends significantly and nonlinearly on the driving force (stress or strain) and temperature, leading to very high activation entropies for surface dislocation nucleation.

  5. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of the accuracy of (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
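
    As a rough illustration of what a "brute force method" for sensitivity indices involves, the sketch below estimates a first-order index by double-loop Monte Carlo; the toy model and sampler are placeholders, not the study's wind-energy model.

        import numpy as np

        def first_order_index(model, sampler, i, n_outer=200, n_inner=200):
            """Brute-force (double-loop Monte Carlo) estimate of the
            first-order Sobol index S_i = Var(E[Y | X_i]) / Var(Y).
            `sampler(n)` draws an (n, d) matrix of independent inputs;
            `model` maps one input row to a scalar output."""
            fixed = sampler(n_outer)
            cond_means = np.empty(n_outer)
            for k in range(n_outer):
                inner = sampler(n_inner)
                inner[:, i] = fixed[k, i]          # freeze X_i, vary the rest
                cond_means[k] = np.mean([model(row) for row in inner])
            y = np.array([model(row) for row in sampler(n_outer * n_inner)])
            return cond_means.var() / y.var()

        # toy model with an obvious importance ranking of its three inputs:
        # sampler = lambda n: np.random.default_rng().uniform(0, 1, (n, 3))
        # print(first_order_index(lambda x: x[0] + 5 * x[1] + 0.1 * x[2], sampler, 1))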

  6. Develop a solution for protecting and securing enterprise networks from malicious attacks

    NASA Astrophysics Data System (ADS)

    Kamuru, Harshitha; Nijim, Mais

    2014-05-01

    In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has huge malicious intent." A firewall acts as one measure to secure a device from incoming unauthorized data. There are an infinite number of computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user. From the network's perspective, there are numerous types of attack. All the attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. The Juniper firewall has the capability to protect against both types of attack. Denial of Service (DoS) attacks are among the most well-known network security threats under brute force attacks, largely due to the high-profile way in which they can affect networks. Over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus, namely, to cause the services running on a particular host or network to become unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack. Others submerge a network or device with traffic so that there are no more resources to handle legitimate traffic. Precision attacks typically involve multiple phases and often involve a bit more thought than brute force attacks, all the way from reconnaissance to machine ownership. Before a precision attack is launched, information about the victim needs to be gathered. This information gathering typically takes the form of various types of scans to determine available hosts, networks, and ports. The hosts available on a network can be determined by ping sweeps. The available ports on a machine can be located by port scans. Screens cover a wide variety of attack traffic as they are configured on a per-zone basis. Depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. The Juniper firewall handles traffic on a per-flow basis. We can use flows or sessions as a way to determine whether traffic attempting to traverse the firewall is legitimate. We control the state-checking components resident in the Juniper firewall by configuring "flow" settings. These settings allow you to configure state checking for various conditions on the device. You can use flow settings to protect against TCP hijacking, and to generally ensure that the firewall is performing full state processing when desired. We take a case study of an attack on a network and study the detection of malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.

  7. A Modern Picture of Barred Galaxy Dynamics

    NASA Astrophysics Data System (ADS)

    Petersen, Michael; Weinberg, Martin; Katz, Neal

    2018-01-01

    Observations of disk galaxies suggest that bars are responsible for altering global galaxy parameters (e.g. structure, gas fraction, star formation rate). The canonical understanding of the mechanisms underpinning bar-driven secular dynamics in disk galaxies has been largely built upon the analysis of linear theory, despite galactic bars being clearly demonstrated to be nonlinear phenomena in n-body simulations. We present simulations of barred Milky Way-like galaxy models designed to elucidate nonlinear barred galaxy dynamics. We have developed two new methodologies for analyzing n-body simulations that combine the strengths of powerful analytic linear theory and brute-force simulation analysis: orbit family identification and multicomponent torque analysis. The software will be offered publicly to the community for their own simulation analysis. The orbit classifier reveals that the details of kinematic components in galactic disks (e.g. the bar, bulge, thin disk, and thick disk components) are powerful discriminators of evolutionary paradigms (i.e. violent instabilities and secular evolution) as well as of the basic parameters of the dark matter halo (mass distribution, angular momentum distribution). Multicomponent torque analysis provides a thorough accounting of the transfer of angular momentum between orbits, global patterns, and distinct components in order to better explain the underlying physics which govern the secular evolution of barred disk galaxies. Using these methodologies, we are able to identify the successes and failures of linear theory and traditional n-body simulations en route to a detailed understanding of the control bars exhibit over secular evolution in galaxies. We present explanations for observed physical and velocity structures in observations of barred galaxies alongside predictions for how structures will vary with dynamical properties from galaxy to galaxy as well as over the lifetime of a galaxy, finding that the transfer of angular momentum through previously unidentified channels can more fully explain the observed dynamics.

  8. Adaptive photoacoustic imaging quality optimization with EMD and reconstruction

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.

    2016-10-01

    Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed across different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by means of brute-force search algorithms costs too much time, which prevents this method from practical use. To find parameters within reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for optimal parameters of the IMFs in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which might bear potential for future clinical PA imaging de-noising.
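
    As an illustration of how a heuristic replaces brute-force parameter search in this setting, the sketch below runs a generic simulated-annealing loop over per-IMF gain factors; the quality metric, step size, and cooling schedule are placeholders, not the authors' settings.

        import numpy as np

        def anneal_imf_weights(imfs, quality, t0=1.0, cooling=0.97, steps=2000):
            """Simulated annealing over per-IMF gains in [0, 1]. `imfs` is a
            (k, n) array of intrinsic mode functions; `quality` scores the
            weighted reconstruction (higher is better), standing in for an
            image-quality metric on the reconstructed PAT image."""
            rng = np.random.default_rng(1)
            w = np.ones(len(imfs))
            q = quality(w @ imfs)
            best_w, best_q, t = w.copy(), q, t0
            for _ in range(steps):
                cand = np.clip(w + rng.normal(0, 0.1, len(w)), 0, 1)
                cq = quality(cand @ imfs)
                # always accept improvements; accept worse moves with
                # probability exp((cq - q) / t), which shrinks as t cools
                if cq > q or rng.random() < np.exp((cq - q) / t):
                    w, q = cand, cq
                    if q > best_q:
                        best_w, best_q = w.copy(), q
                t *= cooling
            return best_w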

  9. Are Individuals Luck Egalitarians? – An Experiment on the Influence of Brute and Option Luck on Social Preferences

    PubMed Central

    Tinghög, Gustav; Andersson, David; Västfjäll, Daniel

    2017-01-01

    According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals' deliberate and fully informed choices (i.e., option luck), while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals' fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preferences not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign to individual responsibility for life outcomes, i.e., health and wealth. PMID:28424641

  10. Are Individuals Luck Egalitarians? - An Experiment on the Influence of Brute and Option Luck on Social Preferences.

    PubMed

    Tinghög, Gustav; Andersson, David; Västfjäll, Daniel

    2017-01-01

    According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals' deliberate and fully informed choices (i.e., option luck), while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals' fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preferences not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign to individual responsibility for life outcomes, i.e., health and wealth.

  11. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational efforts and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
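
    The heart of the approach is replacing the expensive simulator with a polynomial surrogate fitted to a modest number of runs, after which uncertainty propagation is cheap. The sketch below uses a plain least-squares quadratic response surface as a stand-in for the probabilistic collocation construction; it is illustrative, not the paper's method.

        import numpy as np
        from itertools import combinations_with_replacement

        def fit_response_surface(X, y, degree=2):
            """Least-squares polynomial surrogate y ~ f(X): build all
            monomials up to `degree` in the columns of X, solve for their
            coefficients, and return a callable surrogate model."""
            terms = [()]                                   # constant term
            for d in range(1, degree + 1):
                terms += list(combinations_with_replacement(range(X.shape[1]), d))

            def design(A):
                return np.column_stack(
                    [np.prod(A[:, t], axis=1) if t else np.ones(len(A))
                     for t in terms])

            coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
            return lambda Xnew: design(Xnew) @ coef

        # cheap Monte Carlo on the surrogate instead of the simulator:
        # surrogate = fit_response_surface(X_runs, y_runs)
        # leakage = surrogate(parameter_samples)
        # p_failure = np.mean(leakage > threshold)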

  12. A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere

    NASA Astrophysics Data System (ADS)

    Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael

    2017-05-01

    We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.

  13. I Hear You Eat and Speak: Automatic Recognition of Eating Condition and Food Type, Use-Cases, and Impact on ASR Performance

    PubMed Central

    Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn

    2016-01-01

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start with demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features as well as higher-level features related to intelligibility, obtained from an Automatic Speech Recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, which reaches up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food, as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to 56.2% determination coefficient. PMID:27176486
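
    The evaluation protocol described above (an SVM in a leave-one-speaker-out framework) maps directly onto standard tooling. A minimal scikit-learn sketch follows, with random placeholder features and labels standing in for the iHEARu-EAT data.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.svm import SVC

        # placeholders: acoustic features, eating/not-eating labels, and
        # one speaker id per utterance (30 speakers, 10 utterances each)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 20))
        y = rng.integers(0, 2, 300)
        speakers = np.repeat(np.arange(30), 10)

        # each fold holds out every utterance of one speaker, so the
        # classifier is never tested on a voice it was trained on
        scores = cross_val_score(SVC(kernel="linear"), X, y,
                                 groups=speakers, cv=LeaveOneGroupOut())
        print(scores.mean())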

  14. Toward an optimal online checkpoint solution under a two-level HPC checkpoint model

    DOE PAGES

    Di, Sheng; Robert, Yves; Vivien, Frederic; ...

    2016-03-29

    The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
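
    For orientation, the classical single-level result that such work generalizes is the Young/Daly checkpoint period, roughly the square root of twice the checkpoint cost times the mean time between failures. A back-of-the-envelope sketch follows; the costs and failure rates are invented for illustration.

        import math

        def young_daly_interval(checkpoint_cost, mtbf):
            """Classical single-level rule of thumb: the checkpoint period
            minimizing expected waste is about sqrt(2 * C * MTBF). A
            two-level scheme applies this kind of reasoning per level,
            with cheap checkpoints taken far more often than heavy ones."""
            return math.sqrt(2.0 * checkpoint_cost * mtbf)

        # level 1: cheap in-memory checkpoints against transient errors
        print(young_daly_interval(checkpoint_cost=5.0, mtbf=3600.0))     # ~190 s
        # level 2: heavy disk checkpoints against node crashes
        print(young_daly_interval(checkpoint_cost=120.0, mtbf=86400.0))  # ~4554 s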

  15. Morphodynamic data assimilation used to understand changing coasts

    USGS Publications Warehouse

    Plant, Nathaniel G.; Long, Joseph W.

    2015-01-01

    Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
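
    Of the assimilation forms listed, the Kalman filter is the easiest to show compactly: each update blends a model prediction with an observation, weighted by their variances. A minimal scalar sketch follows; the shoreline numbers are invented for illustration.

        def kalman_update(x_pred, p_pred, z_obs, r_obs):
            """One scalar Kalman step: blend a model prediction (variance
            p_pred) with an observation (variance r_obs). The gain sets how
            far to move toward the data -- the essence of assimilation."""
            gain = p_pred / (p_pred + r_obs)
            x_new = x_pred + gain * (z_obs - x_pred)
            p_new = (1.0 - gain) * p_pred
            return x_new, p_new

        # model predicts the shoreline at 42.0 m (variance 9); a survey
        # observes 45.0 m (variance 1): the blend lands close to the data
        print(kalman_update(42.0, 9.0, 45.0, 1.0))   # -> approx (44.7, 0.9)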

  16. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces eight cipher blocks that are combined into one ciphertext, but the ciphertext is weak against brute force attacks. Therefore, the eight cipher blocks are embedded into eight random images using the Least Significant Bit (LSB) algorithm, which hides the output of the DES algorithm in the images before they are merged into one.
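
    A minimal sketch of the LSB embedding step is given below, assuming the DES ciphertext is already available as bytes; the cover-image handling is illustrative, not the paper's exact construction.

        import numpy as np

        def lsb_embed(image, payload):
            """Hide `payload` (bytes) in the least significant bits of a
            uint8 cover image: each payload bit replaces one pixel's LSB,
            changing that pixel by at most one gray level."""
            bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
            flat = image.reshape(-1).copy()
            if len(bits) > len(flat):
                raise ValueError("payload too large for cover image")
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
            return flat.reshape(image.shape)

        def lsb_extract(image, n_bytes):
            """Recover n_bytes of payload from the image's LSBs."""
            bits = image.reshape(-1)[:n_bytes * 8] & 1
            return np.packbits(bits).tobytes()

        cover = np.zeros((64, 64), dtype=np.uint8)
        stego = lsb_embed(cover, b"DES ciphertext block")
        assert lsb_extract(stego, 20) == b"DES ciphertext block"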

  17. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  18. A Formal Algorithm for Routing Traces on a Printed Circuit Board

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    1996-01-01

    This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by a computer other than by brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which these solutions are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and for the first time speaks to the question eloquently by way of symbolic statements.

  19. Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc

    Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are either to develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or to develop coarse-grained methodologies that recast the problem in reduced spaces that can be exhaustively searched. This thesis will begin with a description of work carried out in the vein of the second approach, where a Smoluchowski diffusion equation model was developed that accurately reproduces the rate vs. force relationship observed in the mechano-catalytic disulphide bond cleavage observed in thioredoxin-catalyzed reduction of disulphide bonds. Next, three different novel enhanced sampling methods developed in the vein of the first approach will be described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.

  20. From "brute" to "thug:" the demonization and criminalization of unarmed Black male victims in America.

    PubMed

    Smiley, CalvinJohn; Fakunle, David

    The synonymy of Blackness with criminality is not a new phenomenon in America. Documented historical accounts have shown how myths, stereotypes, and racist ideologies led to discriminatory policies and court rulings that fueled racial violence in a post-Reconstruction era and has culminated in the exponential increase of Black male incarceration today. Misconceptions and prejudices manufactured and disseminated through various channels such as the media included references to a "brute" image of Black males. In the 21st century, this negative imagery of Black males has frequently utilized the negative connotation of the terminology "thug." In recent years, law enforcement agencies have unreasonably used deadly force on Black males allegedly considered to be "suspects" or "persons of interest." The exploitation of these often-targeted victims' criminal records, physical appearances, or misperceived attributes has been used to justify their unlawful deaths. Despite the connection between disproportionate criminality and Black masculinity, little research has been done on how unarmed Black male victims, particularly but not exclusively at the hands of law enforcement, have been posthumously criminalized. This paper investigates the historical criminalization of Black males and its connection to contemporary unarmed victims of law enforcement. Action research methodology in the data collection process is utilized to interpret how Black male victims are portrayed by traditional mass media, particularly through the use of language, in ways that marginalize and de-victimize these individuals. This study also aims to elucidate a contemporary understanding of race relations, racism, and the plight of the Black male in a 21st-century "post-racial" America.

  1. Monte Carlo based investigation of berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
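
    For context, the unpolarized core of such a Monte Carlo is a random walk with exponential free paths and Henyey-Greenstein scattering; the sketch below tracks only penetration depth and omits the polarization-state and reference-frame (Berry phase) bookkeeping that is this paper's actual contribution. The optical properties are illustrative, and the sampling formula assumes a nonzero anisotropy g.

        import numpy as np

        def max_depths(mu_s=10.0, mu_a=0.1, g=0.9, n_photons=10000, seed=0):
            """Launch photons downward into a half-space (z > 0) and return
            each photon's maximum depth before escape or absorption."""
            rng = np.random.default_rng(seed)
            mu_t = mu_s + mu_a
            depths = np.empty(n_photons)
            for n in range(n_photons):
                pos, u, deepest = np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.0
                while True:
                    pos = pos + u * rng.exponential(1.0 / mu_t)  # free path
                    deepest = max(deepest, pos[2])
                    if pos[2] < 0 or rng.random() < mu_a / mu_t:
                        break                     # escaped or absorbed
                    # sample Henyey-Greenstein deflection angle (g != 0)
                    s = (1 - g * g) / (1 - g + 2 * g * rng.random())
                    cos_t = (1 + g * g - s * s) / (2 * g)
                    sin_t = np.sqrt(max(0.0, 1 - cos_t * cos_t))
                    phi = 2 * np.pi * rng.random()
                    ux, uy, uz = u
                    if abs(uz) > 0.99999:         # avoid division by zero
                        u = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                                      np.sign(uz) * cos_t])
                    else:
                        d = np.sqrt(1 - uz * uz)
                        u = np.array([
                            sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / d
                            + ux * cos_t,
                            sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / d
                            + uy * cos_t,
                            -sin_t * np.cos(phi) * d + uz * cos_t])
                depths[n] = deepest
            return depths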

  2. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  3. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
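
    The semi-analytical access mentioned above comes from the closed form for Gaussian differential entropy. The sketch below computes the exact entropy of a covariance matrix together with its first- and second-order MIE approximations; how the covariance is obtained is left to the reader.

        import numpy as np
        from itertools import combinations

        def gaussian_entropy(cov):
            """Differential entropy of a multivariate Gaussian:
            H = 0.5 * ln((2*pi*e)^m * det(cov))."""
            m = cov.shape[0]
            _, logdet = np.linalg.slogdet(cov)
            return 0.5 * (m * np.log(2 * np.pi * np.e) + logdet)

        def mie_entropy(cov, order=2):
            """Mutual information expansion for a Gaussian: first order is
            the sum of marginal entropies; second order subtracts every
            pairwise mutual information I(i,j) = H_i + H_j - H_ij."""
            h = [gaussian_entropy(cov[np.ix_([i], [i])]) for i in range(len(cov))]
            if order == 1:
                return sum(h)
            mi = sum(h[i] + h[j] - gaussian_entropy(cov[np.ix_([i, j], [i, j])])
                     for i, j in combinations(range(len(cov)), 2))
            return sum(h) - mi

        # compare mie_entropy(cov, 1), mie_entropy(cov, 2), gaussian_entropy(cov)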

  4. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    PubMed

    Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M

    2013-01-01

    This paper presents an implementation of the brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for an ultra-large high-dimensional data cloud. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multi-levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
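
    The underlying computation is simple to state: all pairwise distances, then selection of the k smallest per row. A single-node numpy sketch follows (the paper's contribution lies in tiling this computation across GPUs and cluster nodes); it is illustrative, not the authors' CUDA code.

        import numpy as np

        def knn_graph(X, k):
            """Brute-force exact k-NNG: compute all pairwise squared
            Euclidean distances, then argpartition to pick each row's k
            nearest neighbours (excluding the point itself)."""
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
            np.fill_diagonal(d2, np.inf)       # a point is not its own neighbour
            idx = np.argpartition(d2, k, axis=1)[:, :k]
            rows = np.arange(len(X))[:, None]
            order = np.argsort(d2[rows, idx], axis=1)  # sort the k picks
            return idx[rows, order]

        # X = np.random.default_rng(0).normal(size=(2000, 64))
        # graph = knn_graph(X, 10)        # shape (2000, 10)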

  5. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  6. Hierarchical Material Properties in Finite Element Analysis: The Oilfield Infrastructure Problem.

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Wilson, G. A.

    2017-12-01

    Geophysical simulation of low-frequency electromagnetic signals within built environments such as urban centers and industrial facilities is a challenging computational problem because strong conductors (e.g., pipes, fences, rail lines, rebar, etc.) are not only highly conductive and/or magnetic relative to the surrounding geology, but also very small in one or more of their physical length coordinates. Realistic modeling of such structures as idealized conductors has long been the standard approach; however, this strategy carries with it computational burdens such as cumbersome implementation of internal boundary conditions and limited flexibility for accommodating realistic geometries. Another standard approach is "brute force" discretization (often coupled with an equivalent medium model) whereby hundreds of millions of voxels are used to represent these strong conductors, but at the cost of extreme mesh design effort and computation times, when a simulation result is attainable at all. To minimize these burdens, a new finite element scheme (Weiss, Geophysics, 2017) has been developed in which the material properties reside on a hierarchy of geometric simplices (i.e., edges, facets and volumes) within an unstructured tetrahedral mesh. This allows thin sheet-like structures, such as subsurface fractures, to be economically represented by a connected set of triangular facets, for example, that freely conform to arbitrary "real world" geometries. The same holds for thin pipe/wire-like structures, such as casings or pipelines. The hierarchical finite element scheme has been applied to problems in electro- and magnetostatics for oilfield problems where the elevated, but finite, conductivity and permeability of steel-cased oil wells must be properly accounted for, yielding results that are otherwise unobtainable, with run times as low as a few tens of seconds. Extension of the hierarchical finite element concept to broadband electromagnetics is presently underway, as are its implications for geophysical inversion.

  7. SIMBAD : a sequence-independent molecular-replacement pipeline

    DOE PAGES

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...

    2018-06-08

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.

  8. Low-field thermal mixing in [1-(13)C] pyruvic acid for brute-force hyperpolarization.

    PubMed

    Peat, David T; Hirsch, Matthew L; Gadian, David G; Horsewill, Anthony J; Owers-Bradley, John R; Kempf, James G

    2016-07-28

    We detail the process of low-field thermal mixing (LFTM) between (1)H and (13)C nuclei in neat [1-(13)C] pyruvic acid at cryogenic temperatures (4-15 K). Using fast-field-cycling NMR, (1)H nuclei in the molecule were polarized at modest high field (2 T) and then equilibrated with (13)C nuclei by fast cycling (∼300-400 ms) to a low field (0-300 G) that activates thermal mixing. The (13)C NMR spectrum was recorded after fast cycling back to 2 T. The (13)C signal derives from (1)H polarization via LFTM, in which the polarized ('cold') proton bath contacts the unpolarised ('hot') (13)C bath at a field so low that Zeeman and dipolar interactions are similar-sized and fluctuations in the latter drive (1)H-(13)C equilibration. By varying mixing time (tmix) and field (Bmix), we determined field-dependent rates of polarization transfer (1/τ) and decay (1/T1m) during mixing. This defines conditions for effective mixing, as utilized in 'brute-force' hyperpolarization of low-γ nuclei like (13)C using Boltzmann polarization from nearby protons. For neat pyruvic acid, near-optimum mixing occurs for tmix∼ 100-300 ms and Bmix∼ 30-60 G. Three forms of frozen neat pyruvic acid were tested: two glassy samples, (one well-deoxygenated, the other O2-exposed) and one sample pre-treated by annealing (also well-deoxygenated). Both annealing and the presence of O2 are known to dramatically alter high-field longitudinal relaxation (T1) of (1)H and (13)C (up to 10(2)-10(3)-fold effects). Here, we found smaller, but still critical factors of ∼(2-5)× on both τ and T1m. Annealed, well-deoxygenated samples exhibit the longest time constants, e.g., τ∼ 30-70 ms and T1m∼ 1-20 s, each growing vs. Bmix. Mixing 'turns off' for Bmix > ∼100 G. That T1m≫τ is consistent with earlier success with polarization transfer from (1)H to (13)C by LFTM.

  9. SIMBAD : a sequence-independent molecular-replacement pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.

    The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that had otherwise proved difficult to solve.
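
    The first pipeline step can be pictured as ranking known structures by how close their unit-cell parameters are to those of the target data. The sketch below is an assumption-laden toy, not SIMBAD's implementation: the penalty function (a plain Euclidean distance that crudely mixes lengths and angles) and the database entries are hypothetical.

    ```python
    # Toy lattice-parameter search: rank hypothetical PDB cells by closeness
    # to the target cell (a, b, c, alpha, beta, gamma). SIMBAD's real scoring
    # is more careful (space groups, reduced cells); this is only the idea.
    import numpy as np

    def cell_distance(cell_a, cell_b):
        """Euclidean distance between two unit cells (crude illustrative metric)."""
        return float(np.linalg.norm(np.asarray(cell_a) - np.asarray(cell_b)))

    target = (78.2, 78.2, 37.1, 90.0, 90.0, 90.0)      # hypothetical target cell
    database = {                                        # hypothetical PDB cells
        "1ABC": (78.0, 78.0, 37.0, 90.0, 90.0, 90.0),
        "2XYZ": (52.3, 60.1, 88.7, 90.0, 105.2, 90.0),
        "3DEF": (77.5, 77.5, 36.8, 90.0, 90.0, 90.0),
    }

    ranked = sorted(database.items(), key=lambda kv: cell_distance(target, kv[1]))
    for pdb_id, cell in ranked:
        print(pdb_id, f"{cell_distance(target, cell):.2f}")
    ```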

  10. Faint Debris Detection by Particle Based Track-Before-Detect Method

    NASA Astrophysics Data System (ADS)

    Uetsuhara, M.; Ikoma, N.

    2014-09-01

    This study proposes a particle method to detect faint debris, hardly visible in a single frame, from an image sequence, based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects target signals in each frame by distinguishing intensity differences between foreground and background, and then associates the signals for each target between frames. DBT is capable of tracking bright targets but has limitations: it must account for the presence of false signals, and it is difficult to recover from false associations. TBD methods, by contrast, track targets without explicitly detecting the signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals around the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then require the true track to be selected manually from the candidates. To remove these significant drawbacks of brute-force search and a not-fully-automated process, this study proposes a faint-debris detection algorithm based on a particle TBD method consisting of sequential update of the target state and heuristic search for the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update is implemented by a particle filter (PF). PF is an optimal filtering technique that requires the initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for the initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles to the same image sequences, and the resulting set of particles is used as the initial distribution for the PF. This paper describes the algorithm of the proposed faint-debris detection method. Its performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which are expected to contain a sufficient number of faint debris images. The results indicate that the proposed method is capable of tracking faint debris with moderate computational cost at an operational level.
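
    The core TBD idea, accumulating evidence across frames for a target that is invisible in any single frame, can be sketched with a basic particle filter. The toy below is not the authors' algorithm (it omits the size state and the EA initialization, and all parameters are illustrative): particles follow a constant-velocity model and are weighted by pixel intensity.

    ```python
    # Minimal particle-filter track-before-detect sketch on a synthetic sequence:
    # a faint moving point source (SNR ~ 2 per frame) buried in Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, T, N = 64, 64, 20, 2000

    frames = rng.normal(0.0, 1.0, size=(T, H, W))
    true_xy, v = np.array([10.0, 12.0]), np.array([2.0, 1.5])
    for t in range(T):
        x, y = (true_xy + t * v).astype(int)
        frames[t, y, x] += 2.0                     # hard to see in one frame

    # particles: [x, y, vx, vy], initialized uniformly over position and velocity
    p = np.column_stack([rng.uniform(0, W, N), rng.uniform(0, H, N),
                         rng.uniform(-3, 3, N), rng.uniform(-3, 3, N)])
    for t in range(T):
        p[:, :2] += p[:, 2:]                       # constant-velocity prediction
        p[:, :2] += rng.normal(0, 0.3, (N, 2))     # process noise
        xi = np.clip(p[:, 0].astype(int), 0, W - 1)
        yi = np.clip(p[:, 1].astype(int), 0, H - 1)
        w = np.exp(frames[t, yi, xi])              # likelihood ~ exp(intensity)
        w /= w.sum()
        p = p[rng.choice(N, N, p=w)]               # multinomial resampling

    print("estimated state:", np.round(p.mean(axis=0), 1))   # should approach truth
    print("true position  :", true_xy + (T - 1) * v, "velocity:", v)
    ```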

  11. Numerical simulations for the sources apportionment and control strategies of PM2.5 over Pearl River Delta, China, part I: Inventory and PM2.5 sources apportionment.

    PubMed

    Huang, Yeqi; Deng, Tao; Li, Zhenning; Wang, Nan; Yin, Chanqin; Wang, Shiqiang; Fan, Shaojia

    2018-09-01

    This article uses the WRF-CMAQ model to systematically study the source apportionment of PM2.5 under typical meteorological conditions in the dry season (November 2010) in the Pearl River Delta (PRD). According to geographical location and the relative magnitude of pollutant emissions, Guangdong Province is divided into eight subdomains for the source apportionment study. The Brute-Force Method (BFM) was implemented to simulate the contributions from different regions to PM2.5 pollution in the PRD. Results show that industrial sources accounted for the largest proportion. Among emission species, the total amounts of NOx and VOC in Guangdong Province, and of NH3 and VOC in Hunan Province, are relatively large. Within Guangdong Province, emissions of SO2, NOx and VOC are relatively large in the PRD, while NH3 emissions are higher outside the PRD. In northerly-controlled episodes, model simulations demonstrate that local emissions are important for PM2.5 pollution in Guangzhou and Foshan. Meanwhile, emissions from Dongguan and Huizhou (DH) and from outside Guangdong Province (SW) are important contributors to PM2.5 pollution in Guangzhou. For PM2.5 pollution in Foshan, emissions in Guangzhou and DH are the major contributors. In addition, a high contribution ratio from DH occurs only in severe pollution periods. In southerly-controlled episodes, the contribution from the southern PRD increases; local emissions and emissions from Shenzhen, DH, and Zhuhai-Jiangmen-Zhongshan (ZJZ) are the major contributors. The regional contributions to the chemical composition of PM2.5 indicate that the sources of the chemical components are similar to those of PM2.5 itself. In particular, SO4^2- is mainly sourced from emissions outside Guangdong Province, while NO3- and NH4+ are more linked to agricultural emissions.
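
    The Brute-Force Method logic is simple to state: the contribution of a source region is the difference between a base simulation and a perturbed simulation with that region's emissions zeroed out. A minimal sketch follows, in which a trivial hypothetical function stands in for a full CMAQ run.

    ```python
    # Brute-Force Method (zero-out) sketch: contribution of region r is
    # PM(base) - PM(base with r's emissions set to zero). The "model" below is
    # a hypothetical linear stand-in, not a chemical transport model.
    def run_model(emissions):
        """Placeholder CTM run; returns a PM2.5 concentration (ug/m3)."""
        return 5.0 + 0.8 * sum(emissions.values())   # illustrative response

    base_emissions = {"Guangzhou": 10.0, "Foshan": 8.0, "DH": 6.0, "SW": 12.0}
    pm_base = run_model(base_emissions)

    for region in base_emissions:
        perturbed = dict(base_emissions, **{region: 0.0})   # zero out one region
        contribution = pm_base - run_model(perturbed)
        print(f"{region}: {contribution:.2f} ug/m3 "
              f"({100 * contribution / pm_base:.1f}%)")
    ```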

  12. Binding Modes of Ligands Using Enhanced Sampling (BLUES): Rapid Decorrelation of Ligand Binding Modes via Nonequilibrium Candidate Monte Carlo.

    PubMed

    Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L

    2018-05-31

    Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. We are making this approach available via our new Binding modes of ligands using enhanced sampling (BLUES) package which is freely available on GitHub.
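
    The decisive step in an NCMC move of the kind described above is the acceptance test on the nonequilibrium protocol work. The following is a minimal sketch of that test alone, not the BLUES implementation; the rotate-and-relax proposal itself is omitted and the work values are illustrative.

    ```python
    # Metropolis test on the protocol work W (in kT) accumulated during the
    # nonequilibrium rotate-and-relax proposal: accept with min(1, exp(-W)).
    import math, random

    def accept_ncmc_move(protocol_work_kT: float) -> bool:
        """Accept/reject a nonequilibrium candidate move given its work in kT."""
        return random.random() < min(1.0, math.exp(-protocol_work_kT))

    # toy demonstration: cheap moves (low work) are accepted far more often
    for W in (0.1, 2.0, 10.0):
        rate = sum(accept_ncmc_move(W) for _ in range(100_000)) / 100_000
        print(f"W = {W:4.1f} kT -> acceptance ~ {rate:.3f}")
    ```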

  13. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult with standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theories (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
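
    For small graphs the brute-force baseline the abstract alludes to is easy to write down: an identifying code is a vertex subset C such that every vertex's closed neighborhood intersected with C is non-empty and distinct. The sketch below enumerates subsets smallest-first; the 6-cycle example is mine, not from the paper (its minimum identifying code has size 3, since two code vertices give at most three distinct non-empty signatures).

    ```python
    # Brute-force minimum identifying code: smallest C such that the signatures
    # N[v] & C are all non-empty and pairwise distinct.
    from itertools import combinations

    def closed_neighborhoods(n, edges):
        nbhd = [{v} for v in range(n)]
        for u, v in edges:
            nbhd[u].add(v)
            nbhd[v].add(u)
        return nbhd

    def min_identifying_code(n, edges):
        nbhd = closed_neighborhoods(n, edges)
        for k in range(1, n + 1):                  # try smallest subsets first
            for cand in combinations(range(n), k):
                c = set(cand)
                sigs = [frozenset(nb & c) for nb in nbhd]
                if all(sigs) and len(set(sigs)) == n:
                    return c
        return None                                # no identifying code exists

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]   # 6-cycle
    print(min_identifying_code(6, edges))          # a size-3 code, e.g. {0, 2, 4}
    ```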

  14. Exploration of Multi-State Conformational Dynamics and Underlying Global Functional Landscape of Maltose Binding Protein

    PubMed Central

    Wang, Yong; Tang, Chun; Wang, Erkang; Wang, Jin

    2012-01-01

    An increasing number of biological machines have been revealed to have more than two macroscopic states. Quantifying the underlying multiple-basin functional landscape is essential for understanding their functions. However, present models seem insufficient to describe such multiple-state systems. To meet this challenge, we have developed a coarse-grained triple-basin structure-based model with an implicit ligand. Based on our model, the constructed functional landscape is sufficiently sampled by brute-force molecular dynamics simulation. We explored maltose-binding protein (MBP), which undergoes large-scale domain motion between open, apo-closed (partially closed) and holo-closed (fully closed) states in response to ligand binding. By quantitative flux analysis, we revealed an underlying mechanism whereby a major induced-fit pathway and a minor population-shift pathway co-exist. We found that the hinge regions play an important role in the functional dynamics, and that increases in their flexibility promote population shifts. This finding provides a theoretical explanation of the mechanistic discrepancies in the PBP protein family. We also found a functional "backtracking" behavior that favors conformational change. We further explored the underlying folding landscape in response to ligand binding. Consistent with earlier experimental findings, the presence of ligand increases the cooperativity and stability of MBP. This work provides the first study to explore the folding dynamics and functional dynamics under the same theoretical framework using our triple-basin functional model. PMID:22532792

  15. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration has been fundamental to all types of hydro-system modeling since its beginnings, as it approximates the parameters that mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by the different candidate optimization methods indicates failure of some gradient-based methods such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach produce the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
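
    The kind of optimizer shoot-out described above is straightforward to reproduce on a cheap objective. The sketch below uses SciPy on a deliberately ill-conditioned toy least-squares problem standing in for the hydro-morphological model (which is far too expensive for a demo); the model form and starting point are illustrative.

    ```python
    # Compare several scipy.optimize methods on a synthetic-measurement objective.
    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        """Sum of squared residuals between synthetic data and a toy model."""
        synthetic = np.array([1.0, 3.0])
        model = np.array([p[0] * p[1], p[0] + 10.0 * p[1]])   # ill-posed toy model
        return float(np.sum((model - synthetic) ** 2))

    x0 = np.array([0.5, 0.5])
    for method in ("Nelder-Mead", "BFGS", "L-BFGS-B", "TNC", "trust-constr"):
        res = minimize(objective, x0, method=method)
        print(f"{method:12s} f = {res.fun:.3e} at {np.round(res.x, 3)}")
    ```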

  16. A new class of enhanced kinetic sampling methods for building Markov state models

    NASA Astrophysics Data System (ADS)

    Bhoutekar, Arti; Ghosh, Susmita; Bhattacharya, Swati; Chatterjee, Abhijit

    2017-10-01

    Markov state models (MSMs) and other related kinetic network models are frequently used to study the long-timescale dynamical behavior of biomolecular and materials systems. MSMs are often constructed bottom-up using brute-force molecular dynamics (MD) simulations when the model contains a large number of states and kinetic pathways that are not known a priori. However, the resulting network generally encompasses only parts of the configurational space, and regardless of any additional MD performed, several states and pathways will still remain missing. This implies that the duration for which the MSM can faithfully capture the true dynamics, which we term as the validity time for the MSM, is always finite and unfortunately much shorter than the MD time invested to construct the model. A general framework that relates the kinetic uncertainty in the model to the validity time, missing states and pathways, network topology, and statistical sampling is presented. Performing additional calculations for frequently-sampled states/pathways may not alter the MSM validity time. A new class of enhanced kinetic sampling techniques is introduced that aims at targeting rare states/pathways that contribute most to the uncertainty so that the validity time is boosted in an effective manner. Examples including straightforward 1D energy landscapes, lattice models, and biomolecular systems are provided to illustrate the application of the method. Developments presented here will be of interest to the kinetic Monte Carlo community as well.
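
    The bottom-up MSM construction the abstract starts from reduces, at its simplest, to counting lagged transitions in a state-discretized trajectory. A minimal sketch, with a synthetic two-state trajectory in place of real MD (the "true" rate matrix is hypothetical):

    ```python
    # Estimate a row-stochastic MSM transition matrix by counting transitions
    # at lag time `lag` in a discretized trajectory.
    import numpy as np

    def estimate_msm(dtraj, n_states, lag=1):
        counts = np.zeros((n_states, n_states))
        for i, j in zip(dtraj[:-lag], dtraj[lag:]):
            counts[i, j] += 1.0
        counts += 1e-12                             # guard against empty rows
        return counts / counts.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(1)
    truth = np.array([[0.95, 0.05], [0.10, 0.90]])  # hypothetical 2-state kinetics
    dtraj, s = [], 0
    for _ in range(100_000):
        dtraj.append(s)
        s = rng.choice(2, p=truth[s])

    T = estimate_msm(np.array(dtraj), n_states=2, lag=1)
    print(np.round(T, 3))                           # should be close to `truth`
    ```

    The abstract's point about validity time can be seen here in miniature: states or pathways never visited by the trajectory simply do not appear in the count matrix, no matter how long the sampled parts are rerun.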

  17. Adaptive Annealed Importance Sampling for Multimodal Posterior Exploration and Model Selection with Application to Extrasolar Planet Detection

    NASA Astrophysics Data System (ADS)

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
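
    The effective-sample-size diagnostic used above to score proposal quality is one line of arithmetic: ESS = (Σw)² / Σw² for importance weights w = target/proposal. A minimal sketch with a Gaussian stand-in for the posterior (all distributions and sizes are illustrative):

    ```python
    # ESS of importance-sampling draws: better-matched proposals give ESS near n.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 10_000

    target = stats.norm(loc=0.0, scale=1.0)          # stand-in "posterior"
    for prop_scale in (1.0, 2.0, 5.0):               # proposals of varying quality
        proposal = stats.norm(loc=0.0, scale=prop_scale)
        x = proposal.rvs(size=n, random_state=rng)
        w = target.pdf(x) / proposal.pdf(x)
        ess = w.sum() ** 2 / (w ** 2).sum()
        print(f"proposal scale {prop_scale}: ESS = {ess:,.0f} of {n}")
    ```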

  18. Cost-Effective Encryption-Based Autonomous Routing Protocol for Efficient and Secure Wireless Sensor Networks.

    PubMed

    Saleem, Kashif; Derhab, Abdelouahid; Orgun, Mehmet A; Al-Muhtadi, Jalal; Rodrigues, Joel J P C; Khalil, Mohammed Sayim; Ali Ahmed, Adel

    2016-03-31

    The deployment of intelligent remote surveillance systems depends on wireless sensor networks (WSNs) composed of various miniature resource-constrained wireless sensor nodes. The development of routing protocols for WSNs is a major challenge because of their severe resource constraints, ad hoc topology and dynamic nature. Among those proposed routing protocols, the biology-inspired self-organized secure autonomous routing protocol (BIOSARP) involves an artificial immune system (AIS) that requires a certain amount of time to build up knowledge of neighboring nodes. The AIS algorithm uses this knowledge to distinguish between self and non-self neighboring nodes. The knowledge-building phase is a critical period in the WSN lifespan and requires active security measures. This paper proposes an enhanced BIOSARP (E-BIOSARP) that incorporates a random key encryption mechanism in a cost-effective manner to provide active security measures in WSNs. A detailed description of E-BIOSARP is presented, followed by an extensive security and performance analysis to demonstrate its efficiency. A scenario with E-BIOSARP is implemented in network simulator 2 (ns-2) and is populated with malicious nodes for analysis. Furthermore, E-BIOSARP is compared with state-of-the-art secure routing protocols in terms of processing time, delivery ratio, energy consumption, and packet overhead. The findings show that the proposed mechanism can efficiently protect WSNs from selective forwarding, brute-force or exhaustive key search, spoofing, eavesdropping, replaying or altering of routing information, cloning, acknowledgment spoofing, HELLO flood attacks, and Sybil attacks.

  19. Cost-Effective Encryption-Based Autonomous Routing Protocol for Efficient and Secure Wireless Sensor Networks

    PubMed Central

    Saleem, Kashif; Derhab, Abdelouahid; Orgun, Mehmet A.; Al-Muhtadi, Jalal; Rodrigues, Joel J. P. C.; Khalil, Mohammed Sayim; Ali Ahmed, Adel

    2016-01-01

    The deployment of intelligent remote surveillance systems depends on wireless sensor networks (WSNs) composed of various miniature resource-constrained wireless sensor nodes. The development of routing protocols for WSNs is a major challenge because of their severe resource constraints, ad hoc topology and dynamic nature. Among those proposed routing protocols, the biology-inspired self-organized secure autonomous routing protocol (BIOSARP) involves an artificial immune system (AIS) that requires a certain amount of time to build up knowledge of neighboring nodes. The AIS algorithm uses this knowledge to distinguish between self and non-self neighboring nodes. The knowledge-building phase is a critical period in the WSN lifespan and requires active security measures. This paper proposes an enhanced BIOSARP (E-BIOSARP) that incorporates a random key encryption mechanism in a cost-effective manner to provide active security measures in WSNs. A detailed description of E-BIOSARP is presented, followed by an extensive security and performance analysis to demonstrate its efficiency. A scenario with E-BIOSARP is implemented in network simulator 2 (ns-2) and is populated with malicious nodes for analysis. Furthermore, E-BIOSARP is compared with state-of-the-art secure routing protocols in terms of processing time, delivery ratio, energy consumption, and packet overhead. The findings show that the proposed mechanism can efficiently protect WSNs from selective forwarding, brute-force or exhaustive key search, spoofing, eavesdropping, replaying or altering of routing information, cloning, acknowledgment spoofing, HELLO flood attacks, and Sybil attacks. PMID:27043572

  20. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable by brute-force methods even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit-connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem expressed as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
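
    For context, the classical brute-force baseline for weighted k-clique, whose exponential cost in k is exactly what motivates heuristics and QA, is a few lines. The graph below is a hypothetical example, not the authors' data.

    ```python
    # Brute-force maximum-weight k-clique: enumerate all k-subsets of vertices,
    # keep those whose pairs are all edges, and maximize the total edge weight.
    from itertools import combinations

    def max_weight_k_clique(weights, k):
        """weights: dict {(u, v): w} with u < v. Returns (clique, total weight)."""
        vertices = sorted({v for e in weights for v in e})
        best, best_w = None, float("-inf")
        for cand in combinations(vertices, k):
            pairs = list(combinations(cand, 2))
            if all(p in weights for p in pairs):        # is it a clique?
                w = sum(weights[p] for p in pairs)
                if w > best_w:
                    best, best_w = cand, w
        return best, best_w

    w = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 3.0,
         (1, 3): 1.5, (2, 3): 2.5, (0, 3): 0.5}
    print(max_weight_k_clique(w, 3))                    # ((1, 2, 3), 7.0)
    ```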

  1. Ab Initio Effective Rovibrational Hamiltonians for Non-Rigid Molecules via Curvilinear VMP2

    NASA Astrophysics Data System (ADS)

    Changala, Bryan; Baraban, Joshua H.

    2017-06-01

    Accurate predictions of spectroscopic constants for non-rigid molecules are particularly challenging for ab initio theory. For all but the smallest systems, "brute force" diagonalization of the full rovibrational Hamiltonian is computationally prohibitive, leaving us at the mercy of perturbative approaches. However, standard perturbative techniques, such as second-order vibrational perturbation theory (VPT2), are based on the approximation that a molecule makes small-amplitude vibrations about a well-defined equilibrium structure. Such assumptions are physically inappropriate for non-rigid systems. In this talk, we will describe extensions to curvilinear vibrational Møller-Plesset perturbation theory (VMP2) that account for rotational and rovibrational effects in the molecular Hamiltonian. Through several examples, we will show that this approach provides predictions to nearly microwave accuracy of molecular constants including rotational and centrifugal distortion parameters, Coriolis coupling constants, and anharmonic vibrational and tunneling frequencies.

  2. Decision and function problems based on boson sampling

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Georgios M.; Brougham, Thomas

    2016-07-01

    Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.

  3. An investigation of school violence through Turkish children's drawings.

    PubMed

    Yurtal, Filiz; Artut, Kazim

    2010-01-01

    This study investigates Turkish children's perception of violence in school as represented through drawings and narratives. In all, 66 students (12 to 13 years old) from the middle socioeconomic class participated. To elicit the children's perception of violence, they were asked to draw a picture of a violent incident they had heard of, experienced, or witnessed. Children mostly drew pictures of violent events among children (33 pictures). There were also pictures of violent incidents perpetrated by teachers and directors against children. It was observed that violence influenced the children. Violence was mostly depicted in school gardens (38 pictures), but violent incidents appeared everywhere, such as in classrooms, corridors, and school stores. Moreover, brute force was the most frequently depicted form of violence in the children's drawings (38 pictures). In conclusion, the children clearly indicated that there was violence in their schools and that they were affected by it.

  4. Advances in atmospheric light scattering theory and remote-sensing techniques

    NASA Astrophysics Data System (ADS)

    Videen, Gorden; Sun, Wenbo; Gong, Wei

    2017-02-01

    This issue focuses especially on characterizing particles in the Earth-atmosphere system. The significant role of aerosol particles in this system was recognized in the mid-1970s [1]. Since that time, our appreciation for the role they play has only increased. It has been and continues to be one of the greatest unknown factors in the Earth-atmosphere system as evidenced by the most recent Intergovernmental Panel on Climate Change (IPCC) assessments [2]. With increased computational capabilities, in terms of both advanced algorithms and in brute-force computational power, more researchers have the tools available to address different aspects of the role of aerosols in the atmosphere. In this issue, we focus on recent advances in this topical area, especially the role of light scattering and remote sensing. This issue follows on the heels of four previous topical issues on this subject matter that have graced the pages of this journal [3-6].

  5. Competitive code-based fast palmprint identification using a set of cover trees

    NASA Astrophysics Data System (ADS)

    Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan

    2009-06-01

    A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
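
    The brute-force baseline the cover-tree method is compared against is a linear scan under an angular distance between orientation codes. The sketch below assumes a simplified form of competitive code (six orientation levels per point, wrap-around distance summed over points); the database is synthetic and the distance form is an illustration, not the paper's exact definition.

    ```python
    # Brute-force nearest-neighbor search over six-level orientation codes.
    import numpy as np

    rng = np.random.default_rng(3)
    n_templates, n_points, n_orient = 5_000, 128, 6

    db = rng.integers(0, n_orient, size=(n_templates, n_points))  # orientation codes
    query = db[42].copy()
    query[:5] = (query[:5] + 1) % n_orient                        # corrupted copy

    def angular_distance(a, b, n_orient=6):
        """Wrap-around orientation difference, summed over all points."""
        d = np.abs(a - b)
        return np.minimum(d, n_orient - d).sum(axis=-1)

    dists = angular_distance(db, query)               # O(N) exhaustive scan
    print("nearest template:", int(np.argmin(dists))) # expect 42
    ```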

  6. Aspects of warped AdS3/CFT2 correspondence

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang

    2013-04-01

    In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole, and the spacelike and null warped black holes, in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher-derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism in Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with central charges $c_R = c_L = \frac{24}{G m \beta^2 \sqrt{2(21 - 4\beta^2)}}$.

  7. Remote-sensing image encryption in hybrid domains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong

    2012-04-01

    Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing image is the main means of acquiring information from satellites, which always contain some confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both spatial domain and transform domain. First, the low-pass subband coefficients of image DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with 2D (two-dimensional) Logistic map and XOR operation in spatial domain. The experiment results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
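
    The spatial-domain stage of such a scheme can be illustrated with a chaotic keystream XOR. The toy below sketches only that stage (the DWT sorting and PWLCM system are omitted), uses a plain logistic map with illustrative parameters, and is emphatically not a secure cipher.

    ```python
    # Diffuse a 2D byte block by XOR with a logistic-map keystream; XOR is its
    # own inverse, so decryption reuses the same keystream.
    import numpy as np

    def logistic_keystream(n, x0=0.631, r=3.99):
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)                  # logistic map iteration
            xs[i] = x
        return (xs * 256).astype(np.uint8)         # quantize to bytes

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in "image"
    ks = logistic_keystream(img.size).reshape(img.shape)

    cipher = img ^ ks                              # diffusion by XOR
    recovered = cipher ^ ks
    assert np.array_equal(recovered, img)
    print("round trip OK")
    ```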

  8. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
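
    The exhaustive search mentioned above amounts to scoring every sensor subset against the sensitivity matrix. The sketch below is an assumption: it uses the smallest singular value of the selected rows as the selection criterion (a common observability proxy, not necessarily the paper's), and the sensitivity matrix is random.

    ```python
    # Exhaustive brute-force sensor selection: pick the k rows of the sensitivity
    # matrix S that best condition estimation of the health parameters.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    S = rng.normal(size=(8, 3))        # hypothetical: 8 sensors x 3 parameters

    def score(rows):
        """Smallest singular value of the chosen sensor rows (bigger is better)."""
        return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

    k = 3                              # desired number of sensors
    best = max(combinations(range(S.shape[0]), k), key=score)
    print("optimal sensors:", best, "score:", round(score(best), 3))
    ```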

  9. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
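
    The brute-force enumeration baseline the MINLP solvers are compared against can be reproduced in a few lines: fit every ARMA(p, q) up to a maximum order and keep the lowest information criterion. The sketch below assumes statsmodels is available and generates its own synthetic ARMA(1,1) series; the maximum order and coefficients are illustrative.

    ```python
    # Brute-force ARMA order selection by AIC over all (p, q) up to a cap.
    import warnings
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)
    e = rng.standard_normal(501)
    y = np.zeros(500)
    for t in range(500):           # ARMA(1,1): y_t = 0.6 y_{t-1} + e_t + 0.4 e_{t-1}
        y[t] = 0.6 * (y[t - 1] if t else 0.0) + e[t + 1] + 0.4 * e[t]

    warnings.filterwarnings("ignore")       # silence convergence chatter
    best = (np.inf, (0, 0))
    for p in range(4):
        for q in range(4):
            aic = ARIMA(y, order=(p, 0, q)).fit().aic
            best = min(best, (aic, (p, q)))
    print("best (p, q) by AIC:", best[1])   # often, though not always, (1, 1)
    ```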

  10. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    The support vector machine (SVM) is a powerful technique for classification. However, SVMs are not well suited to the classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels, or multiple parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data, providing automatic parameterization with low computational cost and improving results over SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.

  11. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  12. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator

    NASA Astrophysics Data System (ADS)

    Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît

    2018-01-01

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary steps: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.

  13. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator.

    PubMed

    Suh, Donghyuk; Radak, Brian K; Chipot, Christophe; Roux, Benoît

    2018-01-07

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary steps: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
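
    The three-step propagator in the two records above can be caricatured in one dimension. The toy below is not the authors' implementation: overdamped Langevin dynamics stands in for MD (so no momentum reversal is needed), the "perturbed surface" is simply a scaled-down barrier, and all parameters are illustrative.

    ```python
    # 1D hybrid neMD-MC sketch on the double well U(x) = 5(x^2 - 1)^2:
    # (i) equilibrium steps on the true surface; (ii) boosting phase that scales
    # the barrier down and back up while accumulating protocol work W;
    # (iii) Metropolis acceptance with probability min(1, exp(-beta * W)).
    import numpy as np

    rng = np.random.default_rng(6)
    beta, dt = 1.0, 5e-3

    def U(x, lam=1.0):
        return 5.0 * lam * (x**2 - 1.0) ** 2       # lam < 1 lowers the barrier

    def step(x, lam):
        force = -20.0 * lam * x * (x**2 - 1.0)
        return x + force * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()

    x, accepted, right = -1.0, 0, 0
    lams = np.concatenate([np.linspace(1.0, 0.1, 100), np.linspace(0.1, 1.0, 100)])
    n_sweeps = 1000
    for _ in range(n_sweeps):
        for _ in range(20):                        # (i) equilibrium propagation
            x = step(x, 1.0)
        x_try, W = x, 0.0
        for lam_old, lam_new in zip(lams[:-1], lams[1:]):
            W += U(x_try, lam_new) - U(x_try, lam_old)   # (ii) accumulate work
            x_try = step(x_try, lam_new)
        if W <= 0.0 or rng.random() < np.exp(-beta * W):  # (iii) Metropolis test
            x, accepted = x_try, accepted + 1
        right += x > 0.0

    # roughly equal well occupancy indicates enhanced barrier crossing
    print(f"acceptance {accepted / n_sweeps:.2f}; "
          f"fraction of sweeps in right well {right / n_sweeps:.2f}")
    ```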

  14. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) at the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become very feasible within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References: [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4):671-687, 2013.

  15. Transport and imaging of brute-force 13C hyperpolarization

    NASA Astrophysics Data System (ADS)

    Hirsch, Matthew L.; Smith, Bryce A.; Mattingly, Mark; Goloshevsky, Artem G.; Rosay, Melanie; Kempf, James G.

    2015-12-01

    We demonstrate transport of hyperpolarized frozen 1-13C pyruvic acid from its site of production to a nearby facility, where a time series of 13C images was acquired from the aqueous dissolution product. Transportability is tied to the hyperpolarization (HP) method we employ, which omits radical electron species used in other approaches that would otherwise relax away the HP before reaching the imaging center. In particular, we attained 13C HP by 'brute-force', i.e., using only low temperature and high-field (e.g., T < ∼2 K and B ∼ 14 T) to pre-polarize protons to a large Boltzmann value (∼0.4% 1H polarization). After polarizing the neat, frozen sample, ejection quickly (<1 s) passed it through a low field (B < 100 G) to establish the 1H pre-polarization spin temperature on 13C via the process known as low-field thermal mixing (yielding ∼0.1% 13C polarization). By avoiding polarization agents (a.k.a. relaxation agents) that are needed to hyperpolarize by the competing method of dissolution dynamic nuclear polarization (d-DNP), the 13C relaxation time was sufficient to transport the sample for ∼10 min before finally dissolving in warm water and obtaining a 13C image of the hyperpolarized, dilute, aqueous product (∼0.01% 13C polarization, a >100-fold gain over thermal signals in the 1 T scanner). An annealing step, prior to polarizing the sample, was also key for increasing T1 ∼ 30-fold during transport. In that time, HP was maintained using only modest cryogenics and field (T ∼ 60 K and B = 1.3 T), for T1(13C) near 5 min. Much greater time and distance (with much smaller losses) may be covered using more-complete annealing and only slight improvements on transport conditions (e.g., yielding T1 ∼ 5 h at 30 K, 2 T), whereas even intercity transfer is possible (T1 > 20 h) at reasonable conditions of 6 K and 2 T. Finally, it is possible to increase the overall enhancement near d-DNP levels (i.e., 102-fold more) by polarizing below 100 mK, where nanoparticle agents are known to hasten T1 buildup by 100-fold, and to yield very little impact on T1 losses at temperatures relevant to transport.
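
    The transport budget implied above is a single exponential: the polarization surviving a transport time t is P(t) = P0 exp(-t/T1). A minimal worked example, using the conditions quoted in the abstract:

    ```python
    # Fraction of hyperpolarization surviving transport for a given T1.
    import math

    def surviving_fraction(t_minutes, T1_minutes):
        return math.exp(-t_minutes / T1_minutes)

    print(f"10 min at T1 = 5 min : {surviving_fraction(10, 5):.1%}")    # ~13.5%, as transported here
    print(f"10 min at T1 = 5 h   : {surviving_fraction(10, 300):.1%}")  # annealed, 30 K / 2 T
    print(f"12 h  at T1 = 20 h   : {surviving_fraction(720, 1200):.1%}")# intercity scenario, 6 K / 2 T
    ```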

  16. Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.

    2011-12-01

    Organizing online data presents a number of challenges, among which is keeping their inventories current. It is preferable to have these descriptions built and maintained by automated systems because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While building inventories for online data can be done using a brute-force algorithm that reads information from each granule in the data set, that approach ignores some important aspects of these data sets and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive them. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct reads from selected granules and analysis of the URLs results in improvements of several to many orders of magnitude, depending on the data set organization. We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.

  17. Fractional Progress Toward Understanding the Fractional Diffusion Limit: The Electromagnetic Response of Spatially Correlated Geomaterials

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Beskardes, G. D.; Everett, M. E.

    2016-12-01

    In this presentation we review the observational evidence for anomalous electromagnetic diffusion in near-surface geophysical exploration and how such evidence is consistent with a detailed, spatially-correlated geologic medium. To date, the inference of multi-scale geologic correlation is drawn from two independent methods of data analysis. The first of which is analogous to seismic move-out, where the arrival time of an electromagnetic pulse is plotted as a function of transmitter/receiver separation. The "anomalous" diffusion is evident by the fractional-order power law behavior of these arrival times, with an exponent value between unity (pure diffusion) and 2 (lossless wave propagation). The second line of evidence comes from spectral analysis of small-scale fluctuations in electromagnetic profile data which cannot be explained in terms of instrument, user or random error. Rather, the power-law behavior of the spectral content of these signals (i.e., power versus wavenumber) and their increments reveals them to lie in a class of signals with correlations over multiple length scales, a class of signals known formally as fractional Brownian motion. Numerical results over simulated geology with correlated electrical texture - representative of, for example, fractures, sedimentary bedding or metamorphic lineation - are consistent with the (albeit limited, but growing) observational data, suggesting a possible mechanism and modeling approach for a more realistic geology. Furthermore, we show how similar simulated results can arise from a modeling approach where geologic texture is economically captured by a modified diffusion equation containing exotic, but manageable, fractional derivatives. These derivatives arise physically from the generalized convolutional form for the electromagnetic constitutive laws and thus have merit beyond mere mathematical convenience. In short, we are zeroing in on the anomalous, fractional diffusion limit from two converging directions: a zooming down of the macroscopic (fractional derivative) view; and, a heuristic homogenization of the atomistic (brute force discretization) view.

  18. Estimation of atmospheric turbidity and surface radiative parameters using broadband clear sky solar irradiance models in Rio de Janeiro-Brasil

    NASA Astrophysics Data System (ADS)

    Flores, José L.; Karam, Hugo A.; Marques Filho, Edson P.; Pereira Filho, Augusto J.

    2016-02-01

    The main goal of this paper is to estimate a set of optimal seasonal, daily, and hourly values of atmospheric turbidity and surface radiative parameters: Ångström's turbidity coefficient (β), Ångström's wavelength exponent (α), aerosol single-scattering albedo (ω_o), forward scatterance (F_c) and average surface albedo (ρ_g), using the Brute Force multidimensional minimization method to minimize the difference between measured and simulated solar irradiance components, expressed as cost functions. In order to simulate the components of short-wave solar irradiance (direct, diffuse and global) for clear-sky conditions, incident on a horizontal surface in the Metropolitan Area of Rio de Janeiro (MARJ), Brazil (22° 51' 27″ S, 43° 13' 58″ W), we use two parameterized broadband solar irradiance models, called CPCR2 and Iqbal C, based on synoptic information. The meteorological variables required by the broadband solar models, such as precipitable water (u_w) and ozone concentration (u_o), were obtained from the moderate-resolution imaging spectroradiometer (MODIS) sensor on NASA's Terra and Aqua platforms. For the implementation and validation processes, we use global and diffuse solar irradiance data measured by the radiometric platform of LabMiM, located in the northern area of the MARJ. The data were measured between the years 2010 and 2012 at 1-min intervals. The performance of the solar irradiance models using optimal parameters was evaluated with several quantitative statistical indicators and a subset of measured solar irradiance data. Some daily results for Ångström's wavelength exponent α were compared with the Ångström parameter (440-870 nm) values obtained by the aerosol robotic network (AERONET) for 11 days, showing an acceptable level of agreement. Results for Ångström's turbidity coefficient β, associated with the amount of aerosols in the atmosphere, show a seasonal pattern consistent with the increased precipitation during the summer months (December-February) in the MARJ.
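
    The Brute Force minimization described above evaluates the cost function on a full parameter grid and keeps the minimum. A minimal sketch, with a hypothetical two-channel stand-in for the CPCR2/Iqbal C models and only two of the five parameters (β, α) searched, for brevity:

    ```python
    # Grid-search minimization of a squared-residual cost over (beta, alpha).
    import numpy as np

    def simulated_irradiance(beta, alpha):
        """Hypothetical two-channel output of a clear-sky model (W/m2)."""
        return np.stack([950.0 * np.exp(-beta * 0.5 ** -alpha),
                         940.0 * np.exp(-beta * 0.7 ** -alpha)])

    measured = simulated_irradiance(0.12, 1.3)      # pretend observation

    betas = np.linspace(0.0, 0.5, 251)
    alphas = np.linspace(0.5, 2.5, 201)
    B, A = np.meshgrid(betas, alphas, indexing="ij")
    cost = ((simulated_irradiance(B, A) - measured[:, None, None]) ** 2).sum(axis=0)

    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    print(f"optimal beta = {betas[i]:.3f}, alpha = {alphas[j]:.3f}")
    ```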

  19. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    PubMed

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, bring realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps in order to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors.

  20. Identification of biased sectors in emission data using a combination of chemical transport model and receptor model

    NASA Astrophysics Data System (ADS)

    Uranishi, Katsushige; Ikemori, Fumikazu; Nakatsubo, Ryohei; Shimadera, Hikari; Kondo, Akira; Kikutani, Yuki; Asano, Katsuyoshi; Sugata, Seiji

    2017-10-01

    This study presents a comparison approach using multiple source apportionment methods to identify which sectors of an emission inventory have large biases. The source apportionment methods in the comparison include both a receptor model and a chemical transport model, which are widely used to quantify the impacts of emission sources on fine particulate matter of less than 2.5 μm in diameter (PM2.5). We used daily chemical component concentration data for the year 2013, including water-soluble ions, elements, and carbonaceous species of PM2.5 at 11 sites in the Kinki-Tokai district of Japan, to apply the Positive Matrix Factorization (PMF) model for source apportionment. Seven PMF factors of PM2.5 were identified from the temporal and spatial variation patterns and the characteristic features of the sites. These factors comprised two types of secondary sulfate, road transportation, heavy oil combustion by ships, biomass burning, secondary nitrate, and soil and industrial dust, accounting for 46%, 17%, 7%, 14%, 13%, and 3% of the PM2.5, respectively. The multiple-site data enabled a comprehensive identification of the PM2.5 sources. For the same period, source contributions were estimated by air quality simulations using the Community Multiscale Air Quality model (CMAQ) with the brute-force method (BFM) for four source categories. Both models provided consistent results for three of the four source categories: secondary sulfates, road transportation, and heavy oil combustion. For these three target categories, the models' agreement was supported by the small differences and high correlations between the CMAQ/BFM- and PMF-estimated source contributions to the concentrations of PM2.5, SO4^2- and EC. In contrast, the contributions of biomass burning sources apportioned by CMAQ/BFM were much lower than, and poorly correlated with, those captured by the PMF model, indicating large uncertainties in the biomass burning emissions used in the CMAQ simulations. Thus, this comparison approach using the two antithetical models enables us to identify which sectors of emission data have large biases, for improvement of future air quality simulations.

  1. Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres

    NASA Astrophysics Data System (ADS)

    Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.

    2016-12-01

    Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, the RT models need to be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to increases in computer speed and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, while being orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information. © 2016. All rights reserved.
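
    The PC-based correction idea can be sketched in a few lines: run the expensive solver only at the mean optical state and at displacements along the leading principal components, expand the log-ratio of expensive to fast radiances in the PC scores, and correct the fast solver everywhere else. Both solvers below are stand-in closed-form functions, so this is an illustration of the bookkeeping, not of a real RT code.

        import numpy as np

        def expensive_rt(x):
            # stand-in for a full multiple-scattering solver (slow in reality)
            return 1.1 * np.exp(-x[..., 0] - 0.5 * x[..., 1] ** 2)

        def fast_rt(x):
            # stand-in for a cheap approximate solver
            return np.exp(-0.9 * x[..., 0] - 0.5 * x[..., 1] ** 2)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(5000, 2))         # optical properties, one row per channel
        mean = X.mean(axis=0)
        Xc = X - mean
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        k = 2                                  # retained principal components
        scores = Xc @ Vt[:k].T                 # PC scores per channel

        def logratio(x):
            return np.log(expensive_rt(x) / fast_rt(x))

        # the expensive solver is called only 1 + 2k times (mean, +/- each PC)
        c0 = logratio(mean)
        corr = np.full(X.shape[0], c0)
        for j in range(k):
            step = s[j] / np.sqrt(len(X))      # about one std dev along PC j
            cp = logratio(mean + step * Vt[j])
            cm = logratio(mean - step * Vt[j])
            corr += ((cp - cm) / (2 * step)) * scores[:, j] \
                  + 0.5 * ((cp - 2 * c0 + cm) / step ** 2) * scores[:, j] ** 2

        I_approx = fast_rt(X) * np.exp(corr)   # PC-corrected fast radiances
        # here the log-ratio happens to be linear in x, so the correction is exact
        print(np.abs(I_approx / expensive_rt(X) - 1).max())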

  2. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bin, E-mail: bins@ieee.org

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute-force methods simply preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
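
    The ESS criterion itself fits in a few lines: importance weights are (unnormalized) target-to-proposal density ratios, and ESS = (Σw)²/Σw² approaches the number of draws as the proposal approaches the posterior. The Gaussian target and proposal below are placeholders for the multimodal posterior and mixture proposal of the paper.

        import numpy as np

        def ess(log_target, log_proposal, draws):
            """Effective sample size of an importance-sampling proposal."""
            logw = log_target(draws) - log_proposal(draws)
            logw -= logw.max()                   # stabilize the exponentials
            w = np.exp(logw)
            return w.sum() ** 2 / (w ** 2).sum()

        rng = np.random.default_rng(0)
        draws = rng.normal(0.0, 2.0, size=10_000)   # proposal N(0, 2^2)
        lp = lambda x: -0.5 * (x / 2.0) ** 2        # proposal log-density (unnorm.)
        lt = lambda x: -0.5 * x ** 2                # target N(0, 1) log-density
        print(f"ESS = {ess(lt, lp, draws):.0f} of {draws.size}")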

  3. A parameter-free method to extract the superconductor's J_c(B,θ) field-dependence from in-field current-voltage characteristics of high temperature superconductor tapes

    NASA Astrophysics Data System (ADS)

    Zermeño, Víctor M. R.; Habelok, Krzysztof; Stępień, Mariusz; Grilli, Francesco

    2017-03-01

    The estimation of the critical current (I_c) and AC losses of high-temperature superconductor devices through modeling and simulation requires knowledge of the critical current density (J_c) of the superconducting material. This J_c is in general not constant and depends both on the magnitude (B_loc) and the direction (θ, relative to the tape) of the local magnetic flux density. In principle, J_c(B_loc, θ) can be obtained from the experimentally measured critical current I_c(B_a, θ), where B_a is the magnitude of the applied magnetic field. However, for applications where the superconducting materials experience a local field that is close to the self-field of an isolated conductor, obtaining J_c(B_loc, θ) from I_c(B_a, θ) is not a trivial task. It is necessary to solve an inverse problem to correct for the contribution derived from the self-field. The methods presented in the literature comprise a series of approaches dealing with different degrees of mathematical regularization to fit the parameters of preconceived nonlinear formulas by means of brute force or optimization methods. In this contribution, we present a parameter-free method that provides excellent reproduction of experimental data and requires no human interaction or preconception of the J_c dependence on the magnetic field. In particular, it allows going from the experimental data to a ready-to-run J_c(B_loc, θ) model in a few minutes.

  4. Be2D: A model to understand the distribution of meteoric 10Be in soilscapes

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Vanacker, Veerle; Vanderborght, Jan; Govers, Gerard

    2016-04-01

    Cosmogenic nuclides have revolutionised our understanding of earth surface process rates. They have become one of the standard tools to quantify soil production by weathering, soil redistribution and erosion. Especially Beryllium-10 has gained much attention due to its long half-life and propensity to be relatively conservative in the landscape. The latter makes 10Be an excellent tool to assess denudation rates over the last 1000 to 100 × 10^3 years, bridging the anthropogenic and geological time scales. Nevertheless, the mobility of meteoric 10Be in soil systems makes translation of meteoric 10Be inventories into erosion and deposition rates difficult. Here we present a coupled soil hillslope model, Be2D, that is applied to synthetic and real topography to address the following three research questions: (i) What is the influence of vertical meteoric 10Be mobility, caused by chemical mobility, clay translocation and bioturbation, on its lateral redistribution over the soilscape? (ii) How does vertical mobility influence erosion rates and soil residence times inferred from meteoric 10Be inventories? (iii) To what extent can a tracer with a half-life of 1.36 Myr be used to distinguish between natural and human-disturbed soil redistribution rates? The model architecture of Be2D is designed to answer these research questions. Be2D is a dynamic model including physical processes such as soil formation, physical weathering, clay migration, bioturbation, creep, overland flow and tillage erosion. Pathways of meteoric 10Be mobility are simulated using a two-step approach which is updated each timestep: first, advective and diffusive mobility of meteoric 10Be is simulated within the soil profile, and second, lateral redistribution due to lateral soil fluxes is calculated. The performance and functionality of the model is demonstrated through a number of synthetic and real model runs using existing datasets of meteoric 10Be from case studies in the southeastern US. Brute-force optimisation allows parameters to be reliably constrained, resulting in a good agreement between simulated and observed meteoric 10Be concentrations and inventories. Our simulations suggest that meteoric 10Be can be used as a tracer to unravel human impact on soil fluxes when soils have a high affinity to sorb meteoric 10Be.
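
    The two-step update lends itself to a compact operator-splitting sketch: within each timestep, first move 10Be vertically within every column, then exchange soil (and its 10Be) laterally between neighbouring columns. Everything below (grid sizes, diffusivity, deposition rate, flux fraction) is invented for illustration and is far simpler than Be2D's actual process set.

        import numpy as np

        nx, nz = 50, 20                     # columns along the slope, depth cells
        C = np.zeros((nx, nz))              # 10Be concentration per cell
        D, v, dt = 0.1, 0.05, 1.0           # diffusivity, downward advection, step
        flux = 0.01                         # downslope soil flux fraction per step

        for step in range(100):
            # step 1: vertical mobility (explicit upwind advection-diffusion)
            up = np.roll(C, 1, axis=1);  up[:, 0] = C[:, 0]
            dn = np.roll(C, -1, axis=1); dn[:, -1] = C[:, -1]
            C += dt * (D * (up - 2 * C + dn) - v * (C - up))
            C[:, 0] += 0.01                 # steady meteoric deposition at surface
            # step 2: lateral redistribution of the mobile top layer
            move = flux * C[:-1, 0]
            C[:-1, 0] -= move
            C[1:, 0] += move
        print(f"total inventory: {C.sum():.3f}")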

  5. A chaotic cryptosystem for images based on Henon and Arnold cat map.

    PubMed

    Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force and differential attacks. The evaluated running times for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
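
    The Arnold cat map used for the permutation stage is easy to sketch: it shuffles pixel coordinates with the matrix [[1, 1], [1, 2]] modulo the image size, which is a bijection, so no information is lost. The snippet below is a generic illustration of such a permutation, not the paper's exact keying of it.

        import numpy as np

        def arnold_cat(img, iterations=1):
            """Apply the cat-map coordinate shuffle to a square image."""
            n = img.shape[0]
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = img
            for _ in range(iterations):
                out = out[(x + y) % n, (x + 2 * y) % n]
            return out

        img = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
        scrambled = arnold_cat(img, 5)
        # a pure permutation: pixel values are preserved, only relocated
        assert np.array_equal(np.sort(img, axis=None), np.sort(scrambled, axis=None))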

  6. Method to measure efficiently rare fluctuations of turbulence intensity for turbulent-laminar transitions in pipe flows

    NASA Astrophysics Data System (ADS)

    Nemoto, Takahiro; Alexakis, Alexandros

    2018-02-01

    The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We here propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk part of a cumulative probability function) and (ii) the measurement of rare fluctuations (the tail part of the probability function) by employing dynamics in which a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for high Reynolds numbers up to Re = 2500, where getting enough statistics by brute-force calculations is difficult. The method uses a simple procedure of changing the Reynolds number that can be applied even to experiments.

  7. Diagnosing the decline in pharmaceutical R&D efficiency.

    PubMed

    Scannell, Jack W; Blanckley, Alex; Boldon, Helen; Warrington, Brian

    2012-03-01

    The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.

  8. Verification Test of Automated Robotic Assembly of Space Truss Structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1995-01-01

    A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level and continued development at an enhanced level is warranted.

  9. Development and verification testing of automation and robotics for assembly of space structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1993-01-01

    A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.

  10. Three recipes for improving the image quality with optical long-baseline interferometers: BFMC, LFF, and DPSC

    NASA Astrophysics Data System (ADS)

    Millour, Florentin A.; Vannier, Martin; Meilland, Anthony

    2012-07-01

    We present here three recipes for getting better images with optical interferometers. Two of them, Low-Frequencies Filling and Brute-Force Monte Carlo, were used in our participation in the Interferometry Beauty Contest this year and can be applied to classical imaging using V2 and closure phases. These two additions to image reconstruction provide a way of obtaining more reliable images. The last recipe is similar in principle to the self-calibration technique used in radio-interferometry. We also call it self-calibration, but it uses the wavelength-differential phase as a proxy of the object phase to build up a full-featured complex visibility set of the observed object. This technique needs a first image-reconstruction run with an available software package, using closure phases and squared visibilities only. We have used it for two scientific papers with great success. We discuss here the pros and cons of this imaging technique.

  11. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
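
    The saving can be illustrated with a small sketch: if the observation step is p/q of the Hamiltonian period, only q distinct fractional-period propagators ever occur, so they (and the full-period product) can be computed once and reused, instead of rebuilding every propagator by brute force. The piecewise-constant Hamiltonians, the p = 2, q = 5 choice, and the 0.1 time unit are all invented for illustration; the paper's recursive time-shifting scheme is more refined than this plain caching.

        import numpy as np
        from functools import lru_cache

        dim, p, q, dt = 4, 2, 5, 0.1        # observation step = p/q of a period
        rng = np.random.default_rng(0)
        # one Hermitian Hamiltonian per fractional interval of the period
        Hs = []
        for _ in range(q):
            M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
            Hs.append((M + M.conj().T) / 2)

        @lru_cache(maxsize=None)
        def U_frac(k):
            """Propagator over the k-th fraction (1/q) of one period."""
            w, V = np.linalg.eigh(Hs[k])
            return V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

        @lru_cache(maxsize=None)
        def U_period():
            U = np.eye(dim, dtype=complex)
            for k in range(q):
                U = U_frac(k) @ U           # time-ordered product over one period
            return U

        def U_after(m):
            """Propagator after m observation steps of p/q periods each."""
            full, rem = divmod(m * p, q)
            U = np.linalg.matrix_power(U_period(), full)
            for k in range(rem):            # leftover fractional intervals
                U = U_frac(k) @ U
            return U

        print(np.round(U_after(3), 3))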

  12. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters (such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse) influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
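
    In pseudocode terms the approach is a plain grid search: simulate every parameter combination, store the outputs in a database, and query it afterwards. The toy 'model' below is a stand-in one-liner and the functional criterion is arbitrary; a real study would run a full conductance-based simulation at each grid point.

        import itertools
        import numpy as np

        def firing_rate(g_na, g_k, g_leak):
            # stand-in for a full conductance-based neuron simulation
            return max(0.0, 50 * g_na - 30 * g_k - 10 * g_leak)

        grid = np.linspace(0.1, 1.0, 10)     # 10 values per parameter
        database = []
        for g_na, g_k, g_leak in itertools.product(grid, repeat=3):
            database.append((g_na, g_k, g_leak, firing_rate(g_na, g_k, g_leak)))

        # query the database for parameter sets meeting a functional criterion
        functional = [row for row in database if 5.0 <= row[3] <= 15.0]
        print(f"{len(functional)} of {len(database)} parameter sets are 'functional'")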

  13. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy stand to reap great benefits.

  14. A Chaotic Cryptosystem for Images Based on Henon and Arnold Cat Map

    PubMed Central

    Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force and differential attacks. The evaluated running times for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications. PMID:25258724

  15. g_contacts: Fast contact search in bio-molecular ensemble data

    NASA Astrophysics Data System (ADS)

    Blau, Christian; Grubmüller, Helmut

    2013-12-01

    Short-range interatomic interactions govern many bio-molecular processes. Therefore, identifying close interaction partners in ensemble data is an essential task in structural biology and computational biophysics. A contact search can be cast as a typical range search problem for which efficient algorithms have been developed. However, none of those has yet been adapted to the context of macromolecular ensembles, particularly in a molecular dynamics (MD) framework. Here a set-decomposition algorithm is implemented which detects all contacting atoms or residues in at most O(N log(N)) run-time, in contrast to the O(N^2) complexity of a brute-force approach.
    Catalogue identifier: AEQA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQA_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 8945
    No. of bytes in distributed program, including test data, etc.: 981604
    Distribution format: tar.gz
    Programming language: C99
    Computer: PC
    Operating system: Linux
    RAM: ≈ size of one input frame
    Classification: 3, 4.14
    External routines: Gromacs 4.6 [1]
    Nature of problem: Finding atoms or residues that are closer to one another than a given cut-off.
    Solution method: Excluding distant atoms from distance calculations by decomposing the given set of atoms into disjoint subsets.
    Running time: ≤ O(N log(N))
    References: [1] S. Pronk, S. Pall, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess and E. Lindahl, Gromacs 4.5: a high-throughput and highly parallel open source molecular simulation toolkit, Bioinformatics 29 (7) (2013).
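
    A cell-list search is the textbook way to beat the all-pairs scan and conveys the flavour of the speed-up (g_contacts itself uses a set-decomposition scheme; this sketch shows only the general principle): bin atoms into boxes no smaller than the cut-off, then test each atom only against its own and neighbouring boxes.

        import numpy as np
        from collections import defaultdict

        def contacts(coords, cutoff):
            """Find all pairs closer than cutoff via a cell list."""
            cells = defaultdict(list)
            for i, xyz in enumerate(coords):
                cells[tuple((xyz // cutoff).astype(int))].append(i)
            pairs = []
            for (cx, cy, cz), members in cells.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                                for i in members:
                                    # i < j ensures each pair is counted once
                                    if i < j and np.linalg.norm(coords[i] - coords[j]) < cutoff:
                                        pairs.append((i, j))
            return pairs

        coords = np.random.default_rng(2).uniform(0, 10.0, size=(1000, 3))
        print(len(contacts(coords, cutoff=1.2)), "contact pairs")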

  16. Polydopamine and eumelanin molecular structures investigated with ab initio calculations

    PubMed Central

    Chen, Chun-Teh; Martin-Martinez, Francisco J.; Jung, Gang Seob

    2017-01-01

    A set of computational methods that contains a brute-force algorithmic generation of chemical isomers, molecular dynamics (MD) simulations, and density functional theory (DFT) calculations is reported and applied to investigate nearly 3000 probable molecular structures of polydopamine (PDA) and eumelanin. All probable early-polymerized 5,6-dihydroxyindole (DHI) oligomers, ranging from dimers to tetramers, have been systematically analyzed to find the most stable geometry connections as well as to propose a set of molecular models that represents the chemically diverse nature of PDA and eumelanin. Our results indicate that more planar oligomers have a tendency to be more stable. This finding is in good agreement with recent experimental observations, which suggested that PDA and eumelanin are composed of nearly planar oligomers that appear to be stacked together via π–π interactions to form graphite-like layered aggregates. We also show that there is a group of tetramers notably more stable than the others, implying that even though there is an inherent chemical diversity in PDA and eumelanin, the molecular structures of the majority of the species are quite repetitive. Our results also suggest that larger oligomers are less likely to form. This observation is also consistent with experimental measurements, supporting the existence of small oligomers instead of large polymers as main components of PDA and eumelanin. In summary, this work brings an insight into the controversial structure of PDA and eumelanin, explaining some of the most important structural features, and providing a set of molecular models for more accurate modeling of eumelanin-like materials. PMID:28451292

  17. Spatial-temporal Variations and Source Apportionment of typical Heavy Metals in Beijing-Tianjin-Hebei (BTH) region of China Based on Localized Air Pollutants Emission Inventory and WRF-CMAQ modelling

    NASA Astrophysics Data System (ADS)

    Tian, H.; Liu, S.; Zhu, C.; Liu, H.; Wu, B.

    2017-12-01

    Anthropogenic atmospheric emissions of air pollutants have caused worldwide concern due to their adverse effects on human health and the ecosystem. By determining the best available emission factors for varied source categories, we established, for the first time, comprehensive atmospheric emission inventories of hazardous air pollutants, including 12 typical toxic heavy metals (Hg, As, Se, Pb, Cd, Cr, Ni, Sb, Mn, Co, Cu, and Zn), from primary anthropogenic activities in the Beijing-Tianjin-Hebei (BTH) region of China for the year 2012. The annual emissions of these pollutants were allocated on a high-resolution 9 km × 9 km spatial grid with ArcGIS methodology and surrogate indexes, such as regional population and gross domestic product (GDP). Notably, the total heavy metal emissions from this region represented about 10.9% of the Chinese national total emissions. The areas with high emissions of heavy metals were mainly concentrated in Tangshan, Shijiazhuang, Handan and Tianjin. Further, the WRF-CMAQ modeling system was applied to simulate the regional concentrations of heavy metals to explore their spatial-temporal variations, and the source apportionment of these heavy metals in the BTH region was performed using the Brute-Force method. Finally, integrated countermeasures were proposed to minimize the final air pollutant discharge on account of the current and future demand of energy saving and pollution reduction in China.
    Keywords: heavy metals; particulate matter; emission inventory; CMAQ model; source apportionment
    Acknowledgment: This work was funded by the National Natural Science Foundation of China (21377012 and 21177012) and the Trail Special Program of Research on the Cause and Control Technology of Air Pollution under the National Key Research and Development Plan of China (2016YFC0201501).

  18. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
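
    The division of labour described above can be mimicked classically in a few lines: a genetic algorithm surveys the full space, and a shrinking-step local search stands in for the annealer that refines within the narrowed region. The objective function and all hyperparameters below are invented placeholders, not the satellite-positioning cost function.

        import random

        def cost(x):
            # arbitrary stand-in objective with a known global minimum
            return sum((xi - 3.21) ** 2 for xi in x)

        def ga(pop_size=40, dims=6, gens=60):
            """Genetic algorithm: explore the full space, return best candidate."""
            pop = [[random.uniform(-10, 10) for _ in range(dims)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=cost)
                elite = pop[: pop_size // 4]
                pop = elite + [
                    [random.choice((a, b)) + random.gauss(0, 0.3)   # crossover + mutation
                     for a, b in zip(*random.sample(elite, 2))]
                    for _ in range(pop_size - len(elite))
                ]
            return min(pop, key=cost)

        def refine(x, iters=2000, t0=1.0):
            """Shrinking-step local search standing in for the annealer."""
            best = x[:]
            for i in range(iters):
                t = t0 * (1 - i / iters) + 1e-3
                cand = [xi + random.gauss(0, t) for xi in best]
                if cost(cand) < cost(best):
                    best = cand
            return best

        seed = ga()
        print(f"GA cost {cost(seed):.4f} -> refined {cost(refine(seed)):.6f}")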

  19. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  20. Press touch code: A finger press based screen size independent authentication scheme for smart devices.

    PubMed

    Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z

    2017-01-01

    Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing at a higher rate. In parallel, the security-related threats and attacks on these devices are also increasing at a greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)-a.k.a., Force Touch in Apple's MacBook, Apple Watch, ZTE's Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on-is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme.

  1. Press touch code: A finger press based screen size independent authentication scheme for smart devices

    PubMed Central

    Ranak, M. S. A. Noman; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z.

    2017-01-01

    Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing at a higher rate. In parallel, the security-related threats and attacks on these devices are also increasing at a greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)—a.k.a., Force Touch in Apple’s MacBook, Apple Watch, ZTE’s Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on—is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme. PMID:29084262

  2. A direct sensitivity approach to predict hourly ozone resulting from compliance with the National Ambient Air Quality Standard.

    PubMed

    Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian

    2013-03-05

    In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the Community Multiscale Air Quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOC emissions. As seen in previous studies, the ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50%, allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
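
    A caricature of the stepwise first-order adjustment reads as follows: hourly ozone is shifted by a modeled sensitivity times a slice of the emission cut, re-evaluating the sensitivity after each slice so that the total perturbation can exceed the roughly 50% range where one linear step is trustworthy. The linear sensitivity function and all numbers are invented, not CMAQ-HDDM output.

        import numpy as np

        def adjust_ozone(o3, sens, total_cut, step=0.25):
            """Apply an emission cut in slices, re-evaluating sensitivity each time.

            o3        : hourly ozone at a monitor (ppb)
            sens      : callable giving dO3/d(emission fraction) at current O3
            total_cut : total emission reduction fraction (may exceed 0.5)
            """
            remaining = total_cut
            o3 = o3.copy()
            while remaining > 1e-9:
                cut = min(step, remaining)
                o3 += sens(o3) * (-cut)      # first-order shift for this slice
                remaining -= cut
            return o3

        hourly = np.array([30.0, 45.0, 60.0, 75.0])   # ppb, illustrative
        sens = lambda o3: 0.4 * (o3 - 20.0)           # higher O3 -> larger response
        print(adjust_ozone(hourly, sens, total_cut=0.8))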

  3. The physics of bat biosonar

    NASA Astrophysics Data System (ADS)

    Müller, Rolf

    2011-10-01

    Bats have evolved one of the most capable and at the same time parsimonious sensory systems found in nature. Using active and passive biosonar as a major - and often sufficient - far sense, different bat species are able to master a wide variety of sensory tasks under very dissimilar sets of constraints. Given the limited computational resources of the bat's brain, this performance is unlikely to be explained as the result of brute-force, black-box-style computations. Instead, the animals must rely heavily on in-built physics knowledge in order to ensure that all required information is encoded reliably into the acoustic signals received at the ear drum. To this end, bats can manipulate the emitted and received signals in the physical domain: By diffracting the outgoing and incoming ultrasonic waves with intricate baffle shapes (i.e., noseleaves and outer ears), the animals can generate selectivity filters that are joint functions of space and frequency. To achieve this, bats employ structural features such as resonance cavities and diffracting ridges. In addition, some bat species can dynamically adjust the shape of their selectivity filters through muscular actuation.

  4. A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender

    2018-03-01

    As communication systems enable us to transmit large chunks of data, both as text and as images, there is a need to explore algorithms which can provide higher security without significantly increasing the time complexity. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is practical, since it has the lowest time complexity among its counterparts.
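
    The RC4 half of the scheme is standard and compact enough to show in full: a key-scheduling pass permutes a 256-byte state, and the generation loop emits one keystream byte per step, which is XORed with the data during diffusion. In the proposed scheme the key bytes would come from the intertwining chaotic map; here they are hard-coded placeholders.

        def rc4_keystream(key, n):
            S = list(range(256))
            j = 0
            for i in range(256):              # key-scheduling algorithm (KSA)
                j = (j + S[i] + key[i % len(key)]) % 256
                S[i], S[j] = S[j], S[i]
            i = j = 0
            out = []
            for _ in range(n):                # pseudo-random generation (PRGA)
                i = (i + 1) % 256
                j = (j + S[i]) % 256
                S[i], S[j] = S[j], S[i]
                out.append(S[(S[i] + S[j]) % 256])
            return out

        key = [0x1A, 0x2B, 0x3C, 0x4D]        # placeholder; the paper derives the
        stream = rc4_keystream(key, 16)       # key from the chaotic map instead
        plaintext = bytes(range(16))          # toy data
        cipher = bytes(p ^ k for p, k in zip(plaintext, stream))
        # XOR with the same keystream inverts the diffusion step
        assert bytes(c ^ k for c, k in zip(cipher, stream)) == plaintext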

  5. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
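
    The reciprocal best alignment idea reduces to a few lines once pairwise scores exist: protein a in genome A and protein b in genome B are called orthologs when each is the other's best hit. The bit scores below are invented stand-ins for the alignment results a tool like Proteinortho would compute.

        # made-up bit scores in both directions (query -> {target: score})
        hits = {
            "gA:p1": {"gB:q1": 310.0, "gB:q2": 55.0},
            "gA:p2": {"gB:q1": 60.0, "gB:q2": 280.0},
            "gB:q1": {"gA:p1": 305.0, "gA:p2": 58.0},
            "gB:q2": {"gA:p1": 50.0, "gA:p2": 275.0},
        }

        def best(query):
            """Best-scoring hit of a protein in the other genome."""
            return max(hits[query], key=hits[query].get)

        orthologs = [
            (a, b) for a in ("gA:p1", "gA:p2")
            for b in (best(a),)
            if best(b) == a                  # reciprocity check
        ]
        print(orthologs)   # [('gA:p1', 'gB:q1'), ('gA:p2', 'gB:q2')]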

  6. Clustering biomolecular complexes by residue contacts similarity.

    PubMed

    Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2012-07-01

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts, the fraction of common contacts, and compare it with the most used similarity measure of the protein docking community, interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
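
    The measure itself is essentially one line: represent each model by its set of residue-residue contacts and score the overlap. Since two models may have different numbers of contacts, the fraction is asymmetric and both directions can be kept, as in this minimal sketch with made-up contact sets.

        def fcc(contacts_a, contacts_b):
            """Fraction of contacts in model A that are also in model B."""
            return len(contacts_a & contacts_b) / len(contacts_a)

        a = {("A:12", "B:33"), ("A:15", "B:36"), ("A:20", "B:40")}
        b = {("A:12", "B:33"), ("A:15", "B:36"), ("A:22", "B:41")}
        print(fcc(a, b), fcc(b, a))   # asymmetric in general; here 2/3 both ways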

  7. Towards computational materials design from first principles using alchemical changes and derivatives.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Lilienfeld-Toal, Otto Anatole

    2010-11-01

    The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy against the computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first-principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.

  8. Can genetic algorithms help virus writers reshape their creations and avoid detection?

    NASA Astrophysics Data System (ADS)

    Abu Doush, Iyad; Al-Saleh, Mohammed I.

    2017-11-01

    Different attack and defence techniques have evolved over time as actions and reactions between the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The Antivirus (AV) is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures. Basically, it claims a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to the virus so that it will appear normal when scanned by the AV. Brute-force search through all possible changes can be a computationally expensive task. Alternatively, this paper tries to apply a Genetic Algorithm to solving such a problem. Our proposed algorithm is tested on seven different malware instances. The results show that in all the tested malware instances only a small change in each instance was good enough to bypass the AV.

  9. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    NASA Astrophysics Data System (ADS)

    Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan

    2017-12-01

    As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.

  10. Predicting climate change: Uncertainties and prospects for surmounting them

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2008-03-01

    General circulation models (GCMs) are among the most detailed and sophisticated models of natural phenomena in existence. Still, the lack of robust and efficient subgrid-scale parametrizations for GCMs, along with the inherent sensitivity to initial data and the complex nonlinearities involved, present a major and persistent obstacle to narrowing the range of estimates for end-of-century warming. Estimating future changes in the distribution of climatic extrema is even more difficult. Brute-force tuning the large number of GCM parameters does not appear to help reduce the uncertainties. Andronov and Pontryagin (1937) proposed structural stability as a way to evaluate model robustness. Unfortunately, many real-world systems proved to be structurally unstable. We illustrate these concepts with a very simple model for the El Niño-Southern Oscillation (ENSO). Our model is governed by a differential delay equation with a single delay and periodic (seasonal) forcing. Like many of its more or less detailed and realistic precursors, this model exhibits a Devil's staircase. We study the model's structural stability, describe the mechanisms of the observed instabilities, and connect our findings to ENSO phenomenology. In the model's phase-parameter space, regions of smooth dependence on parameters alternate with rough, fractal ones. We then apply the tools of random dynamical systems and stochastic structural stability to the circle map and a torus map. The effect of noise with compact support on these maps is fairly intuitive: it is the most robust structures in phase-parameter space that survive the smoothing introduced by the noise. The nature of the stochastic forcing matters, thus suggesting that certain types of stochastic parametrizations might be better than others in achieving GCM robustness. This talk represents joint work with M. Chekroun, E. Simonnet and I. Zaliapin.

  11. Chemical reactions induced by oscillating external fields in weak thermal environments

    NASA Astrophysics Data System (ADS)

    Craven, Galen T.; Bartsch, Thomas; Hernandez, Rigoberto

    2015-02-01

    Chemical reaction rates must increasingly be determined in systems that evolve under the control of external stimuli. In these systems, when a reactant population is induced to cross an energy barrier through forcing from a temporally varying external field, the transition state that the reaction must pass through during the transformation from reactant to product is no longer a fixed geometric structure, but is instead time-dependent. For a periodically forced model reaction, we develop a recrossing-free dividing surface that is attached to a transition state trajectory [T. Bartsch, R. Hernandez, and T. Uzer, Phys. Rev. Lett. 95, 058301 (2005)]. We have previously shown that for single-mode sinusoidal driving, the stability of the time-varying transition state directly determines the reaction rate [G. T. Craven, T. Bartsch, and R. Hernandez, J. Chem. Phys. 141, 041106 (2014)]. Here, we extend our previous work to the case of multi-mode driving waveforms. Excellent agreement is observed between the rates predicted by stability analysis and rates obtained through numerical calculation of the reactive flux. We also show that the optimal dividing surface and the resulting reaction rate for a reactive system driven by weak thermal noise can be approximated well using the transition state geometry of the underlying deterministic system. This agreement persists as long as the thermal driving strength is less than the order of that of the periodic driving. The power of this result is its simplicity. The surprising accuracy of the time-dependent noise-free geometry for obtaining transition state theory rates in chemical reactions driven by periodic fields reveals the dynamics without requiring the cost of brute-force calculations.

  12. The force pyramid: a spatial analysis of force application during virtual reality brain tumor resection.

    PubMed

    Azarnoush, Hamed; Siar, Samaneh; Sawaya, Robin; Zhrani, Gmaan Al; Winkler-Schwartz, Alexander; Alotaibi, Fahad Eid; Bugdadi, Abdulgadir; Bajunaid, Khalid; Marwa, Ibrahim; Sabbagh, Abdulrahman Jafar; Del Maestro, Rolando F

    2017-07-01

    OBJECTIVE Virtual reality simulators allow development of novel methods to analyze neurosurgical performance. The concept of a force pyramid is introduced as a Tier 3 metric with the ability to provide visual and spatial analysis of 3D force application by any instrument used during simulated tumor resection. This study was designed to answer 3 questions: 1) Do study groups have distinct force pyramids? 2) Do handedness and ergonomics influence force pyramid structure? 3) Are force pyramids dependent on the visual and haptic characteristics of simulated tumors? METHODS Using a virtual reality simulator, NeuroVR (formerly NeuroTouch), ultrasonic aspirator force application was continually assessed during resection of simulated brain tumors by neurosurgeons, residents, and medical students. The participants performed simulated resections of 18 simulated brain tumors with different visual and haptic characteristics. The raw data, namely, coordinates of the instrument tip as well as contact force values, were collected by the simulator. To provide a visual and qualitative spatial analysis of forces, the authors created a graph, called a force pyramid, representing force sum along the z-coordinate for different xy coordinates of the tool tip. RESULTS Sixteen neurosurgeons, 15 residents, and 84 medical students participated in the study. Neurosurgeon, resident and medical student groups displayed easily distinguishable 3D "force pyramid fingerprints." Neurosurgeons had the lowest force pyramids, indicating application of the lowest forces, followed by resident and medical student groups. Handedness, ergonomics, and visual and haptic tumor characteristics resulted in distinct well-defined 3D force pyramid patterns. CONCLUSIONS Force pyramid fingerprints provide 3D spatial assessment displays of instrument force application during simulated tumor resection. Neurosurgeon force utilization and ergonomic data form a basis for understanding and modulating resident force application and improving patient safety during tumor resection.

  13. Loud and Clear

    ERIC Educational Resources Information Center

    Meier, Deborah

    2009-01-01

    In this article, the author talks about Ted Sizer and describes him as a "schoolman," a Mr. Chips figure with all the romance that surrounded that image. Accustomed to models of brute power, parents, teachers, bureaucrats, and even politicians were attracted to his message of common decency. There's a way of talking about, and to, school people…

  14. Individual Choice and Unequal Participation in Higher Education

    ERIC Educational Resources Information Center

    Voigt, Kristin

    2007-01-01

    Does the unequal participation of non-traditional students in higher education indicate social injustice, even if it can be traced back to individuals' choices? Drawing on luck egalitarian approaches, this article suggests that an answer to this question must take into account the effects of unequal brute luck on educational choices. I use a…

  15. Computing the binding affinity of a ligand buried deep inside a protein with the hybrid steered molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villarreal, Oscar D.; Yu, Lili; Department of Laboratory Medicine, Yancheng Vocational Institute of Health Sciences, Yancheng, Jiangsu 224006

    Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined, with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting the small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing back the cavity, we can avoid the problem of error amplification by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute-force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.

  16. Confronting the Neo-Liberal Brute: Reflections of a Higher Education Middle-Level Manager

    ERIC Educational Resources Information Center

    Maistry, S. M.

    2012-01-01

    The higher education scenario in South Africa is fraught with tensions and contradictions. Publicly funded Higher Education Institutions (HEIs) face a particular dilemma. They are expected to fulfill a social mandate which requires a considered response to the needs of the communities in which they are located while simultaneously aspiring for…

  17. The Movable Type Method Applied to Protein-Ligand Binding.

    PubMed

    Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M

    2013-12-10

    Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and the creation of two databases: one with their associated pairwise distance-dependent energies and another with the probabilities of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases coupled with the principles of statistical mechanics allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (here, the protein active site) in one shot, without resorting to brute-force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the free energy surface, eliminating the need to estimate the enthalpy and entropy components individually. Finally, low free energy structures can be obtained via a free energy minimization procedure, yielding all low free energy poses on a given free energy surface. Besides revolutionizing the protein-ligand docking and scoring problem, this approach can be utilized in a wide range of applications in computational biology which involve the computation of free energies for systems with extensive phase spaces, including protein folding, protein-protein docking and protein design.

  18. Asynchronous partial contact motion due to internal resonance in multiple degree-of-freedom rotordynamics

    NASA Astrophysics Data System (ADS)

    Shaw, A. D.; Champneys, A. R.; Friswell, M. I.

    2016-08-01

    Sudden onset of violent chattering or whirling rotor-stator contact motion in rotational machines can cause significant damage in many industrial applications. It is shown that internal resonance can lead to the onset of bouncing-type partial contact motion away from primary resonances. These partial contact limit cycles can involve any two modes of an arbitrarily high degree-of-freedom system, and can be seen as an extension of a synchronization condition previously reported for a single disc system. The synchronization formula predicts multiple drive speeds, corresponding to different forms of mode-locked bouncing orbits. These results are backed up by a brute-force bifurcation analysis which reveals the numerical existence of the corresponding family of bouncing orbits at supercritical drive speeds, provided the damping is sufficiently low. The numerics reveal many overlapping families of solutions, which leads to significant multi-stability of the response at given drive speeds. Further, secondary bifurcations can also occur within each family, altering the nature of the response and ultimately leading to chaos. Stiffness and damping of the stator are shown to have a large effect on the number and nature of the partial contact solutions, illustrating the extreme sensitivity that would be observed in practice.

  19. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
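
    As a toy illustration of the brute-force alternative mentioned above, the sketch below sweeps willingness-to-pay values over a set of hypothetical interventions (names and figures invented for the example) and reports the intervals on which each intervention maximizes the net monetary benefit:

        import numpy as np

        # Hypothetical interventions: (name, expected cost, expected effectiveness in QALYs).
        interventions = [("no treatment", 0.0, 0.0),
                         ("drug A", 1500.0, 0.3),
                         ("drug B", 9000.0, 0.8)]

        # Brute-force CEA: sweep willingness-to-pay (lam) and pick the intervention
        # maximising net monetary benefit NB = lam * effectiveness - cost.
        lambdas = np.linspace(0, 30000, 3001)
        best = []
        for lam in lambdas:
            nb = [lam * eff - cost for _, cost, eff in interventions]
            best.append(int(np.argmax(nb)))

        # Report the willingness-to-pay intervals on which each intervention is optimal.
        changes = [0] + [i for i in range(1, len(best)) if best[i] != best[i - 1]]
        for i, start in enumerate(changes):
            end = changes[i + 1] - 1 if i + 1 < len(changes) else len(lambdas) - 1
            name = interventions[best[start]][0]
            print(f"WTP {lambdas[start]:8.0f} .. {lambdas[end]:8.0f} -> {name}")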

  20. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning the data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  1. Assessment of short-term PM2.5-related mortality due to different emission sources in the Yangtze River Delta, China

    NASA Astrophysics Data System (ADS)

    Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming

    2015-12-01

    Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated by using PC-based human health benefits software. The economic loss is assessed by using the willingness to pay (WTP) method. The contributions of each region, sector, and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than primary PM2.5 in summer.
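
    The brute-force attribution used here amounts to zeroing out one source at a time and re-running the model. A minimal sketch, with an invented linear response standing in for the full air-quality simulation:

        # Illustrative only: a toy linear model of city-average PM2.5 (ug/m3);
        # a real application would rerun the chemistry-transport model each time.
        def pm25_model(emissions):
            coeff = {"industry": 0.8, "residential": 0.5, "on-road": 0.3, "power": 0.2}
            return sum(coeff[s] * e for s, e in emissions.items())

        baseline_emissions = {"industry": 20.0, "residential": 15.0,
                              "on-road": 10.0, "power": 8.0}  # hypothetical units
        baseline = pm25_model(baseline_emissions)

        # Brute-force (zero-out) contribution: remove one sector at a time and rerun.
        for sector in baseline_emissions:
            perturbed = dict(baseline_emissions, **{sector: 0.0})
            contribution = baseline - pm25_model(perturbed)
            print(f"{sector:12s} contributes {contribution:5.1f} ug/m3 "
                  f"({100 * contribution / baseline:4.1f}%)")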

  2. Large-scale detection of repetitions

    PubMed Central

    Smyth, W. F.

    2014-01-01

    Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
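
    For contrast with the linear-time machinery described above, a brute-force search for repetitions (squares such as 'CGACGA') takes only a few lines, at O(n^3) cost:

        def squares(x):
            """Brute-force O(n^3) search for repetitions (squares) in a string."""
            n = len(x)
            found = []
            for i in range(n):
                for p in range(1, (n - i) // 2 + 1):   # candidate period length
                    if x[i:i + p] == x[i + p:i + 2 * p]:
                        found.append((i, x[i:i + 2 * p]))
            return found

        # Finds (0, 'ACGACG'), (1, 'CGACGA') and (7, 'TT'):
        print(squares("ACGACGATT"))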

  3. Towards identification of relevant variables in the observed aerosol optical depth bias between MODIS and AERONET observations

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Lary, D. J.; Gencaga, D.; Albayrak, A.; Wei, J.

    2013-08-01

    Measurements made by satellite remote sensing, the Moderate Resolution Imaging Spectroradiometer (MODIS), and the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the two datasets' aerosol optical depth measurements shows that there are biases between the two data products. In this paper, we present a general framework for identifying the set of variables responsible for the observed bias. Possible factors influencing the bias include measurement conditions such as the solar and sensor zenith angles, the solar and sensor azimuths, scattering angles, and surface reflectivity at the various measured wavelengths. Specifically, we performed the analysis for the Aqua-Land remote sensing data set and used a machine learning technique, a neural network in this case, to perform multivariate regression between the ground truth and the training data sets. Finally, we used the mutual information between the observed and predicted values as the measure of similarity to identify the most relevant set of variables. The search is a brute-force method, as we have to consider all possible combinations. The computation is a huge number-crunching exercise, which we implemented as a job-parallel program.
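
    A minimal sketch of that brute-force search, with synthetic data in place of the MODIS/AERONET variables: every variable subset is fitted by least squares and scored by the mutual information between held-out predictions and observations (the histogram MI estimator and all sizes are illustrative choices):

        import itertools
        import numpy as np

        def mutual_info(a, b, bins=16):
            # Histogram estimate of the mutual information between two 1-D samples.
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

        # Hypothetical stand-in data: five candidate variables, of which only
        # variables 0 and 3 actually drive the "observed bias" y.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5))
        y = 2.0 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=2000)
        Xtr, Xte, ytr, yte = X[:1000], X[1000:], y[:1000], y[1000:]

        # Brute force over all variable subsets: fit on one half, score the
        # predicted-vs-observed mutual information on the held-out half.
        best = (-np.inf, None)
        for k in range(1, X.shape[1] + 1):
            for subset in itertools.combinations(range(X.shape[1]), k):
                A = np.column_stack([Xtr[:, subset], np.ones(len(ytr))])
                coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
                pred = np.column_stack([Xte[:, subset], np.ones(len(yte))]) @ coef
                mi = mutual_info(pred, yte)
                if mi > best[0]:
                    best = (mi, subset)
        print("most relevant variables:", best[1])  # expected: (0, 3)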

  4. Security and matching of partial fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.

    2004-08-01

    Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute-force attacks. The described matching approach has been tested on FVC2002's DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).

  5. Saturn Apollo Program

    NASA Image and Video Library

    1967-07-28

    This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.

  6. Saturn Apollo Program

    NASA Image and Video Library

    1965-05-01

    This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.

  7. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
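
    The assignment step itself is a one-call affair in SciPy; the sketch below pairs randomly loaded lattice sites with a 3×3 target using squared move distance as the cost (lattice size and loading probability follow the abstract, everything else is invented):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(1)

        # Hypothetical stand-in for the experiment: atoms probabilistically loaded
        # on a 7x7 lattice, to be relocated onto a centred 3x3 target lattice.
        loaded = np.argwhere(rng.random((7, 7)) < 0.5)          # occupied (row, col)
        targets = np.array([(r, c) for r in (2, 3, 4) for c in (2, 3, 4)])

        # Cost matrix: squared move distance from each loaded atom to each target.
        cost = ((loaded[:, None, :] - targets[None, :, :]) ** 2).sum(axis=2)

        # Hungarian matching: minimum-total-distance pairing in O(n^3),
        # instead of a brute-force search over all possible assignments.
        atom_idx, target_idx = linear_sum_assignment(cost)
        for a, t in zip(atom_idx, target_idx):
            print(f"atom at {tuple(loaded[a])} -> target {tuple(targets[t])}")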

  8. Mapping PDB chains to UniProtKB entries.

    PubMed

    Martin, Andrew C R

    2005-12-01

    UniProtKB/SwissProt is the main resource for detailed annotations of protein sequences. This database provides a jumping-off point to many other resources through the links it provides. Among others, these include other primary databases, secondary databases, the Gene Ontology and OMIM. While a large number of links are provided to Protein Data Bank (PDB) files, obtaining a regularly updated mapping between UniProtKB entries and PDB entries at the chain or residue level is not straightforward. In particular, there is no regularly updated resource which allows a UniProtKB/SwissProt entry to be identified for a given residue of a PDB file. We have created a completely automatically maintained database which maps PDB residues to residues in UniProtKB/SwissProt and UniProtKB/trEMBL entries. The protocol uses links from PDB to UniProtKB, from UniProtKB to PDB and a brute-force sequence scan to resolve PDB chains for which no annotated link is available. Finally the sequences from PDB and UniProtKB are aligned to obtain a residue-level mapping. The resource may be queried interactively or downloaded from http://www.bioinf.org.uk/pdbsws/.

  9. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    PubMed Central

    2011-01-01

    Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results: The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
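
    The reciprocal best alignment heuristic at the core of the method can be sketched in a few lines; the alignment scores below are invented stand-ins for real BLAST bit scores:

        # Minimal sketch of the reciprocal best alignment heuristic that
        # Proteinortho extends; all proteins and scores here are hypothetical.
        scores_ab = {  # best-alignment scores from genome A's proteins to genome B's
            "A1": {"B1": 250.0, "B2": 40.0},
            "A2": {"B2": 310.0, "B3": 55.0},
        }
        scores_ba = {  # and in the reverse direction
            "B1": {"A1": 245.0},
            "B2": {"A2": 300.0, "A1": 40.0},
            "B3": {"A2": 60.0},
        }

        def best_hit(scores, query):
            hits = scores.get(query, {})
            return max(hits, key=hits.get) if hits else None

        # A pair (a, b) is a putative ortholog pair when each is the other's best hit.
        orthologs = [(a, best_hit(scores_ab, a))
                     for a in scores_ab
                     if best_hit(scores_ba, best_hit(scores_ab, a)) == a]
        print(orthologs)  # [('A1', 'B1'), ('A2', 'B2')]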

  10. Thirty years since diffuse sound reflection by maximum length sequences

    NASA Astrophysics Data System (ADS)

    Cox, Trevor J.; D'Antonio, Peter

    2005-09-01

    This year marks the 30th anniversary of Schroeder's seminal paper on sound scattering from maximum length sequences. This paper, along with Schroeder's subsequent publication on quadratic residue diffusers, broke new ground because they contained simple recipes for designing diffusers with known acoustic performance. So, what has happened in the intervening years? As with most areas of engineering, the room acoustic diffuser has been greatly influenced by the rise of digital computing technologies. Numerical methods have become much more powerful, and this has enabled predictions of surface scattering to greater accuracy and for larger scale surfaces than previously possible. Architecture has also gone through a revolution where the forms of buildings have become more extreme and sculptural. Acoustic diffuser designs have had to keep pace with this to produce shapes and forms that are desirable to architects. To achieve this, design methodologies have moved away from Schroeder's simple equations to brute-force optimization algorithms. This paper will look back at the past development of the modern diffuser, explaining how the principles of diffuser design have been devised and revised over the decades. The paper will also look at the present state-of-the-art, and dreams for the future.

  11. Expert system for on-board satellite scheduling and control

    NASA Technical Reports Server (NTRS)

    Barry, John M.; Sary, Charisse

    1988-01-01

    An Expert System is described which the Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities. This expert system is the Satellite Controller. The resources to be scheduled include power, propellant, and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor-intensive and time-consuming ground operations task. Developing a schedule requires extensive knowledge of the system and subsystem operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts and takes anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, by examining cryptic mnemonic data off line to interpret the health and status of the satellite. Schedules are then formulated either from practical operator experience or from heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life cycle costs and decrease dependence on ground control. This reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.

  12. Using listener-based perceptual features as intermediate representations in music information retrieval.

    PubMed

    Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders

    2014-10-01

    The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.

  13. Rational design of DNA sequences for nanotechnology, microarrays and molecular computers using Eulerian graphs.

    PubMed

    Pancoska, Petr; Moravek, Zdenek; Moll, Ute M

    2004-01-01

    Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. Melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.

  14. Full counting statistics of conductance for disordered systems

    NASA Astrophysics Data System (ADS)

    Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian

    2017-09-01

    Quantum transport is a stochastic process in nature. As a result, the conductance is fully characterized by its average value and fluctuations, i.e., characterized by full counting statistics (FCS). Since disorders are inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach dealing with fluctuations or cumulants of conductance uses diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated, especially for high order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average on the generating function of the FCS of conductance within CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is not needed anymore. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application of our theory, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorders. Our numerical results for the cumulants of conductance show remarkable agreement with those obtained by brute force calculation.

  15. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
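
    The contrast drawn above is between one quasi-analytical evaluation and one extra flow solve per design variable; a minimal sketch of the brute-force finite-difference side, with a cheap analytic function standing in for the CFD objective:

        import numpy as np

        # Toy objective standing in for axial thrust as a function of two design
        # variables; in the paper each evaluation would be a full CFD solve,
        # which is what makes the brute-force gradient below so expensive.
        def objective(x):
            return np.sin(x[0]) * np.exp(-x[1] ** 2) + 0.5 * x[0] * x[1]

        def brute_force_gradient(f, x, h=1e-6):
            # One extra function evaluation ("CFD run") per design variable.
            g = np.zeros_like(x)
            fx = f(x)
            for i in range(len(x)):
                xp = x.copy()
                xp[i] += h
                g[i] = (f(xp) - fx) / h
            return g

        x0 = np.array([0.4, 0.8])
        print("finite-difference gradient:", brute_force_gradient(objective, x0))
        # Analytic sensitivities of the toy objective, for comparison:
        print("analytic gradient:        ",
              np.array([np.cos(x0[0]) * np.exp(-x0[1] ** 2) + 0.5 * x0[1],
                        -2 * x0[1] * np.sin(x0[0]) * np.exp(-x0[1] ** 2) + 0.5 * x0[0]]))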

  16. A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns.

    PubMed

    Xu, W; LeBeau, J M

    2018-05-01

    We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute-force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.

  17. Robust computation of dipole electromagnetic fields in arbitrarily anisotropic, planar-stratified environments.

    PubMed

    Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay

    2014-01-01

    We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.

  18. A new feedback image encryption scheme based on perturbation with dynamical compound chaotic sequence cipher generator

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Cui, Minggen; Wang, Zhu

    2009-07-01

    The design of a new compound two-dimensional chaotic function is presented by exploiting two one-dimensional chaotic functions that switch randomly; the design is used as a chaotic sequence generator, which is proven chaotic by Devaney's definition. The properties of the compound chaotic functions are also proved rigorously. In order to improve the robustness against differential cryptanalysis and produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos by selecting one of the two one-dimensional chaotic functions randomly, and a new image pixel permutation and substitution method is designed in detail through random control of array rows and columns based on the compound chaos. The results from entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis depending on key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical, and brute-force attacks; moreover, it accelerates encryption and achieves a higher level of security. By means of the dynamical compound chaos and perturbation technology, the paper addresses the low computational precision of one-dimensional chaotic functions.
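
    A minimal keystream sketch in the spirit of such a compound generator: two one-dimensional chaotic maps switched by the current state, with the resulting bytes XORed against the data (map choices, parameters, and switching rule are illustrative, not the authors' design):

        # Two 1-D chaotic maps; the compound generator switches between them.
        def logistic(x):
            return 3.99 * x * (1.0 - x)

        def tent(x):
            return 1.999 * x if x < 0.5 else 1.999 * (1.0 - x)

        def keystream_bytes(x0, n):
            x, out = x0, bytearray()
            for _ in range(n):
                x = logistic(x) if x < 0.5 else tent(x)   # state-driven switching
                out.append(int(x * 256) & 0xFF)
            return bytes(out)

        stream = keystream_bytes(0.123456789, 16)  # x0 plays the role of the key
        msg = b"pixel data ....."
        cipher = bytes(m ^ k for m, k in zip(msg, stream))
        plain = bytes(c ^ k for c, k in zip(cipher, stream))
        assert plain == msg  # XOR keystream cipher is symmetric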

  19. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy, and data are therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. We therefore need a data-securing method that is both robust and fast. One method that matches all of those criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM passes all 15 NIST tests, which shows that the generated key stream is random. In addition, samples of the encrypted digital audio are shown by a goodness-of-fit test to be uniform, so digital audio secured with this method is not vulnerable to frequency-analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attacks. Finally, both encryption and decryption run, on average, about 450 times faster than the duration of the audio itself.

  20. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.

  1. Multibeam Gpu Transient Pipeline for the Medicina BEST-2 Array

    NASA Astrophysics Data System (ADS)

    Magro, A.; Hickish, J.; Adami, K. Z.

    2013-09-01

    Radio transient discovery using next generation radio telescopes will pose several digital signal processing and data transfer challenges, requiring specialized high-performance backends. Several accelerator technologies are being considered as prototyping platforms, including Graphics Processing Units (GPUs). In this paper we present a real-time pipeline prototype capable of processing multiple beams concurrently, performing Radio Frequency Interference (RFI) rejection through thresholding, correcting for the delay in signal arrival times across the frequency band using brute-force dedispersion, event detection and clustering, and finally candidate filtering, with the capability of persisting data buffers containing interesting signals to disk. This setup was deployed at the BEST-2 SKA pathfinder in Medicina, Italy, where several benchmarks and test observations of astrophysical transients were conducted. These tests show that on the deployed hardware eight 20 MHz beams can be processed simultaneously for 640 Dispersion Measure (DM) values. Furthermore, the clustering and candidate filtering algorithms employed prove to be good candidates for online event detection techniques. The number of beams which can be processed increases proportionally to the number of servers deployed and number of GPUs, making it a viable architecture for current and future radio telescopes.
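
    The brute-force dedispersion stage can be sketched directly: shift every frequency channel by the cold-plasma dispersion delay predicted for a trial DM and sum, repeating over all trial DMs (band, sampling, and pulse parameters below are invented toy values):

        import numpy as np

        n_chan, n_samp, dt = 64, 4096, 1e-3            # channels, samples, 1 ms sampling
        freqs = np.linspace(0.39, 0.41, n_chan)        # GHz; a ~20 MHz toy band
        data = np.random.default_rng(2).normal(size=(n_chan, n_samp))

        # Inject a dispersed test pulse at DM = 50 pc cm^-3.
        dm_true, t0 = 50.0, 1.0
        delays = 4.149e-3 * dm_true * (freqs ** -2 - freqs.max() ** -2)   # seconds
        for c in range(n_chan):
            data[c, int((t0 + delays[c]) / dt)] += 10.0

        def dedisperse(data, dm):
            # Shift each channel by the delay predicted for this trial DM, then sum.
            shifts = np.rint(4.149e-3 * dm * (freqs ** -2 - freqs.max() ** -2) / dt)
            return sum(np.roll(data[c], -int(shifts[c])) for c in range(n_chan))

        # Brute force over all trial DMs; report the one maximising the peak.
        trial_dms = np.arange(0.0, 100.0, 1.0)
        snr = [dedisperse(data, dm).max() for dm in trial_dms]
        print("best trial DM:", trial_dms[int(np.argmax(snr))])  # ~50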

  2. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    NASA Astrophysics Data System (ADS)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme is proposed, based on Substitution-boxes (S-boxes) constructed by a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys are used instead of the fixed keys of other approaches to control the encryption process and make any attack impossible. The new S-box was constructed by mixing the chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and the chaotic systems strengthens the overall encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for cryptographic applications.

  3. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement is a key advantage for augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal distortion-free camera.
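
    A minimal sketch of generating such a synthetic image, assuming a made-up pinhole camera (intrinsics and pose) and random points standing in for a TLS cloud; a z-buffer keeps only the nearest point per pixel:

        import numpy as np

        rng = np.random.default_rng(3)
        points = rng.uniform([-5, -5, 5], [5, 5, 25], size=(20000, 3))  # world XYZ
        intensity = rng.uniform(0, 255, size=len(points))               # e.g. from TLS

        K = np.array([[800.0, 0.0, 320.0],     # focal lengths and principal point (px)
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        R, t = np.eye(3), np.zeros(3)          # exterior orientation of the camera

        cam = (R @ points.T).T + t             # world -> camera coordinates
        uvw = (K @ cam.T).T
        u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]

        synthetic = np.zeros((480, 640))
        depth = np.full((480, 640), np.inf)
        inside = (u >= 0) & (u < 640) & (v >= 0) & (v < 480) & (cam[:, 2] > 0)
        for ui, vi, z, val in zip(u[inside].astype(int), v[inside].astype(int),
                                  cam[inside, 2], intensity[inside]):
            if z < depth[vi, ui]:              # z-buffer: keep the nearest point only
                depth[vi, ui] = z
                synthetic[vi, ui] = val
        print("filled pixels:", int((depth < np.inf).sum()))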

  4. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, along with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling yields a reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701

  5. Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment

    NASA Astrophysics Data System (ADS)

    Bhagawati, Dwipen

    2001-07-01

    Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective fields. Recent advances in computer graphics, software, and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for the management of spatial data inventory, and VR technology can be suitably applied to this task. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of a roadside feature inventory involves positioning and visualization of the features. This research developed a methodology demonstrating how photogrammetric principles can be used to position the features using video-logging images and GPS camera positioning, and how image analysis can help produce appropriate textures for building the VR scene, which can then be visualized in a Cave Automatic Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates the creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and digital image processing techniques were used for texturing the roadway features in this scene.

  6. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  7. A teleoperation training simulator with visual and kinesthetic force virtual reality

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul

    1992-01-01

    A force-reflecting teleoperation training simulator with a high-fidelity real-time graphics display has been developed for operator training. A novel feature of this simulator is that it enables the operator to feel contact forces and torques through a force-reflecting controller during the execution of the simulated peg-in-hole task, providing the operator with the feel of visual and kinesthetic force virtual reality. A peg-in-hole task is used in our simulated teleoperation trainer as a generic teleoperation task. A quasi-static analysis of a two-dimensional peg-in-hole task model has been extended to a three-dimensional model analysis to compute contact forces and torques for a virtual realization of kinesthetic force feedback. The simulator allows the user to specify force reflection gains and stiffness (compliance) values of the manipulator hand for both the three translational and the three rotational axes in Cartesian space. Three viewing modes are provided for graphics display: single view, two split views, and stereoscopic view.

  8. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    NASA Astrophysics Data System (ADS)

    Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.

    2015-07-01

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering; by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field space. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.

  9. Geometrical force constraint method for vessel and x-ray angiogram simulation.

    PubMed

    Song, Shuang; Yang, Jian; Fan, Jingfan; Cong, Weijian; Ai, Danni; Zhao, Yitian; Wang, Yongtian

    2016-01-01

    This study proposes a novel geometrical force constraint method for 3-D vasculature modeling and angiographic image simulation. In this method, a space-filling force, a gravitational force, and a topology-preserving force are proposed and combined to optimize the topology of the vascular structure. A surface-covering force and a surface-adhesion force are constructed to drive the growth of the vasculature on any surface. Through the combined effects of the topological and surface-adhesion forces, a realistic vasculature can be effectively simulated on any surface. The image projection of the generated 3-D vascular structures is simulated according to the perspective projection and energy attenuation principles of X-rays. Finally, the simulated projection vasculature is fused with a predefined angiographic mask image to generate a realistic angiogram. The proposed method is evaluated on a CT image and three commonly used surfaces. The results fully demonstrate the effectiveness and robustness of the proposed method.

  10. Ozone and its potential control strategy for Chon Buri city, Thailand.

    PubMed

    Prabamroong, Thayukorn; Manomaiphiboon, Kasemsan; Limpaseni, Wongpun; Sukhapan, Jariya; Bonnet, Sebastien

    2012-12-01

    This work studies O3 pollution for Chon Buri city in the eastern region of Thailand, where O3 has become an increasingly serious concern in the last decade. It includes emission estimation and photochemical box modeling in support of investigating the underlying nature of O3 formation over the city and the roles of precursors emitted from sources. The year 2006 was considered and two single-day episodes (January 29 and February 14) were chosen for simulations. It was found that, in the city, the industrial sector is the largest emissions contributor for every O3 precursor (i.e., NO(x), non-methane volatile organic compounds or NMVOC, and CO), followed by the on-road mobile group. Fugitive NMVOC emissions are relatively large, mainly from oil refineries and tank farms. Simulated results agree acceptably with observations for the daily maximum O3 level in both episodes and clearly indicate a VOC-sensitive regime for O3 formation. This regime is also substantiated by morning NMVOC/NO(x) ratios observed in the city. The corresponding O3 isopleth diagrams suggest that NMVOC control alone can lower elevated O3. In seeking a potential O3 control strategy for the city, a combination of brute-force sensitivity tests, an experimental design, statistical modeling, and cost optimization was employed. A number of emission subgroups were found to contribute significantly to O3 formation, based on the February 14 episode, for example, oil refinery (fugitive), tank farm (fugitive), passenger car (gasoline), and motorcycle (gasoline). The cost-effective strategy, however, suggests controlling only the first two subgroups to meet the standard. The cost of implementing the strategy was estimated and found to be small (only 0.2%) compared to the gross provincial product generated by the entire province in which the city is located. These findings could serve as a needed guideline to support O3 management for the city. Elevated O3 in the urban and industrial city of Chon Buri requires a better understanding of the problem and technical guidelines for its management. With a city-specific emission inventory and air quality modeling, O3 formation was found to be VOC-sensitive, and the cost-effective control strategy developed highlights fugitive emissions from the industrial sector as the ones to control.

  11. Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example

    NASA Technical Reports Server (NTRS)

    Smart, D. F.; Shea, M. A.

    2001-01-01

    The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of cosmic radiation. Early cosmic ray measurements found that cosmic ray intensity was ordered by magnetic latitude, and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. Many trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out that cutoff rigidity was not a simple quantity but had many unanticipated complexities, requiring many hundreds if not thousands of individual trajectory calculations to resolve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report documents a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
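
    The brute-force technique the report describes reduces, for a single particle, to integrating the Lorentz force through a field model; a sketch with a bare dipole field in place of the report's geomagnetic model (all values illustrative):

        import numpy as np
        from scipy.integrate import solve_ivp

        RE = 6.371e6                 # Earth radius, m
        M = 8.0e15                   # dipole moment magnitude, T m^3 (approx.)
        q, m = 1.602e-19, 1.673e-27  # proton charge (C) and mass (kg)

        def B_dipole(r):
            # Field of a dipole aligned with -z (Earth-like orientation).
            rmag = np.linalg.norm(r)
            mvec = np.array([0.0, 0.0, -M])
            return 3.0 * r * (mvec @ r) / rmag ** 5 - mvec / rmag ** 3

        def lorentz(t, y):
            r, v = y[:3], y[3:]
            a = (q / m) * np.cross(v, B_dipole(r))  # relativity neglected in sketch
            return np.concatenate([v, a])

        r0 = np.array([4 * RE, 0.0, 0.0])           # start 4 Earth radii out
        v0 = np.array([0.0, 1.0e7, 2.0e6])          # ~10,000 km/s proton
        sol = solve_ivp(lorentz, (0.0, 10.0), np.concatenate([r0, v0]),
                        max_step=1e-3, rtol=1e-8)
        print("final position (RE):", sol.y[:3, -1] / RE)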

  12. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption.

    PubMed

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and a generalized FIPS 140-2 test suite are used to test the randomness of two sets of 1,000 key streams, each consisting of 20,000 bits, generated by the CPRNG. The results show that 99.9% and 98.5% of the key streams pass the FIPS 140-2 and generalized FIPS 140-2 tests, respectively. Numerical simulations show that different keystreams share, on average, 50.001% of the same codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear correlation coefficients between the plaintext and the ciphertext, and between the plaintext and the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that the texts decrypted with keystreams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks would be needed to break the cryptographic system.

  13. A cubic map chaos criterion theorem with applications in generalized synchronization based pseudorandom number generator and image encryption

    NASA Astrophysics Data System (ADS)

    Yang, Xiuping; Min, Lequan; Wang, Xue

    2015-05-01

    This paper sets up a chaos criterion theorem for a kind of cubic polynomial discrete map. Using this theorem, Zhou-Song's chaos criterion theorem on quadratic polynomial discrete maps, and a generalized synchronization (GS) theorem, an eight-dimensional chaotic GS system is constructed. Numerical simulations have been carried out to verify the effectiveness of the theoretical results. The chaotic GS system is used to design a chaos-based pseudorandom number generator (CPRNG). The FIPS 140-2 test suite and a generalized FIPS 140-2 test suite are used to test the randomness of two sets of 1,000 key streams, each consisting of 20,000 bits, generated by the CPRNG. The results show that 99.9% and 98.5% of the key streams pass the FIPS 140-2 and generalized FIPS 140-2 tests, respectively. Numerical simulations show that different keystreams share, on average, 50.001% of the same codes. The key space of the CPRNG is larger than 2^1345. As an application of the CPRNG, this study gives an image encryption example. Experimental results show that the linear correlation coefficients between the plaintext and the ciphertext, and between the plaintext and the ciphertexts decrypted via the 100 key streams with perturbed keys, are less than 0.00428. This suggests that the texts decrypted with keystreams generated from perturbed keys of the CPRNG are almost completely independent of the original image text, and brute-force attacks would be needed to break the cryptographic system.

  15. Pre-impact lower extremity posture and brake pedal force predict foot and ankle forces during an automobile collision.

    PubMed

    Hardin, E C; Su, A; van den Bogert, A J

    2004-12-01

    The purpose of this study was to determine how a driver's foot and ankle forces during a frontal vehicle collision depend on initial lower extremity posture and brake pedal force. A 2D musculoskeletal model with seven segments and six right-side muscle groups was used. A simulation of a three-second braking task found 3647 sets of muscle activation levels that resulted in stable braking postures with realistic pedal force. These activation patterns were then used in impact simulations where vehicle deceleration was applied and driver movements and foot and ankle forces were simulated. Peak rearfoot ground reaction force (F(RF)), peak Achilles tendon force (F(AT)), peak calcaneal force (F(CF)), and peak ankle joint force (F(AJ)) were calculated. Peak forces during the impact simulation were 476 +/- 687 N (F(RF)), 2934 +/- 944 N (F(CF)), and 2449 +/- 918 N (F(AJ)). Many simulations resulted in force levels that could cause fractures. Multivariate quadratic regression determined that the pre-impact brake pedal force (PF), knee angle (KA), and heel distance (HD) explained 72% of the variance in peak F(RF), 62% in peak F(CF), and 73% in peak F(AJ). Foot and ankle forces during a collision depend on initial posture and pedal force. Braking postures with increased knee flexion, while keeping the seat position fixed, are associated with higher foot and ankle forces during a collision.

  16. Experimental verification of dynamic simulation

    NASA Technical Reports Server (NTRS)

    Yae, K. Harold; Hwang, Howyoung; Chern, Su-Tai

    1989-01-01

    The dynamics model here is a backhoe, which is a four-degree-of-freedom manipulator from the dynamics standpoint. Two types of experiment were chosen that can also be simulated by a multibody dynamics simulation program. In the experiment, configuration and force histories were recorded in the time domain: velocity and position, and force output and differential pressure change from the hydraulic cylinder. When the experimental force history is used as the driving force in the simulation model, the forward dynamics simulation produces a corresponding configuration history. The experimental configuration history is then used in the inverse dynamics analysis to generate a corresponding force history. Thus two sets of configuration and force histories, one from experiment and the other from simulation driven forward and backward with the experimental data, are compared in the time domain. Further comparisons are made regarding the effects of initial conditions, friction, and viscous damping.

  17. Design of teleoperation system with a force-reflecting real-time simulator

    NASA Technical Reports Server (NTRS)

    Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito

    1994-01-01

    We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with a predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm on a real-time basis. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved by using the force-reflecting simulator with the two images: the shortest work time, the lowest peak sensor readings, and a 100 percent success rate were obtained. These results demonstrate that simulated force reflection improves teleoperation efficiency.

  18. Numerical Simulations for Distribution Characteristics of Internal Forces on Segments of Tunnel Linings

    NASA Astrophysics Data System (ADS)

    Li, Shouju; Shangguan, Zichang; Cao, Lijuan

    A procedure based on FEM is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soils. The Beam3 beam element in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force, and shear force on the segments were computed by FEM. The computed internal forces on the segments will be used to design the reinforcing bars of the shield linings. Numerically simulated ground settlements agree with observed values.

  19. The Role of Molecular Dynamics Potential of Mean Force Calculations in the Investigation of Enzyme Catalysis.

    PubMed

    Yang, Y; Pan, L; Lightstone, F C; Merz, K M

    2016-01-01

    Potential of mean force simulations, widely applied in Monte Carlo or molecular dynamics simulations, are useful tools for examining the free energy variation as a function of one or more specific reaction coordinates for a given system. Implementing the potential of mean force in simulations of biological processes, such as enzyme catalysis, can help overcome the difficulties of sampling specific regions of the energy landscape and provide useful insights into the catalytic mechanism. Potential of mean force simulations usually require many, possibly parallelizable, short simulations instead of a few extremely long simulations and are therefore fairly manageable for most research facilities. In this chapter, we provide detailed protocols for applying potential of mean force simulations to investigate enzymatic mechanisms in several different enzyme systems.

  20. Divergence compensation for hardware-in-the-loop simulation of stiffness-varying discrete contact in space

    NASA Astrophysics Data System (ADS)

    Qi, Chenkun; Zhao, Xianchao; Gao, Feng; Ren, Anye; Hu, Yan

    2016-11-01

    The hardware-in-the-loop (HIL) contact simulation for flying objects in space is challenging due to the divergence caused by the time delay. In this study, a divergence compensation approach is proposed for the stiffness-varying discrete contact. The dynamic response delay of the motion simulator and the force measurement delay are considered. For the force measurement delay, a phase lead based force compensation approach is used. For the dynamic response delay of the motion simulator, a response error based force compensation approach is used, where the compensation force is obtained from the real-time identified contact stiffness and real-time measured position response error. The dynamic response model of the motion simulator is not required. The simulations and experiments show that the simulation divergence can be compensated effectively and satisfactorily by using the proposed approach.

  1. Some Factors Influencing Air Force Simulator Training Effectiveness. Technical Report.

    ERIC Educational Resources Information Center

    Caro, Paul W.

    A study of U.S. Air Force simulator training was conducted to identify factors that influence the effectiveness of such training and to learn how its effectiveness is being determined. The research consisted of a survey of ten representative Air Force simulator training programs and a review of the simulator training research literature. A number…

  2. Simulation of Shuttle launch G forces and acoustic loads using the NASA Ames Research Center 20G centrifuge

    NASA Technical Reports Server (NTRS)

    Shaw, T. L.; Corliss, J. M.; Gundo, D. P.; Mulenburg, G. M.; Breit, G. A.; Griffith, J. B.

    1994-01-01

    The high cost and long times required to develop research packages for space flight can often be offset by using ground test techniques. This paper describes a space shuttle launch and reentry simulation using the NASA Ames Research Center's 20G centrifuge facility. The combined G-force and acoustic environment during shuttle launch and landing was simulated to evaluate the effect on a payload of laboratory rats. The launch G-force and acoustic profiles were matched to actual shuttle launch data to produce the required G-forces and acoustic spectrum in the centrifuge test cab, where the rats were caged on a free-swinging platform. For reentry, only G-force was simulated, as the aero-acoustic noise is insignificant compared to that during launch. The shuttle G-force profiles of launch and landing were achieved by programming the centrifuge drive computer to continuously adjust the rotational speed to obtain the correct launch and landing G-forces. The shuttle launch acoustic environment was simulated using a high-power, low-frequency audio system. Accelerometer data from STS-56 and microphone data from STS-1 through STS-5 were used as baselines for the simulations. This paper provides a description of the test setup and the results of the simulation, with recommendations for follow-on simulations.

  3. In vivo biomechanical measurement and haptic simulation of portal placement procedure in shoulder arthroscopic surgery

    PubMed Central

    Chae, Sanghoon; Jung, Sung-Weon

    2018-01-01

    A survey of 67 experienced orthopedic surgeons indicated that precise portal placement is the most important skill in arthroscopic surgery. However, none of the currently available virtual reality simulators includes simulation or training of portal placement, including haptic feedback of the necessary puncture force. This study aimed to: (1) measure the in vivo force and stiffness during a portal placement procedure in an actual operating room, and (2) implement active haptic simulation of a portal placement procedure using the measured in vivo data. We measured the force required for portal placement and the stiffness of the joint capsule during portal placement procedures performed by an experienced arthroscopic surgeon. Based on the acquired mechanical property values, we developed a cable-driven active haptic simulator designed to train the portal placement skill and evaluated the validity of the simulated haptics. Ten patients diagnosed with rotator cuff tears were enrolled in this experiment. The maximum peak force and joint capsule stiffness during posterior portal placement procedures were 66.46 (±10.76) N and 2560.82 (±252.92) N/m, respectively. We then designed an active haptic simulator using the acquired data. Our cable-driven mechanism had a friction force of 3.763 ± 0.341 N, less than 6% of the mean puncture force. Simulator performance was evaluated by comparing the target stiffness and force with the stiffness and force reproduced by the device. R-squared values were 0.998 for puncture force replication and 0.902 for stiffness replication, indicating that the in vivo data can be used to implement a realistic haptic simulator. PMID:29494691

  4. Sticking properties of ice grains

    NASA Astrophysics Data System (ADS)

    Jongmanns, M.; Kumm, M.; Wurm, G.; Wolf, D. E.; Teiser, J.

    2017-06-01

    We study the size dependence of pull-off forces of water ice in laboratory experiments and numerical simulations. To determine the pull-off force in our laboratory experiments, we use a liquid-nitrogen-cooled centrifuge. Depending on its rotation frequency, spherical ice grains detach due to the centrifugal force, which is related to the adhesive properties. Numerical simulations are conducted by means of molecular dynamics simulations of hexagonal ice using a standard coarse-grained water potential. The pull-off force of a single contact between two spherical ice grains is measured in strain-controlled simulations. Both the experimental study and the simulations reveal a dependence of the pull-off force on the (reduced) particle radius that differs significantly from the linear dependence of common contact theories.

  5. The evolution of parental care in insects: A test of current hypotheses

    PubMed Central

    Gilbert, James D J; Manica, Andrea

    2015-01-01

    Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced—for example, by caring for multiple broods—but rejecting the “enhanced fecundity” hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in general trajectory. Integrating these findings with the wealth of life history and ecological data in insects will allow testing of a rich vein of existing hypotheses. PMID:25825047

  6. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
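
    The validation strategy described above, checking a PSO result against brute-force enumeration of all grid points, can be illustrated with a minimal Python sketch. The objective below is a toy 2-D function, not the finite-area combustion chamber performance model; the inertia and acceleration coefficients are common textbook values, not those of the thesis.

```python
import numpy as np

def f(x, y):
    # Toy stand-in for the engine-performance objective (to be maximized)
    return -(x - 2.0) ** 2 - (y - 1.0) ** 2

# Brute force: evaluate every grid point and keep the best one
xs = np.linspace(0, 5, 501)
ys = np.linspace(0, 5, 501)
X, Y = np.meshgrid(xs, ys)
Z = f(X, Y)
i = np.unravel_index(np.argmax(Z), Z.shape)
print("brute force:", X[i], Y[i])

# Minimal global-best PSO on the same domain
rng = np.random.default_rng(1)
pos = rng.uniform(0, 5, (30, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), f(pos[:, 0], pos[:, 1])
for _ in range(100):
    gbest = pbest[np.argmax(pbest_val)]
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 5)
    val = f(pos[:, 0], pos[:, 1])
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
print("PSO:", pbest[np.argmax(pbest_val)])
```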

  7. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  8. In search of robust flood risk management alternatives for the Netherlands

    NASA Astrophysics Data System (ADS)

    Klijn, F.; Knoop, J. M.; Ligtvoet, W.; Mens, M. J. P.

    2012-05-01

    The Netherlands' policy for flood risk management is being revised in view of a sustainable development against a background of climate change, sea level rise and increasing socio-economic vulnerability to floods. This calls for a thorough policy analysis, which can only be adequate when there is agreement about the "framing" of the problem and about the strategic alternatives that should be taken into account. In support of this framing, we performed an exploratory policy analysis, applying future climate and socio-economic scenarios to account for the autonomous development of flood risks, and defined a number of different strategic alternatives for flood risk management at the national level. These alternatives, ranging from flood protection by brute force to reduction of the vulnerability by spatial planning only, were compared with continuation of the current policy on a number of criteria, comprising costs, the reduction of fatality risk and economic risk, and their robustness in relation to uncertainties. We found that a change of policy away from conventional embankments towards gaining control over the flooding process by making the embankments unbreachable is attractive. By thus influencing exposure to flooding, the fatality risk can be effectively reduced at even lower net societal costs than by continuation of the present policy or by raising the protection standards where cost-effective.

  9. Security analysis and improvements to the PsychoPass method.

    PubMed

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis of the PsychoPass method and outline its limitations and possible improvements, we used brute-force analysis and dictionary-attack analysis to expose its weaknesses. The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years with current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
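
    The scale of a brute-force password search can be illustrated with a back-of-the-envelope keyspace calculation in Python. The guess rate below is an assumed figure for illustration only; neither it nor the alphabet sizes are taken from the paper.

```python
# Rough brute-force cost model for a password: keyspace = alphabet**length.
# The guess rate is an illustrative assumption, not a figure from the paper.
def years_to_exhaust(alphabet_size, length, guesses_per_second=1e12):
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

# A long password from a small pool vs. a shorter one from a large pool
print(f"{years_to_exhaust(26, 24):.3e} years")   # 24 chars, lowercase-only pool
print(f"{years_to_exhaust(94, 10):.3e} years")   # 10 chars, printable-ASCII pool
```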

  10. High Performance Analytics with the R3-Cache

    NASA Astrophysics Data System (ADS)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute-force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. The R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  11. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake, often $100,000 a day or more, and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk; in the case of cost-plus contracts, the buyer must accept the risk. Modeling uncertainty and understanding the risk accurately can therefore improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute-force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
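
    The difference between a brute-force Monte Carlo procedure and one with Latin Hypercube sampling can be sketched in one dimension: LHS places exactly one sample in each of n equal-probability strata, which typically reduces the variance of the estimate. The response function below is a toy stand-in, not the production costing model of the paper.

```python
import numpy as np

def latin_hypercube(n, rng):
    """One-dimensional Latin Hypercube sample: one point per stratum of width 1/n."""
    edges = np.arange(n) / n
    return rng.permutation(edges + rng.random(n) / n)

rng = np.random.default_rng(2)
g = lambda u: u ** 2          # toy stand-in for a production-cost response
true_mean = 1.0 / 3.0         # exact E[g(U)] for U uniform on [0, 1]

n = 100
mc = g(rng.random(n)).mean()             # brute-force Monte Carlo estimate
lhs = g(latin_hypercube(n, rng)).mean()  # stratified (LHS) estimate
print(f"MC error  {abs(mc - true_mean):.5f}")
print(f"LHS error {abs(lhs - true_mean):.5f}")
```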

  12. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
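
    A brute-force parameter search of the kind described above can be sketched in Python as follows. The two-blur-parameter edge model here (separate Gaussian blur widths on the dark and light sides of a step) is a generic assumption adopted for illustration, not the paper's exact formulation.

```python
import numpy as np
from math import erf

# Hypothetical two-parameter edge model: separate Gaussian blur widths for the
# dark side (x < 0) and light side (x >= 0) of a unit step.
def edge_profile(x, s_dark, s_light):
    s = np.where(x < 0, s_dark, s_light)
    return 0.5 * (1 + np.array([erf(v) for v in x / (np.sqrt(2) * s)]))

x = np.linspace(-5, 5, 201)
observed = edge_profile(x, 0.8, 1.6) + np.random.default_rng(3).normal(0, 0.01, x.size)

# Brute-force search: try every (s_dark, s_light) pair, keep the global minimum RMSE
grid = np.linspace(0.1, 3.0, 59)
best = min(((np.sqrt(np.mean((edge_profile(x, a, b) - observed) ** 2)), a, b)
            for a in grid for b in grid))
print(f"RMSE={best[0]:.4f}  s_dark={best[1]:.2f}  s_light={best[2]:.2f}")
```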

  13. Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data

    PubMed Central

    Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick

    2017-01-01

    Clinical data analysis and forecasting have made substantial contributions to disease control, prevention, and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets in which the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods are not scalable to larger data scales. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets where the former approach breaks down; they are also more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorter run times compared with the brute-force method. PMID:28753613

  14. Automated detection and cataloging of global explosive volcanism using the International Monitoring System infrasound network

    NASA Astrophysics Data System (ADS)

    Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars

    2017-04-01

    We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
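
    The brute-force, grid-search, cross-bearings idea can be sketched in Python as follows: for every cell of a source grid, compare the back-azimuths predicted from each station with the observed ones and keep the cell with the smallest misfit. The flat-earth geometry and station coordinates below are invented for illustration; the real algorithm works on a global grid and additionally corrects for clutter rates.

```python
import numpy as np

# Hypothetical station coordinates (x, y in km) and observed back-azimuths
# (degrees clockwise from north) to an unknown source.
stations = np.array([[0.0, 0.0], [500.0, 0.0], [250.0, 400.0]])
true_src = np.array([300.0, 150.0])
obs_baz = np.degrees(np.arctan2(true_src[0] - stations[:, 0],
                                true_src[1] - stations[:, 1]))

# Brute-force grid search: sum of squared azimuth residuals over all stations
xs = np.linspace(0, 600, 121)
ys = np.linspace(0, 600, 121)
best = (np.inf, None)
for gx in xs:
    for gy in ys:
        pred = np.degrees(np.arctan2(gx - stations[:, 0], gy - stations[:, 1]))
        d = (obs_baz - pred + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        misfit = np.sum(d ** 2)
        if misfit < best[0]:
            best = (misfit, (gx, gy))
print("located source near:", best[1])
```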

  15. Axicons, prisms and integrators: searching for simple laser beam shaping solutions

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd

    2010-08-01

    Over the last thirty-five years, many papers have been presented at numerous conferences and published in a host of optical journals. What is presented is in many cases either too exotic or too technically challenging for practical application; it could be said that both are testaments to the imagination of engineers and researchers. For many brute-force laser processing applications, such as paint stripping, large-area ablation, or general skiving of flex circuits, an inexpensive beam shaper is a welcome tool. Shaping the laser beam for less demanding applications provides a more uniform removal rate and increases the overall quality of the part being processed; it is a well-known fact that customers like their parts to look good. Many times, complex optical beam shaping techniques are considered because no one is aware of the historical solutions that have been lost to the ages. These complex solutions can range in price from $10,000 to $60,000 and require many months to design and fabricate. This paper provides an overview of various beam shaping techniques that are both elegant and simple in concept and design. Optical techniques using axicons, prisms, and reflective integrators are discussed in an overview format.

  16. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
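
    The low-rank compression step can be illustrated on a two-dimensional toy analogue in Python: represent a function of two variables on a grid as a matrix and truncate its singular value decomposition. This sketch shows only the rank-compression idea, not the six-dimensional multiresolution representation of the paper.

```python
import numpy as np

# Toy two-particle-like function on a 1D x 1D grid; the real wave function is
# six-dimensional, so only the rank-compression idea is shown here.
x = np.linspace(-5, 5, 200)
X1, X2 = np.meshgrid(x, x)
F = np.exp(-X1 ** 2 - X2 ** 2) * (1 + 0.1 * np.abs(X1 - X2))

U, s, Vt = np.linalg.svd(F)
for rank in (1, 2, 5, 10):
    approx = U[:, :rank] * s[:rank] @ Vt[:rank]   # truncated SVD reconstruction
    err = np.linalg.norm(F - approx) / np.linalg.norm(F)
    print(f"rank {rank:2d}: relative error {err:.2e}")
```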

  17. Security Analysis and Improvements to the PsychoPass Method

    PubMed Central

    2013-01-01

    Background In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis of the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute-force analysis and dictionary-attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years with current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458

  18. The Taming of the Shrew

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, M.

    1996-11-01

    Considering the extreme complexity of the turbulence problem in general and the unattainability of first-principles analytical solutions in particular, it is not surprising that controlling a turbulent flow remains a challenging task, mired in empiricism and unfulfilled promises and aspirations. Brute force suppression, or taming, of turbulence via active control strategies is always possible, but the penalty for doing so often exceeds any potential savings. The artifice is to achieve a desired effect with minimum energy expenditure. Spurred by the recent developments in chaos control, microfabrication and neural networks, efficient reactive control of turbulent flows, where the control input is optimally adjusted based on feedforward or feedback measurements, is now in the realm of the possible for future practical devices. But regardless of how the problem is approached, combating turbulence is always as arduous as the taming of the shrew. The former task will be emphasized during the oral presentation, but for this abstract we reflect on a short verse from the latter. From William Shakespeare's The Taming of the Shrew. Curtis (Petruchio's servant, in charge of his country house): Is she so hot a shrew as she's reported? Grumio (Petruchio's personal lackey): She was, good Curtis, before this frost. But thou know'st winter tames man, woman, and beast; for it hath tamed my old master, and my new mistress, and myself, fellow Curtis.

  19. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    The computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy and efficiency problems, previous algorithm designers tended to adopt only one of them when assembling fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques, kernel truncation and best N-term approximation as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform a BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and can therefore draw upon their strong points to overcome their deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing efficiency at running time.
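
    For reference, the brute-force baseline that such acceleration techniques are measured against is the direct double loop over the kernel window, whose per-pixel cost grows with the kernel radius. The Python sketch below shows only that baseline, not the paper's linear-time 3D box-filter decomposition; the parameter values are illustrative.

```python
import numpy as np

def bilateral_brute_force(img, radius, sigma_s, sigma_r):
    """Brute-force bilateral filter: O(r^2) work per pixel over a (2r+1)^2 window."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))   # spatial kernel
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(window - img[i, j]) ** 2 / (2 * sigma_r ** 2))  # range kernel
            wts = spatial * rng_w
            out[i, j] = np.sum(wts * window) / np.sum(wts)
    return out

noisy = np.random.default_rng(4).random((32, 32))
smoothed = bilateral_brute_force(noisy, radius=3, sigma_s=2.0, sigma_r=0.2)
print(smoothed.shape)
```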

  20. Lipid-converter, a framework for lipid manipulations in molecular dynamics simulations

    PubMed Central

    Larsson, Per; Kasson, Peter M.

    2014-01-01

    Construction of lipid membrane and membrane protein systems for molecular dynamics simulations can be a challenging process. In addition, there are few available tools to extend existing studies by repeating simulations using other force fields and lipid compositions. To facilitate this, we introduce lipidconverter, a modular Python framework for exchanging force fields and lipid composition in coordinate files obtained from simulations. Force fields and lipids are specified by simple text files, making it easy to introduce support for additional force fields and lipids. The converter produces simulation input files that can be used for structural relaxation of the new membranes. PMID:25081234

  1. Preparing for Large-Force Exercises with Distributed Simulation: A Panel Presentation

    DTIC Science & Technology

    2010-07-01

    Preparing for Large Force Exercises with Distributed Simulation: A Panel Presentation. Peter Crane, Winston Bennett, Michael France, Air Force... used distributed simulation training to complement live-fly exercises to prepare for LFEs. In this panel presentation, the speakers will describe... presentations on how detailed analysis of training needs is necessary to structure simulator scenarios and how future training exercises could be made more

  2. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute... believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as... project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  3. Operational Risk Preparedness: General George H. Thomas and the Franklin-Nashville Campaign

    DTIC Science & Technology

    2014-05-22

    monograph analyzes and compares thoughts on risk from multiple disciplines and viewpoints to develop a suitable definition and corresponding principles... sounds similar to Sun Tzu: "from the enemy’s character, from his institutions, the state of his affairs and his general situation, each side, using... changes through brute strength, but do not gain from change, they merely continue to exist. He therefore introduced the term antifragile—a system that

  4. Evaluation of a musculoskeletal model with prosthetic knee through six experimental gait trials.

    PubMed

    Kia, Mohammad; Stylianou, Antonis P; Guess, Trent M

    2014-03-01

    Knowledge of the forces acting on musculoskeletal joint tissues during movement benefits tissue engineering, artificial joint replacement, and our understanding of ligament and cartilage injury. Computational models can be used to predict these internal forces, but musculoskeletal models that simultaneously calculate muscle force and the resulting loading on joint structures are rare. This study used publicly available gait, skeletal geometry, and instrumented prosthetic knee loading data [1] to evaluate muscle-driven forward dynamics simulations of walking. Inputs to the simulation were measured kinematics; outputs included muscle, ground reaction, ligament, and joint contact forces. A full-body musculoskeletal model with subject-specific lower extremity geometries was developed in the multibody framework. A compliant contact was defined between the prosthetic femoral component and tibia insert geometries. Ligament structures were modeled with a nonlinear force-strain relationship. The model included 45 muscles on the right lower leg. During forward dynamics simulations, a feedback control scheme calculated muscle forces using the error signal between the current muscle lengths and the lengths recorded during inverse kinematics simulations. Predicted tibio-femoral contact forces, ground reaction forces, and muscle forces were compared to experimental measurements for six different gait trials using three different gait types (normal, trunk sway, and medial thrust). The mean average deviation (MAD) and root mean square deviation (RMSD) over one gait cycle are reported. The muscle-driven forward dynamics simulations were computationally efficient and consistently reproduced the inverse kinematics motion. The forward simulations also predicted total knee contact forces (166N

  5. Effects of Internal Waves on Sound Propagation in the Shallow Waters of the Continental Shelves

    DTIC Science & Technology

    2016-09-01

    ...internal waves in the experiment area were largely generated by tidal forcing. Compared to simulations without internal waves, simulations accounting for the effects of internal waves...

  6. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  7. Validating empirical force fields for molecular-level simulation of cellulose dissolution

    USDA-ARS?s Scientific Manuscript database

    The calculations presented here, which include dynamics simulations using analytical force fields and first principles studies, indicate that the COMPASS force field is preferred over the Dreiding and Universal force fields for studying dissolution of large cellulose structures. The validity of thes...

  8. Cutting process simulation of flat drill

    NASA Astrophysics Data System (ADS)

    Tamura, Shoichi; Matsumura, Takashi

    2018-05-01

    Flat drills with a point angle of 180 deg. have recently been developed for drilling automobile parts with inclined workpiece surfaces. This paper studies the cutting processes of flat drills in an analytical simulation. A predictive force model is applied to simulate the cutting force together with the chip flow direction. The chip flow model is built up from orthogonal cuttings in the planes containing the cutting velocities and the chip flow velocities, in which the chip flow direction is determined to minimize the cutting energy. The cutting force is then predicted from the determined chip flow model. The typical cutting force of the flat drill is discussed in comparison with that of a standard drill; characteristic differences are confirmed in the cutting force change during tool engagement and disengagement. The cutting force is then simulated for drilling an inclined workpiece with a flat drill, and the horizontal components of the cutting force are simulated while changing the inclination angle of the plate. The horizontal force component in flat drilling is stable enough to be controlled with respect to machining accuracy and tool breakage.

  9. Sensitivity of estimated muscle force in forward simulation of normal walking

    PubMed Central

    Xiao, Ming; Higginson, Jill

    2009-01-01

    Generic muscle parameters are often used in muscle-driven simulations of human movement to estimate individual muscle forces and function. The results may not be valid, since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle forces to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length, and tendon slack length. We found that changing the number of muscles included in the model affected only the magnitude of the estimated muscle forces. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for the ankle plantarflexors and knee extensors. Changes in force production by one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group, or by the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485

  10. Systematic Validation of Protein Force Fields against Experimental Data

    PubMed Central

    Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2012-01-01

    Molecular dynamics simulations provide a vehicle for capturing the structures, motions, and interactions of biological macromolecules in full atomic detail. The accuracy of such simulations, however, is critically dependent on the force field—the mathematical model used to approximate the atomic-level forces acting on the simulated molecular system. Here we present a systematic and extensive evaluation of eight different protein force fields based on comparisons of experimental data with molecular dynamics simulations that reach a previously inaccessible timescale. First, through extensive comparisons with experimental NMR data, we examined the force fields' abilities to describe the structure and fluctuations of folded proteins. Second, we quantified potential biases towards different secondary structure types by comparing experimental and simulation data for small peptides that preferentially populate either helical or sheet-like structures. Third, we tested the force fields' abilities to fold two small proteins—one α-helical, the other with β-sheet structure. The results suggest that force fields have improved over time, and that the most recent versions, while not perfect, provide an accurate description of many structural and dynamical properties of proteins. PMID:22384157

  11. Force-Sensing Enhanced Simulation Environment (ForSense) for laparoscopic surgery training and assessment.

    PubMed

    Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara

    2015-04-01

    Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially in robotic surgery. Measurement of force at the tool-tissue interface is therefore a clinically relevant skill assessment variable that may improve the effectiveness of surgical simulation. Popular box-trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force-sensing unit that can be easily integrated with existing box-trainer simulators, in order to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine the concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants at 3 different levels of operative experience performed 5 repetitions of a peg transfer and a suture task. Multiple outcome variables of force were assessed, as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as was force error. Experts were found to exert the least force and achieve the fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had predictive strength for experience level similar to that of the standard scoring metric. Current box-trainer simulators can be adapted for enhanced objective measurement of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to more accurately reflect skill quality.

  12. The Detection and Attribution Model Intercomparison Project (DAMIP v1.0)contribution to CMIP6

    DOE PAGES

    Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; ...

    2016-10-18

    Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.

  13. The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6

    NASA Astrophysics Data System (ADS)

    Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; Hegerl, Gabriele; Knutti, Reto; Matthes, Katja; Santer, Benjamin D.; Stone, Daithi; Tebaldi, Claudia

    2016-10-01

    Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.

  14. Effect of the centrifugal force on domain chaos in Rayleigh-Bénard convection.

    PubMed

    Becker, Nathan; Scheel, J D; Cross, M C; Ahlers, Guenter

    2006-06-01

    Experiments and simulations for a variety of sample sizes indicated that the centrifugal force significantly affects the domain-chaos state observed in rotating Rayleigh-Bénard convection patterns. In a large-aspect-ratio sample, we observed a hybrid state consisting of domain chaos close to the sample center, surrounded by an annulus of nearly stationary, nearly radial rolls populated by occasional defects reminiscent of undulation chaos. Although the Coriolis force is responsible for domain chaos, by comparing experiment and simulation we show that the centrifugal force is responsible for the radial rolls. Furthermore, simulations of the Boussinesq equations for smaller aspect ratios neglecting the centrifugal force yielded a domain precession frequency f ∝ ε^μ with μ ≈ 1, as predicted by the amplitude-equation model for domain chaos but contradicted by previous experiment. Additionally, the simulations gave a domain size that was larger than in the experiment. When the centrifugal force was included in the simulation, μ and the domain size were consistent with experiment.

  15. Contact stiffness and damping identification for hardware-in-the-loop contact simulator with measurement delay compensation

    NASA Astrophysics Data System (ADS)

    Qi, Chenkun; Zhao, Xianchao; Gao, Feng; Ren, Anye; Sun, Qiao

    2016-06-01

    The hardware-in-the-loop (HIL) contact simulator reproduces the contact process of two flying objects in space. The contact stiffness and damping are important parameters used for process monitoring, compliant contact control, and force compensation control. In this study, a contact stiffness and damping identification approach is proposed for HIL contact simulation in the presence of force measurement delay. The actual relative position of the two flying objects can be measured accurately, but the force measurement delay must be compensated because it leads to incorrect stiffness and damping identification. Here, phase lead compensation is used to reconstruct the actual contact force from the delayed force measurement. From the force and position data, the contact stiffness and damping are identified in real time using the recursive least squares (RLS) method. Simulations and experiments verify that the proposed stiffness and damping identification approach is effective.
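
    The recursive least squares identification of contact stiffness and damping can be sketched in Python on synthetic data obeying f = kx + cv. The stiffness, damping, and noise values below are invented for illustration, and the phase-lead compensation of the delayed force measurement is omitted.

```python
import numpy as np

# Synthetic contact data: f = k*x + c*v plus noise (k and c to be identified)
rng = np.random.default_rng(5)
k_true, c_true = 2500.0, 40.0          # N/m and N*s/m, made-up values
t = np.linspace(0, 10, 500)
x = 0.01 * np.sin(t)                   # relative position (m)
v = np.gradient(x, t)                  # relative velocity (m/s)
f = k_true * x + c_true * v + rng.normal(0, 0.05, t.size)

# Recursive least squares with forgetting factor lam
theta = np.zeros(2)                    # running estimate of [k, c]
P = np.eye(2) * 1e6                    # large initial covariance
lam = 0.995
for xi, vi, fi in zip(x, v, f):
    phi = np.array([xi, vi])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (fi - phi @ theta)
    P = (P - np.outer(gain, phi) @ P) / lam
print(f"k ~ {theta[0]:.0f} N/m, c ~ {theta[1]:.1f} N*s/m")
```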

  16. Comparison of Cellulose Iβ Simulations with Three Carbohydrate Force Fields.

    PubMed

    Matthews, James F; Beckham, Gregg T; Bergenstråhle-Wohlert, Malin; Brady, John W; Himmel, Michael E; Crowley, Michael F

    2012-02-14

    Molecular dynamics simulations of cellulose have recently become more prevalent due to increased interest in renewable energy applications, and many atomistic and coarse-grained force fields exist that can be applied to cellulose. However, to date no systematic comparison between carbohydrate force fields has been conducted for this important system. To that end, we present a molecular dynamics simulation study of hydrated, 36-chain cellulose Iβ microfibrils at room temperature with three carbohydrate force fields (CHARMM35, GLYCAM06, and Gromos 45a4) up to the near-microsecond time scale. Our results indicate that each of these simulated microfibrils diverge from the cellulose Iβ crystal structure to varying degrees under the conditions tested. The CHARMM35 and GLYCAM06 force fields eventually result in structures similar to those observed at 500 K with the same force fields, which are consistent with the experimentally observed high-temperature behavior of cellulose I. The third force field, Gromos 45a4, produces behavior significantly different from experiment, from the other two force fields, and from previously reported simulations with this force field using shorter simulation times and constrained periodic boundary conditions. For the GLYCAM06 force field, initial hydrogen-bond conformations and choice of electrostatic scaling factors significantly affect the rate of structural divergence. Our results suggest dramatically different time scales for convergence of properties of interest, which is important in the design of computational studies and comparisons to experimental data. This study highlights that further experimental and theoretical work is required to understand the structure of small diameter cellulose microfibrils typical of plant cellulose.

  17. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  18. A Force Balanced Fragmentation Method for ab Initio Molecular Dynamic Simulation of Protein.

    PubMed

    Xu, Mingyuan; Zhu, Tong; Zhang, John Z H

    2018-01-01

    A force balanced generalized molecular fractionation with conjugate caps (FB-GMFCC) method is proposed for ab initio molecular dynamics simulation of proteins. In this approach, the energy of the protein is computed by a linear combination of the QM energies of individual residues and of molecular fragments that account for the two-body hydrogen-bond interaction between backbone peptides. The atomic forces on the capped H atoms were corrected to conserve the total force on the protein. Using this approach, ab initio molecular dynamics simulation of an Ace-(Ala)9-NME linear peptide showed conservation of the total energy of the system throughout the simulation. Furthermore, a more demanding 110-ps ab initio molecular dynamics simulation was performed for a protein with 56 residues and 862 atoms in explicit water. Compared with the classical force field, the ab initio molecular dynamics simulations gave a better description of the geometry of peptide bonds. Although further development is still needed, the current approach is highly efficient, trivially parallel, and can be applied to ab initio molecular dynamics simulation studies of large proteins.
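
    The paper's exact correction is not reproduced here, but the idea of restoring a conserved total force can be sketched as follows: any spurious net force introduced by the fragment caps is subtracted back out, here distributed mass-proportionally (an assumption of this sketch, not the published scheme).

      import numpy as np

      def balance_forces(forces, masses):
          """Remove the spurious net force left by fragment caps.

          Distributes the residual in proportion to atomic mass (a minimal
          sketch of force balancing, not the published FB-GMFCC correction).
          """
          residual = forces.sum(axis=0)          # should be ~0 for an isolated system
          weights = masses / masses.sum()
          return forces - np.outer(weights, residual)  # subtract mass-weighted share

      forces = np.array([[1.0, 0.0, 0.0], [0.2, -0.1, 0.0], [-0.9, 0.2, 0.0]])
      masses = np.array([12.0, 1.0, 16.0])
      balanced = balance_forces(forces, masses)
      print(balanced.sum(axis=0))   # ~[0, 0, 0]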

  19. Virtual Reality Tumor Resection: The Force Pyramid Approach.

    PubMed

    Sawaya, Robin; Bugdadi, Abdulgadir; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Alotaibi, Fahad E; Bajunaid, Khalid; AlZhrani, Gmaan A; Alsideiri, Ghusn; Sabbagh, Abdulrahman J; Del Maestro, Rolando F

    2018-06-01

    The force pyramid is a novel visual representation allowing spatial delineation of instrument force application during surgical procedures. In this study, the force pyramid concept is employed to create and quantify dominant-hand, nondominant-hand, and bimanual force pyramids during resection of virtual reality brain tumors. Four questions are addressed: Do ergonomics and handedness influence force pyramid structure? What are the differences between dominant and nondominant force pyramids? What is the spatial distribution of forces applied in specific tumor quadrants? What differentiates "expert" and "novice" groups regarding their force pyramids? Using a simulated aspirator in the dominant hand and a simulated sucker in the nondominant hand, 6 neurosurgeons and 14 residents resected 8 different tumors using the CAE NeuroVR virtual reality neurosurgical simulation platform (CAE Healthcare, Montréal, Québec and the National Research Council Canada, Boucherville, Québec). Position and force data were used to create force pyramids and quantify tumor quadrant force distribution. Force distribution quantification demonstrates the critical role that handedness and ergonomics play in psychomotor performance during simulated brain tumor resections. Neurosurgeons concentrate their dominant-hand forces in a defined crescent in the lower right tumor quadrant. Nondominant force pyramids showed a central peak of force application in all groups. Bimanual force pyramids outlined the combined impact of each hand. Distinct force pyramid patterns were seen when tumor stiffness, border complexity, and color were altered. Force pyramids allow delineation of specific tumor regions requiring greater psychomotor ability to resect. This information can focus and improve resident technical skills training.
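
    The core data reduction behind a force pyramid, accumulating tip forces on a spatial grid, can be sketched as below; the grid extent, bin count, and synthetic data are illustrative assumptions, not the platform's parameters.

      import numpy as np

      def force_pyramid(xy, forces, extent=10.0, bins=50):
          """Accumulate instrument-tip force magnitudes on a 2D spatial grid.

          xy: (N, 2) tip positions; forces: (N,) force magnitudes.
          Returns the summed force per cell; rendered as a surface this gives
          the pyramid-like visualization described in the abstract.
          """
          edges = np.linspace(-extent, extent, bins + 1)
          grid, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                      bins=[edges, edges], weights=forces)
          return grid

      rng = np.random.default_rng(1)
      xy = rng.normal(0.0, 3.0, size=(10000, 2))     # toy tip trajectory
      f = rng.uniform(0.0, 2.0, size=10000)          # toy force magnitudes
      pyramid = force_pyramid(xy, f)
      print(pyramid.shape, pyramid.max())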

  20. From brute luck to option luck? On genetics, justice, and moral responsibility in reproduction.

    PubMed

    Denier, Yvonne

    2010-04-01

    The structure of our ethical experience depends, crucially, on a fundamental distinction between what we are responsible for doing or deciding and what is given to us. As such, the boundary between chance and choice is the spine of our conventional morality, and any serious shift in that boundary is thoroughly dislocating. Against this background, I analyze the way in which techniques of prenatal genetic diagnosis (PGD) pose such a fundamental challenge to our conventional ideas of justice and moral responsibility. After a short description of the situation, I first examine the influential luck egalitarian theory of justice, which is based on the distinction between choice and luck or, more specifically, between option luck and brute luck, and the way in which it would approach PGD (section II), followed by an analysis of the conceptual incoherencies (section III) and moral problems (section IV) that come with such an approach. In short, the case of PGD shows that the luck egalitarian approach fails to express equal respect for the individual choices of people. The paradox of the matter is that by overemphasizing the fact of choice as such, without regard for the social framework in which choices are made or for the fundamental and existential nature of particular choices (like choosing to have children, not to undergo PGD, or not to abort a handicapped fetus), such choices actually become impossible.

  1. Determination of Quantum Chemistry Based Force Fields for Molecular Dynamics Simulations of Aromatic Polymers

    NASA Technical Reports Server (NTRS)

    Jaffe, Richard; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    Ab initio quantum chemistry calculations for model molecules can be used to parameterize force fields for molecular dynamics simulations of polymers. Emphasis in our research group is on using quantum chemistry-based force fields for molecular dynamics simulations of organic polymers in the melt and glassy states, but the methodology is applicable to simulations of small molecules, multicomponent systems and solutions. Special attention is paid to deriving reliable descriptions of the non-bonded and electrostatic interactions. Several procedures have been developed for deriving and calibrating these parameters. Our force fields for aromatic polyimide simulations will be described. In this application, the intermolecular interactions are the critical factor in determining many properties of the polymer (including its color).

  2. Chemistry-Climate Model Simulations of Twenty-First Century Stratospheric Climate and Circulation Changes

    DTIC Science & Technology

    2010-10-15

    cycle under volcanically clean aerosol conditions. Those models that do not reproduce a quasi-biennial oscillation (QBO) also include a relaxation ... forcing toward the observed QBO (Giorgetta and Bengtsson 1999) for the SCN2 simulations. Table 2 summarizes the simulations used in this study and any ... However, simulations from three of the models included a future solar forcing and two models included an artificial QBO forcing in the tropics (see

  3. Simulations of stretching a flexible polyelectrolyte with varying charge separation

    DOE PAGES

    Stevens, Mark J.; Saleh, Omar A.

    2016-07-22

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the bead diameter in the neutral chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.

  4. Principal Component Analysis of Lipid Molecule Conformational Changes in Molecular Dynamics Simulations.

    PubMed

    Buslaev, Pavel; Gordeliy, Valentin; Grudinin, Sergei; Gushchin, Ivan

    2016-03-08

    Molecular dynamics simulations of lipid bilayers are ubiquitous nowadays. Usually, either global properties of the bilayer or some particular characteristics of each lipid molecule are evaluated in such simulations, but the structural properties of the molecules as a whole are rarely studied. Here, we show how a comprehensive quantitative description of conformational space and dynamics of a single lipid molecule can be achieved via the principal component analysis (PCA). We illustrate the approach by analyzing and comparing simulations of DOPC bilayers obtained using eight different force fields: all-atom generalized AMBER, CHARMM27, CHARMM36, Lipid14, and Slipids and united-atom Berger, GROMOS43A1-S3, and GROMOS54A7. Similarly to proteins, most of the structural variance of a lipid molecule can be described by only a few principal components. These major components are similar in different simulations, although there are notable distinctions between the older and newer force fields and between the all-atom and united-atom force fields. The DOPC molecules in the simulations generally equilibrate on the time scales of tens to hundreds of nanoseconds. The equilibration is the slowest in the GAFF simulation and the fastest in the Slipids simulation. Somewhat unexpectedly, the equilibration in the united-atom force fields is generally slower than in the all-atom force fields. Overall, there is a clear separation between the more variable previous generation force fields and significantly more similar new generation force fields (CHARMM36, Lipid14, Slipids). We expect that the presented approaches will be useful for quantitative analysis of conformations and dynamics of individual lipid molecules in other simulations of lipid bilayers.
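
    The essential computation, PCA over the flattened Cartesian coordinates of one molecule across frames, can be sketched as follows; the random toy trajectory stands in for fitted lipid coordinates.

      import numpy as np

      def pca_of_conformations(coords):
          """PCA of a single molecule's conformational ensemble.

          coords: (n_frames, n_atoms, 3) trajectory of one lipid, assumed
          already fitted (overall translation/rotation removed).
          Returns eigenvalues (variance per component) and eigenvectors.
          """
          X = coords.reshape(len(coords), -1)   # flatten to (frames, 3N)
          X = X - X.mean(axis=0)                # center on the mean structure
          cov = X.T @ X / (len(X) - 1)          # 3N x 3N covariance matrix
          vals, vecs = np.linalg.eigh(cov)
          order = np.argsort(vals)[::-1]        # sort by decreasing variance
          return vals[order], vecs[:, order]

      traj = np.random.default_rng(2).normal(size=(500, 52, 3))  # toy trajectory
      vals, vecs = pca_of_conformations(traj)
      print(vals[:3] / vals.sum())   # fraction of variance in first 3 components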

  5. Parametrization of Backbone Flexibility in a Coarse-Grained Force Field for Proteins (COFFDROP) Derived from All-Atom Explicit-Solvent Molecular Dynamics Simulations of All Possible Two-Residue Peptides.

    PubMed

    Frembgen-Kesner, Tamara; Andrews, Casey T; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A; Jain, Aakash; Olayiwola, Oluwatoni J; Weishaar, Mitch R; Elcock, Adrian H

    2015-05-12

    Recently, we reported the parametrization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral, and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral, and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downward in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multidomain proteins connected by flexible linkers.
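
    For readers unfamiliar with IBI, one iteration updates a tabulated potential from the mismatch between the coarse-grained and target (all-atom) distributions; a minimal sketch, with kT given in kJ/mol at roughly 300 K:

      import numpy as np

      def ibi_update(U, P_cg, P_target, kT=2.494):
          """One iterative Boltzmann inversion step for a tabulated potential.

          U: current CG potential on a grid; P_cg: distribution from the last
          CG simulation; P_target: distribution from all-atom MD.
          Update rule: U_new(r) = U(r) + kT * ln(P_cg(r) / P_target(r)).
          """
          mask = (P_cg > 0) & (P_target > 0)    # avoid log of empty bins
          U_new = U.copy()
          U_new[mask] += kT * np.log(P_cg[mask] / P_target[mask])
          return U_new

    Iterating this update until P_cg matches P_target is what produces the angle, dihedral, and distance terms described above.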

  6. Fixed-Charge Atomistic Force Fields for Molecular Dynamics Simulations in the Condensed Phase: An Overview.

    PubMed

    Riniker, Sereina

    2018-03-26

    In molecular dynamics or Monte Carlo simulations, the interactions between the particles (atoms) in the system are described by a so-called force field. The empirical functional form of classical fixed-charge force fields dates back to 1969 and remains essentially unchanged. In a fixed-charge force field, the polarization is not modeled explicitly, i.e. the effective partial charges do not change depending on conformation and environment. This simplification allows, however, a dramatic reduction in computational cost compared to polarizable force fields and in particular quantum-chemical modeling. The past decades have shown that simulations employing carefully parametrized fixed-charge force fields can provide useful insights into biological and chemical questions. This overview focuses on the four major force-field families, i.e. AMBER, CHARMM, GROMOS, and OPLS, which are based on the same classical functional form and are continuously improved to the present day. The overview is aimed at readers entering the field of (bio)molecular simulations. More experienced users may find the comparison and historical development of the force-field families interesting.
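
    Schematically, the shared functional form these four families build on can be written as follows (each family differs in the exact terms, combination rules, and parameters):

      V(r^N) = \sum_{\mathrm{bonds}} k_b (b - b_0)^2
             + \sum_{\mathrm{angles}} k_\theta (\theta - \theta_0)^2
             + \sum_{\mathrm{dihedrals}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right]
             + \sum_{i<j} \left( 4\epsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12}
               - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]
               + \frac{q_i q_j}{4\pi\epsilon_0 r_{ij}} \right)

    The fixed partial charges q_i in the final Coulomb term are what the overview's title refers to: they are parametrized once and do not respond to conformation or environment.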

  7. Non-inertial calibration of vibratory gyroscopes

    NASA Technical Reports Server (NTRS)

    Gutierrez, Roman C. (Inventor); Tang, Tony K. (Inventor)

    2003-01-01

    The electrostatic elements already present in a vibratory gyroscope are used to simulate the Coriolis forces. An artificial electrostatic rotation signal is added to the closed-loop force rebalance system. Because the Coriolis force is at the same frequency as the artificial electrostatic force, the simulated force may be introduced into the system to perform an inertial test on MEMS vibratory gyroscopes without the use of a rotation table.

  8. Developing a molecular dynamics force field for both folded and disordered protein states.

    PubMed

    Robustelli, Paul; Piana, Stefano; Shaw, David E

    2018-05-07

    Molecular dynamics (MD) simulation is a valuable tool for characterizing the structural dynamics of folded proteins and should be similarly applicable to disordered proteins and proteins with both folded and disordered regions. It has been unclear, however, whether any physical model (force field) used in MD simulations accurately describes both folded and disordered proteins. Here, we select a benchmark set of 21 systems, including folded and disordered proteins, simulate these systems with six state-of-the-art force fields, and compare the results to over 9,000 available experimental data points. We find that none of the tested force fields simultaneously provided accurate descriptions of folded proteins, of the dimensions of disordered proteins, and of the secondary structure propensities of disordered proteins. Guided by simulation results on a subset of our benchmark, however, we modified parameters of one force field, achieving excellent agreement with experiment for disordered proteins, while maintaining state-of-the-art accuracy for folded proteins. The resulting force field, a99SB-disp, should thus greatly expand the range of biological systems amenable to MD simulation. A similar approach could be taken to improve other force fields.

  9. Improved side-chain torsion potentials for the Amber ff99SB protein force field

    PubMed Central

    Lindorff-Larsen, Kresten; Piana, Stefano; Palmo, Kim; Maragakis, Paul; Klepeis, John L; Dror, Ron O; Shaw, David E

    2010-01-01

    Recent advances in hardware and software have enabled increasingly long molecular dynamics (MD) simulations of biomolecules, exposing certain limitations in the accuracy of the force fields used for such simulations and spurring efforts to refine these force fields. Recent modifications to the Amber and CHARMM protein force fields, for example, have improved the backbone torsion potentials, remedying deficiencies in earlier versions. Here, we further advance simulation accuracy by improving the amino acid side-chain torsion potentials of the Amber ff99SB force field. First, we used simulations of model alpha-helical systems to identify the four residue types whose rotamer distribution differed the most from expectations based on Protein Data Bank statistics. Second, we optimized the side-chain torsion potentials of these residues to match new, high-level quantum-mechanical calculations. Finally, we used microsecond-timescale MD simulations in explicit solvent to validate the resulting force field against a large set of experimental NMR measurements that directly probe side-chain conformations. The new force field, which we have termed Amber ff99SB-ILDN, exhibits considerably better agreement with the NMR data. PMID:20408171

  10. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    NASA Technical Reports Server (NTRS)

    Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon

    2014-01-01

    This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground-based, commercial off-the-shelf lasers. Past research has shown that a few ground-based systems consisting of 10-kilowatt class lasers directed by 1.5-meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that regularly updates the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance, and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20-kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10^-6 can be mitigated.

  11. Mission Assignment Model and Simulation Tool for Different Types of Unmanned Aerial Vehicles

    DTIC Science & Technology

    2008-09-01

    TABLE OF ABBREVIATIONS AND ACRONYMS: AAA, Anti-Aircraft Artillery; ATO, Air Tasking Order; BDA, Battle Damage Assessment; DES, Discrete Event Simulation ... clock is advanced in small, fixed time steps. Since the value of simulated time is important in DES, an internal variable, called the simulation clock ... Yücel Alver, Captain, Turkish Air Force, B.S., Turkish Air Force Academy, 2000; Murat Özdoğan, 1st Lieutenant, Turkish Air Force, B.S., Turkish

  12. The effect of force feedback on student reasoning about gravity, mass, force and motion

    NASA Astrophysics Data System (ADS)

    Bussell, Linda

    The purpose of this study was to examine whether force feedback within a computer simulation had an effect on reasoning by fifth grade students about gravity, mass, force, and motion, concepts which can be difficult for learners to grasp. Few studies have been done on cognitive learning and haptic feedback, particularly with young learners, but there is an extensive base of literature on children's conceptions of science and a number of studies focus specifically on children's conceptions of force and motion. This case study used a computer-based paddleball simulation with guided inquiry as the primary stimulus. Within the simulation, the learner could adjust the mass of the ball and the gravitational force. The experimental group used the simulation with visual and force feedback; the control group used the simulation with visual feedback but without force feedback. The proposition was that there would be differences in reasoning between the experimental and control groups, with force feedback being helpful with concepts that are more obvious when felt. Participants were 34 fifth-grade students from three schools. Students completed a modal (visual, auditory, and haptic) learning preference assessment and a pretest. The sessions, including participant experimentation and interviews, were audio recorded and observed. The interviews were followed by a written posttest. These data were analyzed to determine whether there were differences based on treatment, learning style, demographics, prior gaming experience, force feedback experience, or prior knowledge. Work with the simulation, regardless of group, was found to increase students' understanding of key concepts. The experimental group appeared to benefit from the supplementary help that force feedback provided. Those in the experimental group scored higher on the posttest than those in the control group. The greatest difference between mean group scores was on a question concerning the effects of increased gravitational force.

  13. A grouping method based on grid density and relationship for crowd evacuation simulation

    NASA Astrophysics Data System (ADS)

    Li, Yan; Liu, Hong; Liu, Guang-peng; Li, Liang; Moore, Philip; Hu, Bin

    2017-05-01

    Psychological factors affect the movement of people in the competitive or panic mode of evacuation, in which the density of pedestrians is relatively large and the distance among them is small. In this paper, a crowd is divided into groups according to social relations so as to simulate the actual movement of crowd evacuation more realistically, and a group attraction force is added to the social force model. The group attraction force is the synthesis of two forces: one is the attraction among individuals generated by their social relations, which makes them gather, and the other is the attraction of the group leader on the individuals within the group, which ensures that they follow the leader. The synthesized force determines the trajectory of individuals. The evacuation process is demonstrated using the improved social force model, in which individuals with close social relations gradually exhibit closer and more coordinated movement while following the leader. A grouping algorithm based on grid density and relationship is proposed, and computer simulations illustrate the features of the improved social force model. The definitions of the parameters involved in the algorithm are given, and the effect of the relational value on grouping is tested. Reasonable numbers of grids and weights are selected. The effectiveness of the algorithm is shown through simulation experiments. A simulation platform is also established using the proposed grouping algorithm and the improved social force model for crowd evacuation simulation.
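
    A minimal sketch of the synthesized group attraction described above, with the relation weights and leader weight as illustrative assumptions rather than the paper's calibrated values:

      import numpy as np

      def group_attraction(pos_i, positions, relations, leader_pos,
                           w_rel=1.0, w_lead=1.5):
          """Synthesized group-attraction force on pedestrian i.

          Two components, as in the abstract: attraction toward socially
          related individuals (weighted by relation strength) and attraction
          toward the group leader.
          """
          f_social = np.zeros(2)
          for pos_j, rel in zip(positions, relations):
              d = pos_j - pos_i
              dist = np.linalg.norm(d)
              if dist > 1e-9:
                  f_social += rel * d / dist     # pull toward each related person
          d_lead = leader_pos - pos_i
          dist_lead = np.linalg.norm(d_lead)
          f_leader = d_lead / dist_lead if dist_lead > 1e-9 else np.zeros(2)
          return w_rel * f_social + w_lead * f_leader

      f = group_attraction(np.array([0.0, 0.0]),
                           [np.array([1.0, 0.0]), np.array([0.0, 2.0])],
                           relations=[0.8, 0.3],
                           leader_pos=np.array([3.0, 3.0]))
      print(f)   # net pull toward the group and its leader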

  14. EDITORIAL: Special issue on green photonics Special issue on green photonics

    NASA Astrophysics Data System (ADS)

    Boardman, Allan; Brongersma, Mark; Polman, Albert

    2012-02-01

    Photovoltaic (PV) cells can provide virtually unlimited amounts of energy by effectively converting sunlight into clean electrical power. Over the years, significant research and development efforts have been devoted to improving the structural and charge transport properties of the materials used in PV cells. Despite these efforts, the current energy conversion efficiencies of commercial solar cells are still substantially lower than the ultimate limits set by thermodynamics. Economic arguments, in addition to the scarcity of some semiconductors and materials used in transparent conductive oxides, are also driving us to use less and less material in a cell. For these reasons, it is clear that new approaches need to be found. One possible solution that is more-or-less orthogonal to previous approaches is aimed at managing the photons rather than the electrons or atoms in a cell. This type of photon management is termed Green Photonics. Nano- and micro-photonic trapping techniques are currently gaining significant attention. The use of engineered plasmonic and high refractive index structures shows tremendous potential for enhancing the light absorption per unit volume in semiconductors. Unfortunately, the design space in terms of the nanostructure sizes, shapes, and array structures is too large to allow for optimization of PV cells using brute force simulations. For this reason, new intuitive models and rapid optimization techniques for advanced light trapping technologies need to be developed. At the same time we need to come up with new, inexpensive, and scalable nanostructure fabrication and optical characterization techniques in order to realize the dream of inexpensive, high power conversion efficiency cells that make economic sense. This special issue discusses some of the exciting new approaches to light trapping that leverage the most recent advances in the field of nanophotonics. It also provides some insights into why giving the green light to green photonics may help play a role in resolving the pending energy crisis. The papers included in this 'green photonics' special issue demonstrate current global activity, involving a wide range of distinguished authors.

  15. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380

  16. Improved Pyrolysis Micro reactor Design via Computational Fluid Dynamics Simulations

    DTIC Science & Technology

    2017-05-23

    Ghanshyam L. Vaghjiani, Air Force Research Laboratory (AFMC), AFRL/RQRS, 1 Ara Drive, Edwards AFB, CA 93524-7013. Email: ghanshyam.vaghjiani@us.af.mil. DISTRIBUTION A: Approved for public release.

  17. Force probe simulations of a reversibly rebinding system: Impact of pulling device stiffness

    NASA Astrophysics Data System (ADS)

    Jaschonek, Stefan; Diezemann, Gregor

    2017-03-01

    We present a detailed study of the parameter dependence of force probe molecular dynamics (FPMD) simulations. Using a well-studied calix[4]arene catenane dimer as a model system, we systematically vary the pulling velocity and the stiffness of the applied external potential. This allows us to investigate how the results of pulling simulations operating in the constant-velocity mode (force-ramp mode) depend on the details of the simulation setup. The system studied has the further advantage of showing reversible rebinding, meaning that we can monitor both the opening and the rebinding transition. Many models designed to extract kinetic information from rupture force distributions work in the limit of soft springs, where all quantities depend solely on the so-called loading rate, the product of spring stiffness and pulling velocity. This approximation is known to break down when stiff springs are used, a situation often encountered in molecular simulations. We find that while some quantities only depend on the loading rate, others show an explicit dependence on the spring constant used in the FPMD simulation. In particular, the force-versus-extension curves show an almost stiffness-independent rupture force, but the force jump after the rupture transition depends roughly linearly on the value of the stiffness. The kinetic rates determined from the rupture force distributions show a dependence on the stiffness that can be understood in terms of the corresponding dependence of the characteristic forces alone. These dependencies can be understood qualitatively in terms of a harmonic model for the molecular free energy landscape. It appears that the pulling velocities employed are so large that the crossover from activated dynamics to diffusive dynamics takes place on the time scale of our simulations. We determine the effective distance from the free energy minima of the closed and open configurations to the barrier via an analysis of the hydrogen-bond network, with results in accord with earlier simulations. We find that the system is quite brittle in the force regime monitored, in the sense that the barrier is located close to the closed state.
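
    The dependence on spring stiffness and pulling velocity can be explored with a toy force-ramp simulation: overdamped dynamics on a double-well free energy pulled by a moving harmonic spring. Everything below is in reduced units and sketches the protocol, not the calixarene system itself.

      import numpy as np

      def force_ramp_pull(ks, v, dt=1e-3, steps=5000, kT=1.0, gamma=1.0, seed=0):
          """Overdamped constant-velocity pulling on a bistable coordinate.

          Free energy G(x) = x**4 - 2*x**2 has 'closed' (x = -1) and 'open'
          (x = +1) minima; a harmonic spring of stiffness ks whose anchor
          moves at velocity v mimics the FPMD force-ramp mode.
          """
          rng = np.random.default_rng(seed)
          x = -1.0                                # start in the closed state
          sigma = np.sqrt(2 * kT * dt / gamma)    # thermal noise amplitude
          f_spring = np.empty(steps)
          for i in range(steps):
              anchor = -1.0 + v * i * dt          # moving pulling device
              f_spring[i] = ks * (anchor - x)     # instantaneous pulling force
              dGdx = 4 * x**3 - 4 * x             # gradient of the double well
              x += (-dGdx + f_spring[i]) * dt / gamma + sigma * rng.normal()
          return f_spring

      # Rupture force and the post-rupture force jump both vary with ks and v:
      print(force_ramp_pull(ks=5.0, v=1.0).max())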

  18. Force field development with GOMC, a fast new Monte Carlo molecular simulation code

    NASA Astrophysics Data System (ADS)

    Mick, Jason Richard

    In this work, GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free Monte Carlo molecular simulation code for atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process using a scoring function and heat maps is outlined. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure-fluid behavior and binary-mixture behavior to a high degree of accuracy.

  19. Thermodynamic forces in coarse-grained simulations

    NASA Astrophysics Data System (ADS)

    Noid, William

    Atomically detailed molecular dynamics simulations have profoundly advanced our understanding of the structure and interactions in soft condensed phases. Nevertheless, despite dramatic advances in the methodology and resources for simulating atomically detailed models, low-resolution coarse-grained (CG) models play a central and rapidly growing role in science. CG models not only empower researchers to investigate phenomena beyond the scope of atomically detailed simulations, but also to precisely tailor models for specific phenomena. However, in contrast to atomically detailed simulations, which evolve on a potential energy surface, CG simulations should evolve on a free energy surface. Therefore, the forces in CG models should reflect the thermodynamic information that has been eliminated from the CG configuration space. As a consequence of these thermodynamic forces, CG models often demonstrate limited transferability and, moreover, rarely provide an accurate description of both structural and thermodynamic properties. In this talk, I will present a framework that clarifies the origin and impact of these thermodynamic forces. Additionally, I will present computational methods for quantifying these forces and incorporating their effects into CG MD simulations. As time allows, I will demonstrate applications of this framework for liquids, polymers, and interfaces. We gratefully acknowledge the support of the National Science Foundation via CHE 1565631.

  20. Quantifying atmospheric pollutant emissions from open biomass burning with multiple methods: a case study for Yangtze River Delta region, China

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Zhao, Y.

    2017-12-01

    To understand the differences, and their origins, among emission inventories based on various methods for this source, emissions of PM10, PM2.5, OC, BC, CH4, VOCs, CO, CO2, NOX, SO2 and NH3 from open biomass burning (OBB) in the Yangtze River Delta (YRD) are calculated for 2005-2012 using three approaches (bottom-up, FRP-based, and constraining). The inter-annual trends in emissions with the FRP-based and constraining methods are similar to those of the fire counts over 2005-2012, while the trend with the bottom-up method is different. For most years, emissions of all species estimated with the constraining method are smaller than those with the bottom-up method (except for VOCs) and larger than those with the FRP-based method (except for EC, CH4 and NH3). Such discrepancies result mainly from the different masses of crop residues burned in the field (CRBF) estimated in the three methods. Among the three methods, the simulated concentrations from chemistry transport modeling with the constrained emissions are the closest to available observations, implying that the constraining method provides the best estimate of OBB emissions. CO emissions from the three methods are compared with other studies. Similar temporal variations were found for the constrained emissions, FRP-based emissions, GFASv1.0 and GFEDv4.1s, with the largest and smallest emissions estimated for 2012 and 2006, respectively. The constrained CO emissions in this study are smaller than those in other studies based on the bottom-up method and larger than those based on burned area and FRP derived from satellite. The contributions of OBB to two particulate pollution events in 2010 and 2012 are analyzed with the brute-force method. The average contribution of OBB to PM10 mass concentrations during 8-14 June 2012 was estimated at 38.9% (74.8 μg m-3), larger than that during 17-24 June 2010 at 23.6% (38.5 μg m-3). The influences of diurnal emission profiles and meteorology on air pollution caused by OBB are also evaluated; the results suggest that air pollution caused by OBB becomes heavier when meteorological conditions are unfavorable, and that more attention should be paid to supervision at night. Quantified with Monte Carlo simulation, the uncertainties of OBB emissions with the constraining method are significantly lower than those with the bottom-up or FRP-based methods.
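
    The brute-force (zero-out) attribution used here can be sketched in a few lines; run_ctm and the emissions mapping are hypothetical placeholders for a chemistry-transport model wrapper, not an actual API.

      def brute_force_contribution(run_ctm, emissions, source="open_biomass_burning"):
          """Brute-force (zero-out) estimate of one source's contribution.

          run_ctm is assumed to run the chemistry-transport model and return
          the mean PM10 concentration; both the wrapper and the emissions
          structure are illustrative placeholders.
          """
          base = run_ctm(emissions)                          # full-emission run
          perturbed = {k: v for k, v in emissions.items() if k != source}
          without = run_ctm(perturbed)                       # source removed
          contribution = base - without                      # absolute contribution
          return contribution, 100.0 * contribution / base   # and percent share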

  1. Dynamical diagnostics of the SST annual cycle in the eastern equatorial Pacific: Part II analysis of CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Ying; Jin, Fei-Fei

    2017-12-01

    In this study, a simple coupled framework established in Part I is utilized to investigate inter-model diversity in simulating the equatorial Pacific SST annual cycle (SSTAC). It demonstrates that the simulated amplitude and phase characteristics of SSTAC in models are controlled by two internal dynamical factors (the damping rate and phase speed) and two external forcing factors (the strength of the annual and semi-annual harmonic forcing). These four diagnostic factors are further condensed into a dynamical response factor and a forcing factor to derive theoretical solutions of amplitude and phase of SSTAC. The theoretical solutions are in remarkable agreement with observations and CMIP5 simulations. The great diversity in the simulated SSTACs is related to the spreads in these dynamic and forcing factors. Most models tend to simulate a weak SSTAC, due to their weak damping rate and annual harmonic forcing. The latter is due to bias in the meridional asymmetry of the annual mean state of the tropical Pacific, represented by the weak cross-equatorial winds in the cold tongue region.
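
    A minimal sketch of how amplitude and phase follow from a damping rate and a harmonic forcing, using a linearly damped mixed layer forced at the annual frequency (the actual Part I framework also includes a phase speed and a semi-annual harmonic; all numbers below are illustrative):

      import numpy as np

      # dT/dt = -lam*T + F*cos(omega*t) has the steady response
      # T = A*cos(omega*t - phi) with A and phi as computed below.
      lam = 1.0 / 60.0                 # damping rate, 1/days (illustrative)
      omega = 2 * np.pi / 365.0        # annual frequency, 1/days
      F = 0.05                         # annual harmonic forcing, K/day (illustrative)

      A = F / np.sqrt(lam**2 + omega**2)   # response amplitude, K
      phi = np.arctan2(omega, lam)         # phase lag, radians
      print(A, np.degrees(phi))

    Weak damping or weak annual forcing reduces A directly, which is the sense in which most models' weak SSTAC is traced to these two factors.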

  2. Neural control of muscle force: indications from a simulation model

    PubMed Central

    Luca, Carlo J. De

    2013-01-01

    We developed a model to investigate the influence of the muscle force twitch on the simulated firing behavior of motoneurons and muscle force production during voluntary isometric contractions. The input consists of an excitatory signal common to all the motor units in the pool of a muscle, consistent with the “common drive” property. Motor units respond with a hierarchically structured firing behavior wherein at any time and force, firing rates are inversely proportional to recruitment threshold, as described by the “onion skin” property. Time- and force-dependent changes in muscle force production are introduced by varying the motor unit force twitches as a function of time or by varying the number of active motor units. A force feedback adjusts the input excitation, maintaining the simulated force at a target level. The simulations replicate motor unit behavior characteristics similar to those reported in previous empirical studies of sustained contractions: 1) the initial decrease and subsequent increase of firing rates, 2) the derecruitment and recruitment of motor units throughout sustained contractions, and 3) the continual increase in the force fluctuation caused by the progressive recruitment of larger motor units. The model cautions the use of motor unit behavior at recruitment and derecruitment without consideration of changes in the muscle force generation capacity. It describes an alternative mechanism for the reserve capacity of motor units to generate extraordinary force. It supports the hypothesis that the control of motoneurons remains invariant during force-varying and sustained isometric contractions. PMID:23236008
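
    The "onion skin" property can be sketched as a firing-rate rule in which earlier-recruited, low-threshold units fire faster at any common drive; the constants below are illustrative, not the model's fitted values.

      import numpy as np

      def firing_rates(excitation, thresholds, r_min=8.0, r_max=35.0):
          """'Onion skin' sketch: rates inversely related to recruitment threshold.

          excitation: common drive in [0, 1]; thresholds: recruitment
          thresholds in [0, 1], one per motor unit. Units with thresholds
          above the drive stay silent (rate 0).
          """
          rates = np.zeros_like(thresholds)
          active = excitation >= thresholds
          # peak rate decreases with recruitment threshold (onion skin property)
          peak = r_max - (r_max - r_min) * thresholds
          rates[active] = (peak[active] * (excitation - thresholds[active])
                           / (1.0 - thresholds[active] + 1e-9))
          return rates

      thr = np.linspace(0.0, 0.8, 5)   # 5 motor units, increasing thresholds
      print(firing_rates(0.5, thr))    # low-threshold units fire fastest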

  3. Parameterization of backbone flexibility in a coarse-grained force field for proteins (COFFDROP) derived from all-atom explicit-solvent molecular dynamics simulations of all possible two-residue peptides

    PubMed Central

    Frembgen-Kesner, Tamara; Andrews, Casey T.; Li, Shuxiang; Ngo, Nguyet Anh; Shubert, Scott A.; Jain, Aakash; Olayiwola, Oluwatoni; Weishaar, Mitch R.; Elcock, Adrian H.

    2015-01-01

    Recently, we reported the parameterization of a set of coarse-grained (CG) nonbonded potential functions, derived from all-atom explicit-solvent molecular dynamics (MD) simulations of amino acid pairs, and designed for use in (implicit-solvent) Brownian dynamics (BD) simulations of proteins; this force field was named COFFDROP (COarse-grained Force Field for Dynamic Representations Of Proteins). Here, we describe the extension of COFFDROP to include bonded backbone terms derived from fitting to results of explicit-solvent MD simulations of all possible two-residue peptides containing the 20 standard amino acids, with histidine modeled in both its protonated and neutral forms. The iterative Boltzmann inversion (IBI) method was used to optimize new CG potential functions for backbone-related terms by attempting to reproduce angle, dihedral and distance probability distributions generated by the MD simulations. In a simple test of the transferability of the extended force field, the angle, dihedral and distance probability distributions obtained from BD simulations of 56 three-residue peptides were compared to results from corresponding explicit-solvent MD simulations. In a more challenging test of the COFFDROP force field, it was used to simulate eight intrinsically disordered proteins and was shown to quite accurately reproduce the experimental hydrodynamic radii (Rhydro), provided that the favorable nonbonded interactions of the force field were uniformly scaled downwards in magnitude. Overall, the results indicate that the COFFDROP force field is likely to find use in modeling the conformational behavior of intrinsically disordered proteins and multi-domain proteins connected by flexible linkers. PMID:26574429

  4. International benchmarking of longitudinal train dynamics simulators: results

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon

    2018-03-01

    This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).

  5. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain adequate match, signifying the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and implementation of the control forces on the ICEFTD were identified as a driver in the pitch ups and control force issues, and will be an area for future work.

  6. The role of historical forcings in simulating the observed Atlantic Multidecadal Oscillation

    NASA Astrophysics Data System (ADS)

    Goes, L. M.; Cane, M. A.; Bellomo, K.; Clement, A. C.

    2016-12-01

    The variation in basin-wide North Atlantic sea surface temperatures (SST), known as the Atlantic multidecadal oscillation (AMO), affects climate throughout the Northern Hemisphere and tropics, yet the forcing mechanisms are not fully understood. Here, we analyze the AMO in the Coupled Model Intercomparison Project phase 5 (CMIP5) Pre-industrial (PI) and Historical (HIST) simulations to determine the role of historical climate forcings in producing the observed 20th century shifts in the AMO (OBS, 1865-2005). We evaluate whether the agreement between models and observations is better with historical forcings or without forcing - i.e. due to processes internal to the climate system, such as the Atlantic Meridional Overturning Circulation (AMOC). To do this we draw 141-year samples from 38 CMIP5 PI runs and compare the correlation between the PI and HIST AMO to the observed AMO. We find that in the majority of models (24 out of 38), it is very unlikely (less than 10% chance) that the unforced simulations produce agreement with observations that are as high as the forced simulations. We also compare the amplitude of the simulated AMO and find that 87% of models produce multi-decadal variance in the AMO with historical forcings that is very likely higher than without forcing, but most models underestimate the variance of the observed AMO. This indicates that over the 20th century external rather than internal forcing was crucial in setting the pace, phase and amplitude of the AMO.

  7. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  8. A Phenomenological Model and Validation of Shortening Induced Force Depression during Muscle Contractions

    PubMed Central

    McGowan, C.P.; Neptune, R.R.; Herzog, W.

    2009-01-01

    History dependent effects on muscle force development following active changes in length have been measured in a number of experimental studies. However, few muscle models have included these properties or examined their impact on force and power output in dynamic cyclic movements. The goal of this study was to develop and validate a modified Hill-type muscle model that includes shortening induced force depression and assess its influence on locomotor performance. The magnitude of force depression was defined by empirical relationships based on muscle mechanical work. To validate the model, simulations incorporating force depression were developed to emulate single muscle in situ and whole muscle group leg extension experiments. There was excellent agreement between simulation and experimental values, with in situ force patterns closely matching the experimental data (average RMS error < 1.5 N) and force depression in the simulated leg extension exercise being similar in magnitude to experimental values (6.0% vs 6.5%, respectively). To examine the influence of force depression on locomotor performance, simulations of maximum power pedaling with and without force depression were generated. Force depression decreased maximum crank power by 20% – 40%, depending on the relationship between force depression and muscle work used. These results indicate that force depression has the potential to substantially influence muscle power output in dynamic cyclic movements. However, to fully understand the impact of this phenomenon on human movement, more research is needed to characterize the relationship between force depression and mechanical work in large muscles with different morphologies. PMID:19879585
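
    A minimal sketch of a work-based force-depression correction to a Hill-type force, with invented constants rather than the paper's empirical fit:

      def depressed_force(hill_force, work_done, k_fd=0.06, w_ref=1.0):
          """Hill-type force with shortening-induced force depression (sketch).

          Scales the standard Hill-model force down by an amount proportional
          to the mechanical work performed during the preceding shortening,
          per the empirical relationship described in the abstract. k_fd and
          w_ref are illustrative constants.
          """
          depression = k_fd * (work_done / w_ref)   # fractional force loss
          depression = min(depression, 0.5)         # cap to keep the sketch sane
          return hill_force * (1.0 - depression)

      print(depressed_force(hill_force=1000.0, work_done=1.0))  # 6% loss -> 940 N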

  9. Compressibility Effects on Particle-Fluid Interaction Force for Eulerian-Eulerian Simulations

    NASA Astrophysics Data System (ADS)

    Akiki, Georges; Francois, Marianne; Zhang, Duan

    2017-11-01

    Particle-fluid interaction forces are essential in modeling multiphase flows. Several models can be found in the literature, based on empirical, numerical, and experimental results from various simplified flow conditions. Some of these models also account for finite Mach number effects. Using these models is relatively straightforward in Eulerian-Lagrangian calculations if the model for the total force on particles is used. In Eulerian-Eulerian simulations, however, there are pressure-gradient terms in the momentum equation for particles. For low Mach number flows, the pressure-gradient force is negligible if the particle density is much greater than that of the fluid. For supersonic flows where a standing shock is present, even for a steady and uniform flow, it is unclear whether the significant pressure-gradient force should be separated out from the particle force model. To answer this conceptual question, we perform single-sphere fully resolved DNS simulations for a wide range of Mach numbers. We then examine whether the total force obtained from the DNS can be categorized into well-established models, such as the quasi-steady, added-mass, pressure-gradient, and history forces. Work sponsored by the Advanced Simulation and Computing (ASC) program of NNSA and LDRD-CNLS of LANL.

  10. Formation of well-mixed warm water column in central Bohai Sea during summer: Role of high-frequency atmospheric forcing

    NASA Astrophysics Data System (ADS)

    Ma, Weiwei; Wan, Xiuquan; Wang, Zhankun; Liu, Yulong; Wan, Kai

    2017-12-01

    The influence of high-frequency atmospheric forcing on the formation of a well-mixed summer warm water column in the central Bohai Sea is investigated comparing model simulations driven by daily surface forcing and those using monthly forcing data. In the absence of high-frequency atmospheric forcing, numerical simulations have repeatedly failed to reproduce this vertically uniform column of warm water measured over the past 35 years. However, high-frequency surface forcing is found to strongly influence the structure and distribution of the well-mixed warm water column, and simulations are in good agreement with observations. Results show that high frequency forcing enhances vertical mixing over the central bank, intensifies downward heat transport, and homogenizes the water column to form the Bohai central warm column. Evidence presented shows that high frequency forcing plays a dominant role in the formation of the well-mixed warm water column in summer, even without the effects of tidal and surface wave mixing. The present study thus provides a practical and rational way of further improving the performance of oceanic simulations in the Bohai Sea and can be used to adjust parameterization schemes of ocean models.

  11. A model of muscle contraction based on the Langevin equation with actomyosin potentials.

    PubMed

    Tamura, Youjiro; Ito, Akira; Saito, Masami

    2017-02-01

    We propose a muscle contraction model that is essentially a model of the motion of myosin motors as described by a Langevin equation. This model involves one-dimensional numerical calculations wherein the total force is the sum of a viscous force proportional to the myosin head velocity, a white Gaussian noise produced by random forces and other potential forces originating from the actomyosin structure and intra-molecular charges. We calculate the velocity of a single myosin on an actin filament to be 4.9-49 μm/s, depending on the viscosity between the actomyosin molecules. A myosin filament with a hundred myosin heads is used to simulate the contractions of a half-sarcomere within the skeletal muscle. The force response due to a quick release in the isometric contraction is simulated using a process wherein crossbridges are changed forcibly from one state to another. In contrast, the force response to a quick stretch is simulated using purely mechanical characteristics. We simulate the force-velocity relation and energy efficiency in the isotonic contraction and adenosine triphosphate consumption. The simulation results are in good agreement with the experimental results. We show that the Langevin equation for the actomyosin potentials can be modified statistically to become an existing muscle model that uses Maxwell elements.
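
    The underlying numerical scheme can be sketched with an Euler-Maruyama integration of the overdamped Langevin equation; the actomyosin potential force is reduced to a constant here for brevity, and all parameter values are rough order-of-magnitude choices, not the paper's parameters.

      import numpy as np

      def myosin_velocity(steps=1_000_000, dt=1e-9, eta=6e-8, kT=4.1e-21,
                          f_pot=2e-12, seed=0):
          """Overdamped Langevin sketch of a myosin head on an actin filament.

          eta * dx/dt = f_pot + sqrt(2*kT*eta) * xi(t), integrated with
          Euler-Maruyama. Returns the mean velocity in m/s.
          """
          rng = np.random.default_rng(seed)
          drift = f_pot / eta * dt                     # deterministic step, m
          sigma = np.sqrt(2 * kT * dt / eta)           # thermal step size, m
          dx = drift + sigma * rng.normal(size=steps)  # per-step increments
          return dx.sum() / (steps * dt)               # mean velocity, m/s

      print(myosin_velocity())   # ~3e-5 m/s, i.e. tens of um/s (noisy estimate)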

  12. Accuracy of buffered-force QM/MM simulations of silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peguiron, Anke; Moras, Gianpietro; Colombi Ciacchi, Lucio

    2015-02-14

    We report comparisons between energy-based quantum mechanics/molecular mechanics (QM/MM) and buffered force-based QM/MM simulations in silica. Local quantities, such as density of states, charges, forces, and geometries, calculated with both QM/MM approaches are compared to the results of full QM simulations. We find the length scale over which forces computed using a finite QM region converge to reference values obtained in full quantum-mechanical calculations is ∼10 Å rather than the ∼5 Å previously reported for covalent materials such as silicon. Electrostatic embedding of the QM region in the surrounding classical point charges gives only a minor contribution to the force convergence. While the energy-based approach provides accurate results in geometry optimizations of point defects, we find that the removal of large force errors at the QM/MM boundary provided by the buffered force-based scheme is necessary for accurate constrained geometry optimizations where Si–O bonds are elongated and for finite-temperature molecular dynamics simulations of crack propagation. Moreover, the buffered approach allows for more flexibility, since special-purpose QM/MM coupling terms that link QM and MM atoms are not required and the region that is treated at the QM level can be adaptively redefined during the course of a dynamical simulation.

  13. Experiments evaluating compliance and force feedback effect on manipulator performance

    NASA Technical Reports Server (NTRS)

    Kugath, D. A.

    1972-01-01

    The performance capability of operators carrying out simulated space tasks was assessed using manipulator systems in which compliance and force feedback were varied. Two manipulators were used: the E-2 electromechanical man-equivalent (force, reach, etc.) master-slave system and a modified CAM 1400 hydraulic master-slave with a 100 lb force capability at reaches of 24 ft. The CAM 1400 was further modified to operate without its normal force feedback. Several experiments and simulations were performed. The first two involved the E-2 absorbing the energy of a moving mass and, secondly, guiding a mass through a maze; thus, both work-paced and self-paced tasks were studied as servo compliance was varied. Three simulations were run with the E-2 mounted on the CAM 1400 to evaluate the concept of a dexterous manipulator as an end effector of a boom-manipulator. Finally, the CAM 1400 performed a maze test and also simulated the capture of a large mass as the servo compliance was varied and with force feedback included and removed.

  14. Approaching a realistic force balance in geodynamo simulations

    PubMed Central

    Yadav, Rakesh K.; Gastine, Thomas; Christensen, Ulrich R.; Wolk, Scott J.; Poppenhaeger, Katja

    2016-01-01

    Earth sustains its magnetic field by a dynamo process driven by convection in the liquid outer core. Geodynamo simulations have been successful in reproducing many observed properties of the geomagnetic field. However, although theoretical considerations suggest that flow in the core is governed by a balance between Lorentz force, rotational force, and buoyancy (called MAC balance for Magnetic, Archimedean, Coriolis) with only minute roles for viscous and inertial forces, dynamo simulations must use viscosity values that are many orders of magnitude larger than in the core, due to computational constraints. In typical geodynamo models, viscous and inertial forces are not much smaller than the Coriolis force, and the Lorentz force plays a subdominant role; this has led to conclusions that these simulations are viscously controlled and do not represent the physics of the geodynamo. Here we show, by a direct analysis of the relevant forces, that a MAC balance can be achieved when the viscosity is reduced to values close to the current practical limit. Lorentz force, buoyancy, and the uncompensated (by pressure) part of the Coriolis force are of very similar strength, whereas viscous and inertial forces are smaller by a factor of at least 20 in the bulk of the fluid volume. Compared with nonmagnetic convection at otherwise identical parameters, the dynamo flow is of larger scale and is less invariant parallel to the rotation axis (less geostrophic), and convection transports twice as much heat, all of which is expected when the Lorentz force strongly influences the convection properties. PMID:27790991

  15. A design of hardware haptic interface for gastrointestinal endoscopy simulation.

    PubMed

    Gu, Yunjin; Lee, Doo Yong

    2011-01-01

    Gastrointestinal endoscopy simulations have been developed to train endoscopic procedures, which require hundreds of practice sessions to achieve competence. Even though realistic haptic feedback is important for providing realistic sensation to the user, most previous simulations, including commercialized ones, have mainly focused on providing realistic visual feedback. In this paper, we propose a novel design of a portable haptic interface, which provides 2-DOF force feedback, for gastrointestinal endoscopy simulation. The haptic interface consists of translational and rotational force feedback mechanisms, which are completely decoupled, and a gripping mechanism for controlling the connection between the endoscope and the force feedback mechanisms.

  16. Application of the LEPS technique for Quantitative Precipitation Forecasting (QPF) in Southern Italy: a preliminary study

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.

    2006-03-01

    This paper reports preliminary results for a Limited area model Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor in forcing convection. To accomplish this task and to limit computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS; a sketch of this member-selection step is given below. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36-km horizontal resolution, and is generated by 51 members, nested in each ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of the case studies, whereas objective analysis is made by deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected. This strongly suggests the importance of the enhanced horizontal resolution, compared to ensemble population, for Calabria for these cases. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. Due to local and mesoscale forcing, the high resolution forecast (Hi-Res) performs better than the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields more false alarms, which have a detrimental effect on objective scores for lower thresholds. To exploit the advantages of a probabilistic forecast compared to a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising, although additional studies are required.
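
    One practical way to implement the member-selection step described above (not necessarily the clustering used by the authors) is to cluster the ensemble members and take, for each cluster, the real member closest to the centroid. A minimal Python sketch with placeholder data:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Reduce a 51-member global ensemble to 5 representative members. The
    # 'members' array is a hypothetical stand-in for flattened forecast
    # fields; KMeans here is an illustrative choice, not necessarily the
    # authors' clustering algorithm.

    rng = np.random.default_rng(1)
    members = rng.normal(size=(51, 500))     # placeholder (members, gridpoints)

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(members)

    representatives = []
    for c in range(5):
        idx = np.where(km.labels_ == c)[0]
        # choose the actual member closest to the cluster centroid
        d = np.linalg.norm(members[idx] - km.cluster_centers_[c], axis=1)
        representatives.append(int(idx[np.argmin(d)]))

    print("representative member indices:", representatives)
    ```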

  17. Molecular dynamics simulations of fluid cyclopropane with MP2/CBS-fitted intermolecular interaction potentials

    NASA Astrophysics Data System (ADS)

    Ho, Yen-Ching; Wang, Yi-Siang; Chao, Sheng D.

    2017-08-01

    Modeling fluid cycloalkanes with molecular dynamics simulations has proven to be a very challenging task, partly because of the lack of a reliable force field based on quantum chemistry calculations. In this paper, we construct an ab initio force field for fluid cyclopropane using the second-order Møller-Plesset perturbation theory. We consider 15 conformers of the cyclopropane dimer for the orientation sampling. Single-point energies at important geometries are calibrated by the coupled cluster with single, double, and perturbative triple excitation method. Dunning's correlation consistent basis sets (up to aug-cc-pVTZ) are used in extrapolating the interaction energies at the complete basis set limit. The force field parameters in a 9-site Lennard-Jones model are regressed from the calculated interaction energies without using empirical data. With this ab initio force field, we perform molecular dynamics simulations of fluid cyclopropane and calculate both the structural and dynamical properties. We compare the simulation results with those using an empirical force field and obtain quantitative agreement for the detailed atom-wise radial distribution functions. The experimentally observed gross radial distribution function (extracted from neutron scattering measurements) is well reproduced in our simulation. Moreover, the calculated self-diffusion coefficients and shear viscosities are in good agreement with the experimental data over a wide range of thermodynamic conditions. To the best of our knowledge, this is the first ab initio force field capable of competing with empirical force fields for simulating fluid cyclopropane.
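
    The parameter regression described above can be sketched as a least-squares fit of site-site Lennard-Jones parameters to reference interaction energies. The sketch below uses a single site type and synthetic data for brevity (the paper's 9-site model carries more parameter pairs):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Least-squares regression of Lennard-Jones parameters against reference
    # dimer interaction energies. One site type and synthetic data are used
    # for brevity; units and values are illustrative only.

    def lj_energy(params, distances):
        eps, sigma = params
        sr6 = (sigma / np.asarray(distances)) ** 6
        return np.sum(4.0 * eps * (sr6 ** 2 - sr6))

    def residuals(params, dimers, e_ref):
        return [lj_energy(params, d) - e for d, e in zip(dimers, e_ref)]

    rng = np.random.default_rng(2)
    # hypothetical intermolecular site-site distance sets, one per configuration
    dimers = [3.0 + rng.random(9) for _ in range(15)]
    e_ref = [lj_energy((0.30, 3.40), d) for d in dimers]   # synthetic targets

    fit = least_squares(residuals, x0=(0.10, 3.00), args=(dimers, e_ref))
    print("fitted (epsilon, sigma):", fit.x)
    ```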

  18. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
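
    The core of the reported speed-up is replacing frame-to-frame feature distances with a precomputed prototype-to-prototype look-up table. A minimal Python sketch of that idea follows (illustrative shapes and data, not the authors' implementation):

    ```python
    import numpy as np

    # Once frames are quantized to prototype labels, sequence distances reduce
    # to look-ups in a precomputed prototype-to-prototype table instead of
    # frame-to-frame feature distances. Shapes and data are illustrative.

    rng = np.random.default_rng(3)
    n_protos, dim = 16, 64
    prototypes = rng.normal(size=(n_protos, dim))

    # Precompute the distance look-up table once, offline.
    table = np.linalg.norm(prototypes[:, None] - prototypes[None, :], axis=2)

    def sequence_distance(labels_a, labels_b):
        """Distance between two aligned prototype-label sequences via look-up."""
        return table[labels_a, labels_b].sum()

    seq_a = rng.integers(0, n_protos, size=100)
    seq_b = rng.integers(0, n_protos, size=100)
    print("look-up based distance:", sequence_distance(seq_a, seq_b))
    ```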

  19. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce huge volumes of data, storing and processing continuous medical data is an emerging big data area. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference from the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
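
    For reference, the brute-force discord definition given above translates directly into code. The following Python sketch is an illustrative, single-machine version (the study's implementation ran inside a parallel DBMS):

    ```python
    import numpy as np

    # Brute-force discord search: for every subsequence, find the distance to
    # its nearest non-self match; the discord is the subsequence whose nearest
    # match is farthest away. O(n^2) distance computations.

    def find_discord(series, m):
        subs = np.lib.stride_tricks.sliding_window_view(series, m)
        n = len(subs)
        best_loc, best_dist = -1, -np.inf
        for i in range(n):
            d = np.linalg.norm(subs - subs[i], axis=1)
            d[max(0, i - m + 1):i + m] = np.inf   # exclude trivial self-matches
            nearest = d.min()
            if nearest > best_dist:
                best_loc, best_dist = i, nearest
        return best_loc, best_dist

    rng = np.random.default_rng(4)
    x = np.sin(np.linspace(0, 40, 1000)) + 0.05 * rng.normal(size=1000)
    x[400:430] += 1.0                             # implanted anomaly
    print(find_discord(x, m=30))                  # should locate index ~400
    ```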

  20. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce huge volumes of data, storing and processing continuous medical data is an emerging big data area. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference from the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  1. Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.

    PubMed

    Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David

    2008-07-01

    Some present-day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum, and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see what aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.

  2. Emission Sectoral Contributions of Foreign Emissions to Particulate Matter Concentrations over South Korea

    NASA Astrophysics Data System (ADS)

    Kim, E.; Kim, S.; Kim, H. C.; Kim, B. U.; Cho, J. H.; Woo, J. H.

    2017-12-01

    In this study, we investigated the contributions of major emission source categories located upwind of South Korea to Particulate Matter (PM) in South Korea. In general, air quality in South Korea is affected by anthropogenic air pollutants emitted from foreign countries including China. Some studies reported that foreign emissions contributed 50% of annual surface PM total mass concentrations in the Seoul Metropolitan Area, South Korea in 2014. Previous studies examined PM contributions of foreign emissions from all sectors, considering meteorological variations. However, few studies have assessed the contributions of specific foreign source categories. Therefore, we attempted to estimate sectoral contributions of foreign emissions from China to South Korea PM using our air quality forecasting system. We used the Model Inter-Comparison Study in Asia 2010 inventory for foreign emissions and the Clean Air Policy Support System 2010 emission inventory for domestic emissions. To quantify the contributions of major emission sectors to South Korea PM, we applied the Community Multi-scale Air Quality system with the brute force method, perturbing emissions from the industrial, residential, fossil-fuel power plant, transportation, and agriculture sectors in China. We found that the industrial sector was predominant over the region, except for primary PM during the cold season, when residential emissions increase drastically due to heating demand. This study will benefit ensemble air quality forecasting and refined control strategy design by providing a quantitative assessment of seasonal contributions of foreign emissions from major source categories.
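
    The brute force method referred to above amounts to differencing paired model runs: a sector's contribution is the base-case field minus the field from a run with that sector's emissions removed (contributions need not sum exactly to the base concentration when the chemistry is nonlinear). A minimal sketch of the bookkeeping, with hypothetical placeholder arrays standing in for gridded CMAQ output:

    ```python
    import numpy as np

    # Brute-force (zero-out) sensitivity bookkeeping. The PM arrays are
    # hypothetical stand-ins for gridded model output, not real data.

    rng = np.random.default_rng(5)
    base_pm = 30.0 + 5.0 * rng.random((10, 10))          # base-case PM2.5

    sectors = ["industry", "residential", "power", "transport", "agriculture"]
    # one additional model run per sector, with that sector's emissions zeroed
    zero_out_pm = {s: base_pm - 8.0 * rng.random((10, 10)) for s in sectors}

    for s in sectors:
        contribution = base_pm - zero_out_pm[s]
        share = 100.0 * contribution.mean() / base_pm.mean()
        print(f"{s:12s} mean contribution: {share:5.1f} %")
    ```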

  3. A Survey of Image Encryption Algorithms

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender; Sardana, Pranshul

    2017-12-01

    Security of data/images is one of the crucial aspects of the gigantic and still expanding domain of digital transfer. Encryption of images is one of the well-known mechanisms for preserving the confidentiality of images sent over unrestricted public media. This medium is vulnerable to attacks, and hence efficient encryption algorithms are a necessity for secure data transfer. Various techniques have been proposed in the literature to date, each with an edge over the others, to catch up with the ever-growing need for security. This paper is an effort to compare the most popular techniques available on the basis of various performance metrics, such as differential, statistical, and quantitative attack analysis. To measure their efficacy, all the mature modern techniques are implemented in MATLAB-2015. The results show that the chaotic schemes used in the study produce highly scrambled encrypted images with uniform histogram distributions. In addition, the encrypted images exhibit very low correlation coefficient values in the horizontal, vertical, and diagonal directions, proving their resistance against statistical attacks. These schemes are also able to resist differential attacks, as they show a high sensitivity to initial conditions, i.e. pixel and key values. Finally, the schemes provide a large key space, and hence can resist brute force attacks, and require very little computational time for image encryption/decryption in comparison to other schemes available in the literature.

  4. Saturn Apollo Program

    NASA Image and Video Library

    1965-04-16

    This photograph depicts a dramatic view of the first test firing of all five F-1 engines for the Saturn V S-IC stage at the Marshall Space Flight Center. The test lasted a full duration of 6.5 seconds. It also marked the first test performed in the new S-IC static test stand and the first test using the new control blockhouse. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. Required to hold down the brute force of a 7,500,000-pound thrust, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand had an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward onto a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.

  5. Heterozygote PCR product melting curve prediction.

    PubMed

    Dwight, Zachary L; Palais, Robert; Kent, Jana; Wittwer, Carl T

    2014-03-01

    Melting curve prediction of PCR products is limited to perfectly complementary strands. Multiple domains are calculated by recursive nearest neighbor thermodynamics. However, the melting curve of an amplicon containing a heterozygous single-nucleotide variant (SNV) after PCR is the composite of four duplexes: two matched homoduplexes and two mismatched heteroduplexes. To better predict the shape of composite heterozygote melting curves, 52 experimental curves were compared with brute force in silico predictions varying two parameters simultaneously: the relative contribution of heteroduplex products and an ionic scaling factor for mismatched tetrads. Heteroduplex products contributed 25.7 ± 6.7% to the composite melting curve, varying from 23% to 28% for different SNV classes. The effect of ions on mismatched tetrads scaled to 76%-96% of normal (depending on SNV class) and averaged 88 ± 16.4%. Based on uMelt (www.dna.utah.edu/umelt/umelt.html) with an expanded nearest neighbor thermodynamic set that includes mismatched base pairs, uMelt HETS calculates helicity as a function of temperature for homoduplex and heteroduplex products, as well as the composite curve expected from heterozygotes. It is an interactive Web tool for efficient genotyping design, heterozygote melting curve prediction, and quality control of melting curve experiments. The application was developed in ActionScript and can be found online at http://www.dna.utah.edu/hets/. © 2013 WILEY PERIODICALS, INC.
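
    The two-parameter brute force search described above can be sketched as a simple grid scan. In the sketch below, predict_curve() is a hypothetical stand-in for the nearest-neighbor thermodynamic prediction, and the "measured" curve is synthetic:

    ```python
    import numpy as np

    # Grid scan over heteroduplex contribution and an ionic scaling factor,
    # scoring each pair against a measured composite melting curve.

    temps = np.linspace(70.0, 95.0, 200)

    def predict_curve(het_frac, ion_scale):
        tm_homo = 85.0
        tm_het = 85.0 - 5.0 * ion_scale       # mismatches destabilize (assumed)
        homo = 1.0 / (1.0 + np.exp(temps - tm_homo))
        het = 1.0 / (1.0 + np.exp(temps - tm_het))
        return (1.0 - het_frac) * homo + het_frac * het

    measured = predict_curve(0.26, 0.88)      # synthetic "experimental" curve

    grid_h = np.linspace(0.10, 0.40, 31)      # heteroduplex contribution
    grid_s = np.linspace(0.70, 1.00, 31)      # ionic scaling factor
    sse = np.array([[np.sum((predict_curve(h, s) - measured) ** 2)
                     for s in grid_s] for h in grid_h])
    ih, js = np.unravel_index(sse.argmin(), sse.shape)
    print(f"best fit: het_frac={grid_h[ih]:.2f}, ion_scale={grid_s[js]:.2f}")
    ```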

  6. Comparison of two laryngeal tissue fiber constitutive models

    NASA Astrophysics Data System (ADS)

    Hunter, Eric J.; Palaparthi, Anil Kumar Reddy; Siegmund, Thomas; Chan, Roger W.

    2014-02-01

    Biological tissues are complex time-dependent materials, and the best choice of time-dependent constitutive description is not evident. This report reviews two constitutive models (a modified Kelvin model and a two-network Ogden-Boyce model) for characterizing the passive stress-strain properties of laryngeal tissue under tensile deformation. The two models are compared, as are the automated methods for parameterizing them from tissue stress-strain data (a brute force method vs. a common optimization method). Sensitivities (error curves) of the parameters from both models and the optimized parameter sets are calculated and contrasted by optimizing to the same tissue stress-strain data. Both models adequately characterized empirical stress-strain datasets and could be used to recreate a good likeness of the data. Nevertheless, the parameters in both models were sensitive to measurement errors or uncertainties in stress-strain, which greatly hinders confidence in those parameters. The modified Kelvin model emerges as a potentially better choice for phonation models that use a tissue model as one component, or for general comparisons of the mechanical properties of one type of tissue to another (e.g., axial stress nonlinearity). In contrast, the Ogden-Boyce model is more appropriate for providing a basic understanding of the tissue's mechanical response, with better insights into the tissue's physical characteristics in terms of standard engineering metrics such as shear modulus and viscosity.

  7. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator is limited by line of sight and prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities that allow it to navigate along a pre-planned route in real-time fashion. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute force search method to re-optimize the route in the event of collisions detected by a range finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm; a sketch of the re-planning step is given below. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to testing on the flying site. The results showed that the range finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
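
    A minimal Python sketch of the nearest-neighbour re-planning step (the waypoints are hypothetical, and the GA-based global optimization and the Arduino/Ardupilot integration are omitted):

    ```python
    import math

    # Nearest-neighbour re-planning: when the range finder flags a waypoint
    # as blocked, drop it and greedily re-order the remaining waypoints from
    # the current position. Waypoints and the blocked set are illustrative.

    def nearest_neighbour_route(current, waypoints):
        route, remaining, pos = [], list(waypoints), current
        while remaining:
            nxt = min(remaining, key=lambda w: math.dist(pos, w))
            route.append(nxt)
            remaining.remove(nxt)
            pos = nxt
        return route

    waypoints = [(0, 5), (4, 1), (2, 8), (7, 3), (5, 6)]
    blocked = {(4, 1)}                        # reported by the range finder
    replanned = nearest_neighbour_route(
        (0, 0), [w for w in waypoints if w not in blocked])
    print("re-optimized route:", replanned)
    ```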

  8. The evolution of parental care in insects: A test of current hypotheses.

    PubMed

    Gilbert, James D J; Manica, Andrea

    2015-05-01

    Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced (for example, by caring for multiple broods) but rejecting the "enhanced fecundity" hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in its general trajectory. Integrating these findings with the wealth of life history and ecological data on insects will allow testing of a rich vein of existing hypotheses. © 2015 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  9. Strategies for resolving conflict: their functional and dysfunctional sides.

    PubMed

    Stimac, M

    1982-01-01

    Conflict in the workplace can have a beneficial effect; that is, if appropriately resolved, it plays an important part in effective problem solving, according to author Michele Stimac, associate dean, curriculum and instruction, and professor at Pepperdine University Graduate School of Education and Psychology. She advocates confrontation--by way of negotiation rather than brute force--as the best way to resolve conflict, heal wounds, reconcile the parties involved, and give the resolution long life. But she adds that if a person who has thought through when, where, and how to confront someone foresees only disaster, avoidance is the best path to take. The emphasis here is on strategy. Avoiding confrontation, for example, is not a strategic move unless it is backed by considered judgment. Stimac lays out these basic tenets for engaging in sound negotiation: (1) The confrontation should take place in neutral territory. (2) The parties should actively listen to each other. (3) Each should assert his or her right to fair treatment. (4) Each must allow the other to retain his or her dignity. (5) The parties should seek a consensus on the issues in conflict, their resolution, and the means of reducing any tension that results from the resolution. (6) The parties should exhibit a spirit of give and take--that is, of compromise. (7) They should seek satisfaction for all involved.

  10. Detecting rare, abnormally large grains by x-ray diffraction

    DOE PAGES

    Boyce, Brad L.; Furnish, Timothy Allen; Padilla, H. A.; ...

    2015-07-16

    Bimodal grain structures are common in many alloys, arising from a number of different causes including incomplete recrystallization and abnormal grain growth. These bimodal grain structures have important technological implications, such as the well-known Goss texture which is now a cornerstone for electrical steels. Yet our ability to detect bimodal grain distributions is largely confined to brute force cross-sectional metallography. The present study presents a new method for rapid detection of unusually large grains embedded in a sea of much finer grains. Traditional X-ray diffraction-based grain size measurement techniques such as Scherrer, Williamson–Hall, or Warren–Averbach rely on peak breadth and shape to extract information regarding the average crystallite size. However, these line broadening techniques are not well suited to identify a very small fraction of abnormally large grains. The present method utilizes statistically anomalous intensity spikes in the Bragg peak to identify regions where abnormally large grains are contributing to diffraction. This needle-in-a-haystack technique is demonstrated on a nanocrystalline Ni–Fe alloy which has undergone fatigue-induced abnormal grain growth. In this demonstration, the technique readily identifies a few large grains that occupy <0.00001 % of the interrogation volume. Finally, while the technique is demonstrated in the current study on nanocrystalline metal, it would likely apply to any bimodal polycrystal including ultrafine grained and fine microcrystalline materials with sufficiently distinct bimodal grain statistics.

  11. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
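
    For a small subsystem, a pairwise maximum entropy (Ising-type) model of the kind discussed above can be fitted exactly by enumerating all states, which is precisely the small-system regime the paper analyzes. A minimal Python sketch with synthetic binary data (illustrative only, not the authors' fitting procedure):

    ```python
    import itertools
    import numpy as np

    # Fit a pairwise maximum entropy model p(s) ~ exp(h.s + sum_ij J_ij s_i s_j)
    # to binary data by exact enumeration and gradient ascent on the likelihood.
    # Feasible only for small n; synthetic independent data used here.

    n = 5
    rng = np.random.default_rng(9)
    data = (rng.random((2000, n)) < 0.3).astype(float)   # synthetic spike words

    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    emp_mean = data.mean(0)
    emp_corr = (data.T @ data) / len(data)

    h = np.zeros(n)
    J = np.zeros((n, n))                                  # upper triangle used
    for _ in range(2000):
        energies = states @ h + np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(energies)
        p /= p.sum()
        model_mean = p @ states
        model_corr = states.T @ (states * p[:, None])
        h += 0.1 * (emp_mean - model_mean)                # moment matching
        J += 0.1 * np.triu(emp_corr - model_corr, 1)

    print("fitted fields h:", np.round(h, 2))
    ```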

  12. A new feature detection mechanism and its application in secured ECG transmission with noise masking.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim

    2009-04-01

    With cardiovascular disease the number one killer of the modern era, the Electrocardiogram (ECG) is collected, stored, and transmitted with greater frequency than ever before. However, in reality, the ECG is rarely transmitted and stored in a secured manner. Recent research shows that an eavesdropper can reveal the identity and cardiovascular condition of a patient from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and also stored as such in medical repositories. To achieve this, this paper first presents a new ECG feature detection mechanism, which was compared against existing cross correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Compared to the CC based approaches, which had 40% and 53% misclassification rates, the proposed detection algorithm did not produce a single misclassification. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects using added noises corresponding to each of the ECG features. The obfuscated ECG can be freely distributed over the internet without the need for encryption, since the original features required to identify the personal information of the patient remain concealed. Only authorized personnel possessing a secret key are able to reconstruct the original ECG from the obfuscated ECG. The distributed ECG would appear as a regular ECG without encryption. Therefore, traditional decryption techniques, including powerful brute force attacks, are useless against this obfuscation.

  13. RCNP Project on Polarized {sup 3}He Ion Sources - From Optical Pumping to Cryogenic Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, M.; Inomata, T.; Takahashi, Y.

    2009-08-04

    A polarized {sup 3}He ion source has been developed at RCNP for intermediate and high energy spin physics. Though we started with an OPPIS (Optical Pumping Polarized Ion Source), it could not provide a highly polarized {sup 3}He beam because of fundamental difficulties. Following this result, we examined novel types of polarized {sup 3}He ion source, i.e., the EPPIS (Electron Pumping Polarized Ion Source) and the ECRPIS (ECR Polarized Ion Source), experimentally and theoretically, respectively. However, attainable {sup 3}He polarization degrees and beam intensities were still insufficient for practical use. A few years later, we proposed a new idea for the polarized {sup 3}He ion source, SEPIS (Spin Exchange Polarized Ion Source), which is based on enhanced spin-exchange cross sections at low incident energies for {sup 3}He{sup +}+Rb, and its feasibility was experimentally examined. Recently, we started a project on polarized {sup 3}He gas generated by the brute force method at low temperature (approx. 4 mK) and strong magnetic field (approx. 17 T), with rapid melting of the highly polarized solid {sup 3}He followed by gasification. If this project is successful, highly polarized {sup 3}He gas will hopefully be used for a new type of polarized {sup 3}He ion source.

  14. Contribution of regional-scale fire events to ozone and PM2.5 ...

    EPA Pesticide Factsheets

    Two specific fires from 2011 are tracked for local to regional scale contributions to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, the Sparse Matrix Operator Kernel Emissions (SMOKE) model, the Weather Research and Forecasting (WRF) meteorological model, and the Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contribution from a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable using the source sensitivity (brute-force zero out) and source apportionment (Integrated Source Apportionment Method) approaches. Model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating the modeling system captures the range of O3 inhibition possible near fires and O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde that are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas

  15. Reconstruction of piano hammer force from string velocity.

    PubMed

    Chaigne, Antoine

    2016-11-01

    A method is presented for reconstructing piano hammer forces through appropriate filtering of the measured string velocity. The filter design is based on the analysis of the pulses generated by the hammer blow and propagating along the string. In the five lowest octaves, the hammer force is reconstructed by considering two waves only: the incoming wave from the hammer and its first reflection at the front end. For the higher notes, four- or eight-wave schemes must be considered. The theory is validated on simulated string velocities by comparing imposed and reconstructed forces. The simulations are based on a nonlinear damped stiff string model previously developed by Chabassier, Chaigne, and Joly [J. Acoust. Soc. Am. 134(1), 648-665 (2013)]. The influence of absorption, dispersion, and amplitude of the string waves on the quality of the reconstruction is discussed. Finally, the method is applied to real piano strings. The measured string velocity is compared to the simulated velocity excited by the reconstructed force, showing a high degree of accuracy. A number of simulations are compared to simulated strings excited by a force derived from measurements of mass and acceleration of the hammer head. One application to an historic piano is also presented.

  16. Force field dependent solution properties of glycine oligomers

    PubMed Central

    Drake, Justin A.

    2015-01-01

    Molecular simulations can be used to study disordered polypeptide systems and to generate hypotheses on the underlying structural and thermodynamic mechanisms that govern their function. As the number of disordered protein systems investigated with simulations increases, it is important to understand how particular force fields affect the structural properties of disordered polypeptides in solution. To this end, we performed a comparative structural analysis of Gly3 and Gly10 in aqueous solution from all-atom, microsecond MD simulations using the CHARMM 27 (C27), CHARMM 36 (C36), and Amber ff12SB force fields. For each force field, Gly3 and Gly10 were simulated for at least 300 ns and 1 μs, respectively. Simulating oligoglycines of two different lengths allows us to evaluate how force field effects depend on polypeptide length. Using a variety of structural metrics (e.g. end-to-end distance, radius of gyration, dihedral angle distributions), we characterize the distribution of oligoglycine conformers for each force field and show that each samples conformational space differently, yielding considerably different structural tendencies for the same oligoglycine model in solution. Notably, we find that C36 samples more extended oligoglycine structures than both C27 and ff12SB. PMID:25952623

  17. The kinematics and kinetics of riding a racehorse: A quantitative comparison of a training simulator and real horses.

    PubMed

    Walker, A M; Applegate, C; Pfau, T; Sparkes, E L; Wilson, A M; Witte, T H

    2016-10-03

    Movement of a racehorse simulator differs from that of a real horse, but the effects of these differences on jockey technique have not been evaluated. We quantified and compared the kinematics and kinetics of jockeys during gallop riding on a simulator and on real horses. Inertial measurement units were attached mid-shaft to the long bones of six jockeys and to the sacrum of the horse or simulator. Instrumented stirrups were used to measure force. Data were collected during galloping on a synthetic gallop track or while riding a racehorse simulator. Jockey kinematics varied more on a real horse than on the simulator. More than double the peak stirrup force was recorded during gallop on real horses compared to the simulator. On the simulator stirrup forces were symmetrical, whereas on a real horse peak forces were higher on the opposite side to the lead limb. Asymmetric forces and lateral movement of the horse and jockey occur away from the side of the lead leg, likely a result of horse trunk roll. Jockeys maintained a more upright trunk position on a real horse compared to the simulator, with no change in pitch. The feet move in phase with the horse and simulator, exhibiting displacements of similar magnitude in all directions. In contrast, the pelvis was in phase with the horse and simulator in the dorso-ventral and medio-lateral axes, while a phase shift of 180° was seen in the cranio-caudal direction, indicating an inverted pendulum action of the jockey. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. The Effects of Filter Cutoff Frequency on Musculoskeletal Simulations of High-Impact Movements.

    PubMed

    Tomescu, Sebastian; Bakker, Ryan; Beach, Tyson A C; Chandrashekar, Naveen

    2018-02-12

    Estimation of muscle forces through musculoskeletal simulation is important in understanding human movement and injury. Unmatched cutoff frequencies used to low-pass filter marker and force platform data can create artifacts during inverse dynamics analysis, but their effects on muscle force calculations are unknown. The objective of this study was to determine the effects of filter cutoff frequency on simulation parameters and the magnitudes of lower extremity muscle and resultant joint contact forces during a high-impact maneuver. Eight participants performed a single-leg jump-landing. Kinematics were captured with a 3D motion capture system and ground reaction forces were recorded with a force platform. The marker and force platform data were filtered using two matched filter frequency pairs (10-10 Hz, 15-15 Hz) and two unmatched pairs (10-50 Hz, 15-50 Hz). Musculoskeletal simulations using Computed Muscle Control were performed in OpenSim. The results revealed significantly higher peak quadriceps (13%), hamstrings (48%), and gastrocnemius (69%) forces in the unmatched (10-50 Hz, 15-50 Hz) conditions than in the matched (10-10 Hz, 15-15 Hz) conditions (p<0.05). Resultant joint contact forces and reserve (non-physiologic) moments were similarly larger in the unmatched filter categories (p<0.05). This study demonstrated that artifacts created by filtering with unmatched cutoffs result in altered muscle forces and dynamics that are not physiologic.
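
    The matched-cutoff filtering compared here is conventionally done with a zero-phase Butterworth filter. A minimal Python sketch, assuming illustrative sampling rates and synthetic signals (not the study's data or code):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    # Matched low-pass filtering of marker and force-plate channels before
    # inverse dynamics (cf. the 10-10 Hz condition above).

    def lowpass(x, cutoff_hz, fs_hz, order=4):
        b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
        return filtfilt(b, a, x)              # zero-phase filtering

    fs_marker, fs_force = 200.0, 1000.0       # assumed sampling rates
    rng = np.random.default_rng(6)
    t_m = np.arange(0, 2, 1 / fs_marker)
    t_f = np.arange(0, 2, 1 / fs_force)
    marker = np.sin(2 * np.pi * 3 * t_m) + 0.1 * rng.normal(size=t_m.size)
    grf = 800 * np.exp(-((t_f - 1.0) ** 2) / 0.001)   # impact-like pulse

    marker_f = lowpass(marker, 10.0, fs_marker)   # same 10 Hz cutoff for both
    grf_f = lowpass(grf, 10.0, fs_force)          # streams: a "matched" pair
    print(f"peak GRF: raw {grf.max():.0f}, filtered {grf_f.max():.0f}")
    ```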

  19. Internal force corrections with machine learning for quantum mechanics/molecular mechanics simulations.

    PubMed

    Wu, Jingheng; Shen, Lin; Yang, Weitao

    2017-10-28

    Ab initio quantum mechanics/molecular mechanics (QM/MM) molecular dynamics simulation is a useful tool for calculating thermodynamic properties such as the potential of mean force for chemical reactions, but it is extremely time-consuming. In this paper, we developed a new method using an internal force correction for low-level semiempirical QM/MM molecular dynamics sampling with a predefined reaction coordinate. As a correction term, the internal force was predicted with a machine learning scheme, which provides a sophisticated force field, and added to the atomic forces on the reaction-coordinate-related atoms at each integration step. We applied this method to two reactions in aqueous solution and reproduced potentials of mean force at the ab initio QM/MM level. The saving in computational cost is about 2 orders of magnitude. The present work reveals great potential for machine learning in QM/MM simulations of complex chemical processes.
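
    The force-correction idea can be sketched as delta learning: a regressor is trained on the difference between high-level and low-level forces, and its prediction is added during the cheap sampling. The following Python sketch uses hypothetical descriptors and synthetic data, not the authors' actual scheme:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Delta-learning sketch: learn f_high - f_low as a function of a
    # structural descriptor, then correct the cheap forces on the fly.
    # All descriptors and forces below are synthetic placeholders.

    rng = np.random.default_rng(7)
    descriptors = rng.normal(size=(500, 6))            # e.g., internal coords
    f_low = rng.normal(size=500)                       # semiempirical forces
    f_high = f_low + 0.3 * np.tanh(descriptors[:, 0])  # "ab initio" forces

    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
    model.fit(descriptors, f_high - f_low)             # learn correction only

    def corrected_force(descriptor, f_semiempirical):
        return f_semiempirical + model.predict(descriptor.reshape(1, -1))[0]

    print(corrected_force(descriptors[0], f_low[0]))
    ```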

  20. A combined averaging and frequency mixing approach for force identification in weakly nonlinear high-Q oscillators: Atomic force microscope

    NASA Astrophysics Data System (ADS)

    Sah, Si Mohamed; Forchheimer, Daniel; Borgani, Riccardo; Haviland, David

    2018-02-01

    We present a polynomial force reconstruction of the tip-sample interaction force in Atomic Force Microscopy. The method uses analytical expressions for the slow-time amplitude and phase evolution, obtained by time-averaging over the rapidly oscillating part of the cantilever dynamics. The slow-time behavior can be easily obtained in either numerical simulations or experiments in which a high-Q resonator is perturbed by a weak nonlinearity and a periodic driving force. A direct fit of the theoretical expressions to the simulated and experimental data gives the best-fit parameters for the force model. The method combines and complements previous works (Platz et al., 2013; Forchheimer et al., 2012) and allows for computationally more efficient parameter mapping with AFM. Results for the simulated asymmetric piecewise linear force and VdW-DMT force models are compared with the reconstructed polynomial force and show good agreement. It is also shown that the analytical amplitude and phase modulation equations fit the experimental data well.

  1. Discrete Element Method Simulation of a Boulder Extraction From an Asteroid

    NASA Technical Reports Server (NTRS)

    Kulchitsky, Anton K.; Johnson, Jerome B.; Reeves, David M.; Wilkinson, Allen

    2014-01-01

    The force required to pull 7 t and 40 t polyhedral boulders from the surface of an asteroid is simulated using the discrete element method, considering the effects of microgravity, regolith cohesion, and boulder acceleration. The connection between particle surface energy and regolith cohesion is estimated by simulating a cohesion sample tearing test. An optimal constant acceleration is found at which the peak net force from inertia and cohesion is a minimum. Peak pulling forces can be further reduced by using linear and quadratic acceleration functions, with up to a 40% reduction in force for quadratic acceleration.

  2. Understanding Resonance Graphs Using Easy Java Simulations (EJS) and Why We Use EJS

    ERIC Educational Resources Information Center

    Wee, Loo Kang; Lee, Tat Leong; Chew, Charles; Wong, Darren; Tan, Samuel

    2015-01-01

    This paper reports a computer model simulation created using Easy Java Simulation (EJS) for learners to visualize how the steady-state amplitude of a driven oscillating system varies with the frequency of the periodic driving force. The simulation shows (N = 100) identical spring-mass systems being subjected to (1) a periodic driving force of…

  3. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants

    PubMed Central

    Ewen, James P.; Gattinoni, Chiara; Thakkar, Foram M.; Morgan, Neal; Spikes, Hugh A.; Dini, Daniele

    2016-01-01

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce the experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce the experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed. PMID:28773773

  4. A Comparison of Classical Force-Fields for Molecular Dynamics Simulations of Lubricants.

    PubMed

    Ewen, James P; Gattinoni, Chiara; Thakkar, Foram M; Morgan, Neal; Spikes, Hugh A; Dini, Daniele

    2016-08-02

    For the successful development and application of lubricants, a full understanding of their complex nanoscale behavior under a wide range of external conditions is required, but this is difficult to obtain experimentally. Nonequilibrium molecular dynamics (NEMD) simulations can be used to yield unique insights into the atomic-scale structure and friction of lubricants and additives; however, the accuracy of the results depends on the chosen force-field. In this study, we demonstrate that the use of an accurate, all-atom force-field is critical in order to: (i) accurately predict important properties of long-chain, linear molecules; and (ii) reproduce the experimental friction behavior of multi-component tribological systems. In particular, we focus on n-hexadecane, an important model lubricant with a wide range of industrial applications. Moreover, simulating conditions common in tribological systems, i.e., high temperatures and pressures (HTHP), allows the limits of the selected force-fields to be tested. In the first section, a large number of united-atom and all-atom force-fields are benchmarked in terms of their density and viscosity prediction accuracy for n-hexadecane using equilibrium molecular dynamics (EMD) simulations at ambient and HTHP conditions. Whilst united-atom force-fields accurately reproduce the experimental density, the viscosity is significantly under-predicted compared to all-atom force-fields and experiments. Moreover, some all-atom force-fields yield elevated melting points, leading to significant overestimation of both the density and viscosity. In the second section, the most accurate united-atom and all-atom force-fields are compared in confined NEMD simulations which probe the structure and friction of stearic acid adsorbed on iron oxide and separated by a thin layer of n-hexadecane. The united-atom force-field provides an accurate representation of the structure of the confined stearic acid film; however, friction coefficients are consistently under-predicted and the friction-coverage and friction-velocity behavior deviates from that observed using all-atom force-fields and experimentally. This has important implications regarding force-field selection for NEMD simulations of systems containing long-chain, linear molecules; specifically, it is recommended that accurate all-atom potentials, such as L-OPLS-AA, are employed.

  5. Ground Reaction Forces During Locomotion in Simulated Microgravity

    NASA Technical Reports Server (NTRS)

    Davis, B. L.; Cavanagh, Peter R.; Sommer, H. J., III; Wu, G.

    1996-01-01

    Significant losses in bone density and mineral content, primarily in the lower extremities, have been reported following exposure to weightlessness. Recent investigations suggest that mechanical influences such as bone deformation and strain rate may be critically important in stimulating new bone formation. It was hypothesized that velocity, cadence, and harness design would significantly affect lower limb impact forces during treadmill exercise in simulated zero gravity (0G). A ground-based hypogravity simulator was used to investigate which factors affect limb loading during tethered treadmill exercise. A fractional factorial design was used and 12 subjects were studied. The results showed that running on active and passive treadmills in the simulator with a tethering force close to the maximum comfortable level produced similar magnitudes for the peak ground reaction force. It was also found that these maximum forces were significantly lower than those obtained during overground trials, even when the speeds of locomotion in the simulator were 66% greater than those in 1G. Cadence had no effect on any of the response variables. The maximum rate of force application (DFDT-Max) was similar for overground running and exercise in simulated 0G, provided that the "weightless" subjects ran on a motorized treadmill. These findings have implications for the use of treadmill exercise as a countermeasure for hypokinetic osteoporosis. As the relationship between mechanical factors and osteogenesis becomes better understood, results from human experiments in 0G simulators will help in designing in-flight exercise programs that are more closely targeted to generate appropriate mechanical stimuli.

  6. Evaluation of DNA Force Fields in Implicit Solvation

    PubMed Central

    Gaillard, Thomas; Case, David A.

    2011-01-01

    DNA structural deformations and dynamics are crucial to its interactions in the cell. Theoretical simulations are essential tools for exploring the structure, dynamics, and thermodynamics of biomolecules in a systematic way. Molecular mechanics force fields for DNA have benefited from constant improvement over the last decades. Several studies have evaluated and compared the available force fields when the solvent is modeled by explicit molecules. On the other hand, few systematic studies have assessed the quality of duplex DNA models when implicit solvation is employed. The appeal of implicit solvent modeling lies in the significant gain in simulation performance and conformational sampling speed. In this study, the respective influences of the force field and the implicit solvation model on DNA simulation quality are evaluated. To this end, extensive implicit solvent duplex DNA simulations are performed, attempting to reach both conformational and sequence diversity convergence. Structural parameters are extracted from the simulations and statistically compared to available experimental and explicit solvation simulation data. Our results quantitatively expose the respective strengths and weaknesses of the different DNA force fields and implicit solvation models studied. This work can lead to the suggestion of improvements to current DNA theoretical models. PMID:22043178

  7. 78 FR 67132 - GPS Satellite Simulator Control Working Group Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-08

    ... DEPARTMENT OF DEFENSE Department of the Air Force GPS Satellite Simulator Control Working Group Meeting AGENCY: Space and Missile Systems Center, Global Positioning Systems (GPS) Directorate, Air Force... Control Working Group (SSCWG) meeting on 6 December 2013 from 0900-1300 PST at Los Angeles Air Force Base...

  8. Scaling laws in granular flow and pedestrian flow

    NASA Astrophysics Data System (ADS)

    Chen, Shumiao; Alonso-Marroquin, Fernando; Busch, Jonathan; Hidalgo, Raúl Cruz; Sathianandan, Charmila; Ramírez-Gómez, Álvaro; Mora, Peter

    2013-06-01

    We use particle-based simulations to examine the flow of particles through an exit. Simulations involve both gravity-driven particles (representing granular material) and velocity-driven particles (mimicking pedestrian dynamics). Contact forces between particles include elastic, viscous, and frictional forces, and the simulations use a bunker geometry. Power laws are observed in the relation between flow rate and exit width; a sketch of how such an exponent can be extracted is given below. Simulations of granular flow showed that the power law depends little on the coefficient of friction. Polydisperse granular systems produced higher flow rates than monodisperse ones. We extend the particle model to include the main features of pedestrian dynamics: thoracic shape, shoulder rotation, and a desired velocity oriented towards the exit. Higher desired velocity resulted in higher flow rate. Granular simulations always give higher flow rates than pedestrian simulations, regardless of the particle aspect ratio. In terms of force distribution, pedestrians and granulates share similar properties, with a non-democratic distribution of forces that poses a high risk of injury in bottleneck situations.
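
    A power-law exponent of the kind reported here is conventionally recovered by a linear fit in log-log space. A minimal Python sketch with synthetic flow-rate data (the 2.5 exponent used to generate the data is an assumed Beverloo-like value, not a result from this study):

    ```python
    import numpy as np

    # Recover a power-law exponent from flow-rate vs. exit-width data,
    # Q ~ C * w**alpha, via a linear fit in log-log space. Synthetic data.

    rng = np.random.default_rng(8)
    widths = np.array([1.5, 2.0, 3.0, 4.0, 6.0, 8.0])  # width (grain diameters)
    rates = 0.9 * widths ** 2.5 * (1.0 + 0.03 * rng.normal(size=widths.size))

    alpha, log_c = np.polyfit(np.log(widths), np.log(rates), 1)
    print(f"fitted exponent alpha = {alpha:.2f}, "
          f"prefactor C = {np.exp(log_c):.2f}")
    ```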

  9. ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins

    PubMed Central

    2015-01-01

    We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than by the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard-Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond-timescale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495
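
    Each generation of such a torsion fit reduces to a linear least-squares problem: choose Fourier coefficients so that the molecular-mechanics torsion term reproduces the gap between the quantum benchmark and the rest of the force field. A minimal sketch with synthetic target energies (all values hypothetical, not the ff14ipq data):

    ```python
    import numpy as np

    # Fit barrier heights V_n so that E(phi) = sum_n V_n * (1 + cos(n*phi))
    # matches E_target = E_QM - E_MM(no torsion) over scanned conformations.
    phi = np.linspace(-np.pi, np.pi, 36)                  # torsion angles (rad)
    E_target = (1.2 * (1 + np.cos(phi)) + 0.3 * (1 + np.cos(2 * phi))
                + 0.05 * np.random.randn(phi.size))       # synthetic QM-MM gap

    # Design matrix for the cosine series, n = 1..3
    A = np.column_stack([1 + np.cos(n * phi) for n in (1, 2, 3)])
    V, *_ = np.linalg.lstsq(A, E_target, rcond=None)
    print("fitted barrier heights V1..V3 (kcal/mol):", np.round(V, 3))
    ```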

  10. The impact of volcanic aerosol on the Northern Hemisphere stratospheric polar vortex: mechanisms and sensitivity to forcing structure

    NASA Astrophysics Data System (ADS)

    Toohey, M.; Krüger, K.; Bittner, M.; Timmreck, C.; Schmidt, H.

    2014-12-01

    Observations and simple theoretical arguments suggest that the Northern Hemisphere (NH) stratospheric polar vortex is stronger in winters following major volcanic eruptions. However, recent studies show that climate models forced by prescribed volcanic aerosol fields fail to reproduce this effect. We investigate the impact of volcanic aerosol forcing on stratospheric dynamics, including the strength of the NH polar vortex, in ensemble simulations with the Max Planck Institute Earth System Model. The model is forced by four different prescribed forcing sets representing the radiative properties of stratospheric aerosol following the 1991 eruption of Mt. Pinatubo: two forcing sets are based on observations, and are commonly used in climate model simulations, and two forcing sets are constructed based on coupled aerosol-climate model simulations. For all forcings, we find that simulated temperature and zonal wind anomalies in the NH high latitudes are not directly impacted by anomalous volcanic aerosol heating. Instead, high-latitude effects result from enhancements in stratospheric residual circulation, which in turn result, at least in part, from enhanced stratospheric wave activity. High-latitude effects are therefore much less robust than would be expected if they were the direct result of aerosol heating. Both observation-based forcing sets result in insignificant changes in vortex strength. For the model-based forcing sets, the vortex response is found to be sensitive to the structure of the forcing, with one forcing set leading to significant strengthening of the polar vortex in rough agreement with observation-based expectations. Differences in the dynamical response to the forcing sets imply that reproducing the polar vortex responses to past eruptions, or predicting the response to future eruptions, depends on accurate representation of the space-time structure of the volcanic aerosol forcing.

  11. Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method

    PubMed Central

    Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao

    2016-01-01

    This paper presents a clump model based on the Discrete Element Method. The clump model approximates real particle shapes more closely than a single sphere. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall were performed. Five clump models with different sphericity were used in the simulations. By comparing the simulated normal forces on the rigid wall with experimental results, the clump model with the best sphericity was selected for the subsequent numerical analysis and discussion. The calculated normal forces showed good agreement with the experimental results, which verifies the effectiveness of the clump model. The total normal force and bending moment on the rigid wall and the motion of the granular flow were then further analyzed. Finally, numerical simulations using the clump model with different grain compositions were compared. By observing the normal force on the rigid wall and the distribution of particle size at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed: as particle size increases, the peak force at the retaining wall also increases. The results can provide a basis for research on related hazards and for the design of protective structures. PMID:27513661

  12. Comparing Molecular Dynamics Force Fields in the Essential Subspace

    PubMed Central

    Gomez-Puertas, Paulino; Boomsma, Wouter; Lindorff-Larsen, Kresten

    2015-01-01

    The continued development and utility of molecular dynamics simulations requires improvements in both the physical models used (force fields) and in our ability to sample the Boltzmann distribution of these models. Recent developments in both areas have made available multi-microsecond simulations of two proteins, ubiquitin and Protein G, using a number of different force fields. Although these force fields mostly share a common mathematical form, they differ in their parameters and in the philosophy by which these were derived, and previous analyses showed varying levels of agreement with experimental NMR data. To complement the comparison to experiments, we have performed a structural analysis of and comparison between these simulations, thereby providing insight into the relationship between force-field parameterization, the resulting ensemble of conformations and the agreement with experiments. In particular, our results show that, at a coarse level, many of the motional properties are preserved across several, though not all, force fields. At a finer level of detail, however, there are distinct differences in both the structure and dynamics of the two proteins, which can, together with comparison with experimental data, help to select force fields for simulations of proteins. A noteworthy observation is that force fields that have been reparameterized and improved to provide a more accurate energetic description of the balance between helical and coil structures are difficult to distinguish from their “unbalanced” counterparts in these simulations. This observation implies that simulations of stable, folded proteins, even those reaching 10 microseconds in length, may provide relatively little information that can be used to modify torsion parameters to achieve an accurate balance between different secondary structural elements. PMID:25811178

  13. Simulation of the DNA force-extension curve

    NASA Astrophysics Data System (ADS)

    Shinaberry, Gregory; Mikhaylov, Ivan; Balaeff, Alexander

    A molecular dynamics simulation study of the force-extension curve of double-stranded DNA is presented. Extended simulations of the DNA at multiple points along the force-extension curve are conducted with DNA end-to-end length constrained at each point. The calculated force-extension curve qualitatively reproduces the experimental one. The DNA conformational ensemble at each extension shows that the famous plateau of the force-extension curve results from B-DNA melting, whereas the formation of the earlier-predicted novel DNA conformation called 'zip-DNA' takes place at extensions past the plateau. An extensive analysis of the DNA conformational ensemble in terms of base configuration, backbone configuration, solvent interaction energy, etc., is conducted in order to elucidate the physical origin of DNA elasticity and the main interactions responsible for the shape of the force-extension curve.

  14. Statistics of velocity gradients in two-dimensional Navier-Stokes and ocean turbulence.

    PubMed

    Schorghofer, Norbert; Gille, Sarah T

    2002-02-01

    Probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two-dimensional Navier-Stokes equations. Ocean data are derived from TOPEX satellite altimeter measurements. The simulations use rapid forcing on large scales, characteristic of surface winds. The probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, although they differ from unforced simulations reported elsewhere. The distribution and cross correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics.
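
    Computing such gradient PDFs from gridded velocity fields is a short exercise; a minimal sketch with a synthetic field standing in for simulation output or gridded altimeter velocities:

    ```python
    import numpy as np

    # PDF of the transverse velocity derivative du/dy from a gridded 2D
    # velocity component u(y, x). The random field below is a placeholder.
    ny, nx, dy = 256, 256, 1.0
    u = np.random.randn(ny, nx)                 # placeholder velocity component
    dudy = np.gradient(u, dy, axis=0)           # transverse derivative

    # Normalize and histogram so PDF shapes can be compared across datasets.
    s = (dudy - dudy.mean()) / dudy.std()
    pdf, edges = np.histogram(s, bins=100, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    ```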

  15. Sensitivity of Force Fields on Mechanical Properties of Metals Predicted by Atomistic Simulations

    NASA Astrophysics Data System (ADS)

    Rassoulinejad-Mousavi, Seyed Moein; Zhang, Yuwen

    The increasing number of micro/nanoscale studies for scientific and engineering applications has led to wide deployment of atomistic simulations such as molecular dynamics and Monte Carlo methods. Users in the simulation community often report wrong results notwithstanding a correct simulation procedure and conditions; an improper choice of force field, also known as the interatomic potential, is a likely cause. For the sake of users' assurance, convenience, and time saving, several interatomic potentials are evaluated by molecular dynamics. Elastic properties of multiple FCC and BCC pure metallic species are obtained with LAMMPS, using different interatomic potentials designed for the pure species and their alloys at different temperatures. The potentials are based on the Embedded Atom Method (EAM), Modified EAM (MEAM), and ReaxFF force fields, adopted from available open databases. Independent elastic stiffness constants of cubic single crystals are obtained for the different metals. The results are compared with experimental values available in the literature, and deviations for each force field are provided at each temperature. Using this work, users of these force fields can more easily judge which one to adopt for their problem.
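
    The elastic-constant extraction described here boils down to a linear fit of stress against small applied strains. A minimal sketch, where the stress values are hypothetical placeholders for what an MD code such as LAMMPS would report:

    ```python
    import numpy as np

    # Apply small uniaxial strains, record the conjugate stress component,
    # and take C11 as the slope of the stress-strain line.
    strain = np.array([-0.004, -0.002, 0.0, 0.002, 0.004])
    sigma_xx = np.array([-0.68, -0.34, 0.0, 0.34, 0.67])   # GPa, illustrative

    C11 = np.polyfit(strain, sigma_xx, 1)[0]
    print(f"C11 ~ {C11:.1f} GPa")
    ```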

  16. Computational simulation of biomolecules transport with multi-physics near microchannel surface for development of biomolecules-detection devices.

    PubMed

    Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming

    2017-01-01

    Quantitative evaluation of multi-physics biomolecule transport at the nano/micro scale is needed to optimize the design of microfluidic devices for biomolecule detection with high sensitivity and rapid diagnosis. This paper investigates the effectiveness of computational simulation, using a numerical model of multi-physics biomolecule transport near a microchannel surface, for the development of biomolecule-detection devices. Biomolecule transport under fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled with the Newtonian equation of motion. The model was validated by comparing the simulated influence of ionic strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the near-surface distribution was then investigated by simulation. With all acting forces combined, the simulated trends of the distribution with ionic strength and flow velocity agreed with the experimental results. Furthermore, the EDL force dominated the near-surface distribution relative to the fluid drag force, except in the case of high velocity and low ionic strength. The knowledge gained from these simulations may be useful for the design of biomolecule-detection devices, and the simulation approach could serve as a design tool for achieving high detection sensitivity and rapid diagnosis.
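
    The force balance described above lends itself to a compact numerical sketch: a particle near a wall feels Stokes drag, a screened EDL repulsion, and a sphere-plate van der Waals attraction. All parameter values below are illustrative assumptions, not the paper's:

    ```python
    import numpy as np

    radius = 5e-9           # m, particle radius (hypothetical)
    eta = 1e-3              # Pa*s, water viscosity
    gamma = 6 * np.pi * eta * radius   # Stokes drag coefficient
    debye = 3e-9            # m, Debye length (set by ionic strength)
    A_H = 1e-20             # J, Hamaker constant (hypothetical)

    def wall_force(h):
        """Net wall-normal force at gap h: EDL repulsion minus vdW attraction."""
        f_edl = 1e-11 * np.exp(-h / debye)       # N, screened EDL repulsion
        f_vdw = -A_H * radius / (6 * h**2)       # N, sphere-plate van der Waals
        return f_edl + f_vdw

    # Overdamped (inertialess) update: gamma * dh/dt = F(h)
    h, dt = 20e-9, 1e-8
    for _ in range(1000):
        h += dt * wall_force(h) / gamma
    print(f"gap after relaxation: {h * 1e9:.2f} nm")
    ```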

  17. A new algorithm for modeling friction in dynamic mechanical systems

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1988-01-01

    A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods, in which the friction effect is assumed to be a constant force or torque in a direction opposite to the relative motion, are applicable only to cases where the applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors that result from the finite integration interval used with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on the initial conditions of motion, the externally applied forces, inertia, and integration step size. The predictive calculation, in connection with an external integration process, provides an accurate determination of both static and Coulomb friction forces and the resulting motions in dynamic simulations. Accuracy is improved over conventional methods, and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation in various programming languages such as FORTRAN or C, as well as in other simulation programs.
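
    A minimal sketch of a predictive friction step in this spirit (not the author's exact formulation): stick when static friction can cancel the applied force, slide with Coulomb friction otherwise, and size the friction force so that one integration step never pushes the velocity through zero.

    ```python
    def friction_force(v, f_applied, m, dt, f_static, f_coulomb, v_eps=1e-6):
        """Predictive friction for one integration step (illustrative sketch)."""
        if abs(v) < v_eps:                        # effectively stationary
            if abs(f_applied) <= f_static:
                return -f_applied                 # stick: net force is zero
            direction = 1.0 if f_applied > 0 else -1.0
            return -direction * f_coulomb         # break-away, oppose impending motion
        # Sliding: Coulomb friction opposes velocity...
        f_k = f_coulomb if v < 0 else -f_coulomb
        v_next = v + dt * (f_applied + f_k) / m
        if v * v_next < 0:                        # ...but must not reverse it
            return -m * v / dt - f_applied        # force that stops the part exactly
        return f_k

    # Example: a 1 kg part creeping at 1 mm/s under a small applied force.
    print(friction_force(v=1e-3, f_applied=0.5, m=1.0, dt=0.01,
                         f_static=2.0, f_coulomb=1.5))
    ```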

  18. Implicit Solvation Parameters Derived from Explicit Water Forces in Large-Scale Molecular Dynamics Simulations

    PubMed Central

    2012-01-01

    Implicit solvation is a mean-force approach to modeling the solvent forces acting on a solute molecule. It is frequently used in molecular simulations to reduce the computational cost of solvent treatment. In the first instance, the free energy of solvation and the associated solvent-solute forces can be approximated by a function of the solvent-accessible surface area (SASA) of the solute, scaled by atom-specific solvation parameters σ_i^SASA. A procedure for determining values of the σ_i^SASA parameters through matching of explicit and implicit solvation forces is proposed. Using the results of molecular dynamics simulations of 188 topologically diverse protein structures in water and in implicit solvent, values of the σ_i^SASA parameters for the atom types i of the standard amino acids in the GROMOS force field have been determined. A simplified representation based on groups of atom types, σ_g^SASA, was obtained by partitioning the atom-type σ_i^SASA distributions with dynamic programming. Three groups of atom types with well-separated parameter ranges were obtained, and their performance in implicit versus explicit simulations was assessed. The solvent forces are available at http://mathbio.nimr.mrc.ac.uk/wiki/Solvent_Forces. PMID:23180979
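
    The functional form in question is simply G_solv = Σ_i σ_i^SASA · SASA_i, with solvent forces following by differentiation with respect to atomic positions. A minimal sketch (the parameter values are placeholders, not the fitted GROMOS values):

    ```python
    import numpy as np

    # Hypothetical per-atom-type solvation parameters, kJ/(mol * A^2)
    sigma = {"C": 0.005, "N": -0.010, "O": -0.012}
    atoms = ["C", "C", "N", "O"]
    sasa = np.array([32.1, 12.7, 8.4, 15.9])     # A^2, per-atom SASA values

    # Solvation free energy as the sigma-weighted sum of per-atom areas
    G_solv = sum(sigma[a] * s for a, s in zip(atoms, sasa))
    print(f"G_solv = {G_solv:.3f} kJ/mol")
    # Forces follow as F_k = -sum_i sigma_i * dSASA_i/dr_k
    ```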

  19. Force feedback vessel ligation simulator in knot-tying proficiency training.

    PubMed

    Hsu, Justin L; Korndorffer, James R; Brown, Kimberly M

    2016-02-01

    Tying gentle secure knots is an important skill. We have developed a force feedback simulator that measures the force exerted during knot tying. This pilot study examines the benefits of this simulator in a deliberate practice curriculum. The simulator consists of silastic tubing with a force sensor. Knot quality was assessed using digital caliper measurement. Participants performed 10 vessel ligations as a pretest, then were shown force readings and tied knots until reaching proficiency targets. Average peak forces pre- and post-curriculum were compared using Student's t test. Participants exerted significantly less force after completing the curriculum (0.61 ± 0.22 N vs 1.42 ± 0.53 N, P < .001) and had fewer air knots (10% vs 27%). The curriculum was completed in an average of 19.4 ± 6.27 minutes and required an average of 11.7 ± 4.03 knots to reach proficiency. This study demonstrates the feasibility of real-time feedback in learning to tie delicate knots. The curriculum can be completed in a reasonable amount of time, and may also work as a warm-up exercise before a surgical case. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Parameterization of Ca2+-protein interactions for molecular dynamics simulations.

    PubMed

    Project, Elad; Nachliel, Esther; Gutman, Menachem

    2008-05-01

    Molecular dynamics simulations of Ca2+ ions near a protein were performed with three force fields: GROMOS96, OPLS-AA, and CHARMM22. The simulations reveal major, force-field-dependent inconsistencies in the interaction between the Ca2+ ions and the protein. The variations are attributed to the nonbonded parameterizations of the Ca2+-carboxylate interactions. The simulation results were compared to experimental data, using the Ca2+-HCOO- equilibrium as a model. The OPLS-AA force field grossly overestimates the binding affinity of the Ca2+ ions to the carboxylate, whereas the GROMOS96 and CHARMM22 force fields underestimate the stability of the complex. Optimization of the Lennard-Jones parameters for the Ca2+-carboxylate interactions was carried out, yielding new parameters that reproduce the experimental data. Copyright 2007 Wiley Periodicals, Inc.
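
    For orientation, the nonbonded term being optimized is a 12-6 Lennard-Jones interaction combined across atom types; tuning epsilon and sigma shifts the depth and position of the Ca2+-carboxylate minimum until the model reproduces the experimental binding equilibrium. A minimal sketch with illustrative (not the paper's) parameter values:

    ```python
    import numpy as np

    def lj_energy(r, sigma_ca, eps_ca, sigma_o, eps_o):
        """12-6 Lennard-Jones pair energy with Lorentz-Berthelot combining rules."""
        sigma = 0.5 * (sigma_ca + sigma_o)      # Lorentz rule (arithmetic mean)
        eps = np.sqrt(eps_ca * eps_o)           # Berthelot rule (geometric mean)
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6**2 - sr6)

    r = np.linspace(0.2, 0.6, 200)              # nm
    u = lj_energy(r, sigma_ca=0.28, eps_ca=0.5, sigma_o=0.30, eps_o=0.65)
    print(f"minimum at r = {r[np.argmin(u)]:.3f} nm, depth = {u.min():.2f} kJ/mol")
    ```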

  1. Hydrological assessment of atmospheric forcing uncertainty in the Euro-Mediterranean area using a land surface model

    NASA Astrophysics Data System (ADS)

    Gelati, Emiliano; Decharme, Bertrand; Calvet, Jean-Christophe; Minvielle, Marie; Polcher, Jan; Fairbairn, David; Weedon, Graham P.

    2018-04-01

    Physically consistent descriptions of land surface hydrology are crucial for planning human activities that involve freshwater resources, especially in light of the expected climate change scenarios. We assess how atmospheric forcing data uncertainties affect land surface model (LSM) simulations by means of an extensive evaluation exercise using a number of state-of-the-art remote sensing and station-based datasets. For this purpose, we use the CO2-responsive ISBA-A-gs LSM coupled with the CNRM version of the Total Runoff Integrated Pathways (CTRIP) river routing model. We perform multi-forcing simulations over the Euro-Mediterranean area (25-75.5° N, 11.5° W-62.5° E, at 0.5° resolution) from 1979 to 2012. The model is forced using four atmospheric datasets. Three of them are based on the ERA-Interim reanalysis (ERA-I). The fourth dataset is independent from ERA-Interim: PGF, developed at Princeton University. The hydrological impacts of atmospheric forcing uncertainties are assessed by comparing simulated surface soil moisture (SSM), leaf area index (LAI) and river discharge against observation-based datasets: SSM from the European Space Agency's Water Cycle Multi-mission Observation Strategy and Climate Change Initiative projects (ESA-CCI), LAI of the Global Inventory Modeling and Mapping Studies (GIMMS), and Global Runoff Data Centre (GRDC) river discharge. The atmospheric forcing data are also compared to reference datasets. Precipitation is the most uncertain forcing variable across datasets, while the most consistent are air temperature and SW and LW radiation. At the monthly timescale, SSM and LAI simulations are relatively insensitive to forcing uncertainties. Some discrepancies with ESA-CCI appear to be forcing-independent and may be due to different assumptions underlying the LSM and the remote sensing retrieval algorithm. All simulations overestimate average summer and early-autumn LAI. Forcing uncertainty impacts on simulated river discharge are larger on mean values and standard deviations than on correlations with GRDC data. Anomaly correlation coefficients are not inferior to those computed from raw monthly discharge time series, indicating that the model reproduces inter-annual variability fairly well. However, simulated river discharge time series generally feature larger variability compared to measurements. They also tend to overestimate winter-spring high flows and underestimate summer-autumn low flows. Considering that several differences emerge between simulations and reference data, which may not be completely explained by forcing uncertainty, we suggest several research directions. These range from further investigating the discrepancies between LSMs and remote sensing retrievals to developing new model components to represent physical and anthropogenic processes.

  2. Developments of new force reflecting control schemes and an application to a teleoperation training simulator

    NASA Technical Reports Server (NTRS)

    Kim, Won S.

    1992-01-01

    Two schemes of force reflecting control, position-error-based force reflection and low-pass-filtered force reflection, both combined with shared compliance control, were developed for dissimilar master-slave arms. These schemes enabled high force reflection gains that were not possible with a conventional scheme when the slave arm was much stiffer than the master arm. The experimental results with a peg-in-hole task indicated that the new force reflecting control schemes combined with compliance control resulted in the best task performance. As a related application, a simulated force reflection/shared compliance control teleoperation trainer was developed that provided the operator with the feel of kinesthetic force virtual reality.

  3. Numerical Simulation of Ion Transport in a Nano-Electrospray Ion Source at Atmospheric Pressure

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Bajic, Steve; John, Benzi; Emerson, David R.

    2018-03-01

    Understanding ion transport properties from the ion source to the mass spectrometer (MS) is essential for optimizing device performance. Numerical simulation helps in understanding ion transport properties and facilitates instrument design. In contrast to previously reported numerical studies, ion transport simulations in a continuous injection mode, considering realistic space-charge effects, have been carried out. The flow field was solved using the Reynolds-averaged Navier-Stokes (RANS) equations, and a particle-in-cell (PIC) method was applied to solve a time-dependent electric field with local charge density. A series of ion transport simulations were carried out at different cone gas flow rates, ion source currents, and capillary voltages. A force evaluation analysis reveals that the electric force, the drag force, and the Brownian force are the three dominant forces acting on the ions. Both the experimental and simulation results indicate that cone gas flow rates of ≤250 slph (standard liters per hour) are important for high ion transmission efficiency, as higher cone gas flow rates reduce the ion signal significantly. The simulation results also show that the ion transmission efficiency decreases exponentially with increasing ion source current. Additionally, ion loss due to space-charge effects is found to be predominant at higher ion source current, lower capillary voltage, and stronger cone gas counterflow. The interaction of the ion driving force, ion opposing force, and ion dispersion is discussed to illustrate the ion transport mechanism in the ion source at atmospheric pressure.

  5. Computing fluid-particle interaction forces for nano-suspension droplet spreading: molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Zhou, Weizhou; Shi, Baiou; Webb, Edmund

    2017-11-01

    Recently, many experimental and theoretical studies have sought to understand and control the dynamic spreading of nano-suspension droplets on solid surfaces. However, a fundamental understanding of the driving forces dictating the kinetics of nano-suspension wetting and spreading, especially the capillary forces that manifest during the process, is lacking. Here, we present results from atomic-scale simulations that were used to compute forces between suspended particles and advancing liquid fronts. The role of nano-particle size, particle loading, and interaction strength on the forces computed from simulations is discussed. Results demonstrate that increasing the particle size dramatically changes the observed wetting behavior from depinning to pinning. From simulations with varying particle size, a relationship between computed forces and particle size is advanced and compared to existing expressions in the literature. High particle loading significantly slowed spreading kinetics by introducing tortuous transport paths for liquid delivery to the advancing contact line. Lastly, we show how weakening the interaction between the particles and the underlying substrate can change a system from pinning to de-pinning behavior.

  6. The relationship between quadriceps muscle force, knee flexion, and anterior cruciate ligament strain in an in vitro simulated jump landing.

    PubMed

    Withrow, Thomas J; Huston, Laura J; Wojtys, Edward M; Ashton-Miller, James A

    2006-02-01

    An instrumented cadaveric knee construct was used to quantify the association between impact force, quadriceps force, knee flexion angle, and anterior cruciate ligament relative strain in simulated unipedal jump landings. Hypothesis: anterior cruciate ligament strain will correlate with impact force, quadriceps force, and knee flexion angle. Descriptive laboratory study. Eleven cadaveric knees (age, 70.8 [19.3] years; 5 male, 6 female) were mounted in a custom fixture with the tibia and femur secured to a triaxial load cell. Quadriceps, hamstring, and gastrocnemius muscle forces were simulated using pretensioned steel cables (stiffness, 7 kN/cm), and the quadriceps tendon force was measured using a load cell. Mean strain on the anteromedial bundle of the anterior cruciate ligament was measured using a differential variable reluctance transducer (DVRT). With the knee in 25 degrees of flexion, the construct was vertically loaded by an impact force initially directed 4 cm posterior to the knee joint center. Tibiofemoral kinematics was measured using a 3D optoelectronic tracking system. The increase in anterior cruciate ligament relative strain was proportional to the increase in quadriceps force (r² = 0.74; P < .00001) and knee flexion angle (r² = 0.88; P < .00001) but was not correlated with the impact force (r² = 0.009; P = .08). The increase in knee flexion and quadriceps force during this simulated one-footed landing strongly influenced the relative strain on the anteromedial bundle of the anterior cruciate ligament. These results suggest that even in the presence of knee flexor muscle forces, the increase in quadriceps force required to prevent the knee from flexing during landing can place the anterior cruciate ligament at risk for large strains.

  7. 49 CFR 213.345 - Vehicle/track system qualification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... in accordance with the requirements of this paragraph (c). (1) Simulations or measurement of wheel/rail forces. For vehicle types intended to operate at track Class 6 speeds, simulations or measurement... exceed the wheel/rail force safety limits specified in § 213.333. Simulations, if conducted, shall be in...

  8. 49 CFR 213.345 - Vehicle/track system qualification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in accordance with the requirements of this paragraph (c). (1) Simulations or measurement of wheel/rail forces. For vehicle types intended to operate at track Class 6 speeds, simulations or measurement... exceed the wheel/rail force safety limits specified in § 213.333. Simulations, if conducted, shall be in...

  9. 77 FR 25150 - GPS Satellite Simulator Working Group; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-27

    ...-1600 (Pacific Standard Time). This meeting notice is to inform the public that the Global Positioning... DEPARTMENT OF DEFENSE Department of the Air Force GPS Satellite Simulator Working Group; Notice of Meeting AGENCY: The United States Air Force, DoD. ACTION: Amending GPS Simulator Working group Meeting...

  10. 78 FR 63459 - GPS Satellite Simulator Control Working Group Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-24

    ... DEPARTMENT OF DEFENSE Air Force GPS Satellite Simulator Control Working Group Meeting AGENCY... and DoD contractors, that the GPS Directorate will host a GPS Satellite Simulator Control Working Group (SSCWG) meeting on 1 November 2013 from 0900-1300 PST at Los Angeles Air Force Base. The purpose...

  11. The PMIP4 contribution to CMIP6 - Part 3: The last millennium, scientific objective, and experimental design for the PMIP4 past1000 simulations

    NASA Astrophysics Data System (ADS)

    Jungclaus, Johann H.; Bard, Edouard; Baroni, Mélanie; Braconnot, Pascale; Cao, Jian; Chini, Louise P.; Egorova, Tania; Evans, Michael; Fidel González-Rouco, J.; Goosse, Hugues; Hurtt, George C.; Joos, Fortunat; Kaplan, Jed O.; Khodri, Myriam; Klein Goldewijk, Kees; Krivova, Natalie; LeGrande, Allegra N.; Lorenz, Stephan J.; Luterbacher, Jürg; Man, Wenmin; Maycock, Amanda C.; Meinshausen, Malte; Moberg, Anders; Muscheler, Raimund; Nehrbass-Ahles, Christoph; Otto-Bliesner, Bette I.; Phipps, Steven J.; Pongratz, Julia; Rozanov, Eugene; Schmidt, Gavin A.; Schmidt, Hauke; Schmutz, Werner; Schurer, Andrew; Shapiro, Alexander I.; Sigl, Michael; Smerdon, Jason E.; Solanki, Sami K.; Timmreck, Claudia; Toohey, Matthew; Usoskin, Ilya G.; Wagner, Sebastian; Wu, Chi-Ju; Leng Yeo, Kok; Zanchettin, Davide; Zhang, Qiong; Zorita, Eduardo

    2017-11-01

    The pre-industrial millennium is among the periods selected by the Paleoclimate Model Intercomparison Project (PMIP) for experiments contributing to the sixth phase of the Coupled Model Intercomparison Project (CMIP6) and the fourth phase of the PMIP (PMIP4). The past1000 transient simulations serve to investigate the response to (mainly) natural forcing under background conditions not too different from today, and to discriminate between forced and internally generated variability on interannual to centennial timescales. This paper describes the motivation and the experimental set-ups for the PMIP4-CMIP6 past1000 simulations, and discusses the forcing agents: orbital, solar, volcanic, and land-use/land-cover changes, and variations in greenhouse gas concentrations. The past1000 simulations covering the pre-industrial millennium from 850 Common Era (CE) to 1849 CE are to be complemented by historical simulations (1850 to 2014 CE) following the CMIP6 protocol. The external forcings for the past1000 experiments have been adapted to provide a seamless transition across these time periods. Protocols for the past1000 simulations have been divided into three tiers. A default forcing data set has been defined for the Tier 1 (CMIP6 past1000) experiment. However, the PMIP community has maintained the flexibility to conduct coordinated sensitivity experiments to explore uncertainty in forcing reconstructions as well as parameter uncertainty in dedicated Tier 2 simulations. Additional experiments (Tier 3) are defined to foster collaborative model experiments focusing on the early instrumental period and to extend the temporal range and scope of the simulations. This paper outlines current and future research foci and common analyses for collaborative work between the PMIP and the observational communities (reconstructions, instrumental data).

  12. The effect of changing wind forcing on Antarctic ice shelf melting in high-resolution, global sea ice-ocean simulations with the Accelerated Climate Model for Energy (ACME)

    NASA Astrophysics Data System (ADS)

    Asay-Davis, Xylar; Price, Stephen; Petersen, Mark; Wolfe, Jonathan

    2017-04-01

    The capability for simulating sub-ice shelf circulation and submarine melting and freezing has recently been added to the U.S. Department of Energy's Accelerated Climate Model for Energy (ACME). With this new capability, we use an eddy permitting ocean model to conduct two sets of simulations in the spirit of Spence et al. (GRL, 41, 2014), who demonstrate increased warm water upwelling along the Antarctic coast in response to poleward shifting and strengthening of Southern Ocean westerly winds. These characteristics, symptomatic of a positive Southern Annular Mode (SAM), are projected to continue into the 21st century under anthropogenic climate change (Fyfe et al., J. Clim., 20, 2007). In our first simulation, we force the climate model using the standard CORE interannual forcing dataset (Large and Yeager; Clim. Dyn., 33, 2009). In our second simulation, we force our climate model using an altered version of CORE interannual forcing, based on the latter half of the full time series, which we take as a proxy for a future climate state biased towards a positive SAM. We compare ocean model states and sub-ice shelf melt rates with observations, exploring sources of model biases as well as the effects of the two forcing scenarios.

  13. Driving-forces model on individual behavior in scenarios considering moving threat agents

    NASA Astrophysics Data System (ADS)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    An individual behavior model is a contributory factor in improving the accuracy of agent-based simulation across different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks by attackers with close-range weapons (e.g., swords, sticks). At the same time, many existing behavior models lack validation against cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios that include moving threat agents. An experiment was conducted to validate the key components of the model. The model was then compared with an advanced Elliptical Specification II social force model by calculating the fitting errors between simulated and experimental trajectories, and by applying both to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and its standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model can be used to simulate individual behavior when analyzing the risk of specific scenarios with agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
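
    For orientation, a social-force-style update of the kind the driving-forces model extends can be written in a few lines: a relaxation term pulls each agent toward its desired velocity, and an exponential repulsion pushes it away from the threat agent. This is a generic Helbing-type sketch with made-up parameters, not the paper's model:

    ```python
    import numpy as np

    def step(pos, vel, goal, threat, dt=0.05, tau=0.5, v0=1.3, A=5.0, B=1.0):
        """One unit-mass update: driving force toward the goal plus threat repulsion."""
        e_goal = (goal - pos) / np.linalg.norm(goal - pos)
        f_drive = (v0 * e_goal - vel) / tau            # relax toward desired velocity
        d = pos - threat
        dist = np.linalg.norm(d)
        f_threat = A * np.exp(-dist / B) * d / dist    # exponential repulsion
        vel = vel + dt * (f_drive + f_threat)
        return pos + dt * vel, vel

    pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
    pos, vel = step(pos, vel, goal=np.array([10.0, 0.0]), threat=np.array([2.0, 1.0]))
    ```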

  14. Classical force field for hydrofluorocarbon molecular simulations. Application to the study of gas solubility in poly(vinylidene fluoride).

    PubMed

    Lachet, V; Teuler, J-M; Rousseau, B

    2015-01-08

    A classical all-atom force field for molecular simulations of hydrofluorocarbons (HFCs) has been developed. Lennard-Jones force centers plus point charges are used to represent dispersion-repulsion and electrostatic interactions. Parametrization of this force field was performed iteratively using three target properties of pentafluorobutane: the quantum energy of an isolated molecule, the dielectric constant in the liquid phase, and the compressed liquid density. The accuracy and transferability of this new force field have been demonstrated through the simulation of different thermophysical properties of several fluorinated compounds, showing significant improvements compared to existing models. The new force field has been applied to study the solubilities of several gases in poly(vinylidene fluoride) (PVDF) above the melting temperature of this polymer. The solubility of CH4, CO2, H2S, H2, N2, O2, and H2O at infinite dilution has been computed using test particle insertions in the course of an NpT hybrid Monte Carlo simulation. For CH4, CO2, and their mixtures, some calculations beyond the Henry regime have also been performed using hybrid Monte Carlo simulations in the osmotic ensemble, allowing both swelling and solubility to be determined. An ideal mixing behavior is observed, with identical solubility coefficients in the mixtures and in the pure gas systems.
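
    The test-particle (Widom) insertion mentioned above estimates the excess chemical potential from the Boltzmann-averaged energy of inserting a ghost gas molecule into stored polymer configurations. A minimal sketch, ignoring the volume-weighting refinement needed for a strict NpT average (the insertion energies below are synthetic):

    ```python
    import numpy as np

    def widom_mu_excess(insertion_energies, kT):
        """mu_ex = -kT * ln< exp(-U_insert / kT) > over many trial insertions."""
        boltz = np.exp(-np.asarray(insertion_energies) / kT)
        return -kT * np.log(boltz.mean())

    # Hypothetical insertion energies (kJ/mol) harvested from a trajectory
    U = np.random.normal(loc=5.0, scale=8.0, size=100_000)
    mu_ex = widom_mu_excess(U, kT=2.58)   # kT ~ 2.58 kJ/mol near 310 K
    print(f"mu_excess ~ {mu_ex:.2f} kJ/mol")
    ```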

  15. Is optimal paddle force applied during paediatric external defibrillation?

    PubMed

    Bennetts, Sarah H; Deakin, Charles D; Petley, Graham W; Clewlow, Frank

    2004-01-01

    Optimal paddle force minimises transthoracic impedance, a factor associated with increased defibrillation success. The optimal force for the defibrillation of children ≤10 kg using paediatric paddles has previously been shown to be 2.9 kgf, and for children >10 kg using adult paddles it is 5.1 kgf. We compared the defibrillation paddle force applied during simulated paediatric defibrillation with these optimal values. 72 medical and nursing staff who would be expected to perform paediatric defibrillation were recruited from a university teaching hospital. Participants, blinded to the nature of the study, were asked to simulate defibrillation of an infant manikin (9 months of age) and a child manikin (6 years of age) using paediatric or adult paddles, respectively, according to guidelines. Paddle force (kgf) was measured at the time of the simulated shock and compared with the known optimal values. The median paddle force applied to the infant manikin was 2.8 kgf (max 9.6, min 0.6), with only 47% of operators attaining optimal force. The median paddle force applied to the child manikin was 3.8 kgf (max 10.2, min 1.0), with only 24% of operators attaining optimal force. Defibrillation paddle force applied during paediatric defibrillation often falls below optimal values.

  16. Development of a high-resolution emission inventory and its evaluation and application through air quality modeling for Jiangsu Province, China

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhou, Yaduan; Mao, Pan; Zhang, Jie

    2017-04-01

    Improved emission inventories combining detailed source information are crucial for better understanding the atmospheric chemistry and for effectively making emission control policies using air quality simulation, particularly at regional or local scales. With downscaled inventories directly applied, chemical transport models might not be able to reproduce the authentic evolution of atmospheric pollution processes at small spatial scales. Using the bottom-up approach, a high-resolution emission inventory was developed for Jiangsu, China, including SO2, NOx, CO, NH3, volatile organic compounds (VOCs), total suspended particulates (TSP), PM10, PM2.5, black carbon (BC), organic carbon (OC), and CO2. The key parameters relevant to emission estimation for over 6000 industrial sources were investigated, compiled, and revised at plant level based on various data sources and on-site surveys. As a result, the emission fractions of point sources were significantly elevated for most species. The improvement of this provincial inventory was evaluated through comparisons with other inventories at larger spatial scales, using satellite observation and air quality modeling. Compared to the downscaled Multi-resolution Emission Inventory for China (MEIC), the spatial distribution of NOx emissions in our provincial inventory was more consistent with summer tropospheric NO2 VCDs observed from OMI, particularly for grids with moderate emission levels, implying improved emission estimation for small and medium industrial plants by this work. Three inventories (national, regional, and the provincial one by this work) were applied in the Models-3/Community Multi-scale Air Quality (CMAQ) system for southern Jiangsu for October 2012, to evaluate the model performance with different emission inputs. The best agreement between available ground observations and simulation was found when the provincial inventory was applied, indicated by the smallest normalized mean bias (NMB) and normalized mean error (NME) for all the species concerned (SO2, NO2, O3, and PM2.5). The result thus implies the advantage of an improved local-scale emission inventory for high-resolution air quality modeling. Under unfavorable meteorology, in which horizontal and vertical movement of the atmosphere was limited, the simulated SO2 concentrations at downtown Nanjing (the capital city of Jiangsu) using the regional or national inventories were much higher than observed, implying that urban emissions were overestimated when economic or population densities were applied to downscale or allocate the emissions. With a more accurate spatial distribution of emissions at city level, the simulated concentrations using the provincial inventory were much closer to observation. Sensitivity analysis of PM2.5 and O3 formation was conducted using the improved provincial inventory through the brute force method. Iron and steel plants and cement plants were identified as important contributors to PM2.5 concentrations in Nanjing. O3 formation was VOC-limited in southern Jiangsu, and the concentrations were negatively correlated with NOx emissions in urban areas owing to the accumulated NOx from transportation. Further evaluation is suggested of the impacts of speciation and of the temporal and vertical distribution of emissions on air quality modeling at regional or local scales in China.

  17. Development of a high-resolution emission inventory and its evaluation and application through air quality modeling for Jiangsu Province, China

    NASA Astrophysics Data System (ADS)

    Zhou, Yaduan; Zhao, Yu; Mao, Pan; Zhang, Qiang; Zhang, Jie; Qiu, Liping; Yang, Yang

    2017-01-01

    Improved emission inventories combining detailed source information are crucial for better understanding of the atmospheric chemistry and effectively making emission control policies using air quality simulation, particularly at regional or local scales. With downscaled inventories directly applied, chemical transport models might not be able to reproduce the authentic evolution of atmospheric pollution processes at small spatial scales. Using the bottom-up approach, a high-resolution emission inventory was developed for Jiangsu, China, including SO2, NOx, CO, NH3, volatile organic compounds (VOCs), total suspended particulates (TSP), PM10, PM2.5, black carbon (BC), organic carbon (OC), and CO2. The key parameters relevant to emission estimation for over 6000 industrial sources were investigated, compiled, and revised at plant level based on various data sources and on-site surveys. As a result, the emission fractions of point sources were significantly elevated for most species. The improvement of this provincial inventory was evaluated through comparisons with other inventories at larger spatial scales, using satellite observation and air quality modeling. Compared to the downscaled Multi-resolution Emission Inventory for China (MEIC), the spatial distribution of NOx emissions in our provincial inventory was more consistent with summer tropospheric NO2 VCDs observed from OMI, particularly for the grids with moderate emission levels, implying the improved emission estimation for small and medium industrial plants by this work. Three inventories (national, regional, and provincial by this work) were applied in the Models-3 Community Multi-scale Air Quality (CMAQ) system for southern Jiangsu for October 2012, to evaluate the model performances with different emission inputs. The best agreement between available ground observation and simulation was found when the provincial inventory was applied, indicated by the smallest normalized mean bias (NMB) and normalized mean error (NME) for all the species concerned (SO2, NO2, O3, and PM2.5). The result thus implied the advantage of an improved local-scale emission inventory for high-resolution air quality modeling. Under the unfavorable meteorology in which horizontal and vertical movement of the atmosphere was limited, the simulated SO2 concentrations at downtown Nanjing (the capital city of Jiangsu) using the regional or national inventories were much higher than those observed, implying that the urban emissions were overestimated when economy or population densities were applied to downscale or allocate the emissions. With a more accurate spatial distribution of emissions at city level, the simulated concentrations using the provincial inventory were much closer to observation. Sensitivity analysis of PM2.5 and O3 formation was conducted using the improved provincial inventory through the brute force method. Iron and steel plants and cement plants were identified as important contributors to the PM2.5 concentrations in Nanjing. The O3 formation was VOC-limited in southern Jiangsu, and the concentrations were negatively correlated with NOx emissions in urban areas owing to the accumulated NOx from transportation. Further evaluation is suggested of the impacts of speciation and of the temporal and vertical distribution of emissions on air quality modeling at regional or local scales in China.
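
    The brute force method used in both versions of this study has a simple structure: rerun the model with one source category's emissions removed and attribute the concentration difference to that category. A schematic sketch, where run_ctm is a purely illustrative stand-in for a full chemical-transport-model run such as CMAQ:

    ```python
    import numpy as np

    def brute_force_contributions(emissions, run_ctm):
        """Estimate each category's contribution by zero-out perturbation runs.

        emissions: dict mapping source category -> emission field (ndarray)
        run_ctm:   callable taking an emissions dict, returning concentrations
        """
        base = run_ctm(emissions)
        contributions = {}
        for category in emissions:
            perturbed = {k: v for k, v in emissions.items() if k != category}
            contributions[category] = base - run_ctm(perturbed)
        return contributions

    # Toy demonstration with a linear "model" (real chemistry is nonlinear,
    # which is exactly why each category needs its own full rerun):
    toy = {"steel": np.array([3.0]), "cement": np.array([1.5])}
    print(brute_force_contributions(toy, lambda e: sum(e.values(), np.zeros(1))))
    ```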

  18. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric; the data may contain thousands of time steps, with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connections and correlations within the data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice, and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on the temporal curves of data samples. A temporal curve can be treated as a two-dimensional function whose dimensions are time and data value, or as a point in a high-dimensional space. In the latter case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provide a way to categorize and visualize data with different patterns, which reveals connections or correlations among different variables or spatial locations. We have employed the power of the GPU to enable interactive correlation visualization for studying the variability and correlations of a single variable or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connections among all correlation volumes with respect to various reference locations. Since a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as input and classifies their corresponding voxels according to the distances between their correlation volumes. For large-scale time-varying multivariate data, calculating all these correlation volumes on the fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach to volume classification that reduces the cost of computing the correlation volumes. Users can employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of the correlation relationships; i.e., for all voxels in the same cluster, the corresponding correlation volumes are similar. This sampling-based approach yields an approximation of the correlation relations in a cost-effective manner, leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data sets from their simulations.
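
    A single correlation volume of the kind described above is just the Pearson correlation between the temporal curve at a reference voxel and the curves at all other voxels. A minimal sketch (field and shapes are synthetic; the brute-force classification would need one such volume per voxel, which is why the paper samples reference locations instead):

    ```python
    import numpy as np

    def correlation_volume(data, ref_index):
        """Pearson correlation of every voxel's temporal curve with a reference.

        data: array of shape (T, Z, Y, X), T time steps of one scalar variable
        ref_index: flat index of the reference voxel
        """
        T = data.shape[0]
        flat = data.reshape(T, -1)
        flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)   # standardize curves
        ref = flat[:, ref_index]
        corr = (flat * ref[:, None]).mean(axis=0)              # Pearson r per voxel
        return corr.reshape(data.shape[1:])

    data = np.random.rand(20, 8, 16, 16)          # synthetic time-varying volume
    cv = correlation_volume(data, ref_index=0)    # one volume per reference voxel
    ```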

  19. Perceptual Performance Impact of GPU-Based WARP and Anti-Aliasing for Image Generators

    DTIC Science & Technology

    2016-06-29

    In 2012 the U.S. Air Force School of Aerospace Medicine, in partnership with the Air Force Research Laboratory (AFRL) and NASA AMES, constructed the 15-channel Operational Based Vision Assessment (OBVA) simulator to evaluate the ...

  20. Decrease of airway smooth muscle contractility induced by simulated breathing maneuvers is not simply proportional to strain.

    PubMed

    Pascoe, Chris D; Seow, Chun Y; Paré, Peter D; Bossé, Ynuk

    2013-02-01

    The lung is a dynamic organ, and the oscillating stress applied to the airway wall during breathing maneuvers can decrease airway smooth muscle (ASM) contractility. However, it is unclear whether it is the stress or the attendant strain that is responsible for the decline of ASM force associated with breathing maneuvers, and whether tone can prevent the decline of force by attenuating the strain. To investigate these questions, ovine tracheal strips were subjected to oscillating stress simulating breathing maneuvers, and the resulting strain and decline of force were measured in the absence or presence of different levels of tone elicited by acetylcholine (ACh). In relaxed ASM, high stress simulating 20 cmH2O transpulmonary pressure excursions strained ASM strips by 20.7% and decreased force by 17.1%. When stress oscillations were initiated during measurement of ACh concentration-response curves, tone almost abrogated strain at an ACh concentration of 10^-6 M (1.1%), but the decline of force was not affected (18.9%). When stress oscillations were initiated after the ACh-induced contraction had reached its maximal force, strain was almost abrogated at an ACh concentration of 10^-6 M (0.9%) and the decline of force was attenuated (10.1%). However, even at the highest ACh concentration (10^-4 M), a substantial decline of force (6.1%) was still observed despite very small strain (0.7%). As expected, the results indicate that tone attenuated the strain experienced by ASM during breathing maneuver simulations. More surprisingly, the reduction of strain induced by tone was not proportional to its effect on the decline of force induced by simulated breathing maneuvers.

  1. Dissolution study of active pharmaceutical ingredients using molecular dynamics simulations with classical force fields

    NASA Astrophysics Data System (ADS)

    Greiner, Maximilian; Elts, Ekaterina; Schneider, Julian; Reuter, Karsten; Briesen, Heiko

    2014-11-01

    The CHARMM, general Amber, and OPLS force fields are evaluated for their suitability for simulating the molecular dynamics of the dissolution of the hydrophobic, small-molecule active pharmaceutical ingredients aspirin, ibuprofen, and paracetamol in aqueous media. The force fields are evaluated by comparison with quantum chemical simulations or experimental references on the basis of the following capabilities: accurately representing intra- and intermolecular interactions, appropriately reproducing crystal lattice parameters, adequately describing thermodynamic properties, and qualitatively describing the dissolution behavior. To make this approach easily accessible for evaluating the dissolution properties of novel drug candidates in the early stages of drug development, the force field parameter files are generated using online resources such as the SWISS PARAM servers and the software packages ACPYPE and Maestro. All force fields are found to reproduce the intermolecular interactions with a reasonable degree of accuracy, with the general Amber and CHARMM force fields showing the best agreement with quantum mechanical calculations. A stable crystal bulk structure is obtained for all model substances except ibuprofen, for which the reproduction of the lattice parameters and the observed crystal stability are considerably poor for all force fields. The heats of solution used to evaluate the solid-to-solution phase transitions are found to be in qualitative agreement with the experimental data for all combinations tested, with the results being quantitatively best for the general Amber and CHARMM force fields. For aspirin and paracetamol, stable crystal-water interfaces were obtained: the (100), (110), (011), and (001) interfaces of aspirin or paracetamol with water were simulated for each force field for 30 ns. Although dissolution is generally expected to be a rare event, it is observed in some of the simulations at 310 K and ambient pressure.

  2. Exploration of Force Transition in Stability Operations Using Multi-Agent Simulation

    DTIC Science & Technology

    2006-09-01

    ... risk, mission failure risk, and time in the context of the operational threat environment. The Pythagoras Multi-Agent Simulation and Data Farming techniques are used to investigate force-level ... Subject terms: Stability Operations, Peace Operations, Data Farming, Pythagoras, Agent-Based Model, Multi-Agent Simulation.

  3. Effects of force fields on the conformational and dynamic properties of amyloid β(1-40) dimer explored by replica exchange molecular dynamics simulations.

    PubMed

    Watts, Charles R; Gregory, Andrew; Frisbie, Cole; Lovas, Sándor

    2018-03-01

    The conformational space and structural ensembles of amyloid beta (Aβ) peptides and their oligomers in solution are inherently disordered and have proven challenging to study. Optimum force field selection for molecular dynamics (MD) simulations and the biophysical relevance of the results are still unknown. We compared the conformational space of Aβ(1-40) dimers by 300 ns replica exchange MD simulations at physiological temperature (310 K) using the AMBER-ff99sb-ILDN, AMBER-ff99sb*-ILDN, AMBER-ff99sb-NMR, and CHARMM22* force fields. Statistical comparisons of simulation results to experimental data and previously published simulations utilizing the CHARMM22* and CHARMM36 force fields were performed. All force fields yield sampled ensembles of conformations with collision cross-sectional areas for the dimer that are statistically significantly larger than experimental results. All force fields, with the exception of AMBER-ff99sb-ILDN (8.8 ± 6.4%) and CHARMM36 (2.7 ± 4.2%), tend to overestimate the α-helical content compared to experimental CD (5.3 ± 5.2%). Using the AMBER-ff99sb-NMR force field resulted in the greatest degree of variance (41.3 ± 12.9%). Except for the AMBER-ff99sb-NMR force field, the others tended to underestimate the expected amount of β-sheet and overestimate the amount of turn/bend/random coil conformations. All force fields, with the exception of AMBER-ff99sb-NMR, reproduce a theoretically expected β-sheet-turn-β-sheet conformational motif; however, only the CHARMM22* and CHARMM36 force fields yield results compatible with collapse of the central and C-terminal hydrophobic cores from residues 17-21 and 30-36. Although analyses of essential subspace sampling showed only minor variations between force fields, the secondary structures of the lowest-energy conformers differ. © 2017 Wiley Periodicals, Inc.
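
    The exchange move at the heart of replica exchange MD is compact enough to sketch. Below is a minimal Metropolis acceptance test for swapping configurations between two temperature replicas; the constant and the function name are illustrative, not taken from the study.

    ```python
    import math
    import random

    K_B = 0.0019872  # Boltzmann constant in kcal/(mol*K)

    def swap_accepted(E_i, E_j, T_i, T_j):
        """Metropolis criterion for exchanging the configurations of two
        replicas with potential energies E_i, E_j at temperatures T_i, T_j."""
        beta_i = 1.0 / (K_B * T_i)
        beta_j = 1.0 / (K_B * T_j)
        log_ratio = (beta_i - beta_j) * (E_i - E_j)
        # Accept outright if the swap raises the joint Boltzmann weight,
        # otherwise accept with probability exp(log_ratio).
        return log_ratio >= 0 or random.random() < math.exp(log_ratio)
    ```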

  4. Study on magnetic force of electromagnetic levitation circular knitting machine

    NASA Astrophysics Data System (ADS)

    Wu, X. G.; Zhang, C.; Xu, X. S.; Zhang, J. G.; Yan, N.; Zhang, G. Z.

    2018-06-01

    The structure of the driving coil and the electromagnetic force of the test prototype of the electromagnetic-levitation (EL) circular knitting machine are studied. In this paper, the structure and working principle of the driving coil of the EL circular knitting machine are first introduced; a mathematical model of the driving electromagnetic force is then derived; the coil's magnetic induction intensity and the needle's electromagnetic force are simulated with the Ansoft Maxwell finite-element software; and finally an experimental platform is built to measure both quantities. The results show that the theoretical analysis, the simulation analysis, and the measurements agree closely, which supports the validity of the proposed model.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sotomayor, Marcos

    Hair cell mechanotransduction happens in tens of microseconds, involves forces of a few piconewtons, and is mediated by nanometer-scale molecular conformational changes. As proteins involved in this process become identified and their high resolution structures become available, multiple tools are being used to explore their “single-molecule responses” to force. Optical tweezers and atomic force microscopy offer exquisite force and extension resolution, but cannot reach the high loading rates expected for high frequency auditory stimuli. Molecular dynamics (MD) simulations can reach these fast time scales, and also provide a unique view of the molecular events underlying protein mechanics, but their predictions must be experimentally verified. Thus a combination of simulations and experiments might be appropriate to study the molecular mechanics of hearing. Here I review the basics of MD simulations and the different methods used to apply force and study protein mechanics in silico. Simulations of tip link proteins are used to illustrate the advantages and limitations of this method.
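
    Among the in silico force-application methods such a review covers, constant-velocity steered MD is the most common: a virtual spring anchored to a point moving at constant speed pulls on a selected atom, and the applied force follows Hooke's law. A one-dimensional sketch with hypothetical parameter values:

    ```python
    def smd_pulling_force(x_atom, t, k=1.0, v=0.01, x0=0.0):
        """Constant-velocity steered-MD restraint force along the pulling axis.

        x_atom : current position of the pulled atom (projection on the axis)
        t      : elapsed simulation time
        k      : spring constant (illustrative value)
        v      : anchor speed (illustrative value)
        x0     : initial anchor position
        The virtual anchor sits at x0 + v*t; the force grows with the lag
        between the anchor and the atom, which is what produces force peaks
        at unbinding or unfolding events.
        """
        return k * (x0 + v * t - x_atom)
    ```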

  6. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
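
    The Drude construction itself is a harmonic identity: an auxiliary charge q_D attached by a spring of force constant k_D reproduces an isotropic atomic polarizability α = q_D²/k_D. A tiny sketch of that bookkeeping (unit system left to the caller; this is not GROMACS code):

    ```python
    def drude_charge(alpha, k_drude):
        """Magnitude of the Drude particle charge that reproduces polarizability
        alpha with a harmonic spring of force constant k_drude, from
        alpha = q_D**2 / k_drude (consistent units assumed)."""
        return (alpha * k_drude) ** 0.5

    def induced_dipole(q_drude, core_drude_separation):
        """Induced dipole moment from the core-Drude displacement."""
        return q_drude * core_drude_separation
    ```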

  7. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
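
    The pick-freeze construction behind such Sobol' estimates is short enough to sketch. The toy estimator below computes first-order indices with the Saltelli-style estimator on uniform inputs; `model` stands for any vectorized function of the forcings and is purely hypothetical, not the Utah Energy Balance model.

    ```python
    import numpy as np

    def sobol_first_order(model, n_inputs, n_samples=10_000, seed=0):
        """First-order Sobol' indices by the pick-freeze (Saltelli) method.

        model maps an (n_samples, n_inputs) array of U(0,1) inputs to an
        (n_samples,) output array, e.g. lambda X: X[:, 0] + 2.0 * X[:, 1].
        """
        rng = np.random.default_rng(seed)
        A = rng.random((n_samples, n_inputs))
        B = rng.random((n_samples, n_inputs))
        yA, yB = model(A), model(B)
        var = np.var(np.concatenate([yA, yB]))
        indices = []
        for i in range(n_inputs):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # freeze every column except the i-th
            yABi = model(ABi)
            # Saltelli (2010) estimator of the first-order partial variance
            indices.append(np.mean(yB * (yABi - yA)) / var)
        return indices
    ```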

  8. PP and PS interferometric images of near-seafloor sediments

    USGS Publications Warehouse

    Haines, S.S.

    2011-01-01

    I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow. © 2011 Society of Exploration Geophysicists.
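
    The interferometric construction is a stack of trace-by-trace cross-correlations: correlating the wavefields recorded at two receivers over all available sources turns one receiver into a virtual source for the other. A minimal frequency-domain sketch (array shapes and names are assumptions, not the study's processing flow):

    ```python
    import numpy as np

    def virtual_source_trace(u_a, u_b):
        """Cross-correlate receivers A and B over all real sources.

        u_a, u_b : arrays of shape (n_sources, n_samples), one row per source.
        Returns the stacked correlation, interpreted as the trace B would
        record if A were a source.
        """
        n = u_a.shape[1]
        Ua = np.fft.rfft(u_a, n=2 * n, axis=1)   # zero-pad to avoid wrap-around
        Ub = np.fft.rfft(u_b, n=2 * n, axis=1)
        xcorr = np.fft.irfft(np.conj(Ua) * Ub, axis=1)
        return xcorr.sum(axis=0)                 # stack over real sources
    ```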

  9. A case study of the fluid structure interaction of a Francis turbine

    NASA Astrophysics Data System (ADS)

    Müller, C.; Staubli, T.; Baumann, R.; Casartelli, E.

    2014-03-01

    The Francis turbine runners of the Grimsel 2 pumped-storage power plant repeatedly showed cracks during the last decade. It is assumed that these cracks were caused by flow-induced forces acting on the blades and that resonant runner vibrations led to high stresses in the blade root areas. The eigenfrequencies of the runner in water were simulated using acoustic elements and compared to experimental data. Unsteady blade pressure distributions determined by a transient CFD simulation of the turbine were coupled to a FEM simulation, which enabled analysis of the stresses in the runner and of the eigenmodes of the runner vibrations. For a part-load operating point, transient CFD simulations of the entire turbine, including the spiral case, the runner, and the draft tube, were carried out. The most significant loads on the turbine runner resulted from the centrifugal forces and the fluid forces. The centrifugal forces produce temporally invariant blade loads; in contrast, rotor-stator interaction and draft tube instabilities induce pressure fluctuations that cause temporally variable forces. The blade pressure distribution resulting from the flow simulation was coupled to a unidirectional harmonic FEM simulation: the dominant transient blade pressure distributions from the CFD simulation were Fourier transformed, and the static and harmonic portions were assigned to the blade surfaces in the FEM model. The evaluation of the FEM simulation showed that the simulated part-load operating point does not cause critical stress peaks in the crack zones. The pressure amplitudes and frequencies are very small and interact only locally with the runner blades. As the frequencies are far below the modal frequencies of the turbine runner, resonant vibrations are evidently not excited.
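
    The CFD-to-FEM transfer described above amounts to splitting each blade pressure history into a static (mean) portion plus a few dominant harmonics. A generic numpy sketch of that decomposition (sampling interval and harmonic count are hypothetical):

    ```python
    import numpy as np

    def pressure_harmonics(p, dt, n_harmonics=3):
        """Split a pressure time history into its static portion and the
        n dominant harmonics, returned as (amplitude, frequency, phase)."""
        p = np.asarray(p, dtype=float)
        static = p.mean()
        spec = np.fft.rfft(p - static)
        freqs = np.fft.rfftfreq(len(p), d=dt)
        amps = 2.0 * np.abs(spec) / len(p)        # one-sided amplitude spectrum
        strongest = np.argsort(amps)[::-1][:n_harmonics]
        return static, [(amps[k], freqs[k], np.angle(spec[k])) for k in strongest]
    ```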

  10. Evaluation of reactive force fields for prediction of the thermo-mechanical properties of cellulose Iβ

    Treesearch

    Fernando L. Dri; Xiawa Wu; Robert J. Moon; Ashlie Martini; Pablo D. Zavattieri

    2015-01-01

    Molecular dynamics simulation is commonly used to study the properties of nanocellulose-based materials at the atomic scale. It is well known that the accuracy of these simulations strongly depends on the force field that describes energetic interactions. However, since there is no force field developed specifically for cellulose, researchers utilize models...

  11. Force Measurement on the GLAST Delta II Flight

    NASA Technical Reports Server (NTRS)

    Gordon, Scott; Kaufman, Daniel

    2009-01-01

    This viewgraph presentation reviews the interface force measurement at spacecraft separation of GLAST Delta II. The contents include: 1) Flight Force Measurement (FFM) Background; 2) Team Members; 3) GLAST Mission Overview; 4) Methodology Development; 5) Ground Test Validation; 6) Flight Data; 7) Coupled Loads Simulation (VCLA & Reconstruction); 8) Basedrive Simulation; 9) Findings; and 10) Summary and Conclusions.

  12. A virtual reality based simulator for learning nasogastric tube placement.

    PubMed

    Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong

    2015-02-01

    Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement. Copyright © 2014 Elsevier Ltd. All rights reserved.
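
    The offline-to-online step is essentially a table lookup: the nonlinear finite-element model precomputes insertion force versus depth once, and the haptic loop interpolates in that table at interactive rates. A minimal sketch with hypothetical table values:

    ```python
    import numpy as np

    # Hypothetical offline dataset: insertion depth (mm) vs. resistance force (N),
    # standing in for the output of the nonlinear finite-element precomputation.
    DEPTH_MM = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
    FORCE_N = np.array([0.0, 0.4, 0.9, 1.1, 1.6])

    def feedback_force(depth_mm):
        """Real-time feedback force by linear interpolation in the offline
        depth-force table (np.interp clamps outside the table range)."""
        return np.interp(depth_mm, DEPTH_MM, FORCE_N)
    ```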

  13. Intercomparison of oceanic and atmospheric forced and coupled mesoscale simulations. Part I: Surface fluxes

    NASA Astrophysics Data System (ADS)

    Josse, P.; Caniaux, G.; Giordani, H.; Planton, S.

    1999-04-01

    A mesoscale non-hydrostatic atmospheric model has been coupled with a mesoscale oceanic model. The case study is a four-day simulation of a strong storm event observed during the SEMAPHORE experiment over a 500 × 500 km² domain. This domain encompasses a thermohaline front associated with the Azores current. In order to analyze the effect of mesoscale coupling, three simulations are compared: the first one with the atmospheric model forced by realistic sea surface temperature analyses; the second one with the ocean model forced by atmospheric fields, derived from weather forecast re-analyses; the third one with the models being coupled. For these three simulations the surface fluxes were computed with the same bulk parametrization. All three simulations succeed well in representing the main oceanic or atmospheric features observed during the storm. Comparison of surface fields with in situ observations reveals that the winds of the fine mesh atmospheric model are more realistic than those of the weather forecast re-analyses. The low-level winds simulated with the atmospheric model in the forced and coupled simulations are appreciably stronger than the re-analyzed winds. They also generate stronger fluxes. The coupled simulation has the strongest surface heat fluxes: the difference in the net heat budget with the oceanic forced simulation reaches on average 50 W m⁻² over the simulation period. Sea surface-temperature cooling is too weak in both simulations, but is improved in the coupled run and matches better the cooling observed with drifters. The spatial distributions of sea surface-temperature cooling and surface fluxes are strongly inhomogeneous over the simulation domain. The amplitude of the flux variation is maximum in the coupled run. Moreover the weak correlation between the cooling and heat flux patterns indicates that the surface fluxes are not responsible for the whole cooling and suggests that the response of the ocean mixed layer to the atmosphere is highly non-local and enhanced in the coupled simulation.
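
    A bulk parametrization of the kind shared by the three simulations computes turbulent surface fluxes from mean quantities and an exchange coefficient. A minimal sensible-heat-flux example (the coefficient is a typical magnitude, not the value used in the study):

    ```python
    RHO_AIR = 1.2    # air density, kg/m^3
    CP_AIR = 1004.0  # specific heat of air at constant pressure, J/(kg*K)

    def sensible_heat_flux(wind_speed, t_sea, t_air, c_h=1.2e-3):
        """Bulk formula H = rho * cp * C_H * U * (T_s - T_a), in W/m^2.
        c_h is a typical neutral exchange coefficient (illustrative)."""
        return RHO_AIR * CP_AIR * c_h * wind_speed * (t_sea - t_air)
    ```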

  14. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870

  15. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
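
    The modified spring force the method relies on can be sketched generically: a linear-plus-cubic elastic term supplies the nonlinearity, and a relative-velocity term supplies Kelvin-Voigt-style viscosity. The coefficients below are illustrative, not the paper's calibrated values.

    ```python
    import numpy as np

    def viscoelastic_spring_force(x_i, x_j, v_i, v_j, rest_len,
                                  k=100.0, k_nl=500.0, c=0.5):
        """Force on node i from the viscoelastic, nonlinear spring (i, j).
        Nodes are assumed not to be coincident."""
        d = x_j - x_i
        length = np.linalg.norm(d)
        n = d / length
        stretch = length - rest_len
        elastic = (k * stretch + k_nl * stretch ** 3) * n   # nonlinear spring
        damping = c * np.dot(v_j - v_i, n) * n              # viscous term
        return elastic + damping
    ```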

  16. Calculation of Non-Bonded Forces Due to Sliding of Bundled Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Frankland, S. J. V.; Bandorawalla, T.; Gates, T. S.

    2003-01-01

    An important consideration for load transfer in bundles of single-walled carbon nanotubes is the nonbonded (van der Waals) forces between the nanotubes and their effect on axial sliding of the nanotubes relative to each other. In this research, the non-bonded forces in a bundle of seven hexagonally packed (10,10) single-walled carbon nanotubes are represented as an axial force applied to the central nanotube. A simple model, based on momentum balance, is developed to describe the velocity response of the central nanotube to the applied force. The model is verified by comparing its velocity predictions with molecular dynamics simulations that were performed on the bundle with different force histories applied to the central nanotube. The model was found to quantitatively predict the nanotube velocities obtained from the molecular dynamics simulations. Both the model and the simulations predict a threshold force at which the nanotube releases from the bundle. This force converts to a shear yield strength of 10.5-11.0 MPa for (10,10) nanotubes in a bundle.

  17. Biogeochemical Protocols and Diagnostics for the CMIP6 Ocean Model Intercomparison Project (OMIP)

    NASA Technical Reports Server (NTRS)

    Orr, James C.; Najjar, Raymond G.; Aumont, Olivier; Bopp, Laurent; Bullister, John L.; Danabasoglu, Gokhan; Doney, Scott C.; Dunne, John P.; Dutay, Jean-Claude; Graven, Heather; et al.

    2017-01-01

    The Ocean Model Intercomparison Project (OMIP) focuses on the physics and biogeochemistry of the ocean component of Earth system models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). OMIP aims to provide standard protocols and diagnostics for ocean models, while offering a forum to promote their common assessment and improvement. It also offers to compare solutions of the same ocean models when forced with reanalysis data (OMIP simulations) vs. when integrated within fully coupled Earth system models (CMIP6). Here we detail simulation protocols and diagnostics for OMIP's biogeochemical and inert chemical tracers. These passive-tracer simulations will be coupled to ocean circulation models, initialized with observational data or output from a model spin-up, and forced by repeating the 1948-2009 surface fluxes of heat, fresh water, and momentum. These so-called OMIP-BGC simulations include three inert chemical tracers (CFC-11, CFC-12, SF6) and biogeochemical tracers (e.g., dissolved inorganic carbon, carbon isotopes, alkalinity, nutrients, and oxygen). Modelers will use their preferred prognostic BGC model but should follow common guidelines for gas exchange and carbonate chemistry. Simulations include both natural and total carbon tracers. The required forced simulation (omip1) will be initialized with gridded observational climatologies. An optional forced simulation (omip1-spunup) will be initialized instead with BGC fields from a long model spin-up, preferably for 2000 years or more, and forced by repeating the same 62-year meteorological forcing. That optional run will also include abiotic tracers of total dissolved inorganic carbon and radiocarbon, CTabio and 14CTabio, to assess deep-ocean ventilation and distinguish the role of physics vs. biology. These simulations will be forced by observed atmospheric histories of the three inert gases and CO2 as well as carbon isotope ratios of CO2. OMIP-BGC simulation protocols are founded on those from previous phases of the Ocean Carbon-Cycle Model Intercomparison Project. They have been merged and updated to reflect improvements concerning gas exchange, carbonate chemistry, and new data for initial conditions and atmospheric gas histories. Code is provided to facilitate their implementation.

  18. Biogeochemical protocols and diagnostics for the CMIP6 Ocean Model Intercomparison Project (OMIP)

    NASA Astrophysics Data System (ADS)

    Orr, James C.; Najjar, Raymond G.; Aumont, Olivier; Bopp, Laurent; Bullister, John L.; Danabasoglu, Gokhan; Doney, Scott C.; Dunne, John P.; Dutay, Jean-Claude; Graven, Heather; Griffies, Stephen M.; John, Jasmin G.; Joos, Fortunat; Levin, Ingeborg; Lindsay, Keith; Matear, Richard J.; McKinley, Galen A.; Mouchet, Anne; Oschlies, Andreas; Romanou, Anastasia; Schlitzer, Reiner; Tagliabue, Alessandro; Tanhua, Toste; Yool, Andrew

    2017-06-01

    The Ocean Model Intercomparison Project (OMIP) focuses on the physics and biogeochemistry of the ocean component of Earth system models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). OMIP aims to provide standard protocols and diagnostics for ocean models, while offering a forum to promote their common assessment and improvement. It also offers to compare solutions of the same ocean models when forced with reanalysis data (OMIP simulations) vs. when integrated within fully coupled Earth system models (CMIP6). Here we detail simulation protocols and diagnostics for OMIP's biogeochemical and inert chemical tracers. These passive-tracer simulations will be coupled to ocean circulation models, initialized with observational data or output from a model spin-up, and forced by repeating the 1948-2009 surface fluxes of heat, fresh water, and momentum. These so-called OMIP-BGC simulations include three inert chemical tracers (CFC-11, CFC-12, SF6) and biogeochemical tracers (e.g., dissolved inorganic carbon, carbon isotopes, alkalinity, nutrients, and oxygen). Modelers will use their preferred prognostic BGC model but should follow common guidelines for gas exchange and carbonate chemistry. Simulations include both natural and total carbon tracers. The required forced simulation (omip1) will be initialized with gridded observational climatologies. An optional forced simulation (omip1-spunup) will be initialized instead with BGC fields from a long model spin-up, preferably for 2000 years or more, and forced by repeating the same 62-year meteorological forcing. That optional run will also include abiotic tracers of total dissolved inorganic carbon and radiocarbon, CTabio and 14CTabio, to assess deep-ocean ventilation and distinguish the role of physics vs. biology. These simulations will be forced by observed atmospheric histories of the three inert gases and CO2 as well as carbon isotope ratios of CO2. OMIP-BGC simulation protocols are founded on those from previous phases of the Ocean Carbon-Cycle Model Intercomparison Project. They have been merged and updated to reflect improvements concerning gas exchange, carbonate chemistry, and new data for initial conditions and atmospheric gas histories. Code is provided to facilitate their implementation.

  19. The effect of aircraft control forces on pilot performance during instrument landings in a flight simulator.

    PubMed

    Hewson, D J; McNair, P J; Marshall, R N

    2001-07-01

    Pilots may have difficulty controlling aircraft at both high and low force levels due to larger variability in force production at these force levels. The aim of this study was to measure the force variability and landing performance of pilots during an instrument landing in a flight simulator. Twelve pilots were tested while performing five instrument landings in a flight simulator, each of which required different control force inputs. Pilots can produce the least force when pushing the control column to the right; therefore, the force levels for the landings were set relative to each pilot's maximum aileron-right force. The force levels for the landings were 90%, 60%, and 30% of maximal aileron-right force, normal force, and 25% of normal force. Variables recorded included electromyographic activity (EMG), aircraft control forces, aircraft attitude, perceived exertion and deviation from glide slope and heading. Multivariate analysis of variance was used to test for differences between landings. Pilots were least accurate in landing performance during the landing at 90% of maximal force (p < 0.05). There was also a trend toward decreased landing performance during the landing at 25% of normal force. Pilots were more variable in force production during the landings at 60% and 90% of maximal force (p < 0.05). Pilots are less accurate at performing instrument landings when control forces are high due to the increased variability of force production. The increase in variability at high force levels is most likely associated with motor unit recruitment, rather than rate coding. Aircraft designers need to consider the reduction in pilot performance at high force levels, as well as pilot strength limits, when specifying new standards.

  20. A predictive bone drilling force model for haptic rendering with experimental validation using fresh cadaveric bone.

    PubMed

    Lin, Yanping; Chen, Huajiang; Yu, Dedong; Zhang, Ying; Yuan, Wen

    2017-01-01

    Bone drilling simulators with virtual and haptic feedback provide a safe, cost-effective and repeatable alternative to traditional surgical training methods. To develop such a simulator, accurate haptic rendering based on a force model is required to feed back bone drilling forces in response to user input. Current predictive bone drilling force models, based on bovine bones with various drilling conditions and parameters, are not representative of the bone drilling process in bone surgery. The objective of this study was to provide a bone drilling force model for haptic rendering based on calibration and validation experiments in fresh cadaveric bones with different bone densities. Using a commonly used drill bit geometry (2 mm diameter), feed rates (20-60 mm/min) and spindle speeds (4000-6000 rpm) in orthognathic surgeries, the bone drilling forces of specimens from two groups were measured and the calibration coefficients of the specific normal and frictional pressures were determined. The comparison of the predicted and measured forces from validation experiments with a large range of feed rates and spindle speeds demonstrates that the proposed model predicts the trends and average forces well. The presented bone drilling force model can be used for haptic rendering in surgical simulators.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Mark J.; Saleh, Omar A.

    We calculated the force-extension curves for a flexible polyelectrolyte chain with varying charge separations by performing Monte Carlo simulations of a 5000-bead chain using a screened Coulomb interaction. At all charge separations, the force-extension curves exhibit a Pincus-like scaling regime at intermediate forces and a logarithmic regime at large forces. As the charge separation increases, the Pincus regime shifts to a larger range of forces and the logarithmic regime starts at larger forces. We also found that the force-extension curve for the corresponding neutral chain has a logarithmic regime. Decreasing the diameter of the bead in the neutral-chain simulations removed the logarithmic regime, and the force-extension curve tends to the freely jointed chain limit. In conclusion, this result shows that only excluded volume is required for the high-force logarithmic regime to occur.
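
    The pair interaction driving such Monte Carlo runs is the screened (Yukawa) Coulomb potential. A one-function sketch in reduced units, with illustrative Bjerrum and screening lengths:

    ```python
    import math

    def screened_coulomb(r, bjerrum=1.0, kappa=0.1, z1=1, z2=1):
        """Screened Coulomb (Debye-Hueckel/Yukawa) pair energy in kT units:
        U(r) = l_B * z1 * z2 * exp(-kappa * r) / r, where l_B is the Bjerrum
        length and 1/kappa the screening length (illustrative values)."""
        return bjerrum * z1 * z2 * math.exp(-kappa * r) / r
    ```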

  2. Resolved granular debris-flow simulations with a coupled SPH-DCDEM model

    NASA Astrophysics Data System (ADS)

    Birjukovs Canelas, Ricardo; Domínguez, José M.; Crespo, Alejandro J. C.; Gómez-Gesteira, Moncho; Ferreira, Rui M. L.

    2016-04-01

    Debris flows represent some of the most relevant phenomena in geomorphological events. Due to the potential destructiveness of such flows, they are the target of a vast amount of research (Takahashi, 2007 and references therein). A complete description of the internal processes of a debris flow is, however, still an elusive achievement, explained by the difficulty of accurately measuring important quantities in these flows and of developing a comprehensive, generalized theoretical framework capable of describing them. This work addresses the need for a numerical model applicable to granular-fluid mixtures featuring high spatial and temporal resolution, thus capable of resolving the motion of individual particles, including all interparticle contacts. This corresponds to a brute-force approach: by applying simple interaction laws at local scales, the macro-scale properties of the flow should be recovered by upscaling. This methodology effectively bypasses the complexity of modelling the intermediate scales by resolving them directly. The only caveat is the need for high-performance computing, a demanding but engaging research challenge. The DualSPHysics meshless numerical implementation, based on Smoothed Particle Hydrodynamics (SPH), is expanded with a Distributed Contact Discrete Element Method (DCDEM) in order to explicitly solve the fluid and the solid phase. The model numerically solves the Navier-Stokes and continuity equations for the liquid phase and Newton's equations of motion for solid bodies. The interactions between solids are modelled with classical DEM approaches (Kruggel-Emden et al., 2007). Among other validation tests, an experimental set-up for stony debris flows in a slit check dam is reproduced numerically, where solid material is introduced through a hopper, ensuring a constant solid discharge for the considered time interval. With each sediment particle undergoing tens of possible contacts, several thousand time-evolving contacts are efficiently treated. Fully periodic boundary conditions allow for the recirculation of the material. The results, comprising mainly retention curves, are in good agreement with the measurements, correctly reproducing the changes in efficiency with slit spacing and effective density. Acknowledgements: Project RECI/ECM-HID/0371/2012, funded by the Portuguese Foundation for Science and Technology (FCT), has partially supported this work. It was also partially funded by Xunta de Galicia under project Programa de Consolidacion e Estructuracion de Unidades de Investigacion Competitivas (Grupos de Referencia Competitiva), financed by the European Regional Development Fund (FEDER), and by Ministerio de Economia y Competitividad under project BIA2012-38676-C03-03. References: Takahashi, T. Debris Flow, Mechanics, Prediction and Countermeasures. Taylor and Francis, 2007. Kruggel-Emden, H.; Simsek, E.; Rickelt, S.; Wirtz, S. & Scherer, V. Review and extension of normal force models for the Discrete Element Method. Powder Technology, 2007, 171, 157-173.
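
    Classical DEM contact laws of the kind reviewed by Kruggel-Emden et al. build the normal force from a spring-dashpot pair acting on the particle overlap. A linear-model sketch (stiffness and damping values are illustrative):

    ```python
    import numpy as np

    def dem_normal_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1.0e5, gamma_n=50.0):
        """Linear spring-dashpot normal contact force on particle i; zero when
        the spheres do not overlap. k_n and gamma_n are illustrative values."""
        d = x_i - x_j
        dist = np.linalg.norm(d)
        overlap = (r_i + r_j) - dist
        if overlap <= 0.0:
            return np.zeros_like(d)
        n = d / dist                      # unit normal pointing toward i
        v_n = np.dot(v_i - v_j, n)        # normal relative velocity
        return (k_n * overlap - gamma_n * v_n) * n
    ```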

  3. Molecular dynamics simulations of a DMSO/water mixture using the AMBER force field.

    PubMed

    Stachura, Slawomir S; Malajczuk, Chris J; Mancera, Ricardo L

    2018-06-25

    Due to its protective properties for biological samples at low temperatures and under desiccation, dimethyl sulfoxide (DMSO) in aqueous solutions has been studied widely by many experimental approaches and molecular dynamics (MD) simulations. In the case of the latter, AMBER is among the most commonly used force fields for simulations of biomolecular systems; however, the parameters for DMSO published by Fox and Kollman in 1998 have only been tested for pure liquid DMSO. We have conducted an MD simulation study of DMSO in a water mixture and computed several structural and dynamical properties, such as the mean density, self-diffusion coefficient, hydrogen bonding, and DMSO and water ordering. The AMBER force field for DMSO is seen to reproduce well most of the experimental properties of DMSO in water, with the mixture displaying strong and specific water ordering, as observed in experiments and in multiple other MD simulations with other non-polarizable force fields. Graphical abstract: Hydration structure within hydrogen-bonding distance around a DMSO molecule.
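
    The self-diffusion coefficient reported in such studies typically comes from the Einstein relation, MSD(t) ≈ 6Dt at long times in three dimensions. A small sketch of that fit (fitting the second half of the curve is a common heuristic, not necessarily the paper's exact protocol):

    ```python
    import numpy as np

    def self_diffusion(times, msd):
        """Self-diffusion coefficient from the Einstein relation in 3D:
        D is one sixth of the slope of the (assumed diffusive) MSD tail."""
        half = len(times) // 2                       # keep the linear tail only
        slope, _ = np.polyfit(times[half:], msd[half:], 1)
        return slope / 6.0
    ```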

  4. Intrinsically Disordered Protein Specific Force Field CHARMM36IDPSFF.

    PubMed

    Liu, Hao; Song, Dong; Lu, Hui; Luo, Ray; Chen, Hai-Feng

    2018-05-28

    Intrinsically disordered proteins (IDPs) are closely related to various human diseases. Because IDPs lack a well-defined tertiary structure, it is difficult to measure their structures with X-ray and NMR methods. Therefore, molecular dynamics simulation is a useful tool to study the conformer distribution of IDPs. However, most generic protein force fields were found to be insufficient in simulations of IDPs. Here we report our development for the CHARMM community. Our residue-specific IDP force field (CHARMM36IDPSFF) was developed based on the base generic force field with CMAP corrections for all 20 naturally occurring amino acids. Multiple tests show that the chemical shifts simulated with the newly developed force field are in quantitative agreement with NMR experiments and are more accurate than those from the base generic force field. Comparison of J-couplings with previous work shows that CHARMM36IDPSFF and its corresponding base generic force field have their own advantages. In addition, CHARMM36IDPSFF simulations also agree with experiment for SAXS profiles and radii of gyration of IDPs. Detailed analysis shows that CHARMM36IDPSFF can sample more diverse and disordered conformers. These findings confirm that the newly developed force field can improve the balance of accuracy and efficiency for the conformer sampling of IDPs. This article is protected by copyright. All rights reserved.

  5. Piloted Simulation Study of Rudder Pedal Force/Feel Characteristics

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    2007-01-01

    A piloted, fixed-base simulation was conducted in 2006 to determine optimum rudder pedal force/feel characteristics for transport aircraft. As part of this research, an evaluation of four metrics for assessing rudder pedal characteristics previously presented in the literature was conducted. This evaluation was based upon the numerical handling qualities ratings assigned to a variety of pedal force/feel systems used in the simulation study. It is shown that, with the inclusion of a fifth metric, most of the rudder pedal force/feel system designs that were rated poorly by the evaluation pilots could be identified. It is suggested that these metrics form the basis of a certification requirement for transport aircraft.

  6. Attractive particle interaction forces and packing density of fine glass powders

    PubMed Central

    Parteli, Eric J. R.; Schmidt, Jochen; Blümel, Christina; Wirth, Karl-Ernst; Peukert, Wolfgang; Pöschel, Thorsten

    2014-01-01

    We study the packing of fine glass powders of mean particle diameter in the range 4–52 μm both experimentally and by numerical DEM simulations. We obtain quantitative agreement between the experimental and numerical results if both types of attractive particle interaction forces, adhesion and non-bonded van der Waals forces, are taken into account. Our results suggest that considering only viscoelastic and adhesive forces in DEM simulations may lead to incorrect numerical predictions of the behavior of fine powders. Based on the results from simulations and experiments, we propose a mathematical expression to estimate the packing fraction of fine polydisperse powders as a function of the average particle size. PMID:25178812

  7. TMFF-A Two-Bead Multipole Force Field for Coarse-Grained Molecular Dynamics Simulation of Protein.

    PubMed

    Li, Min; Liu, Fengjiao; Zhang, John Z H

    2016-12-13

    Coarse-grained (CG) models are desirable for studying large and complex biological systems. In this paper, we propose a new two-bead multipole force field (TMFF) in which electric multipoles up to the quadrupole are included in the CG force field. The inclusion of electric multipoles in the proposed CG force field enables a more realistic description of the anisotropic electrostatic interactions in the protein system and thus provides an improvement over standard isotropic two-bead CG models. In order to test the accuracy of the new CG force field model, extensive molecular dynamics simulations were carried out for a series of benchmark protein systems. These simulation studies showed that the TMFF model can realistically reproduce the structural and dynamical properties of proteins, as demonstrated by the close agreement of the CG results with those from the corresponding all-atom simulations in terms of root-mean-square deviations (RMSDs) and root-mean-square fluctuations (RMSFs) of the protein backbones. The current two-bead model is highly coarse-grained and is 50-fold more efficient than the all-atom method in MD simulations of proteins in explicit water.

  8. Molecular simulation of gas adsorption and diffusion in a breathing MOF using a rigid force field.

    PubMed

    García-Pérez, E; Serra-Crespo, P; Hamad, S; Kapteijn, F; Gascon, J

    2014-08-14

    Simulation of gas adsorption in flexible porous materials is still limited by the slow progress in the development of flexible force fields. Moreover, the high computational cost of such flexible force fields may be a drawback even when they are fully developed. In this work, molecular simulations of gas adsorption and diffusion of carbon dioxide and methane in NH2-MIL-53(Al) are carried out using a linear combination of two crystallographic structures with rigid force fields. Once the interactions between carbon dioxide molecules and the bridging hydroxyl groups of the framework are optimized, an excellent match is found between simulations and experimental data for the adsorption of methane and carbon dioxide, including the stepwise uptake due to the breathing effect. In addition, diffusivities of pure components are calculated. The pore expansion caused by the breathing effect influences the self-diffusion mechanism, and much higher diffusivities are observed at relatively high adsorbate loadings. This work demonstrates that using a rigid force field, combined with a minimum number of experiments, reproduces the adsorption and diffusion of carbon dioxide and methane in the flexible metal-organic framework NH2-MIL-53(Al).
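
    One way to picture the linear-combination approach is as a weighted mix of the loadings obtained in the rigid narrow-pore and large-pore structures. The sketch below uses hypothetical single-site Langmuir end members purely for illustration; the study itself computes the end-member loadings by molecular simulation in the two crystallographic structures.

    ```python
    def langmuir(p, q_max, b):
        """Single-site Langmuir isotherm (illustrative end-member model)."""
        return q_max * b * p / (1.0 + b * p)

    def breathing_isotherm(p, w_np, np_params=(2.0, 1.5), lp_params=(8.0, 0.2)):
        """Total loading as a linear combination of rigid narrow-pore (np) and
        large-pore (lp) end members; w_np is the narrow-pore fraction at
        pressure p. All parameter values are hypothetical."""
        return w_np * langmuir(p, *np_params) + (1.0 - w_np) * langmuir(p, *lp_params)
    ```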

  9. A Highly Parallelized Special-Purpose Computer for Many-Body Simulations with an Arbitrary Central Force: MD-GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro

    1996-09-01

    We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware to calculate these interactions. It is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P³M) method. MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and its peak performance is 4.2 GFLOPS. On an MD-GRAPE board, a cosmological N-body simulation takes 600(N/10⁶)^(3/2) s per step for the Ewald method, where N is the number of particles, and would take 240(N/10⁶) s per step for the P³M method, in a uniform distribution of particles.

  10. Three-Dimensional Muscle Architecture and Comprehensive Dynamic Properties of Rabbit Gastrocnemius, Plantaris and Soleus: Input for Simulation Studies

    PubMed Central

    Siebert, Tobias; Leichsenring, Kay; Rode, Christian; Wick, Carolin; Stutzig, Norman; Schubert, Harald; Blickhan, Reinhard; Böl, Markus

    2015-01-01

    The vastly increasing number of neuro-muscular simulation studies (with increasing numbers of muscles used per simulation) is in sharp contrast to a narrow database of necessary muscle parameters. Simulation results depend heavily on rough parameter estimates often obtained by scaling of one muscle parameter set. However, in vivo muscles differ in their individual properties and architecture. Here we provide a comprehensive dataset of dynamic (n = 6 per muscle) and geometric (three-dimensional architecture, n = 3 per muscle) muscle properties of the rabbit calf muscles gastrocnemius, plantaris, and soleus. For completeness we provide the dynamic muscle properties for further important shank muscles (flexor digitorum longus, extensor digitorum longus, and tibialis anterior; n = 1 per muscle). Maximum shortening velocity (normalized to optimal fiber length) of the gastrocnemius is about twice that of soleus, while plantaris showed an intermediate value. The force-velocity relation is similar for gastrocnemius and plantaris but is much more bent for the soleus. Although the muscles vary greatly in their three-dimensional architecture their mean pennation angle and normalized force-length relationships are almost similar. Forces of the muscles were enhanced in the isometric phase following stretching and were depressed following shortening compared to the corresponding isometric forces. While the enhancement was independent of the ramp velocity, the depression was inversely related to the ramp velocity. The lowest effect strength for soleus supports the idea that these effects adapt to muscle function. The careful acquisition of typical dynamical parameters (e.g. force-length and force-velocity relations, force elongation relations of passive components), enhancement and depression effects, and 3D muscle architecture of calf muscles provides valuable comprehensive datasets for e.g. simulations with neuro-muscular models, development of more realistic muscle models, or simulation of muscle packages. PMID:26114955
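
    For use as simulation input, force-velocity data of this kind are usually condensed into a Hill hyperbola, (F + a)(v + b) = (F_max + a)·b. A sketch solving that relation for shortening velocity, with typical illustrative curvature constants rather than the measured rabbit values:

    ```python
    def hill_shortening_velocity(force, f_max, a_rel=0.25, b_rel=2.5, l_opt=1.0):
        """Shortening velocity from the Hill relation
        (F + a)(v + b) = (F_max + a) * b, with a = a_rel * f_max and
        b = b_rel * l_opt (velocity in optimal fiber lengths per second).
        a_rel and b_rel are illustrative, muscle-specific curvature values."""
        a = a_rel * f_max
        b = b_rel * l_opt
        return b * (f_max - force) / (force + a)
    ```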

  11. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability and uncertainty of the parameters used in these models, and how they affect predictions, confound the assessment of these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous iterative simulations, each using a random set of values for nine commonly measured motor neuron and muscle parameters. Parameter values were sampled across their reported experimental ranges. Convergence after 439 simulations found that only 3 simulations met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations preferentially sampling the neighborhood of those 3 valid simulations converged to reveal 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties, especially peak discharge rates and recruitment ranges. Therefore, to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor unit function. We discuss experimental and analytical avenues to do so, as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.

  12. Deep eutectic solvent formation: a structural view using molecular dynamics simulations with classical force fields

    NASA Astrophysics Data System (ADS)

    Mainberger, Sebastian; Kindlein, Moritz; Bezold, Franziska; Elts, Ekaterina; Minceva, Mirjana; Briesen, Heiko

    2017-06-01

    Deep eutectic solvents (DES) have gained a reputation as inexpensive and easy-to-handle ionic liquid analogues. This work employs molecular dynamics (MD) to simulate a variety of DES. The hydrogen bond acceptor (HBA) choline chloride was paired with the hydrogen bond donors (HBD) glycerol, 1,4-butanediol, and levulinic acid. Levulinic acid was also paired with the zwitterionic HBA betaine. In order to evaluate the reliability of the data that MD simulations can provide for DES, two force fields were compared: the Merck Molecular Force Field and the General Amber Force Field, with two different sets of partial charges for the latter. The force fields were evaluated by comparing available experimental thermodynamic and transport properties against simulated values. Structural analysis was performed on the eutectic systems and compared to non-eutectic compositions. All force fields could be validated against certain experimental properties, but performance varied depending on the system and property in question. While extensive hydrogen bonding was found for all systems, details about the contribution of individual groups varied strongly among force fields. Interaction potentials revealed that HBA-HBA interactions weaken linearly with increasing HBD ratio, while HBD-HBD interactions grow disproportionately in magnitude, which might hint at the origin of a system's eutectic composition.

  13. Ligand Binding: Molecular Mechanics Calculation of the Streptavidin-Biotin Rupture Force

    NASA Astrophysics Data System (ADS)

    Grubmüller, Helmut; Heymann, Berthold; Tavan, Paul

    1996-02-01

    The force required to rupture the streptavidin-biotin complex was calculated here by computer simulations. The computed force agrees well with that obtained by recent single-molecule atomic force microscope experiments. These simulations suggest a detailed multiple-pathway rupture mechanism involving five major unbinding steps. Binding forces and specificity are attributed to a hydrogen bond network between the biotin ligand and residues within the binding pocket of streptavidin. During rupture, additional water bridges substantially enhance the stability of the complex and even dominate the binding interactions. In contrast, steric restraints do not appear to contribute to the binding forces, although conformational motions were observed.

  14. Characterization of mechanical unfolding intermediates of membrane proteins by coarse grained molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Yamada, Tatsuya; Mitaku, Shigeki; Yamato, Takahisa

    2018-01-01

    Single-molecule force spectroscopy by atomic force microscopy allows us to gain insight into the mechanical unfolding of membrane proteins, and a typical experiment exhibits characteristic patterns in the force-distance curves. The origin of these patterns, however, has not been fully understood yet. We performed coarse-grained simulations of the forced unfolding of halorhodopsin and reproduced the characteristic features of the experimental force-distance curves. A further examination near the membrane-water interface indicated the existence of a motif for force peak formation, i.e., the occurrence of hydrophobic residues in the upper interface region and hydrophilic residues below the lower interface region.

  15. Force-Induced Unravelling of DNA Origami.

    PubMed

    Engel, Megan C; Smith, David M; Jobst, Markus A; Sajfutdinow, Martin; Liedl, Tim; Romano, Flavio; Rovigatti, Lorenzo; Louis, Ard A; Doye, Jonathan P K

    2018-05-31

    The mechanical properties of DNA nanostructures are of widespread interest as applications that exploit their stability under constant or intermittent external forces become increasingly common. We explore the force response of DNA origami in comprehensive detail by combining AFM single-molecule force spectroscopy experiments with simulations using oxDNA, a coarse-grained model of DNA at the nucleotide level, to study the unravelling of an iconic origami system: the Rothemund tile. We contrast the force-induced melting of the tile with simulations of an origami 10-helix bundle. Finally, we simulate a recently proposed origami biosensor, whose function takes advantage of origami behaviour under tension. We observe characteristic stick-slip unfolding dynamics in our force-extension curves for both the Rothemund tile and the helix bundle, and reasonable agreement with experimentally observed rupture forces for these systems. Our results highlight the effect of design on force response: we observe regular, modular unfolding for the Rothemund tile that contrasts with strain-softening of the 10-helix bundle, which leads to catastrophic failure under monotonically increasing force. Further, unravelling proceeds straightforwardly from the scaffold ends inwards for the Rothemund tile, while the helix bundle unfolds more nonlinearly. The detailed visualization of the yielding events provided by simulation allows preferred pathways through the complex unfolding free-energy landscape to be mapped, as a key factor in determining relative barrier heights is the extensional release per base pair broken. We shed light on two important questions: how stable DNA nanostructures are under external forces, and what design principles can be applied to enhance stability.
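
    Force-extension data like these are routinely compared against the worm-like chain interpolation formula of Marko and Siggia for the stretched strands. A sketch (parameter values are user-supplied; nothing here is taken from the paper):

    ```python
    def wlc_force(extension, contour_len, persistence_len, kT=4.11e-21):
        """Marko-Siggia worm-like chain interpolation formula:
        F = (kT/P) * [1/(4(1 - x/L)^2) - 1/4 + x/L], valid for x < L.
        kT defaults to the thermal energy at ~298 K in joules."""
        x = extension / contour_len
        return (kT / persistence_len) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)
    ```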

  16. Predicting the effects of muscle activation on knee, thigh, and hip injuries in frontal crashes using a finite-element model with muscle forces from subject testing and musculoskeletal modeling.

    PubMed

    Chang, Chia-Yuan; Rupp, Jonathan D; Reed, Matthew P; Hughes, Richard E; Schneider, Lawrence W

    2009-11-01

    In a previous study, the authors reported on the development of a finite-element model of the midsize male pelvis and lower extremities with lower-extremity musculature that was validated using PMHS knee-impact response data. Knee-impact simulations with this model were performed using forces from four muscles in the lower extremities associated with two-foot bracing reported in the literature to provide preliminary estimates of the effects of lower-extremity muscle activation on knee-thigh-hip (KTH) injury potential in frontal impacts. The current study addresses a major limitation of these preliminary simulations by using the AnyBody three-dimensional musculoskeletal model to estimate muscle forces produced in 35 muscles in each lower extremity during emergency one-foot braking. To check the predictions of the AnyBody Model, activation levels of twelve major muscles in the hip and lower extremities were measured using surface EMG electrodes on 12 midsize-male subjects performing simulated maximum and 50% of maximum braking in a laboratory seating buck. Comparisons between test results and the predictions of the AnyBody Model when it was used to simulate these same braking tests suggest that the AnyBody Model appropriately predicts agonistic muscle activations but underpredicts antagonistic muscle activations. Simulations of knee-to-knee-bolster impacts were performed by impacting the knees of the lower-extremity finite element model with and without the muscle forces predicted by the validated AnyBody Model. Results of these simulations confirm previous findings that muscle tension increases knee-impact force by increasing the effective mass of the KTH complex due to tighter coupling of muscle mass to bone. They also indicate that muscle activation preferentially couples mass distal to the hip, thereby accentuating the decrease in femur force from the knee to the hip. However, the reduction in force transmitted from the knee to the hip is offset by the increased force at the knee and by increased compressive forces at the hip due to activation of lower-extremity muscles. As a result, approximately 45% to 60% and 50% to 65% of the force applied to the knee is applied to the hip in the simulations without and with muscle tension, respectively. The simulation results suggest that lower-extremity muscle tension has little effect on the risk of hip injuries, but it increases the bending moments in the femoral shaft, thereby increasing the risk of femoral shaft fractures by 20%-40%. However, these findings may be affected by the inability of the AnyBody Model to appropriately predict antagonistic muscle forces.

  17. Deformation of Soft Tissue and Force Feedback Using the Smoothed Particle Hydrodynamics

    PubMed Central

    Liu, Xuemei; Wang, Ruiyi; Li, Yunhua; Song, Dongdong

    2015-01-01

    We study the deformation and haptic feedback of soft tissue in virtual surgery based on a liver model, using the PHANTOM OMNI force feedback device developed by SensAble (USA). Although a significant amount of research effort has been dedicated to simulating the behaviors of soft tissue and implementing force feedback, it is still a challenging problem. This paper introduces a meshfree method for deformation simulation of soft tissue and force computation based on a viscoelastic mechanical model and smoothed particle hydrodynamics (SPH). Firstly, the viscoelastic model captures the mechanical characteristics of soft tissue, which greatly promotes realism. Secondly, SPH is a meshless and self-adaptive technique, which provides higher precision than mesh-based methods for force feedback computation. Finally, an SPH method based on a dynamic interaction area is proposed to improve the real-time performance of the simulation. The results reveal that the SPH methodology is suitable for simulating soft tissue deformation and calculating force feedback, and that SPH based on a dynamic local interaction area is significantly more computationally efficient than standard SPH. Our algorithm has a bright prospect in the area of virtual surgery. PMID:26417380
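
    The SPH machinery underlying such models estimates field quantities as kernel-weighted sums over neighbouring particles. A density-summation sketch with the standard 3D cubic-spline kernel (brute-force pairwise loop for clarity, not speed):

    ```python
    import numpy as np

    def cubic_spline_kernel(r, h):
        """Standard 3D cubic-spline SPH kernel with smoothing length h."""
        q = r / h
        sigma = 1.0 / (np.pi * h ** 3)
        if q < 1.0:
            return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
        if q < 2.0:
            return sigma * 0.25 * (2.0 - q) ** 3
        return 0.0

    def sph_density(positions, masses, h):
        """Density at each particle as a kernel-weighted sum over all particles."""
        n = len(positions)
        rho = np.zeros(n)
        for i in range(n):
            for j in range(n):
                r = np.linalg.norm(positions[i] - positions[j])
                rho[i] += masses[j] * cubic_spline_kernel(r, h)
        return rho
    ```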

  18. Assessment of a flow-through balance for hypersonic wind tunnel models with scramjet exhaust flow simulation

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Kniskern, Marc W.; Monta, William J.

    1993-01-01

    The purposes of this investigation were twofold: first, to determine whether accurate force and moment data could be obtained during hypersonic wind tunnel tests of a model with a scramjet exhaust flow simulation that uses a representative nonwatercooled, flow-through balance; second, to analyze temperature time histories on various parts of the balance to address thermal effects on force and moment data. The tests were conducted in the NASA Langley Research Center 20-Inch Mach 6 Wind Tunnel at free-stream Reynolds numbers ranging from 0.5 to 7.4 x 10(exp 6)/ft and nominal angles of attack of -3.5 deg, 0 deg, and 5 deg. The simulant exhaust gases were cold air, hot air, and a mixture of 50 percent argon and 50 percent Freon by volume, which reached stagnation temperatures within the balance of 111, 214, and 283 F, respectively. All force and moment values were unaffected by the balance thermal response to exhaust gas simulation and external aerodynamic heating except for axial-force measurements, which were significantly affected by balance heating. This investigation showed that, for this model at the conditions tested, a nonwatercooled, flow-through balance is not suitable for axial-force measurements during scramjet exhaust flow simulation tests at hypersonic speeds. In general, heated exhaust gas may produce unacceptable force and moment uncertainties when used with thermally sensitive balances.

  19. Experimental studies of protozoan response to intense magnetic fields and forces

    NASA Astrophysics Data System (ADS)

    Guevorkian, Karine

    Intense static magnetic fields of up to 31 Tesla were used as a novel tool to manipulate the swimming mechanics of unicellular organisms. It is shown that homogeneous magnetic fields alter the swimming trajectories of the single-cell protozoan Paramecium caudatum by aligning them parallel to the applied field. Immobile, neutrally buoyant paramecia also oriented in magnetic fields, at rates similar to those of the motile ones. It was established that the magneto-orientation is mostly due to magnetic torques acting on rigid structures in the cell body, and the response is therefore a passive, non-biological one. From the orientation rate of paramecia at various magnetic field strengths, the average anisotropy of the diamagnetic susceptibility of the cell was estimated. It has also been demonstrated that magnetic forces can be used to create increased, decreased, and even inverted simulated-gravity environments for investigating the gravi-responses of single cells. Since the mechanisms by which Earth's gravity affects cell functioning are still not fully understood, a number of methods to simulate gravity environments of different strengths, such as centrifugation, have been employed. Exploiting the ability to exert magnetic forces on the weakly diamagnetic constituents of cells, we were able to vary the gravity from -8 g to 10 g, where g is Earth's gravity. Investigations of the swimming response of paramecia in these simulated gravities revealed that they actively regulate their swimming speed to oppose the external force. This result is in agreement with centrifugation experiments, confirming the credibility of the technique. Moreover, the paramecia stopped swimming at a simulated gravity of 10 g, indicating a maximum possible propulsion force of 0.7 nN. The magnetic force technique for simulating gravity is the only earthbound technique that can create both increased and decreased simulated gravities in the same experimental setup. These findings establish a general technique for applying continuously variable forces to cells or cell populations, suitable for exploring their force-transduction mechanisms.

  20. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) simulation is a well-utilized tool for modeling photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model the physical processes involved. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation times. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD, with the exception that photons are detected at multiple detector locations determined by a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. In this way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with experimental measurements of the point spread function (PSF), producing a correlation coefficient (r2) of 0.99. MP-CFD is shown to be about 60 times faster than a regular forced detection MC program, with similar results.
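
    The scoring step that distinguishes FD from MP-CFD can be sketched as below; the scatter probabilities and attenuation terms are simplified placeholders (the actual kernels and geometry are more involved), not the authors' code.

        import numpy as np

        def fd_score(weight, p_scatter_to_det, mu, path_len):
            """Forced detection: deposit the photon's weight on ONE detector,
            scaled by the probability of scattering toward it and the
            attenuation along the forced path."""
            return weight * p_scatter_to_det * np.exp(-mu * path_len)

        def mp_cfd_scores(weight, p_scatter, mu, path_lens):
            """MP-CFD: score the SAME interaction site against detectors at
            many projection angles at once, so one simulated track feeds
            all projections in parallel."""
            p = np.asarray(p_scatter)
            d = np.asarray(path_lens)
            return weight * p * np.exp(-mu * d)

        # e.g. one scatter site scored against four projection angles
        print(mp_cfd_scores(1.0, [0.02, 0.03, 0.025, 0.015],
                            mu=0.15, path_lens=[8.0, 6.5, 7.2, 9.1]))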

  1. Numeric simulation model for long-term orthodontic tooth movement with contact boundary conditions using the finite element method.

    PubMed

    Hamanaka, Ryo; Yamaoka, Satoshi; Anh, Tuan Nguyen; Tominaga, Jun-Ya; Koga, Yoshiyuki; Yoshida, Noriaki

    2017-11-01

    Although many attempts have been made to simulate orthodontic tooth movement using the finite element method, most were limited to analyses of the initial displacement in the periodontal ligament and were insufficient to evaluate the effect of orthodontic appliances on long-term tooth movement. Numeric simulation of long-term tooth movement was performed in some studies; however, neither the play between the brackets and archwire nor the interproximal contact forces were considered. The objectives of this study were to simulate long-term orthodontic tooth movement with the edgewise appliance by incorporating those contact conditions into the finite element model, and to determine the force system when the space is closed with sliding mechanics. We constructed a 3-dimensional model of the maxillary dentition with 0.022-in brackets and 0.019 × 0.025-in archwire. Forces of 100 cN simulating sliding mechanics were applied. The simulation was based on the assumption that bone remodeling correlates with the initial tooth displacement. The method successfully represented the changes in the moment-to-force ratio, and thus the tooth movement pattern, during space closure. We developed a novel method that can simulate long-term orthodontic tooth movement and accurately determine the force system over time by incorporating contact boundary conditions into the finite element analysis. The results also suggest that friction progressively increases during space closure in sliding mechanics. Copyright © 2017. Published by Elsevier Inc.
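
    The remodeling assumption lends itself to a toy illustration: long-term movement is accumulated by repeatedly applying the initial (elastic) displacement, with an interproximal contact closing the space. This one-dimensional sketch is hypothetical; the paper's model is a full 3-D finite element analysis with bracket-archwire play.

        def long_term_movement(u_initial, n_steps, contact_gap, k_remodel=1.0):
            """Accumulate tooth position assuming remodeling per step is
            proportional to the initial elastic displacement u_initial (mm)."""
            pos = 0.0
            for step in range(n_steps):
                du = k_remodel * u_initial
                if pos + du >= contact_gap:    # interproximal contact reached
                    return contact_gap, step + 1
                pos += du
            return pos, n_steps

        # e.g. 0.01 mm initial displacement per step, 3 mm extraction space
        print(long_term_movement(u_initial=0.01, n_steps=500, contact_gap=3.0))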

  2. Free energy simulations with the AMOEBA polarizable force field and metadynamics on GPU platform.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Guohui

    2016-03-05

    The free energy calculation library PLUMED has been incorporated into the OpenMM simulation toolkit, with the purpose of performing enhanced-sampling MD simulations using the AMOEBA polarizable force field on GPU platforms. Two examples, (i) the free energy profile of water-pair separation and (ii) the alanine dipeptide dihedral-angle free energy surface in explicit solvent, are provided here to demonstrate the accuracy and efficiency of our implementation. Converged free energy profiles could be obtained within an affordable MD simulation time when the AMOEBA polarizable force field is employed. Moreover, the free energy surfaces estimated using the AMOEBA polarizable force field are in agreement with those calculated from experimental data and ab initio methods. Hence, the implementation in this work is reliable and can be used to study more complicated biological phenomena in both an accurate and efficient way. © 2015 Wiley Periodicals, Inc.
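
    The workflow can be sketched with the openmm-plumed plugin, assuming it is installed alongside OpenMM; the input file name, atom indices, and metadynamics parameters below are illustrative assumptions, not the authors' setup.

        from openmm import app, unit, LangevinMiddleIntegrator
        from openmmplumed import PlumedForce  # openmm-plumed plugin

        pdb = app.PDBFile('waterbox.pdb')             # hypothetical input structure
        ff = app.ForceField('amoeba2013.xml')         # AMOEBA polarizable force field
        system = ff.createSystem(pdb.topology, nonbondedMethod=app.PME,
                                 polarization='mutual',
                                 mutualInducedTargetEpsilon=1e-5)

        # Well-tempered metadynamics on the O-O distance of one water pair
        # (atom indices are placeholders).
        script = ("d: DISTANCE ATOMS=1,4\n"
                  "METAD ARG=d SIGMA=0.02 HEIGHT=1.2 PACE=500 "
                  "BIASFACTOR=10 TEMP=300 FILE=HILLS")
        system.addForce(PlumedForce(script))

        integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1.0/unit.picosecond,
                                              1.0*unit.femtoseconds)
        sim = app.Simulation(pdb.topology, system, integrator)
        sim.context.setPositions(pdb.positions)
        sim.step(500_000)  # free energy profile recovered from HILLS afterwards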

  3. Current target acquisition methodology in force on force simulations

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Miller, Brian; Mazz, John P.

    2017-05-01

    The U.S. Army RDECOM CERDEC NVESD MSD's target acquisition models have been used for many years by the military community in force-on-force simulations for training, testing, and analysis. There have been significant improvements to these models over the past few years, most notably the transition to the ACQUIRE TTP-TAS (ACQUIRE Targeting Task Performance Target Angular Size) methodology for all imaging sensors and the development of new discrimination criteria for urban environments and humans. This paper is intended to provide an overview of the current target acquisition modeling approach and to provide data for the new discrimination tasks. It discusses advances and changes to the models and methodologies used to: (1) design and compare sensor performance, (2) predict expected target acquisition performance in the field, (3) predict target acquisition performance for combat simulations, and (4) conduct model data validation for combat simulations.
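
    At the core of ACQUIRE-style models is an empirical target transfer probability function; a commonly quoted textbook form is sketched below (the exact coefficients used in the current NVESD models may differ).

        def p_task(v, v50):
            """Target transfer probability: v is the task-difficulty metric
            delivered by the sensor (e.g., a TTP value), v50 the value at
            which the task is performed 50% of the time."""
            e = 1.51 + 0.24 * (v / v50)   # commonly quoted empirical exponent
            r = (v / v50) ** e
            return r / (1.0 + r)

        for v in (0.5, 1.0, 2.0, 4.0):
            print(v, round(p_task(v, v50=1.0), 3))  # 0.5 exactly at v = v50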

  4. A relationship between three-dimensional surface hydration structures and force distribution measured by atomic force microscopy.

    PubMed

    Miyazawa, Keisuke; Kobayashi, Naritaka; Watkins, Matthew; Shluger, Alexander L; Amano, Ken-ichi; Fukuma, Takeshi

    2016-04-07

    Hydration plays important roles in various solid-liquid interfacial phenomena. Very recently, three-dimensional scanning force microscopy (3D-SFM) has been proposed as a tool to visualise solvated surfaces and their hydration structures with lateral and vertical (sub) molecular resolution. However, the relationship between the 3D force map obtained and the equilibrium water density, ρ(r), distribution above the surface remains an open question. Here, we investigate this relationship at an interface of an inorganic mineral, fluorite, and water. The force maps measured in pure water are directly compared to force maps generated using the solvent tip approximation (STA) model and from explicit molecular dynamics simulations. The results show that the simulated STA force map describes the major features of the experimentally obtained force image. The agreement between the STA data and the experiment establishes the correspondence between the water density used as an input to the STA model and the experimental hydration structure and thus provides a tool to bridge the experimental force data and atomistic solvation structures. Further applications of this method should improve the accuracy and reliability of both interpretation of 3D-SFM force maps and atomistic simulations in a wide range of solid-liquid interfacial phenomena.
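
    The STA link between measured force and water density can be sketched under the commonly used assumption that the vertical force on a water-terminated tip follows the gradient of the log of the normalized density; this is a schematic reading of the model, not the authors' code.

        import numpy as np

        def sta_force_profile(z, rho, rho_bulk, kT=4.11e-21):
            """Force estimate F(z) ~ -kT d/dz ln(rho/rho_bulk);
            z in metres, rho in consistent units, kT in joules (300 K)."""
            g = np.clip(rho / rho_bulk, 1e-12, None)   # guard against log(0)
            return -kT * np.gradient(np.log(g), z)

        # e.g. a single hydration-layer density oscillation above the surface
        z = np.linspace(0.0, 1e-9, 200)
        rho = 1.0 + 0.5 * np.exp(-((z - 0.3e-9) / 0.05e-9) ** 2)
        print(sta_force_profile(z, rho, rho_bulk=1.0)[:5])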

  5. Let's get honest about sampling.

    PubMed

    Mobley, David L

    2012-01-01

    Molecular simulations see widespread and increasing use in computation and molecular design, especially within the area of molecular simulations applied to biomolecular binding and interactions, our focus here. However, force field accuracy remains a concern for many practitioners, and it is often not clear what level of accuracy is really needed for payoffs in a discovery setting. Here, I argue that despite limitations of today's force fields, current simulation tools and force fields now provide the potential for real benefits in a variety of applications. However, these same tools also provide irreproducible results which are often poorly interpreted. Continued progress in the field requires more honesty in assessment and care in evaluation of simulation results, especially with respect to convergence.
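
    One concrete, if basic, convergence check in the spirit of this argument is block averaging: if block means of an observable still scatter widely, the simulation is not converged. A minimal sketch:

        import numpy as np

        def block_average(x, n_blocks=5):
            """Mean and between-block standard error of an observable;
            large scatter across blocks signals unconverged sampling."""
            blocks = np.array_split(np.asarray(x, dtype=float), n_blocks)
            means = np.array([b.mean() for b in blocks])
            return means.mean(), means.std(ddof=1) / np.sqrt(n_blocks)

        # e.g. a slowly drifting (unconverged) binding-energy trace
        rng = np.random.default_rng(1)
        trace = rng.normal(size=5000) + np.linspace(0.0, 1.0, 5000)
        print(block_average(trace))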

  6. Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics

    PubMed Central

    Baumketner, Andrij

    2009-01-01

    The performance of reaction-field methods for treating electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that, in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits a strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522

  7. Solvation of fluoro-acetonitrile in water by 2D-IR spectroscopy: A combined experimental-computational study

    NASA Astrophysics Data System (ADS)

    Cazade, Pierre-André; Tran, Halina; Bereau, Tristan; Das, Akshaya K.; Kläsi, Felix; Hamm, Peter; Meuwly, Markus

    2015-06-01

    The solvent dynamics around fluorinated acetonitrile is characterized by 2-dimensional infrared spectroscopy and atomistic simulations. The lineshape of the linear infrared spectrum is better captured by semiempirical (density functional tight binding) mixed quantum mechanical/molecular mechanics simulations, whereas force field simulations with multipolar interactions yield lineshapes that are significantly too narrow. For the solvent dynamics, a relatively slow time scale of 2 ps is found in the experiments and supported by the mixed quantum mechanical/molecular mechanics simulations. With multipolar force fields fitted to the available thermodynamic data, the time scale is considerably faster, on the order of 0.5 ps. The simulations provide evidence for a well-established CF-HOH hydrogen bond (population of 25%), which is found from the radial distribution function g(r) in both the force field and the quantum mechanics/molecular mechanics simulations.

  8. Relationship between jump landing kinematics and peak ACL force during a jump in downhill skiing: a simulation study.

    PubMed

    Heinrich, D; van den Bogert, A J; Nachbauer, W

    2014-06-01

    Recent data highlight that competitive skiers face a high risk of injury, especially during off-balance jump-landing maneuvers in downhill skiing. The purpose of the present study was to develop a musculoskeletal modeling and simulation approach to investigate the cause-and-effect relationship between a perturbed landing position, i.e., joint angles and trunk orientation, and the peak force in the anterior cruciate ligament (ACL) during jump landing. A two-dimensional musculoskeletal model was developed, and a baseline simulation was obtained that reproduced measurement data of a reference landing movement. Based on the baseline simulation, a series of perturbed landing simulations (n = 1000) was generated. Multiple linear regression was performed to determine the relationship between peak ACL force and the perturbed landing posture. Increased backward lean, hip flexion, knee extension, and ankle dorsiflexion, as well as an asymmetric position, were related to higher peak ACL forces during jump landing. The orientation of the trunk was identified as the most important predictor, accounting for 60% of the variance of the peak ACL force in the simulations. It was concluded that teaching tactical decisions and including exercise regimens that improve trunk control during landing motions should be part of ACL injury prevention programs in downhill skiing. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
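
    The regression step described above can be sketched with ordinary least squares; the data below are synthetic stand-ins for the 1000 perturbed landings, with trunk orientation given the dominant weight reported in the abstract.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        X = rng.normal(size=(n, 4))          # trunk lean, hip, knee, ankle angles
        beta = np.array([0.60, 0.15, 0.15, 0.10])   # invented standardised effects
        y = X @ beta + 0.1 * rng.normal(size=n)     # "peak ACL force"

        X1 = np.column_stack([np.ones(n), X])       # add intercept column
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
        r2 = 1.0 - np.sum((y - X1 @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
        print(np.round(coef, 3), round(r2, 3))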

  9. Characteristics of an electrodynamic suspension simulator with HTS levitation magnet

    NASA Astrophysics Data System (ADS)

    Lee, J.; Bae, D. K.; Sim, K.; Chung, Y. D.; Lee, Y.-S.

    2009-10-01

    A high-Tc superconducting (HTSC) electrodynamic suspension (EDS) system basically consists of the HTSC levitation magnet and the ground conductor. The levitation force of an EDS system is formed by the interaction between the moving magnetic field produced by the onboard levitation magnet and the induced magnetic field produced by eddy currents in the ground conductor. This paper deals with the characteristics of EDS simulators with a high-Tc superconducting (HTS) levitation magnet. Two EDS simulator systems, a rotating-type EDS simulator and a static-type EDS simulator, were studied. The rotating-type EDS simulator consists of an HTS levitation magnet, a 1.5 m diameter rotating ground conductor, a motor, the supporting structure, and force measuring devices. In the static-type EDS simulator, instead of a moving magnetic field, AC current was applied to the fixed HTS levitation magnet to induce the eddy currents. The static-type EDS simulator consists of an HTS levitation magnet, a ground conductor, force measuring devices, and a supporting structure. A double-pancake-type HTSC levitation magnet was designed, manufactured, and tested in the EDS simulator.

  10. Reduction of vibration forces transmitted from a radiator cooling fan to a vehicle body

    NASA Astrophysics Data System (ADS)

    Lim, Jonghyuk; Sim, Woojeong; Yun, Seen; Lee, Dongkon; Chung, Jintai

    2018-04-01

    This article presents methods for reducing transmitted vibration forces caused by mass unbalance of the radiator cooling fan during vehicle idling. To identify the effects of mass unbalance upon the vibration characteristics, vibration signals of the fan blades were experimentally measured both with and without an added mass. For analyzing the vibration forces transmitted to the vehicle body, a dynamic simulation model was established that reflected the vibration characteristics of the actual system. This process included a method described herein for calculating the equivalent stiffness and the equivalent damping of the shroud stators and rubber mountings. The dynamic simulation model was verified by comparing its results with experimental results of the radiator cooling fan. The dynamic simulation model was used to analyze the transmitted vibration forces at the rubber mountings. Also, a measure was established to evaluate the effects of varying the design parameters upon the transmitted vibration forces. We present design guidelines based on these analyses to reduce the transmitted vibration forces of the radiator cooling fan.
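
    Two of the quantities behind such an analysis can be sketched simply: the rotating-unbalance force F = m·e·ω² and the classical single-degree-of-freedom transmissibility of a resilient mount. All numbers are illustrative, not from the article.

        import numpy as np

        def unbalance_force(m_unbalance, eccentricity, rpm):
            """Centrifugal force (N) of a mass unbalance m·e at speed rpm."""
            omega = 2.0 * np.pi * rpm / 60.0
            return m_unbalance * eccentricity * omega**2

        def transmissibility(freq_ratio, zeta):
            """Force transmissibility of a damped mount vs. frequency ratio."""
            r, z = freq_ratio, zeta
            return np.sqrt((1 + (2*z*r)**2) / ((1 - r**2)**2 + (2*z*r)**2))

        print(unbalance_force(0.002, 0.001, 2400))  # a 2 g·mm unbalance at idle
        print(transmissibility(3.0, 0.1))           # isolation above resonance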

  11. Modeling of Aerodynamic Force Acting in Tunnel for Analysis of Riding Comfort in a Train

    NASA Astrophysics Data System (ADS)

    Kikko, Satoshi; Tanifuji, Katsuya; Sakanoue, Kei; Nanba, Kouichiro

    In this paper, we aimed to model the aerodynamic force that acts on a train running at high speed in a tunnel. An analytical model of the aerodynamic force is developed from pressure data measured on the car-body sides of a test train running at the maximum revenue operation speed. Simulation of an 8-car train subjected to the modeled aerodynamic force gives the following results. The simulated car-body vibration corresponds to the actual vibration both qualitatively and quantitatively for the cars at the rear of the train. The separation of the airflow at the tail end of the train increases the yawing vibration of the tail-end car, while it has little effect on the car-body vibration of the adjoining car. The effect of the moving velocity of the aerodynamic force on the car-body vibration is also clarified: simulations that assume a stationary aerodynamic force can markedly overestimate the car-body vibration.

  12. DNA Polymorphism: A Comparison of Force Fields for Nucleic Acids

    PubMed Central

    Reddy, Swarnalatha Y.; Leclerc, Fabrice; Karplus, Martin

    2003-01-01

    Improvements in force fields and more accurate treatment of long-range interactions are providing more reliable molecular dynamics simulations of nucleic acids. The abilities of several nucleic acid force fields to represent the structural and conformational properties of nucleic acids in solution are compared. The force fields are AMBER 4.1, BMS, CHARMM22, and CHARMM27; the comparison of the latter two is the primary focus of this paper. The performance of each force field is evaluated first on its ability to reproduce the B-DNA decamer d(CGATTAATCG)2 in solution, with simulations in which the long-range electrostatics were treated by the particle mesh Ewald method; the crystal structure determined by Quintana et al. (1992) is used as the starting point for all simulations. A detailed analysis of the structural and solvation properties shows how well the different force fields reproduce sequence-specific features. The results are compared with data from experimental and previous theoretical studies. PMID:12609851

  13. Direct Numerical Simulations of Particle-Laden Turbulent Channel Flow

    NASA Astrophysics Data System (ADS)

    Jebakumar, Anand Samuel; Premnath, Kannan; Abraham, John

    2017-11-01

    In a recent experimental study, Lau and Nathan (2014) reported that the distribution of particles in a turbulent pipe flow is strongly influenced by the Stokes number (St). At St lower than 1, particles migrate toward the wall and at St greater than 10 they tend to migrate toward the axis. It was suggested that this preferential migration of particles is due to two forces, the Saffman lift force and the turbophoretic force. Saffman lift force represents a force acting on the particle as a result of a velocity gradient across the particle when it leads or lags the fluid flow. Turbophoretic force is induced by turbulence which tends to move the particle in the direction of decreasing turbulent kinetic energy. In this study, the Lattice Boltzmann Method (LBM) is employed to simulate a particle-laden turbulent channel flow through Direct Numerical Simulations (DNS). We find that the preferential migration is a function of particle size in addition to the St. We explain the effect of the particle size and St on the Saffman lift force and turbophoresis and present how this affects particle concentration at different conditions.
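
    The controlling parameter quoted above is straightforward to compute; a sketch using the standard particle response time τ_p = ρ_p d²/(18 μ) and an assumed fluid time scale (all values illustrative):

        def stokes_number(rho_p, d, mu, tau_fluid):
            """St = tau_p / tau_f with tau_p = rho_p * d^2 / (18 * mu);
            SI units throughout."""
            tau_p = rho_p * d**2 / (18.0 * mu)
            return tau_p / tau_fluid

        # e.g. a 50 micron glass bead in air with a 10 ms eddy turnover time
        print(stokes_number(rho_p=2500.0, d=50e-6, mu=1.8e-5, tau_fluid=0.01))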

  14. eVolv2k: A new ice core-based volcanic forcing reconstruction for the past 2000 years

    NASA Astrophysics Data System (ADS)

    Toohey, Matthew; Sigl, Michael

    2016-04-01

    Radiative forcing resulting from stratospheric aerosols produced by major volcanic eruptions is a dominant driver of climate variability in the Earth's past. The ability of climate model simulations to accurately recreate past climate is tied directly to the accuracy of the volcanic forcing time series used in the simulations. We present here a new volcanic forcing reconstruction based on newly updated ice core composites from Antarctica and Greenland. Ice core records are translated into stratospheric aerosol properties for use in climate models through the Easy Volcanic Aerosol (EVA) module, which provides an analytic representation of volcanic stratospheric aerosol forcing based on available observations and aerosol model results, prescribing the aerosol's radiative properties and primary modes of spatial and temporal variability. The eVolv2k volcanic forcing dataset covers the past 2000 years and has been provided for use in the Paleoclimate Modelling Intercomparison Project (PMIP) and the VolMIP experiments within CMIP6. Here, we describe the construction of the eVolv2k dataset, compare it with prior forcing datasets, and show initial simulation results.

  15. The preliminary checkout, evaluation and calibration of a 3-component force measurement system for calibrating propulsion simulators for wind tunnel models

    NASA Technical Reports Server (NTRS)

    Scott, W. A.

    1984-01-01

    The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation, and calibration of the PSCL's 3-component force measurement system are reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for a more efficient means of aligning the system's components; the use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.

  16. Using the Weak-Temperature Gradient Approximation to Evaluate Parameterizations: An Example of the Transition From Suppressed to Active Convection

    NASA Astrophysics Data System (ADS)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    2017-10-01

    Two single-column models are fully coupled via the weak-temperature gradient approach. The coupled-SCM is used to simulate the transition from suppressed to active convection under the influence of an interactive large-scale circulation. The sensitivity of this transition to the value of mixing entrainment within the convective parameterization is explored. The results from these simulations are compared with those from equivalent simulations using coupled cloud-resolving models. Coupled-column simulations over nonuniform surface forcing are used to initialize the simulations of the transition, in which the column with suppressed convection is forced to undergo a transition to active convection by changing the local and/or remote surface forcings. The direct contributions from the changes in surface forcing are to induce a weakening of the large-scale circulation which systematically modulates the transition. In the SCM, the contributions from the large-scale circulation are dominated by the heating effects, while in the CRM the heating and moistening effects are about equally divided. A transition time is defined as the time when the rain rate in the dry column is halfway to the value at equilibrium after the transition. For the control value of entrainment, the order of the transition times is identical to that obtained in the CRM, but the transition times are markedly faster. The locally forced transition is strongly delayed by a higher entrainment. A consequence is that for a 50% higher entrainment the transition times are reordered. The remotely forced transition remains fast while the locally forced transition becomes slow, compared to the CRM.

  17. GCM simulations of volcanic aerosol forcing. I - Climate changes induced by steady-state perturbations

    NASA Technical Reports Server (NTRS)

    Pollack, James B.; Rind, David; Lacis, Andrew; Hansen, James E.; Sato, Makiko; Ruedy, Reto

    1993-01-01

    The response of the climate system to a temporally and spatially constant amount of volcanic particles is simulated using a general circulation model (GCM). The optical depth of the aerosols is chosen so as to produce approximately the same amount of forcing as results from doubling the present CO2 content of the atmosphere and from the boundary conditions associated with the peak of the last ice age. The climate changes produced by long-term volcanic aerosol forcing are obtained by differencing this simulation and one made for the present climate with no volcanic aerosol forcing. The simulations indicate that a significant cooling of the troposphere and surface can occur at times of closely spaced multiple sulfur-rich volcanic explosions that span time scales of decades to centuries. The steady-state climate response to volcanic forcing includes a large expansion of sea ice, especially in the Southern Hemisphere; a resultant large increase in surface and planetary albedo at high latitudes; and sizable changes in the annually and zonally averaged air temperature.

  18. A consistent S-Adenosylmethionine force field improved by dynamic Hirshfeld-I atomic charges for biomolecular simulation

    NASA Astrophysics Data System (ADS)

    Saez, David Adrian; Vöhringer-Martinez, Esteban

    2015-10-01

    S-Adenosylmethionine (AdoMet) is involved in many biological processes as a cofactor in enzymes that transfer its sulfonium methyl group to various substrates. Additionally, it is used as a drug and nutritional supplement to reduce pain in osteoarthritis and against depression. Due to its biological relevance, AdoMet has been, and will continue to be, the subject of various computational simulation studies. However, to our knowledge, no rigorous force field parameter development for its simulation in biological systems has been reported. Here, we use electronic structure calculations combined with molecular dynamics simulations in explicit solvent to develop force field parameters compatible with the AMBER99 force field. Additionally, we propose new dynamic Hirshfeld-I atomic charges, derived from the polarized electron density of AdoMet in aqueous solution, to describe its electrostatic interactions in biological systems. The validation of the force field parameters and the atomic charges is performed against experimental interproton NOE distances of AdoMet in aqueous solution and crystal structures of AdoMet in the cavities of three representative proteins.

  19. Tackling force-field bias in protein folding simulations: folding of Villin HP35 and Pin WW domains in explicit water.

    PubMed

    Mittal, Jeetain; Best, Robert B

    2010-08-04

    The ability to fold proteins on a computer has highlighted the fact that existing force fields tend to be biased toward a particular type of secondary structure. Consequently, force fields for folding simulations are often chosen according to the native structure, implying that they are not truly "transferable." Here we show that, while the AMBER ff03 potential is known to favor helical structures, a simple correction to the backbone potential (ff03*) results in an unbiased energy function. We take as examples the 35-residue alpha-helical Villin HP35 and 37-residue beta-sheet Pin WW domains, which had not previously been folded with the same force field. Starting from unfolded configurations, simulations of both proteins in Amber ff03* in explicit solvent fold to within 2.0 Å RMSD of the experimental structures. This demonstrates that a simple backbone correction results in a more transferable force field, an important requirement if simulations are to be used to interpret folding mechanisms. 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  20. Direct folding simulation of helical proteins using an effective polarizable bond force field.

    PubMed

    Duan, Lili; Zhu, Tong; Ji, Changge; Zhang, Qinggang; Zhang, John Z H

    2017-06-14

    We report a direct folding study of seven helical proteins, among them Trpcage, C34, and N36, ranging from 17 to 53 amino acids, through standard molecular dynamics simulations using a recently developed polarizable force field, the Effective Polarizable Bond (EPB) method. The backbone RMSDs, radii of gyration, native contacts, and native helix content are in good agreement with the experimental results. Cluster analysis has also verified that the most populated folded structures are in good agreement with the corresponding native structures of these proteins. In addition, the free energy landscapes of the seven proteins in the two-dimensional space of RMSD and radius of gyration show that these folded structures are indeed the lowest-energy conformations. However, when the corresponding simulations were performed using standard (nonpolarizable) AMBER force fields, no stable folded structures were observed for these proteins. Comparison of the simulation results based on the polarizable EPB force field and a nonpolarizable AMBER force field clearly demonstrates the importance of polarization in the folding of stable helical structures.

  1. A novel toolpath force prediction algorithm using CAM volumetric data for optimizing robotic arthroplasty.

    PubMed

    Kianmajd, Babak; Carter, David; Soshi, Masakazu

    2016-10-01

    Robotic total hip arthroplasty is a procedure in which milling operations are performed on the femur to remove material for the insertion of a prosthetic implant. The robot performs the milling operation by following a sequential list of tool motions, also known as a toolpath, generated by computer-aided manufacturing (CAM) software. The purpose of this paper is to describe a new toolpath force prediction algorithm that predicts cutting forces, improving the quality and safety of surgical systems. With a custom macro developed in the CAM system's native application programming interface, the cutting contact-patch volume was extracted from CAM simulations. A time-domain cutting force model was then developed through the use of a cutting force prediction algorithm. The second part of the study validated the algorithm by machining a hip canal in simulated bone using a CNC machine. Average cutting forces were measured during machining using a dynamometer and compared to the values predicted from CAM simulation data using the proposed method. The results showed that the predicted forces matched the measured forces in both magnitude and overall pattern shape, although, due to inconsistent motion control, the time duration of the forces was slightly distorted. Nevertheless, the algorithm effectively predicted the forces throughout an entire hip canal procedure. This method provides a fast and easy technique for predicting cutting forces during orthopedic milling by utilizing data within CAM software.
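
    A volume-based force estimate in the spirit of the paper can be sketched as follows: the contact-patch volume removed per unit time (from CAM data) gives a material removal rate, and a specific cutting force converts it to an average cutting force. The value of k_c for simulated bone and all numbers are assumptions.

        def mean_cutting_force(volume_mm3, dt_s, cutting_speed_mm_s, k_c_N_mm2):
            """Average force from removed volume: F = k_c * (MRR / v_c),
            where MRR/v_c is the effective chip cross-section in mm^2."""
            mrr = volume_mm3 / dt_s                # material removal rate, mm^3/s
            chip_area = mrr / cutting_speed_mm_s   # mm^2
            return k_c_N_mm2 * chip_area           # N

        print(mean_cutting_force(volume_mm3=0.8, dt_s=0.1,
                                 cutting_speed_mm_s=500.0, k_c_N_mm2=60.0))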

  2. Communication System Simulation Workstation

    DTIC Science & Technology

    1990-01-30

    Grant # AFOSR-89-0117. Submitted to: Department of the Air Force, Air Force Office of Scientific Research, Bolling Air Force Base, DC. A sub-band decomposition, PKX, based on the modulation of a single prototype filter, was developed; this technique was first introduced by Nussbaumer.

  3. Molecular dynamics simulations of highly crowded amino acid solutions: comparisons of eight different force field combinations with experiment and with each other

    PubMed Central

    Andrews, Casey T.

    2013-01-01

    Although it is now commonly accepted that the highly crowded conditions encountered inside biological cells have the potential to significantly alter the thermodynamic properties of biomolecules, it is not known to what extent the thermodynamics of fundamental types of interactions such as salt bridges and hydrophobic interactions are strengthened or weakened by high biomolecular concentrations. As one way of addressing this question we have performed a series of all-atom explicit solvent molecular dynamics (MD) simulations to investigate the effect of increasing solute concentration on the behavior of four types of zwitterionic amino acids in aqueous solution. We have simulated systems containing glycine, valine, phenylalanine or asparagine at concentrations of 50, 100, 200 and 300 mg/ml. Each molecular system has been simulated for 1 μs in order to obtain statistically converged estimates of thermodynamic parameters, and each has been conducted with 8 different force fields and water models; the combined simulation time is 128 μs. The density, viscosity, and dielectric increments of the four amino acids calculated from the simulations have been compared to corresponding experimental measurements. While all of the force fields perform well at reproducing the density increments, discrepancies for the viscosity and dielectric increments raise questions both about the accuracy of the simulation force fields and, in certain cases, the experimental data. We also observe large differences between the various force fields' descriptions of the interaction thermodynamics of salt bridges and, surprisingly, these differences also lead to qualitatively different predictions of their dependences on solute concentration. For the aliphatic interactions of valine sidechains, fewer differences are observed between the force fields, but significant differences are again observed for aromatic interactions of phenylalanine sidechains. Taken together, the results highlight the potential power of using explicit-solvent simulation methods to understand behavior in concentrated systems but also hint at potential difficulties in using these methods to obtain consistent views of behavior in intracellular environments. PMID:24409104

  4. Direct dynamics simulation of the impact phase in heel-toe running.

    PubMed

    Gerritsen, K G; van den Bogert, A J; Nigg, B M

    1995-06-01

    The influence of muscle activation, the positions and velocities of body segments at touchdown, and surface properties on impact forces during heel-toe running was investigated using a direct dynamics simulation technique. The runner was represented by a two-dimensional, four-segment (rigid-body) musculo-skeletal model. Incorporated into the muscle model were activation dynamics and the force-length and force-velocity characteristics of seven major muscle groups of the lower extremities: mm. glutei, hamstrings, m. rectus femoris, mm. vasti, m. gastrocnemius, m. soleus and m. tibialis anterior. The vertical force-deformation characteristics of heel, shoe, and ground were modeled by a non-linear visco-elastic element. The maximum of a typical simulated impact force was 1.6 times body weight. The influence of muscle activation was examined by generating muscle stimulation combinations that produce the same (experimentally determined) resultant joint moments at heelstrike. Simulated impact peak forces with these different combinations of muscle stimulation levels varied by less than 10%. Without this restriction on initial joint moments, muscle activation potentially had a much larger effect on impact force. Impact peak force was to a great extent influenced by plantar flexion (85 N per degree of change in foot angle) and the vertical velocity of the heel (212 N per 0.1 m s-1 change in velocity) at touchdown. Initial knee flexion (68 N per degree of change in leg angle) also played a role in the absorption of impact. Increased surface stiffness resulted in higher impact peak forces (60 N mm-1 decrease in deformation).(ABSTRACT TRUNCATED AT 250 WORDS)

  5. Effects of Mach Numbers on Side Force, Yawing Moment and Surface Pressure

    NASA Astrophysics Data System (ADS)

    Sohail, Muhammad Amjad; Muhammad, Zaka; Husain, Mukkarum; Younis, Muhammad Yamin

    2011-09-01

    In this research, CFD simulations are performed for an air vehicle configuration to compute the variation of the side-force and yawing-moment coefficients at high angles of attack and Mach numbers. As the angle of attack increases, lift and drag increase for cylindrical body configurations. When a roll angle is applied to the body, however, a side-force component appears, producing lateral forces and a yawing moment. Advances in CFD methods now make it possible to calculate these forces and moments even at supersonic and hypersonic speeds. In this study, modern CFD techniques are used to simulate hypersonic flow and to calculate the side-force effects and yawing-moment coefficient. Static pressure variations along the circumference and along the length of the body are also calculated, and the pressure coefficient and center of pressure can be accurately predicted. When roll and yaw angles are applied to the body, these forces become very large and can destabilize a missile body with fin configurations, so accurately predicting and simulating them is a demanding and important problem for the stability of supersonic vehicles.
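
    For reference, the reported loads are typically nondimensionalized as below; the reference quantities here are illustrative placeholders, not values from the study.

        def coefficients(F_side, N_yaw, rho, V, S_ref, b_ref):
            """Side-force and yawing-moment coefficients from integrated loads."""
            q = 0.5 * rho * V**2                 # dynamic pressure, Pa
            c_y = F_side / (q * S_ref)           # side-force coefficient
            c_n = N_yaw / (q * S_ref * b_ref)    # yawing-moment coefficient
            return c_y, c_n

        print(coefficients(F_side=150.0, N_yaw=40.0, rho=0.4, V=1000.0,
                           S_ref=0.05, b_ref=0.2))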

  6. Can feedback analysis be used to uncover the physical origin of climate sensitivity and efficacy differences?

    NASA Astrophysics Data System (ADS)

    Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael

    2017-10-01

    Different strengths and types of radiative forcing cause variations in climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculations turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud, and combined water vapour and lapse-rate feedbacks are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) yields a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo, and most likely the cloud feedbacks are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of the feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.

  7. The first effects of fluid inertia on flows in ordered and random arrays of spheres

    NASA Astrophysics Data System (ADS)

    Hill, Reghan J.; Koch, Donald L.; Ladd, Anthony J. C.

    2001-12-01

    Theory and lattice-Boltzmann simulations are used to examine the effects of fluid inertia, at small Reynolds numbers, on flows in simple cubic, face-centred cubic and random arrays of spheres. The drag force on the spheres, and hence the permeability of the arrays, is determined at small but finite Reynolds numbers, at solid volume fractions up to the close-packed limits of the arrays. For small solid volume fraction, the simulations are compared to theory, showing that the first inertial contribution to the drag force, when scaled with the Stokes drag force on a single sphere in an unbounded fluid, is proportional to the square of the Reynolds number. The simulations show that this scaling persists at solid volume fractions up to the close-packed limits of the arrays, and that the first inertial contribution to the drag force relative to the Stokes-flow drag force decreases with increasing solid volume fraction. The temporal evolution of the spatially averaged velocity and the drag force is examined when the fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. Theory for the short- and long-time behaviour is in good agreement with simulations, showing that the unsteady force is dominated by quasi-steady drag and added-mass forces. The short- and long-time added-mass coefficients are obtained from potential-flow and quasi-steady viscous-flow approximations, respectively.

  8. A splitting integration scheme for the SPH simulation of concentrated particle suspensions

    NASA Astrophysics Data System (ADS)

    Bian, Xin; Ellero, Marco

    2014-01-01

    Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty limits severely the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
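
    The implicit pairwise sweep can be illustrated with a one-dimensional toy in which each pair's relative velocity is decayed exactly (implicitly) over the time step, and the update is swept Gauss-Seidel style until convergence. The lubrication law and constants are illustrative assumptions, not the authors' SPH implementation.

        import numpy as np

        def implicit_lubrication_sweeps(x, v, m, dt, eta=1.0, eps=1e-3,
                                        tol=1e-10, max_sweeps=100):
            """Relax stiff pairwise lubrication drag by iterative sweeps."""
            n = len(x)
            for _ in range(max_sweeps):
                dv_max = 0.0
                for i in range(n):
                    for j in range(i + 1, n):
                        gap = max(abs(x[j] - x[i]), eps)           # regularised gap
                        c = eta / gap * (1.0/m[i] + 1.0/m[j])      # pair drag rate
                        vrel = v[j] - v[i]
                        delta = vrel * (np.exp(-c * dt) - 1.0)     # implicit decay
                        v[j] += delta * m[i] / (m[i] + m[j])       # momentum-
                        v[i] -= delta * m[j] / (m[i] + m[j])       # conserving split
                        dv_max = max(dv_max, abs(delta))
                if dv_max < tol:
                    break
            return v

        # e.g. three nearly touching particles approaching one another
        x = np.array([0.0, 1.001e-3, 2.002e-3])
        v = np.array([1.0, 0.0, -1.0])
        print(implicit_lubrication_sweeps(x, v, np.ones(3), dt=1e-4))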

  9. Effect of stern hull shape on turning circle of ships

    NASA Astrophysics Data System (ADS)

    Jaswar, Maimun, A.; Wahid, M. A.; Priyanto, A.; Zamani, Pauzi, Saman

    2012-06-01

    Many factors, such as stern hull shape, length, draught, trim, propulsion system, and the external forces affecting the drift angle, influence the rate of turn and the size of the turning circle of ships. This paper discusses the turning circle characteristics of U- and V-shaped stern hulls of Very Large Crude Carrier (VLCC) ships. The ships have the same principal dimensions: length, beam, and draught. The turning circle characteristics of the VLCC ships are simulated at a rudder angle of 35 degrees. In the analysis, the turning circle performance of the U-type VLCC ship is simulated first. In the simulation, the initial ship speed is determined from the given power and rpm, and the hydrodynamic derivative coefficients are determined by including the effect of the fullness of the aft run. Using the obtained speed and hydrodynamic coefficients, the force and moment acting on the hull, the force and moment induced by the propeller, and the force and moment induced by the rudder are determined. Finally, the ship trajectory, speed ratio, yaw angle, and drift angle are computed. The simulation results for the U-type VLCC ship are compared with experimental data for validation. Using the same method, the V-type VLCC is simulated, and the results are compared with those of the U-type VLCC ship. The results show that the turning circle of the U-type is larger than that of the V-type due to the effect of the stern hull shape.

  10. The Radiative Forcing Model Intercomparison Project (RFMIP): Experimental protocol for CMIP6

    DOE PAGES

    Pincus, Robert; Forster, Piers M.; Stevens, Bjorn

    2016-09-27

    The phrasing of the first of three questions motivating CMIP6 – “How does the Earth system respond to forcing?” – suggests that forcing is always well-known, yet the radiative forcing to which this question refers has historically been uncertain in coordinated experiments even as understanding of how best to infer radiative forcing has evolved. The Radiative Forcing Model Intercomparison Project (RFMIP) endorsed by CMIP6 seeks to provide a foundation for answering the question through three related activities: (i) accurate characterization of the effective radiative forcing relative to a near-preindustrial baseline and careful diagnosis of the components of this forcing; (ii) assessment of the absolute accuracy of clear-sky radiative transfer parameterizations against reference models on the global scales relevant for climate modeling; and (iii) identification of robust model responses to tightly specified aerosol radiative forcing from 1850 to present. Complete characterization of effective radiative forcing can be accomplished with 180 years (Tier 1) of atmosphere-only simulation using a sea-surface temperature and sea ice concentration climatology derived from the host model's preindustrial control simulation. Assessment of parameterization error requires trivial amounts of computation but the development of small amounts of infrastructure: new, spectrally detailed diagnostic output requested as two snapshots at present-day and preindustrial conditions, and results from the model's radiation code applied to specified atmospheric conditions. In conclusion, the search for robust responses to aerosol changes relies on the CMIP6 specification of anthropogenic aerosol properties; models using this specification can contribute to RFMIP with no additional simulation, while those using a full aerosol model are requested to perform at least one and up to four 165-year coupled ocean–atmosphere simulations at Tier 1.

  11. Combining configurational energies and forces for molecular force field optimization

    DOE PAGES

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    2017-07-21

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
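
    How the two data channels enter a single fitting objective can be sketched with a weighted least-squares stand-in; note that the paper itself minimizes a statistical-distance metric rather than this simpler form.

        import numpy as np

        def combined_loss(e_model, f_model, e_ref, f_ref, w_e=1.0, w_f=0.1):
            """Weighted sum of energy and force mismatches over a set of
            configurations; force arrays have shape (n_frames, n_atoms, 3)."""
            loss_e = np.mean((e_model - e_ref) ** 2)
            loss_f = np.mean(np.sum((f_model - f_ref) ** 2, axis=-1))
            return w_e * loss_e + w_f * loss_f

        # e.g. 10 frames of a 5-atom toy system with a slightly "wrong" model
        rng = np.random.default_rng(2)
        e_ref, f_ref = rng.normal(size=10), rng.normal(size=(10, 5, 3))
        print(combined_loss(e_ref + 0.05, f_ref * 0.9, e_ref, f_ref))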

  12. Stability of Granular Packings Jammed under Gravity: Avalanches and Unjamming

    NASA Astrophysics Data System (ADS)

    Merrigan, Carl; Birwa, Sumit; Tewari, Shubha; Chakraborty, Bulbul

    Granular avalanches indicate the sudden destabilization of a jammed state due to a perturbation. We propose that the perturbation needed depends on the entire force network of the jammed configuration. Some networks are stable, while others are fragile, leading to the unpredictability of avalanches. To test this claim, we simulated an ensemble of jammed states in a hopper using LAMMPS. These simulations were motivated by experiments with vibrated hoppers where the unjamming times followed power-law distributions. We compare the force networks for these simulated states with respect to their overall stability. The states are classified by how long they remain stable when subject to continuous vibrations. We characterize the force networks through both their real space geometry and representations in the associated force-tile space, extending this tool to jammed states with body forces. Supported by NSF Grant DMR1409093 and DGE1068620.

  13. Molecular dynamics simulations of polarizable DNA in crystal environment

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Baucom, Jason; Darden, Thomas A.; Sagui, Celeste

    We have investigated the role of the electrostatic description and cell environment in molecular dynamics (MD) simulations of DNA. Multiple unrestrained MD simulations of the DNA duplex d(CCAACGTTGG)2 have been carried out using two different force fields: a traditional description based on atomic point charges and a polarizable force field. For the time scales probed, and given the "right" distribution of divalent ions, the latter performs better than the nonpolarizable force field. In particular, by imposing the experimental unit-cell environment, an initial configuration with ideal B-DNA duplexes in the unit cell acquires sequence-dependent features that very closely resemble the crystallographic ones. Simultaneously, the all-atom root-mean-square coordinate deviation (RMSD) with respect to the crystallographic structure is seen to decay. At later times, the polarizable force field is able to maintain this lower RMSD, while the nonpolarizable force field starts to drift away.

  14. A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Yuska, J. A.

    1972-01-01

    The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.
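
    The multiple-linear-regression step lends itself to a short sketch: fit coefficients that predict a dependent performance variable from the test conditions. The variable names, ranges, and synthetic response below are assumptions, not the thesis data.

    ```python
    # Multiple linear regression for a performance prediction model.
    # Variable names, ranges, and the synthetic response are assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    crossflow = rng.uniform(0, 60, n)     # crossflow velocity
    tip_speed = rng.uniform(200, 400, n)  # corrected tip speed
    alpha = rng.uniform(-5, 15, n)        # wing angle of attack (deg)

    # Hypothetical response standing in for measured axial-force loading
    axial_load = (0.8 + 0.02 * tip_speed - 0.01 * crossflow
                  + 0.005 * alpha + rng.normal(0, 0.05, n))

    X = np.column_stack([np.ones(n), crossflow, tip_speed, alpha])
    coef, *_ = np.linalg.lstsq(X, axial_load, rcond=None)
    print(coef)  # intercept and coefficients of the prediction model equation
    ```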

  17. Toward an Integration of Deep Learning and Neuroscience

    PubMed Central

    Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P.

    2016-01-01

    Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. PMID:27683554
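
    As a purely pedagogical aside on what "brute force optimization of a cost function" means in this setting, the toy below trains a small, uniformly initialized network on XOR by plain gradient descent; it makes no claim about biological learning.

    ```python
    # Toy "brute-force" optimization of a cost function: a two-layer network
    # fit to XOR with plain gradient descent. Pedagogical only.
    import numpy as np

    rng = np.random.default_rng(4)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)              # hidden layer
        p = sigmoid(h @ W2 + b2)              # output
        dp = (p - y) * p * (1 - p)            # gradient of squared-error cost
        dh = (dp @ W2.T) * (1 - h ** 2)       # backpropagated through tanh
        W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(0)
        W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

    print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]
    ```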

  18. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review

    PubMed Central

    Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.

    2015-01-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using substitution only (fractals), permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented in which the permutation phase does or does not depend on the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, the sensitivity of these different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key is assessed. Finally, a comparative discussion of this work versus much recent research, with respect to the generators used, the type of encryption and the analyses performed, is presented to highlight the strengths and added contribution of this paper. PMID:26966561
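
    One of the simplest building blocks surveyed above, a permutation-only phase, can be sketched with Arnold's cat map; the image size and iteration count below are arbitrary illustrative choices.

    ```python
    # Permutation-only scrambling with Arnold's cat map (x, y) -> (x + y, x + 2y) mod N.
    # Image size and iteration count are arbitrary illustrative choices.
    import numpy as np

    def arnold_cat(img, iterations):
        n = img.shape[0]                      # requires a square image
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = img.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            scrambled[(x + y) % n, (x + 2 * y) % n] = out   # forward cat map
            out = scrambled
        return out

    img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
    enc = arnold_cat(img, 10)
    assert not np.array_equal(img, enc)       # pixels rearranged, values unchanged
    # The map is periodic, so iterating far enough returns the original image.
    ```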

  19. GEMINI: a computationally-efficient search engine for large gene expression datasets.

    PubMed

    DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick

    2016-02-24

    Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number, or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an [Formula: see text] expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec compared to a brute-force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
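
    To make the data structure concrete, here is a from-scratch vantage-point tree toy that agrees with brute-force nearest-neighbor search on random "profiles"; it sketches the general technique, not GEMINI's actual implementation.

    ```python
    # Minimal vantage-point tree: exact nearest-neighbor search with
    # triangle-inequality pruning. A toy, not GEMINI's implementation.
    import numpy as np

    class VPTree:
        def __init__(self, points):
            self.point = points[0]            # vantage point
            self.mu = 0.0
            self.inside = self.outside = None
            rest = points[1:]
            if len(rest) == 0:
                return
            d = np.linalg.norm(rest - self.point, axis=1)
            self.mu = np.median(d)            # split radius
            if (d <= self.mu).any():
                self.inside = VPTree(rest[d <= self.mu])
            if (d > self.mu).any():
                self.outside = VPTree(rest[d > self.mu])

        def nearest(self, q, best=None):
            d = np.linalg.norm(q - self.point)
            if best is None or d < best[0]:
                best = (d, self.point)
            near, far = ((self.inside, self.outside) if d <= self.mu
                         else (self.outside, self.inside))
            if near is not None:
                best = near.nearest(q, best)
            if far is not None and abs(d - self.mu) < best[0]:
                best = far.nearest(q, best)   # prune the far side when it cannot win
            return best

    rng = np.random.default_rng(5)
    profiles = rng.normal(size=(10_000, 20))
    q = rng.normal(size=20)
    _, nn = VPTree(profiles).nearest(q)
    brute = profiles[np.argmin(np.linalg.norm(profiles - q, axis=1))]
    assert np.allclose(nn, brute)             # agrees with brute-force search
    ```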

  20. Medical data sheet in safe havens - A tri-layer cryptic solution.

    PubMed

    Praveenkumar, Padmapriya; Amirtharajan, Rengarajan; Thenmozhi, K; Balaguru Rayappan, John Bosco

    2015-07-01

    Secured sharing of the diagnostic reports and scan images of patients among doctors with complementary expertise for collaborative treatment will help to provide maximum care through faster and decisive decisions. In this context, a tri-layer cryptic solution has been proposed and implemented on Digital Imaging and Communications in Medicine (DICOM) images to establish a secured communication for effective referrals among peers without compromising the privacy of patients. In this approach, a blend of three cryptic schemes, namely Latin square image cipher (LSIC), discrete Gould transform (DGT) and Rubik's encryption, has been adopted. Among them, LSIC provides better substitution, confusion and shuffling of the image blocks; DGT incorporates tamper proofing with authentication; and the Rubik's scheme renders a permutation of DICOM image pixels. The developed algorithm has been successfully implemented and tested in both software (MATLAB 7) and hardware (Universal Software Radio Peripheral, USRP) environments. Specifically, the encrypted data were tested by transmitting them through an additive white Gaussian noise (AWGN) channel model. Furthermore, the robustness of the implemented algorithm was validated by employing standard metrics such as the unified average changing intensity (UACI), number of pixels change rate (NPCR), correlation values and histograms. The estimated metrics have also been compared with those of existing methods and dominate in terms of a large key space to defy brute-force attacks, resistance to cropping attacks, strong key sensitivity and a uniform pixel-value distribution after encryption. Copyright © 2015 Elsevier Ltd. All rights reserved.
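
    Two of the metrics cited above are straightforward to state precisely. The sketch below computes NPCR and UACI for 8-bit cipher images, with random arrays standing in for the encryptions of an original and a one-pixel-modified DICOM image.

    ```python
    # NPCR and UACI differential-attack metrics for 8-bit cipher images.
    # Random arrays stand in for two encryptions of nearly identical images.
    import numpy as np

    def npcr(c1, c2):
        """Number of Pixels Change Rate: percentage of differing pixels."""
        return 100.0 * np.mean(c1 != c2)

    def uaci(c1, c2):
        """Unified Average Changing Intensity, normalized to the 8-bit range."""
        return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

    rng = np.random.default_rng(6)
    c1 = rng.integers(0, 256, (512, 512), dtype=np.uint8)
    c2 = rng.integers(0, 256, (512, 512), dtype=np.uint8)
    print(f"NPCR = {npcr(c1, c2):.2f}%  (ideal ~ 99.61%)")
    print(f"UACI = {uaci(c1, c2):.2f}%  (ideal ~ 33.46%)")
    ```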
