Sample records for coordinates brute force

  1. Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung

    2016-07-01

    Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the oldest attack methods, it is still in use. Because stealth attacks that feign regular access may occur, such attacks are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. Then, we analyze the durations of the SSH brute-force attacks that are detected by applying these methods. The results of an analysis of about 10 thousand attack-source IP addresses show that the behaviors of abnormal users mounting SSH brute-force attacks follow the heavy-tailed distribution typical of human dynamics.
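The log-analysis idea in this record can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' detection method; the OpenSSH log format and the flagging threshold are assumptions:

```python
from collections import Counter

def count_failed_logins(log_lines):
    """Tally failed SSH password attempts per source IP.

    Assumes OpenSSH-style auth log lines such as
    'Failed password for root from 203.0.113.5 port 2222 ssh2'.
    """
    counts = Counter()
    for line in log_lines:
        if "Failed password" in line:
            parts = line.split()
            # the token after 'from' is the source IP
            counts[parts[parts.index("from") + 1]] += 1
    return counts

def flag_brute_force(counts, threshold=10):
    """Flag IPs whose failure count reaches a (tunable) threshold."""
    return {ip for ip, n in counts.items() if n >= threshold}
```

A real deployment would also correlate firewall drop events and attack durations, as the record describes.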

  2. Computer Program Development Specification for IDAMST Operational Flight Program Application, Software Type B5. Addendum 1.

    DTIC Science & Technology

    1976-07-30

    Table-of-contents excerpt: 3.1.1.1 Interface Block Diagram; 3.1.1.2 Detailed Interface Definition; 3.1.1.2.1 Subsystems; 3.1.1.2.2 Controls & Displays; ... 3.2.3.2 Navigation Brute Force; 3.2.3.3 Cargo Brute Force; 3.2.3.4 Sensor Brute Force; 3.2.3.5 Controls/Displays Brute Force; 3.2.3.6 ... MIL-STD-1553 Multiplex Data Bus, with the avionic subsystems, flight control system, the controls/displays, engine sensors, and airframe sensors.

  3. Analysis of brute-force break-ins of a palmprint authentication system.

    PubMed

    Kong, Adams W K; Zhang, David; Kamel, Mohamed

    2006-10-01

    Biometric authentication systems are widely applied because they offer inherent advantages over classical knowledge-based and token-based personal-identification approaches. This has led to the development of products using palmprints as biometric traits and their use in several real applications. However, as biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before biometric systems are massively deployed in security systems. This correspondence proposes a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system. To validate the proposed model, we have conducted a simulation. Its results demonstrate that the proposed model can accurately estimate the probability. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks.
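The record's projected multinomial model is specific to palmprint codes, but the headline quantity, the probability that repeated random guesses eventually succeed, has a simple generic closed form. A minimal sketch, with the per-attempt success probability as an assumed input (the toy numbers are not from the paper):

```python
def break_in_probability(p_single, attempts):
    """P(at least one success in `attempts` independent guesses),
    each succeeding with probability p_single: 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p_single) ** attempts

# Toy numbers: a million tries against a one-in-a-billion template
# still leave the attacker with only about a 0.1% break-in chance.
p = break_in_probability(1e-9, 10**6)
```

Estimating the realistic per-attempt probability for a concrete biometric code is exactly what the paper's multinomial model is for.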

  4. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and has access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments on optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes), or color images are used as a source.

  5. Near-Neighbor Algorithms for Processing Bearing Data

    DTIC Science & Technology

    1989-05-10

    Near-neighbor algorithms need not be universally more cost-effective than brute-force methods. While the data-access time of near-neighbor techniques scales better with the number of objects N than brute force does, the cost of setting up the data structure could scale worse. Depending on the particular near-neighbor algorithm, the cost of accessing the near neighbors of each element scales as either N ...
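The brute-force baseline this report weighs near-neighbor structures against is just a linear scan per query. A minimal sketch (illustrative only; the report's bearing-data specifics are not reproduced):

```python
import math

def nearest_brute_force(query, points):
    """O(N) brute-force scan: compare the query against every stored
    point; no setup cost, but every query touches all N points."""
    best, best_d = None, math.inf
    for p in points:
        d = math.dist(query, p)      # Euclidean distance (Python 3.8+)
        if d < best_d:
            best, best_d = p, d
    return best, best_d
```

A near-neighbor structure (grid, k-d tree) answers each query faster but pays a setup cost that, as the report notes, can scale worse than the scan it replaces.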

  6. Vulnerability Analysis of the MAVLink Protocol for Command and Control of Unmanned Aircraft

    DTIC Science & Technology

    2013-03-27

    Using the cheapest computers currently on the market (the $35 Raspberry Pi [New13, Upt13]) to distribute the workload, a determined attacker would incur a ... cost of brute force for 6,318 Raspberry Pi systems at $82 per 3DR-enabled Raspberry Pi [3DR13, New13] to brute-force all 3,790,800 ... [New13] Newark. Order the Raspberry Pi, November 2013. Last accessed: 19 February 2014. URL: http://www.newark.com/jsp/search

  7. Evaluation of simulation alternatives for the brute-force ray-tracing approach used in backlight design

    NASA Astrophysics Data System (ADS)

    Desnijder, Karel; Hanselaer, Peter; Meuret, Youri

    2016-04-01

    A key requirement for obtaining a uniform luminance in a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. Such a scatter pattern is usually generated iteratively. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed, typically with a brute-force ray-tracing algorithm, although this makes the optimisation process time-consuming. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Because light propagating in a backlight with extraction structures resembles light scattering in a cloud of scatterers, the Adding-Doubling method used to model the latter can also model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, the method realises no clear computational advantage over ray tracers. However, the treatment of light propagation in a light guide used by the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.

  8. The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images

    NASA Astrophysics Data System (ADS)

    Fedotova, S.; Seredin, O.; Kushnir, O.

    2017-05-01

    In this paper, we investigate an exact method of searching for the symmetry axis of a binary image, based on a brute-force search among all potential symmetry axes. As the measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of image pixels separated by a candidate axis. The brute-force search reliably finds the axis of approximate symmetry, which can be treated as ground truth, but it requires considerable time per image. As the first part of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications require solving the symmetry-axis search in real or quasi-real time. So, for fast shape-symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested in (Fedotova, 2016). That method takes as its initial axis the one obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, for 70 percent of cases the found axis coincides exactly with the ground-truth one, and for the remaining cases it is very close to it.
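The Jaccard-scored brute-force axis search described here can be sketched for the simplified case of vertical axes only (the paper searches all orientations; this restriction is an assumption for brevity):

```python
import numpy as np

def jaccard(a, b):
    """Set-theoretic Jaccard similarity of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def best_vertical_axis(img):
    """Brute-force search over vertical reflection axes.

    For each candidate axis between columns, mirror one side onto the
    other and score the overlap with Jaccard similarity; the axis with
    the highest score approximates the symmetry axis.
    """
    h, w = img.shape
    best_c, best_s = None, -1.0
    for c in range(1, w):                 # axis between columns c-1, c
        k = min(c, w - c)                 # width present on both sides
        left = img[:, c - k:c]
        right = img[:, c:c + k][:, ::-1]  # mirrored right half
        s = jaccard(left, right)
        if s > best_s:
            best_c, best_s = c, s
    return best_c, best_s
```

The full method additionally rotates the image (or the axis) through a grid of angles, which is what makes the exhaustive search expensive and worth parallelising.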

  9. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. The novel formalism introduced here not only transforms the otherwise hopelessly expensive brute-force combinatorial search into a linear system with integer constraints, but is also readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. Preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  10. How to Run FAST Simulations.

    PubMed

    Zimmerman, M I; Bowman, G R

    2016-01-01

    Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.

  11. Permeation profiles of Antibiotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez Bautista, Cesar Augusto; Gnanakaran, Sandrasegaram

    The presentation describes the motivation: combating inherent bacterial resistance; drug development mainly uses brute force rather than rational design; and current experimental approaches lack molecular detail.

  12. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high-frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably dependent on the reflector geometry; hence, separate analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, the brute-force FFT places virtually no restriction on the reflector geometry.
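The note's central claim, that I(u,v) can be evaluated on a whole (u,v) grid as a single 2D FFT of the sampled aperture field regardless of reflector shape, can be sketched as follows. The uniformly illuminated square aperture is a toy stand-in, not an example from the paper:

```python
import numpy as np

# Sample the (complex) aperture field on a regular grid. A uniformly
# illuminated square aperture stands in for the reflector-dependent
# integrand of I(u, v).
n = 256
field = np.zeros((n, n), dtype=complex)
field[96:160, 96:160] = 1.0                 # 64x64 open square

# Brute-force evaluation of the radiation integral on the whole (u, v)
# grid at once: a single 2D FFT, independent of the aperture shape.
pattern = np.fft.fftshift(np.fft.fft2(field))
power = np.abs(pattern) ** 2
power /= power.max()                        # normalise to boresight
```

Changing the reflector boundary only changes which samples of `field` are nonzero; the FFT step is untouched, which is the geometry-independence the note emphasises.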

  13. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.

    1985-01-01

    Using high-frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably dependent on the reflector geometry; hence, separate analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, the brute-force FFT places virtually no restriction on the reflector geometry.

  14. Shipboard Fluid System Diagnostics Using Non-Intrusive Load Monitoring

    DTIC Science & Technology

    2007-06-01

    MATLAB excerpt: DPP = brute.s(3).data; tDPP = brute.s(3).time; FL = brute.s(4).data; tFL = brute.s(4).time; RM = brute.s(5).data; tRM = brute.s(5).time; DPF = brute.s... xlabel(sprintf(..., max(tP1), files(n).name)); ylabel('Power'); axis tight; grid on; subplot(4,1,2); plot(tDPP, DPP, tDPF, DPF); ylabel('DP Gauges'); axis

  15. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute-force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute-force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^(-120) in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
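A toy version of the underlying search helps to see why brute force fails: choosing, among 2^N flux combinations, the vacuum energy closest to zero is a subset-sum-style search. The sketch below uses illustrative random parameters, not the paper's landscape model, and is feasible only because N is tiny:

```python
import itertools
import random

random.seed(0)
N = 16                                  # toy landscape: 2^16 "vacua"
charges = [random.uniform(0.5, 1.5) for _ in range(N)]
bare = -sum(charges) / 2                # a negative bare contribution

# Exhaustive scan of all 2^N on/off combinations for the total energy
# closest to zero -- the brute-force search that becomes hopeless for
# the 10^9-dimensional landscapes discussed in the record.
best_gap, best_bits = min(
    (abs(bare + sum(itertools.compress(charges, bits))), bits)
    for bits in itertools.product((0, 1), repeat=N)
)
```

Each extra dimension doubles the scan, which is the exponential wall that the paper's smarter algorithms avoid in favourable parameter regimes.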

  16. Grover Search and the No-Signaling Principle

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bouland, Adam; Jordan, Stephen P.

    2016-09-01

    Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
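The Grover bound referenced here is easy to exhibit numerically: a state-vector simulation finds one marked item among N with high probability after about (π/4)√N oracle calls rather than ~N. A plain-numpy sketch, not tied to any quantum SDK:

```python
import math
import numpy as np

def grover_success_probability(n_items, marked, iterations):
    """State-vector simulation of Grover search with one marked item.

    Returns the probability of measuring the marked basis state after
    the given number of oracle + diffusion iterations.
    """
    state = np.full(n_items, 1.0 / math.sqrt(n_items))
    for _ in range(iterations):
        state[marked] *= -1.0                # oracle: phase flip
        state = 2.0 * state.mean() - state   # inversion about the mean
    return float(state[marked] ** 2)

N = 1024
k = round(math.pi / 4 * math.sqrt(N))        # ~25 queries, not ~1024
p_hit = grover_success_probability(N, marked=3, iterations=k)
```

The quadratic (rather than exponential) nature of this speedup is exactly what the paper connects to the no-signaling principle.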

  17. Reconstructing the evolution of first-row transition metal minerals by GeoDeepDive

    NASA Astrophysics Data System (ADS)

    Liu, C.; Peters, S. E.; Ross, I.; Golden, J. J.; Downs, R. T.; Hazen, R. M.

    2016-12-01

    Terrestrial mineralogy evolves as a consequence of a range of physical, chemical, and biological processes [1]. The evolution of the first-row transition metal minerals could mirror the evolution of Earth's oxidation state and life, since these elements are mostly redox-sensitive and/or play critical roles in biology. The fundamental building blocks for reconstructing mineral evolution are the mineral species, locality, and age data, which are typically dispersed in sentences in scientific and technical publications. These data can be tracked down in a brute-force way, i.e., by human retrieval, reading, and recording of all relevant literature. Alternatively, they can be extracted automatically by GeoDeepDive. In GeoDeepDive, scientific and technical articles from publishers including Elsevier, Wiley, USGS, SEPM, GSA, and Canada Science Publishing have been parsed into a JavaScript database with NLP tags. Sentences containing mineral names, locations, and ages can be recognized and extracted by user-developed applications. In a preliminary search for cobalt mineral ages, we successfully extracted 678 citations with >1000 mentions of cobalt minerals, their locations, and ages. The extracted results agree with brute-force search results. Moreover, GeoDeepDive provides 40 additional data points that were not recovered by the brute-force approach. The extracted mineral locality-age data suggest that the evolution of Co minerals is controlled by global supercontinent cycles, i.e., more Co minerals form during episodes of supercontinent assembly. The mineral evolution of other first-row transition elements is being investigated through GeoDeepDive. References: [1] Hazen et al. (2008) Mineral evolution. American Mineralogist, 93, 1693-1720.

  18. Finding All Solutions to the Magic Hexagram

    ERIC Educational Resources Information Center

    Holland, Jason; Karabegov, Alexander

    2008-01-01

    In this article, a systematic approach is given for solving a magic star puzzle that usually is accomplished by trial and error or "brute force." A connection is made to the symmetries of a cube, thus the name Magic Hexahedron.

  19. Probabilistic sampling of protein conformations: new hope for brute force?

    PubMed

    Feldman, Howard J; Hogue, Christopher W V

    2002-01-01

    Protein structure prediction from sequence alone by "brute-force" random methods is a computationally expensive problem. Estimates have suggested that it could take all the computers in the world longer than the age of the universe to compute the structure of a single 200-residue protein. Here we investigate the use of a faster version of our FOLDTRAJ probabilistic all-atom protein-structure-sampling algorithm. We have improved the method so that it is now over twenty times faster than originally reported and capable of rapidly sampling conformational space without lattices. It uses geometrical constraints and a Lennard-Jones-type potential for self-avoidance. We have also implemented a novel method of adding secondary-structure prediction information to produce protein-like amounts of secondary structure in sampled structures. In a set of 100,000 probabilistic conformers each of 1VII, 1ENH, and 1PMC, the structures with the smallest Cα RMSD from native are 3.95, 5.12, and 5.95 Å, respectively. Expanding this test to a set of 17 distinct protein folds, we find that all-helical structures are "hit" by brute force more frequently than beta or mixed structures. For small helical proteins or very small non-helical ones, this approach should yield a "hit" close enough to be detected by a good scoring function in a pool of several million conformers. By fitting the distribution of RMSDs from the native state of each of the 17 sets of conformers to the extreme value distribution, we are able to estimate the size of conformational space for each. With a 0.5 Å RMSD cutoff, the number of conformers is roughly 2^N, where N is the number of residues in the protein. This is smaller than previous estimates, indicating an average of only two possible conformations per residue once sterics are accounted for. Our method reduces the effective number of conformations available at each residue by probabilistic bias, without requiring any particular discretization of residue conformational space, and is the fastest method of its kind. With computer speeds doubling every 18 months and parallel and distributed computing becoming more practical, the brute-force approach to protein structure prediction may yet have some hope in the near future. Copyright 2001 Wiley-Liss, Inc.

  20. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled parameter values to be estimated by sheer numerical power, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advances have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system frequently consists of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated on two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the method are discussed, in terms of the possible power of 'Bruno force' for opening a new horizon in parameter estimation.

  1. Brute-Force Approach for Mass Spectrometry-Based Variant Peptide Identification in Proteogenomics without Personalized Genomic Data

    NASA Astrophysics Data System (ADS)

    Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.

    2018-02-01

    In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such a variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. We also propose a de novo based scoring scheme for assessing identified point mutations, in which the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.

  2. Image matching algorithms for breech face marks and firing pins in a database of spent cartridge cases of firearms.

    PubMed

    Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F

    2001-06-01

    On the market, several systems exist for collecting spent-ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to those on another cartridge case. The research in this paper focuses on the different methods of feature selection and pattern recognition that can be used to optimize the results of image matching. The images are acquired with side light for the breech-face marks and with ring light for the firing-pin impression. For these images a standard way of digitizing is used: for both the side-light and ring-light images, the user has to position the cartridge case in the same position according to a protocol. The positioning is important for the side light, since the image obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, it appears that the user positions the cartridge case with +/-10 degrees accuracy. We tested our algorithms using 49 cartridge cases from 19 different firearms, where the examiner had determined that they were shot with the same firearm. For testing, these images were mixed with a database of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we implemented a "brute-force" registration: the images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position, because shadows and highlights are compared in intensity.
    Since the angle of incidence of the light gives a different intensity profile, this method is not optimal, and preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force: matching the contents of the images is less sensitive to variation in the lighting. The problem with the brute-force method, however, is that comparing the 49 cartridge cases among themselves takes over one month of computing time on a 333-MHz Pentium II computer. For this reason a faster approach was implemented: correlation in log-polar coordinates. This gave results similar to the brute-force calculation, but was computed in 24 h for a complete database of 4900 images. A fast pre-selection method based on signatures was also carried out, based on the Kanade-Lucas-Tomasi (KLT) equation; the positions of the points computed with this method are compared. In this way, 11 of the 49 images were in the top position in combination with the third scale of the à trous equation. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized to compare in seconds if the locations of the points are stored in files. For further improvement, it is useful to add a refinement in which the user selects the areas of the cartridge case that are relevant for their marks. This is necessary if the cartridge case is damaged and bears other marks that are not from the firearm.
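The brute-force registration described in this record, scanning poses and keeping the one that minimizes the standard deviation of the gray-level difference, can be sketched for integer translations only (rotations, which the paper also scans, are omitted; `np.roll` wraps at the borders, whereas a real implementation would crop to the overlap):

```python
import numpy as np

def register_brute_force(ref, img, max_shift=5):
    """Brute-force registration over integer (dy, dx) shifts.

    Each candidate shift is scored by the standard deviation of the
    gray-level difference (the matching criterion described above);
    the shift with the lowest score wins.
    """
    best = (0, 0, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = float(np.std(ref - shifted))
            if score < best[2]:
                best = (dy, dx, score)
    return best

# Toy check: a cyclically shifted copy should be recovered exactly.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32)).astype(float)
img = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)
dy, dx, score = register_brute_force(ref, img)
```

Adding a rotation loop multiplies the pose grid, which is why the paper's full scan took a month and motivated the log-polar correlation shortcut.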

  3. Studies on a Spatialized Audio Interface for Sonar

    DTIC Science & Technology

    2011-10-03

    The addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be ... than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of a ...

  4. Examining single-source secondary impacts estimated from brute-force, decoupled direct method, and advanced plume treatment approaches

    EPA Science Inventory

    In regulatory assessments, there is a need for reliable estimates of the impacts of precursor emissions from individual sources on secondary PM2.5 (particulate matter with aerodynamic diameter less than 2.5 microns) and ozone. Three potential methods for estimating th...

  5. The End of Flat Earth Economics & the Transition to Renewable Resource Societies.

    ERIC Educational Resources Information Center

    Henderson, Hazel

    1978-01-01

    A post-industrial revolution is predicted for the future, with an accompanying shift of focus from simple, brute-force technologies, based on cheap, accessible resources and energy, to a second generation of more subtle, refined technologies grounded in a much deeper understanding of biological and ecological realities. (Author/BB)

  6. Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology

    PubMed Central

    Palaparthi, Anil; Riede, Tobias

    2017-01-01

    Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
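The brute-force counterpart the authors validate against amounts to enumerating decision variables on a grid, evaluating the objectives, and keeping the non-dominated set. A minimal sketch with toy objectives (not the vocal-fold model; the grid and objective functions are assumptions for illustration):

```python
def pareto_front(points):
    """Return the non-dominated points (minimising every objective):
    p survives unless some other point q is <= p in all objectives."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Brute-force scan of a 1-D decision variable with two toy objectives:
# f1 = x^2 pulls x toward 0, f2 = (x - 2)^2 pulls x toward 2, so the
# Pareto set is exactly the trade-off interval 0 <= x <= 2.
grid = [i / 4 for i in range(-8, 17)]
objs = [(x ** 2, (x - 2) ** 2) for x in grid]
front = pareto_front(objs)
```

NSGA-II reaches a comparable front without exhausting the grid, which is the resolution-versus-cost advantage the abstract reports.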

  7. Nuclear spin imaging with hyperpolarized nuclei created by brute force method

    NASA Astrophysics Data System (ADS)

    Tanaka, Masayoshi; Kunimatsu, Takayuki; Fujiwara, Mamoru; Kohri, Hideki; Ohta, Takeshi; Utsuro, Masahiko; Yosoi, Masaru; Ono, Satoshi; Fukuda, Kohji; Takamatsu, Kunihiko; Ueda, Kunihiro; Didelez, Jean-P.; Prossati, Giorgio; de Waard, Arlette

    2011-05-01

    We have been developing a polarized HD target for particle physics at SPring-8 under the leadership of the RCNP, Osaka University, for the past 5 years. Nuclear polarization is created by means of the brute-force method, which uses a high magnetic field (~17 T) and a low temperature (~10 mK). As one of the promising applications of the brute-force method to the life sciences, we started a new project, "NSI" (Nuclear Spin Imaging), in which hyperpolarized nuclei are used for MRI (Magnetic Resonance Imaging). The candidate spin-½ nuclei are 3He, 13C, 15N, 19F, 29Si, and 31P, which are important elements in the composition of biomolecules. Since the NMR signals from these isotopes are enhanced by orders of magnitude, the spatial resolution of the imaging would be much improved compared to the MRI in practical use so far. Another advantage of hyperpolarized MRI is that it is essentially free of radiation, whereas the radiation exposure caused by X-ray CT or PET (Positron Emission Tomography) cannot be neglected; in fact, the risk of cancer for Japanese patients due to radiation exposure through these diagnoses is exceptionally high among the advanced countries. As the first step of the NSI project, we are developing a system to produce hyperpolarized 3He gas for the diagnosis of serious lung diseases, for example COPD (Chronic Obstructive Pulmonary Disease). The system employs the same 3He/4He dilution refrigerator and superconducting solenoid coil as those used for the polarized HD target, with some modification allowing 3He Pomeranchuk cooling followed by rapid melting of the polarized solid 3He to avoid depolarization. In this report, the present and future steps of our project are outlined with some of the latest experimental results.
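The "brute force" of this record is thermal-equilibrium polarization in a high field at low temperature; for a spin-½ nucleus it follows P = tanh(μB / k_B T). A sketch under the conditions quoted in the abstract, with the 3He magnetic moment as an assumed input (not stated in the record):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
MU_N = 5.0507837e-27    # nuclear magneton, J/T

def brute_force_polarization(mu, b_field, temperature):
    """Equilibrium spin-1/2 polarization: P = tanh(mu*B / (k_B*T))."""
    return math.tanh(mu * b_field / (K_B * temperature))

# Conditions quoted in the record (~17 T, ~10 mK); a 3He moment of
# about 2.128 nuclear magnetons is an assumed input here.
p_he3 = brute_force_polarization(2.128 * MU_N, 17.0, 0.010)
```

The formula makes the method's logic explicit: polarization grows with B/T, which is why the combination of a ~17 T magnet and millikelvin cooling is required.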

  8. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. 
FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
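The grid-translation component of the procedure can be sketched as follows (a minimal 2D illustration under assumed helper names, not the authors' pattern-search code):

```python
import itertools

def box_count(points, box_size, offset=(0.0, 0.0)):
    """Count occupied boxes of a square grid with the given origin offset."""
    return len({(int((x - offset[0]) // box_size),
                 int((y - offset[1]) // box_size)) for x, y in points})

def min_box_count(points, box_size, n_offsets=4):
    """Brute-force translation: take the minimum count over a grid of
    offsets, reducing the strictly positive quantization error (QE)."""
    steps = [i * box_size / n_offsets for i in range(n_offsets)]
    return min(box_count(points, box_size, (dx, dy))
               for dx, dy in itertools.product(steps, steps))

# FD is then estimated from the slope of log(count) vs. log(1/box_size).
```

Pattern search replaces the exhaustive offset loop with a local search over offsets, converging on low QE with far fewer box-count evaluations.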

  9. Social Epistemology, the Reason of "Reason" and the Curriculum Studies

    ERIC Educational Resources Information Center

    Popkewitz, Thomas S.

    2014-01-01

Notwithstanding the current topoi of the Knowledge Society, a particular "fact" of modernity is that power is exercised less through brute force and more through systems of reason that order and classify what is known and acted on. This article explored the system of reason that orders and classifies what is talked about, thought and…

  10. Managing conflicts in systems development.

    PubMed

    Barnett, E

    1997-05-01

    Conflict in systems development is nothing new. It can vary in intensity, but there will always be two possible outcomes--one constructive and the other destructive. The common approach to conflict management is to draw the battle lines and apply brute force. However, there are other ways to deal with conflict that are more effective and more people oriented.

  11. Code White: A Signed Code Protection Mechanism for Smartphones

    DTIC Science & Technology

    2010-09-01

analogous to computer security is the use of antivirus (AV) software. AV software is a brute force approach to security. The software ...these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not...

  12. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation.

    PubMed

    Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D

    2018-02-22

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
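A minimal sketch of a brute-force contiguous sub-band search (a simplified single-metric stand-in for the SS-BFE family; `optimal_subband` is a hypothetical name, and the actual SAS algorithms combine several metrics):

```python
def optimal_subband(psd, width):
    """Brute-force search over all contiguous sub-bands of `width` bins,
    choosing the one with the least total interference power.
    `psd` is a list of per-bin power values (linear units)."""
    best_start, best_power = 0, float("inf")
    for start in range(len(psd) - width + 1):
        p = sum(psd[start:start + width])
        if p < best_power:
            best_start, best_power = start, p
    return best_start, best_power
```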

  13. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation

    PubMed Central

    Pooler, Richard K.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.

    2018-01-01

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE). PMID:29470448

  14. The United States and India in the Post-Soviet World: Proceedings of the Indo-U.S. Strategic Symposium

    DTIC Science & Technology

    1993-04-23

mechanisms that take into account this new reality. TERRORISM Lastly is the question of terrorism. There can be no two opinions on this most heinous crime ...the notion of an empire "essentially based on force" that had to be maintained, if necessary, "by brute force" see Suhash Chakravarty, The Raj Syndrome ...over power to the National League for Democracy (NLD) led by Aung San Suu Kyi, the daughter of Burma’s independence leader, Aung San. Since then, the

  15. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    COPs. 15. SUBJECT TERMS high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction 16. SECURITY CLASSIFICATION OF: 17...Optimization Problems 1 2.1 Constraint Satisfaction Problems 1 2.2 Constraint Optimization Problems 3 3. Constraint Optimization Algorithms 9 3.1...Constraint Satisfaction Algorithms 9 3.1.1 Brute-Force search 9 3.1.2 Constraint Propagation 10 3.1.3 Depth-First Search 13 3.1.4 Local Search 18
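The brute-force constraint-satisfaction baseline listed above can be sketched in a few lines (a generic illustration, not drawn from the report):

```python
import itertools

def brute_force_csp(domains, constraints):
    """Enumerate every assignment in the Cartesian product of variable
    domains; return the first one satisfying all constraints.
    `constraints` is a list of predicates over the assignment tuple."""
    for assignment in itertools.product(*domains):
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Example: three variables over {0, 1, 2}, all different, first < last.
solution = brute_force_csp(
    [range(3)] * 3,
    [lambda a: len(set(a)) == 3, lambda a: a[0] < a[2]],
)
```

Constraint propagation and depth-first search, also surveyed in the report, prune this exponential enumeration.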

  16. Strategic Studies Quarterly. Volume 9, Number 2. Summer 2015

    DTIC Science & Technology

    2015-01-01

disrupting financial markets. Among other indicators, China’s already deployed and future Type 094 Jin-class nuclear ballistic missile submarines (SSBN...on agility instead of brute force reinforces traditional Chinese military thinking. Since Sun Tzu, the acme of skill has been winning without... mechanical (both political and technical) nature of digital developments. Given this, the nature of system constraints under a different future

  17. Portable Language-Independent Adaptive Translation from OCR. Phase 1

    DTIC Science & Technology

    2009-04-01

including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allows...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
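A brute-force kNN baseline of the kind mentioned can be sketched as follows (illustrative only; the report's classifier and its hashed k-d trees are not reproduced here):

```python
import heapq

def knn_brute_force(train, query, k):
    """Exact k-nearest neighbours by scanning every training point
    (squared Euclidean distance). k-d trees answer the same query much
    faster on low-dimensional data by pruning whole subtrees."""
    dists = ((sum((a - b) ** 2 for a, b in zip(x, query)), label)
             for x, label in train)
    return [label for _, label in heapq.nsmallest(k, dists)]
```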

  18. CAD/CAM Helps Build Better Bots: High-Tech Design and Manufacture Draws Engineering-Oriented Students

    ERIC Educational Resources Information Center

    Van Name, Barry

    2012-01-01

    There is a battlefield where no quarter is given, no mercy shown, but not a single drop of blood is spilled. It is an arena that witnesses the bringing together of high-tech design and manufacture with the outpouring of brute force, under the remotely accessed command of some of today's brightest students. This is the world of battling robots, or…

  19. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M×N, is given by r = Σ_m Σ_n (A_mn...correlation based method by matching features in a high-dimensional feature-space. The current implementation of the SIFT algorithm uses a brute-force...by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale
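The truncated formula is the standard two-dimensional correlation coefficient; a reference implementation might look like this (hypothetical function name):

```python
from math import sqrt

def corr2(A, B):
    """Two-dimensional correlation coefficient of equal-size matrices:
    r = sum_mn (A_mn - mean(A)) (B_mn - mean(B))
        / sqrt(sum_mn (A_mn - mean(A))^2 * sum_mn (B_mn - mean(B))^2)"""
    flat_a = [v for row in A for v in row]
    flat_b = [v for row in B for v in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((a - ma) * (b - mb) for a, b in zip(flat_a, flat_b))
    den = sqrt(sum((a - ma) ** 2 for a in flat_a)
               * sum((b - mb) ** 2 for b in flat_b))
    return num / den
```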

  20. B* Probability Based Search

    DTIC Science & Technology

    1994-06-27

success. The key ideas behind the algorithm are: 1. Stopping when one alternative is clearly better than all the others, and 2. Focusing the search on...search algorithm has been implemented on the chess machine Hitech. En route we have developed effective techniques for: * Dealing with independence of...report describes the implementation, and the results of tests including games played against brute-force programs. The data indicate that B* Hitech is a

  1. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
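The selection criterion can be illustrated with a brute-force scoring pass (hypothetical names; nested entropy sampling itself reaches the choice without scoring every candidate):

```python
from math import log

def shannon_entropy(probs):
    """Shannon entropy (nats) of a discrete outcome distribution."""
    return -sum(p * log(p) for p in probs if p > 0)

def most_informative(experiments, predict):
    """Inquiry step: score each candidate experiment by the Shannon
    entropy of its predicted outcome distribution and pick the maximum.
    `predict(e)` returns outcome probabilities under the current models."""
    return max(experiments, key=lambda e: shannon_entropy(predict(e)))
```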

  2. Free Energy Computations by Minimization of Kullback-Leibler Divergence: An Efficient Adaptive Biasing Potential Method for Sparse Representations

    DTIC Science & Technology

    2011-10-14

landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy...experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on

  3. From Coercion to Brute Force: Exploring the Evolution and Consequences of the Responsibility to Protect

    DTIC Science & Technology

    2016-05-26

to Protect...MAJ Ashley E. Welte...III, COL, IN. Accepted this 26th day of May 2016 by Robert F. Baumann, PhD, Director, Graduate Degree Programs. The...copyright permission has been obtained for the inclusion of pictures, maps, graphics, and any other works incorporated into this manuscript. A work of the

  4. Quaternion normalization in additive EKF for spacecraft attitude determination. [Extended Kalman Filters

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter to spacecraft attitude determination based on vector measurements. Three new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all four schemes.
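The "brute force" scheme referred to is, conventionally, renormalization by the quaternion's Euclidean norm; a minimal sketch:

```python
from math import sqrt

def normalize_quaternion(q):
    """Brute-force normalization: divide the estimated quaternion by its
    Euclidean norm so it again represents a valid rotation (unit norm)."""
    n = sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)
```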

  5. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    NASA Astrophysics Data System (ADS)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
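The nonlinearity argument can be reproduced with a toy model (entirely hypothetical coefficients, for illustration only): with an interaction term, brute-force impacts do not sum to the total concentration, so they cannot be read as contributions.

```python
def concentration(e1, e2):
    # Hypothetical nonlinear chemistry: an interaction term couples sources.
    return 2.0 * e1 + 1.0 * e2 + 0.5 * e1 * e2

base = concentration(1.0, 1.0)             # 3.5
impact_1 = base - concentration(0.0, 1.0)  # brute-force: zero out source 1
impact_2 = base - concentration(1.0, 0.0)  # brute-force: zero out source 2
# impact_1 + impact_2 = 4.0 != 3.5: the interaction term is counted in
# both impacts, so "impacts" cannot be read as "contributions".
```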

  6. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D. D.

    2018-05-01

Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity in reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the targeted object to be scanned is simplified into discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. The efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search within the entire workspace. The experimental environments in this paper were simulated from two buildings on the University of Calgary campus. Compared with the "brute force" strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70 % time saving.
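The set-cover formulation admits a simple greedy sketch (an illustration of the problem structure, not the authors' hierarchical densification strategy):

```python
def greedy_viewpoints(visibility, n_segments):
    """Greedy cover: repeatedly add the candidate viewpoint that sees the
    most still-uncovered wall segments. `visibility[i]` is the set of
    segment indices visible from viewpoint i. A brute-force search over
    all viewpoint subsets is exponential; greedy is a standard
    polynomial-time approximation for this set-cover formulation."""
    uncovered, chosen = set(range(n_segments)), []
    while uncovered:
        best = max(range(len(visibility)),
                   key=lambda i: len(visibility[i] & uncovered))
        if not visibility[best] & uncovered:
            break  # remaining segments are visible from no viewpoint
        chosen.append(best)
        uncovered -= visibility[best]
    return chosen
```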

  7. Brute force absorption contrast microtomography

    NASA Astrophysics Data System (ADS)

    Davis, Graham R.; Mills, David

    2014-09-01

    In laboratory X-ray microtomography (XMT) systems, the signal-to-noise ratio (SNR) is typically determined by the X-ray exposure due to the low flux associated with microfocus X-ray tubes. As the exposure time is increased, the SNR improves up to a point where other sources of variability dominate, such as differences in the sensitivities of adjacent X-ray detector elements. Linear time-delay integration (TDI) readout averages out detector sensitivities on the critical horizontal direction and equiangular TDI also averages out the X-ray field. This allows the SNR to be increased further with increasing exposure. This has been used in dentistry to great effect, allowing subtle variations in dentine mineralisation to be visualised in 3 dimensions. It has also been used to detect ink in ancient parchments that are too damaged to physically unroll. If sufficient contrast between the ink and parchment exists, it is possible to virtually unroll the tomographic image of the scroll in order that the text can be read. Following on from this work, a feasibility test was carried out to determine if it might be possible to recover images from decaying film reels. A successful attempt was made to re-create a short film sequence from a rolled length of 16mm film using XMT. However, the "brute force" method of scaling this up to allow an entire film reel to be imaged presents a significant challenge.

  8. Nonconservative dynamics in long atomic wires

    NASA Astrophysics Data System (ADS)

    Cunningham, Brian; Todorov, Tchavdar N.; Dundas, Daniel

    2014-09-01

    The effect of nonconservative current-induced forces on the ions in a defect-free metallic nanowire is investigated using both steady-state calculations and dynamical simulations. Nonconservative forces were found to have a major influence on the ion dynamics in these systems, but their role in increasing the kinetic energy of the ions decreases with increasing system length. The results illustrate the importance of nonconservative effects in short nanowires and the scaling of these effects with system size. The dependence on bias and ion mass can be understood with the help of a simple pen and paper model. This material highlights the benefit of simple preliminary steady-state calculations in anticipating aspects of brute-force dynamical simulations, and provides rule of thumb criteria for the design of stable quantum wires.

  9. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a ``brute force'' approach.

  10. Making Classical Ground State Spin Computing Fault-Tolerant

    DTIC Science & Technology

    2010-06-24

approaches to perebor (brute-force searches) algorithms,” IEEE Annals of the History of Computing, 6, 384–400 (1984). [24] D. Bacon and S. T. Flammia, “Adiabatic gate teleportation,” Phys. Rev. Lett., 103, 120504 (2009). [25] D. Bacon and S. T. Flammia, “Adiabatic cluster state quantum computing...

  11. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.
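A brute-force starting-point search over a two-parameter merit function might be sketched as follows (hypothetical function name; real illumination merit functions are ray-trace based):

```python
def brute_force_minimize(merit, ranges, steps=20):
    """Exhaustive grid search of a two-parameter merit function. A dense
    grid gives a robust (if slow) starting point that a local optimizer
    such as Downhill Simplex can then refine."""
    (x0, x1), (y0, y1) = ranges
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = x0 + (x1 - x0) * i / steps
            y = y0 + (y1 - y0) * j / steps
            value = merit(x, y)
            if best is None or value < best[0]:
                best = (value, x, y)
    return best
```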

  12. TEAM: efficient two-locus epistasis tests in human genome-wide association study.

    PubMed

    Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei

    2010-06-15

As a promising tool for identifying genetic markers underlying phenotypic differences, the genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable to single-locus study, since many diseases are known to be complex traits. A brute force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotype may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate control. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and that it achieves at least an order of magnitude speedup over the brute force approach.
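The brute-force baseline TEAM improves on can be sketched as follows (illustrative names; TEAM's minimum-spanning-tree updates avoid rebuilding each table from scratch):

```python
from itertools import combinations

def pair_contingency(genotypes, phenotype, i, j):
    """Contingency table for markers i and j against a binary phenotype.
    Genotype codes 0/1/2 give a 3 x 3 x 2 table. The brute-force scan
    below rebuilds it from scratch for each of the m*(m-1)/2 pairs,
    which is exactly the work TEAM's incremental updates avoid."""
    table = {}
    for row, y in zip(genotypes, phenotype):
        key = (row[i], row[j], y)
        table[key] = table.get(key, 0) + 1
    return table

def brute_force_scan(genotypes, phenotype, statistic):
    """Apply a contingency-table statistic to every marker pair."""
    m = len(genotypes[0])
    return {(i, j): statistic(pair_contingency(genotypes, phenotype, i, j))
            for i, j in combinations(range(m), 2)}
```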

  13. Human problem solving performance in a fault diagnosis task

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1978-01-01

    It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.

  14. The Trailwatcher: A Collection of Colonel Mike Malone’s Writings

    DTIC Science & Technology

    1982-06-21

washtub-sized turtle is boat Stand reaches but more brute force. the six eases its noose’s head and neck. As the noose, the, short on... nebulous term for who would that?" I saw a functions: was constrain them to work on what to be down here won’t like range cards that any told me...the process never ceases. me on now our factor: motion. What motivates a of books that have been written on motivation handle on this nebulous term

  15. KSC00pp1574

    NASA Image and Video Library

    2000-09-21

Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer workstations, mobile telephones and a fax machine, and can generate its own power with an onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station

  16. KSC-00pp1574

    NASA Image and Video Library

    2000-09-21

Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer workstations, mobile telephones and a fax machine, and can generate its own power with an onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station

  17. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
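The brute-force check described can be sketched generically (hypothetical helper; central differences assumed, one pair of analyses per design parameter):

```python
def finite_difference_gradient(f, x, h=1e-6):
    """Brute-force central-difference gradient used to validate
    analytically derived sensitivities (e.g. GSE-based gradients)."""
    grad = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad
```

Each parameter costs two full function evaluations, which is why the GSE/modal approach wins once two or more design parameters are involved.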

  18. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro

    2016-08-15

Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of the atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured on CT images. A brute force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.

  19. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, in which all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
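The brute-force SDH baseline against which the tree-based algorithms are compared can be sketched as follows (hypothetical function name):

```python
from itertools import combinations
from math import dist

def sdh_brute_force(points, bucket_width, n_buckets):
    """Brute-force spatial distance histogram: compute all O(n^2)
    pairwise distances and bucket them. Tree-based SDH algorithms
    resolve many pairs per node visit instead of one at a time."""
    hist = [0] * n_buckets
    for p, q in combinations(points, 2):
        b = min(int(dist(p, q) // bucket_width), n_buckets - 1)
        hist[b] += 1
    return hist
```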

  20. Security enhanced BioEncoding for protecting iris codes

    NASA Astrophysics Data System (ADS)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

    Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Moreover, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attack. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  1. Password Cracking Using Sony Playstations

    NASA Astrophysics Data System (ADS)

    Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet

    Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
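
    The scale of the computational problem motivating GPU- and PS3-based cracking is easy to quantify: the keyspace grows exponentially with password length. A back-of-the-envelope sketch, where the guess rate is an invented figure, not a PS3 benchmark from the paper:

```python
# Worst-case brute-force search time: keyspace A**L divided by the
# guess rate. The 10 M guesses/s figure is illustrative only.
def worst_case_seconds(alphabet_size, length, guesses_per_second):
    return alphabet_size ** length / guesses_per_second

# 8-character password over the 95 printable ASCII characters
secs = worst_case_seconds(95, 8, 10_000_000)
print(secs / 86400 / 365, "years")  # roughly two decades at this rate
```

    Doubling the hardware halves this figure, which is why the distributed framework described in the paper scales by simply adding consoles.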

  2. DynaGuard: Armoring Canary-Based Protections against Brute-Force Attacks

    DTIC Science & Technology

    2015-12-11

    public domain. Non-exclusive copying or redistribution is... [Figure text, garbled in extraction: slowdown measured across SPEC CPU2006 benchmarks (456.hmmer, 458.sjeng, 462.libquantum, 464.h264ref, 471.omnetpp, 473.astar, 483.xalancbmk) and server workloads (Apache, Nginx, PostgreSQL, SQLite, MySQL).]

  3. The new Mobile Command Center at KSC is important addition to emergency preparedness

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  4. Shortest path problem on a grid network with unordered intermediate points

    NASA Astrophysics Data System (ADS)

    Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen

    2017-10-01

    We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
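
    The brute-force reference the heuristic is compared against enumerates every visiting order of the intermediate points. A minimal sketch, under the simplifying assumption of an obstacle-free grid (so each leg reduces to Manhattan distance, which does not hold on maps with blocked cells):

```python
# Brute-force baseline for unordered intermediate points: try every
# permutation of the stops and keep the cheapest start -> goal route.
# Coordinates are illustrative; legs use Manhattan distance.
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def brute_force_route(start, goal, intermediates):
    best_cost, best_order = float("inf"), None
    for order in permutations(intermediates):
        stops = [start, *order, goal]
        cost = sum(manhattan(p, q) for p, q in zip(stops, stops[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_cost, best_order

cost, order = brute_force_route((0, 0), (5, 5), [(4, 1), (1, 4), (3, 3)])
print(cost, order)
```

    With k intermediate points this costs k! route evaluations, which is exactly why a heuristic is needed as k grows.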

  5. Birefringence study on 3-C/2-D: Barinas Basin (Venezuela)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donati, M.S.; Brown, R.J.

    1995-12-31

    P-SV data from the Barinas Basin (Venezuela) were processed with the goal of estimating the birefringence effect caused by an anisotropic layer. The target zone is a fractured carbonate reservoir at 3,000 m located in southwestern Venezuela. The time-lag between fast and slow S-waves (S-wave splitting), and the angle between line azimuth and the orientation of the natural coordinates, are determined using the Harrison rotation method, based upon a modeling of the crosscorrelation function between rotated radial and transverse field components. Due to the small statics observed on the brute stacks of the radial and transverse components, the time-shift could be associated with splitting effects due to the carbonate reservoir in this area.

  6. "The Et Tu Brute Complex" Compulsive Self Betrayal

    ERIC Educational Resources Information Center

    Antus, Robert Lawrence

    2006-01-01

    In this article, the author discusses "The Et Tu Brute Complex." More specifically, this phenomenon occurs when a person, instead of supporting and befriending himself, orally condemns himself in front of other people and becomes his own worst enemy. This is a form of compulsive self-hatred. Most often, the victim of this complex is unaware of the…

  7. Virtual ellipsometry on layered micro-facet surfaces.

    PubMed

    Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas

    2017-09-18

    Microfacet-based BRDF models are a common tool to describe light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology, and present a polarisation-aware brute force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to try to make these approximations more accurate, and to better understand the behaviour of existing analytical models. However, these brute force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified some other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.
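
    The polarisation effect at issue can be illustrated with the textbook Fresnel equations: away from normal incidence, the s- and p-polarised reflectances of a smooth dielectric interface diverge sharply, so a simulation that averages them before tracing further bounces through a layered surface loses information. A sketch with illustrative refractive indices:

```python
# Fresnel power reflectance for s- and p-polarised light at a smooth
# dielectric interface (standard textbook formulas). The split between
# rs and rp is what an unpolarised ("pre-averaged") simulation discards.
import math

def fresnel_rs_rp(n1, n2, theta_i):
    sin_t = n1 / n2 * math.sin(theta_i)
    theta_t = math.asin(sin_t)          # assumes no total internal reflection
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return rs, rp

rs, rp = fresnel_rs_rp(1.0, 1.5, math.radians(60))
print(rs, rp, (rs + rp) / 2)  # the average hides a large rs/rp split
```

    At 60° incidence on an air-to-glass interface, rs is roughly two orders of magnitude larger than rp; after multiple layered bounces such asymmetries compound, which is consistent with the mis-estimates the paper reports.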

  8. The new Mobile Command Center at KSC is important addition to emergency preparedness

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed “The Brute,” serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  9. A nonperturbative approximation for the moderate Reynolds number Navier–Stokes equations

    PubMed Central

    Roper, Marcus; Brenner, Michael P.

    2009-01-01

    The nonlinearity of the Navier–Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier–Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes. PMID:19211800

  10. A nonperturbative approximation for the moderate Reynolds number Navier-Stokes equations.

    PubMed

    Roper, Marcus; Brenner, Michael P

    2009-03-03

    The nonlinearity of the Navier-Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier-Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes.

  11. KSC-00pp1572

    NASA Image and Video Library

    2000-09-21

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed “The Brute,” serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  12. KSC00pp1572

    NASA Image and Video Library

    2000-09-21

    Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed “The Brute,” serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.

  13. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach would require a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
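
    A rough sample-size argument shows why the brute-force route is so expensive. This is a generic frequentist bound, not the paper's Bayesian calculation: even in the best case of observing zero impacts, demonstrating an impact probability below 10^-4 at 95% confidence needs on the order of 30,000 fifty-year propagations.

```python
# Zero-failure sample size ("rule of three"-style bound): the smallest
# n with (1 - p_max)^n <= 1 - confidence, i.e. the number of clean
# Monte Carlo runs needed to claim p < p_max at the given confidence.
import math

def runs_required(p_max, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_max))

n = runs_required(1e-4)
print(n)  # ~30,000 propagations even with zero observed impacts
```

    Folding in the upper stage's demonstrated reliability as a prior, as the paper does, shrinks the number of long propagations that actually need to be run.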

  14. Single realization stochastic FDTD for weak scattering waves in biological random media.

    PubMed

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2013-02-01

    This paper introduces an iterative scheme to overcome the unresolved issues present in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled with a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.

  15. Single realization stochastic FDTD for weak scattering waves in biological random media

    PubMed Central

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2015-01-01

    This paper introduces an iterative scheme to overcome the unresolved issues present in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble-average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled with a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result. PMID:27158153

  16. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total-effect sensitivity indices.
The results of the present research show that the brute force method is best for wind assessment purposes, and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) The brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) Little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters in the top three influential variables (the Weibull shape and scale) emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
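
    The "brute force" computation of a first-order sensitivity index is a nested Monte Carlo loop: fix the input of interest, average the model over the other inputs, then take the variance of those conditional means. A toy sketch on an invented two-input linear model (not the wind model of the paper), where the analytic answer is known:

```python
# Brute-force first-order sensitivity index S1 = Var(E[Y|X1]) / Var(Y)
# via nested Monte Carlo, for the toy model Y = 3*X1 + X2 with
# independent standard-normal inputs (analytic S1 = 9/10 = 0.9).
import random
import statistics

random.seed(0)

def model(x1, x2):
    return 3.0 * x1 + x2

def first_order_index_brute(n_outer=2000, n_inner=200):
    cond_means, all_y = [], []
    for _ in range(n_outer):
        x1 = random.gauss(0.0, 1.0)                      # fix X1
        ys = [model(x1, random.gauss(0.0, 1.0))          # average over X2
              for _ in range(n_inner)]
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.variance(cond_means) / statistics.variance(all_y)

s = first_order_index_brute()
print(s)  # close to the analytic value 0.9
```

    The n_outer * n_inner model evaluations per index are exactly the cost that estimation procedures such as Sobol'-sequence sampling are designed to avoid.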

  17. Develop a solution for protecting and securing enterprise networks from malicious attacks

    NASA Astrophysics Data System (ADS)

    Kamuru, Harshitha; Nijim, Mais

    2014-05-01

    In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has malicious intent." A firewall acts as one measure for securing a device against unauthorized incoming data. There are an infinite number of computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user. From the network's perspective, there are numerous types of attack. All the attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. Juniper firewalls have the capability to protect against both types of attack. Denial of Service (DoS) attacks are among the most well-known brute-force network security threats, largely due to the high-profile way in which they can affect networks. Over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus, namely, to cause the services running on a particular host or network to become unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack. Others submerge a network or device with traffic so that there are no more resources to handle legitimate traffic. Precision attacks typically involve multiple phases and often require more thought than brute-force attacks, all the way from reconnaissance to machine ownership. Before a precision attack is launched, information about the victim needs to be gathered. This information gathering typically takes the form of various types of scans to determine available hosts, networks, and ports. The hosts available on a network can be determined by ping sweeps. The available ports on a machine can be located by port scans.
Screens cover a wide variety of attack traffic, as they are configured on a per-zone basis. Depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. Juniper firewalls handle traffic on a per-flow basis. Flows or sessions can be used to determine whether traffic attempting to traverse the firewall is legitimate. The state-checking components resident in a Juniper firewall are controlled by configuring "flow" settings. These settings allow state checking to be configured for various conditions on the device. Flow settings can be used to protect against TCP hijacking, and to generally ensure that the firewall performs full state processing when desired. We take a case study of an attack on a network and study the detection of the malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.

  18. Arm retraction dynamics of entangled star polymers: A forward flux sampling method study

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Likhtman, Alexei E.; Wang, Zuowei

    2017-07-01

    The study of dynamics and rheology of well-entangled branched polymers remains a challenge for computer simulations due to the exponentially growing terminal relaxation times of these polymers with increasing molecular weights. We present an efficient simulation algorithm for studying the arm retraction dynamics of entangled star polymers by combining the coarse-grained slip-spring (SS) model with the forward flux sampling (FFS) method. This algorithm is first applied to simulate symmetric star polymers in the absence of constraint release (CR). The reaction coordinate for the FFS method is determined by finding good agreement of the simulation results on the terminal relaxation times of mildly entangled stars with those obtained from direct shooting SS model simulations, with the relative difference between them less than 5%. The FFS simulations are then carried out for strongly entangled stars with arm lengths up to 16 entanglements, far beyond the reach of brute-force simulations in the non-CR condition. Apart from the terminal relaxation times, the same method can also be applied to generate the relaxation spectra of all entanglements along the arms, which are desired for the development of quantitative theories of entangled branched polymers. Furthermore, we propose a numerical route to construct the experimentally measurable relaxation correlation functions by effectively linking the data stored at each interface during the FFS runs. The obtained star arm end-to-end vector relaxation functions Φ(t) and the stress relaxation function G(t) are found to be in reasonably good agreement with standard SS simulation results in the terminal regime.
Finally, we demonstrate that this simulation method can be conveniently extended to study the arm-retraction problem in entangled star polymer melts with CR by modifying the definition of the reaction coordinate, while the computational efficiency will depend on the particular slip-spring or slip-link model employed.
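
    The core FFS idea, chaining conditional interface-crossing probabilities instead of waiting for a rare event in one brute-force run, can be shown on a toy problem. The sketch below uses a 1-D symmetric random walk rather than the paper's slip-spring polymer model; because the walk is memoryless, restarting each stage from the interface position is exact, and the gambler's-ruin analytic answer is 0.5:

```python
# Toy forward flux sampling (FFS): estimate the probability that a
# symmetric +/-1 random walk started at 0 reaches B=+5 before A=-5,
# as the product of per-interface crossing probabilities.
import random

random.seed(1)
A, B = -5, 5
TRIALS = 2000

def run_until(x, stop_low, stop_high):
    """Step +/-1 until x hits stop_low or stop_high; return the endpoint."""
    while stop_low < x < stop_high:
        x += random.choice((-1, 1))
    return x

prob = 1.0
start = 0
for interface in range(1, B + 1):       # interfaces at 1, 2, ..., 5
    hits = sum(run_until(start, A, interface) == interface
               for _ in range(TRIALS))
    prob *= hits / TRIALS               # P(reach next interface | here)
    start = interface                   # successful walkers continue from here
print(prob)  # should be close to the analytic 0.5
```

    Each stage samples a likely transition, so the rare overall event is never waited for directly; this is what makes 16-entanglement arm retractions tractable in the paper.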

  19. Selectivity trend of gas separation through nanoporous graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hongjun; Chen, Zhongfang; Dai, Sheng

    2014-01-29

    We demonstrate that porous graphene can efficiently separate gases according to their molecular sizes using molecular dynamics (MD) simulations. The flux sequence from the classical MD simulations is H2 > CO2 >> N2 > Ar > CH4, which generally follows the trend in kinetic diameters. Moreover, this trend is also confirmed from the fluxes based on the computed free-energy barriers for gas permeation, using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.

  20. A Newton-Krylov solver for fast spin-up of online ocean tracers

    NASA Astrophysics Data System (ADS)

    Lindsay, Keith

    2017-01-01

    We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
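
    The contrast between brute-force spin-up and a Newton-type solver can be seen on a toy problem. If one "model year" acts as a map x → Φ(x), spin-up means finding the fixed point Φ(x*) = x*; brute force iterates the map for thousands of model years, while a Newton solver drives F(x) = Φ(x) − x to zero directly. The 3-tracer linear operator below is invented for illustration and is not the ocean model's actual annual map:

```python
# Spin-up as a fixed-point problem for a linear "one model year" map
# x -> A @ x + b. Brute force iterates; a Newton step on the linear
# problem solves (I - A) x* = b in one shot. Toy operator only.
import numpy as np

A = np.array([[0.90, 0.05, 0.00],
              [0.05, 0.85, 0.05],
              [0.00, 0.05, 0.90]])   # contractive: spectral radius < 1
b = np.array([1.0, 0.5, 0.2])

def one_year(x):
    return A @ x + b

# Brute force: run the model forward until the tracers stop drifting.
x = np.zeros(3)
for _ in range(5000):
    x = one_year(x)

# Newton: solve F(x) = (A @ x + b) - x = 0, i.e. (I - A) x* = b.
x_star = np.linalg.solve(np.eye(3) - A, b)

print(np.max(np.abs(x - x_star)))  # both reach the same equilibrium
```

    For a full nonlinear ocean model the Jacobian is never formed explicitly; that is where the Krylov (matrix-free) part of the paper's solver comes in.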

  1. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows both single and cyclic simulations to be performed, with a specified step for changing the values of selected process variables. The architecture of the application is shown. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
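
    The brute-force option listed above amounts to an exhaustive grid search over control settings. A minimal sketch against a hypothetical cost function (energy use plus a penalty when delivered pressure falls below target); the cost model and all numbers are invented for illustration, whereas the paper evaluates candidates against a real EPANET model:

```python
# Brute-force control selection: exhaustively scan one pump-speed
# setting and keep the cheapest feasible choice. Toy cost model only.
def cost(speed):
    energy = 5.0 * speed ** 2            # pumping energy grows with speed
    pressure = 20.0 * speed              # delivered pressure (toy model)
    shortfall = max(0.0, 12.0 - pressure)
    return energy + 100.0 * shortfall    # heavy penalty for low pressure

candidates = [i / 100 for i in range(0, 101)]   # speeds 0.00 .. 1.00
best = min(candidates, key=cost)
print(best, cost(best))
```

    Gradient-based methods such as SLSQP or Powell reach the same optimum with far fewer simulator calls, which matters when each evaluation is a full hydraulic simulation.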

  2. Are Individuals Luck Egalitarians? – An Experiment on the Influence of Brute and Option Luck on Social Preferences

    PubMed Central

    Tinghög, Gustav; Andersson, David; Västfjäll, Daniel

    2017-01-01

    According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals’ deliberate and fully informed choices (i.e., option luck), while inequalities should be deemed unfair if they follow from circumstances over which the individual has no control (i.e., brute luck). This study investigates whether individuals’ fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their re-allocation behavior depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preferences not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the degree of individual responsibility citizens assign for life outcomes, i.e., health and wealth. PMID:28424641

  3. Are Individuals Luck Egalitarians? - An Experiment on the Influence of Brute and Option Luck on Social Preferences.

    PubMed

    Tinghög, Gustav; Andersson, David; Västfjäll, Daniel

    2017-01-01

    According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals' deliberate and fully informed choices (i.e., option luck), while inequalities should be deemed unfair if they follow from circumstances over which the individual has no control (i.e., brute luck). This study investigates whether individuals' fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their re-allocation behavior depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preferences not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the degree of individual responsibility citizens assign for life outcomes, i.e., health and wealth.

  4. I Hear You Eat and Speak: Automatic Recognition of Eating Condition and Food Type, Use-Cases, and Impact on ASR Performance

    PubMed Central

    Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn

    2016-01-01

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i. e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6 k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start with demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features as well as higher-level features related to intelligibility, obtained from an Automatic Speech Recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i. e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, which reaches up to 62.3% average recall for multi-way classification of the eating condition, i. e., discriminating the six types of food, as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to 56.2% determination coefficient. PMID:27176486

  5. An N-body Integrator for Planetary Rings

    NASA Astrophysics Data System (ADS)

    Hahn, Joseph M.

    2011-04-01

A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of a particle's radial distance to all streamlines. And because particles respond to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges execute in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.

  6. Quaternion normalization in additive EKF for spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.

    1991-01-01

This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for the attitude determination of any three-dimensional body when it is based on vector measurements, uses an additive EKF for estimation, and uses the quaternion to specify the attitude.

  7. Morphodynamic data assimilation used to understand changing coasts

    USGS Publications Warehouse

    Plant, Nathaniel G.; Long, Joseph W.

    2015-01-01

    Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
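Of the assimilation methods listed, the Kalman filter admits the most compact self-contained illustration: a model prediction and an observation, each with a variance, are blended in proportion to their relative uncertainties. A minimal scalar sketch (illustrative numbers only, not a morphodynamic model):

```python
# Scalar Kalman-filter update: blend a model prediction with an observation,
# weighting each by the inverse of its variance. Illustrative values only.

def kalman_update(x_prior, P_prior, z, R):
    """Combine prior mean x_prior (variance P_prior) with observation z
    (variance R); return the posterior mean and variance."""
    K = P_prior / (P_prior + R)        # Kalman gain in [0, 1]
    x_post = x_prior + K * (z - x_prior)
    P_post = (1 - K) * P_prior         # posterior variance always shrinks
    return x_post, P_post
```

With equal prior and observation variances the gain is 0.5, so the posterior mean is the simple average of prediction and observation, and the posterior variance is halved.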

  8. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    NASA Astrophysics Data System (ADS)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important, and one way to maintain them is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are embedded into 8 random images using the Least Significant Bit (LSB) algorithm, which hides the output of the DES cipher so that it can later be merged back into one ciphertext.
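The LSB step can be sketched generically. A minimal, hypothetical illustration of Least Significant Bit embedding (a flat list of 8-bit pixel values stands in for the cover image; this is not the authors' exact DES+LSB scheme):

```python
# LSB steganography sketch: hide payload bits in the lowest bit of each
# pixel. Pixels are modelled as a flat list of 0-255 integers; helper
# names are hypothetical.

def lsb_embed(pixels, payload: bytes):
    """Overwrite the least significant bit of successive pixels with
    the payload bits (most significant bit of each byte first)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear then set the lowest bit
    return stego

def lsb_extract(pixels, n_bytes: int):
    """Recover n_bytes previously hidden by lsb_embed."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each pixel changes, every stego pixel differs from the original by at most 1, which is why LSB embedding is visually unobtrusive.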

  9. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  10. Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients

    NASA Astrophysics Data System (ADS)

    Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.

    2017-09-01

We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty-one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.

  11. A Formal Algorithm for Routing Traces on a Printed Circuit Board

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    1996-01-01

This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by computer by means other than brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which these solutions are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and for the first time speaks to the question eloquently by way of symbolic statements.

  12. Connection forces in deformable multibody dynamics

    NASA Technical Reports Server (NTRS)

    Shabana, A. A.; Chang, C. W.

    1989-01-01

    In the dynamic formulation of holonomic and nonholonomic systems based on D'Alembert-Lagrange equation, the forces of constraints are maintained in the dynamic equations by introducing auxiliary variables, called Lagrange multipliers. This approach introduces a set of generalized reaction forces associated with the system generalized coordinates. Different sets of variables can be used as generalized coordinates and accordingly, the generalized reactions associated with these generalized coordinates may not be the actual reaction forces at the joints. In rigid body dynamics, the generalized reaction forces and the actual reaction forces at the joints represent equipollent systems of forces since they produce the same total forces and moments at and about any point on the rigid body. This is not, however, the case in deformable body analyses wherein the generalized reaction forces depend on the system generalized reference and elastic coordinates. In this paper, a method for determining the actual reaction forces at the joints from the generalized reaction forces in deformable multibody systems is presented.

  13. From "brute" to "thug:" the demonization and criminalization of unarmed Black male victims in America.

    PubMed

    Smiley, CalvinJohn; Fakunle, David

The synonymy of Blackness with criminality is not a new phenomenon in America. Documented historical accounts have shown how myths, stereotypes, and racist ideologies led to discriminatory policies and court rulings that fueled racial violence in a post-Reconstruction era and have culminated in the exponential increase of Black male incarceration today. Misconceptions and prejudices manufactured and disseminated through various channels such as the media included references to a "brute" image of Black males. In the 21st century, this negative imagery of Black males has frequently utilized the negative connotation of the terminology "thug." In recent years, law enforcement agencies have unreasonably used deadly force on Black males allegedly considered to be "suspects" or "persons of interest." The exploitation of these often-targeted victims' criminal records, physical appearances, or misperceived attributes has been used to justify their unlawful deaths. Despite the connection between disproportionate criminality and Black masculinity, little research has been done on how unarmed Black male victims, particularly but not exclusively at the hands of law enforcement, have been posthumously criminalized. This paper investigates the historical criminalization of Black males and its connection to contemporary unarmed victims of law enforcement. Action research methodology in the data collection process is utilized to interpret how Black male victims are portrayed by traditional mass media, particularly through the use of language, in ways that marginalize and de-victimize these individuals. This study also aims to elucidate a contemporary understanding of race relations, racism, and the plight of the Black male in a 21st-century "post-racial" America.

  14. Homogeneous nucleation in supersaturated vapors of methane, ethane, and carbon dioxide predicted by brute force molecular dynamics.

    PubMed

    Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans

    2008-04-28

Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10^6 particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10^30 m^-3 s^-1 accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK) which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.

  15. Step to improve neural cryptography against flipping attacks.

    PubMed

    Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold

    2004-12-01

Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing key exchange protocol over public channel. However, the neural cryptography schemes presented so far are not the most secure under regular flipping attack (RFA) and are completely insecure under majority flipping attack (MFA). We propose a scheme by splitting the mutual information and the training process to improve the security of neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of brute force attack (BFA) and the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial with L. Moreover, we analyze the security under an advanced flipping attack.

  16. Vector Potential Generation for Numerical Relativity Simulations

    NASA Astrophysics Data System (ADS)

    Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian

    2017-01-01

    Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
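Checking the relation B = curl(A) numerically is the easy direction of the problem this abstract describes (recovering A from a given B is the hard inverse). A minimal sketch with an illustrative uniform field (the potential A = (0, B0*x, 0), grid spacing h, and field strength B0 are all assumed values, and this uses a collocated rather than staggered stencil):

```python
# Verify that a hand-written vector potential A reproduces a uniform field
# B = (0, 0, B0) under centred finite differences. Illustrative values;
# collocated (non-staggered) stencil for simplicity.

B0, h = 2.0, 0.1

def A(x, y, z):
    """Vector potential A = (0, B0*x, 0); analytically curl(A) = (0, 0, B0)."""
    return (0.0, B0 * x, 0.0)

def curl_A(x, y, z):
    """Centred-difference curl of A at the point (x, y, z)."""
    def d(comp, axis):
        # partial derivative of component `comp` of A along `axis`
        e = [0.0, 0.0, 0.0]
        e[axis] = h
        plus = A(x + e[0], y + e[1], z + e[2])[comp]
        minus = A(x - e[0], y - e[1], z - e[2])[comp]
        return (plus - minus) / (2 * h)
    return (d(2, 1) - d(1, 2),   # dAz/dy - dAy/dz
            d(0, 2) - d(2, 0),   # dAx/dz - dAz/dx
            d(1, 0) - d(0, 1))   # dAy/dx - dAx/dy
```

Because A is linear in x, the centred difference is exact here; on staggered grids with mesh refinement the bookkeeping of where each component lives is what makes the inverse problem delicate.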

  17. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    PubMed

    Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M

    2013-01-01

This paper presents an implementation of the brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data cloud. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multi-levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
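The serial kernel that such GPU implementations parallelize can be sketched in a few lines: for every point, compute the distance to every other point and keep the k smallest. A pure-Python sketch of this O(n^2 d) brute-force construction (illustrative only; the paper's contribution is distributing exactly this work across GPUs):

```python
# Brute-force exact k-nearest-neighbour graph: exhaustive pairwise
# distances, O(n^2 d). This is the serial kernel that GPU versions
# parallelize across cluster nodes, GPUs, and threads.

import math

def knn_graph(points, k):
    """For each point, return the indices of its k nearest neighbours."""
    graph = []
    for i, p in enumerate(points):
        dists = []
        for j, q in enumerate(points):
            if i == j:
                continue  # a point is not its own neighbour
            dists.append((math.dist(p, q), j))  # Euclidean distance (3.8+)
        dists.sort()  # ties broken by index, via tuple ordering
        graph.append([j for _, j in dists[:k]])
    return graph
```

Because the distance computations for different query points are independent, the outer loop is embarrassingly parallel, which is what makes the multi-level GPU decomposition effective.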

  18. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  19. An efficient and numerically stable procedure for generating sextic force fields in normal mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz

In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3 mode representation quartic force fields in curvilinear normal mode coordinates to 4 mode representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm^-1 in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.

  20. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

When the sigma coordinates introduced by Phillips (1957) are employed, problems can arise in computing the pressure gradient force accurately by finite differences. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign, which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The objective of the present investigation is to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method of eliminating a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
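The cancellation at issue can be made explicit. In the standard textbook form (which may differ in detail from the specific discretizations compared in this work), with sigma defined as pressure normalized by surface pressure, the horizontal pressure gradient force splits into two terms:

```latex
\vec{F}_{\mathrm{PGF}} \;=\; -\,\nabla_{\sigma}\Phi \;-\; R\,T\,\nabla \ln p_s ,
\qquad \sigma = p / p_s
```

Over steeply sloped terrain the geopotential-gradient and surface-pressure-gradient terms are individually large and of opposite sign, so their finite-difference sum is dominated by truncation error unless, as here, a hydrostatic component is removed or the formulation is otherwise recast.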

  1. Hierarchical Material Properties in Finite Element Analysis: The Oilfield Infrastructure Problem.

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Wilson, G. A.

    2017-12-01

Geophysical simulation of low-frequency electromagnetic signals within built environments such as urban centers and industrial facilities is a challenging computational problem because strong conductors (e.g., pipes, fences, rail lines, rebar, etc.) are not only highly conductive and/or magnetic relative to the surrounding geology, but also very small in one or more of their physical dimensions. Realistic modeling of such structures as idealized conductors has long been the standard approach; however, this strategy carries with it computational burdens such as cumbersome implementation of internal boundary conditions and limited flexibility for accommodating realistic geometries. Another standard approach is "brute force" discretization (often coupled with an equivalent-medium model) whereby hundreds of millions of voxels are used to represent these strong conductors, but at the cost of extreme mesh-design effort and computation times, when a simulation result is possible at all. To minimize these burdens, a new finite element scheme (Weiss, Geophysics, 2017) has been developed in which the material properties reside on a hierarchy of geometric simplicies (i.e., edges, facets and volumes) within an unstructured tetrahedral mesh. This allows thin sheet-like structures, such as subsurface fractures, to be economically represented by a connected set of triangular facets, for example, that freely conform to arbitrary "real world" geometries. The same holds for thin pipe/wire-like structures, such as casings or pipelines. The hierarchical finite element scheme has been applied to problems in electro- and magnetostatics for oilfield problems where the elevated, but finite, conductivity and permeability of the steel-cased oil wells must be properly accounted for, yielding results that are otherwise unobtainable, with run times as low as a few tens of seconds.
Extension of the hierarchical finite element concept to broadband electromagnetics is presently underway, as are its implications for geophysical inversion.

  2. SIMBAD : a sequence-independent molecular-replacement pipeline

    DOE PAGES

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...

    2018-06-08

The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures.
Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.

3. Low-field thermal mixing in [1-^13C] pyruvic acid for brute-force hyperpolarization.

    PubMed

    Peat, David T; Hirsch, Matthew L; Gadian, David G; Horsewill, Anthony J; Owers-Bradley, John R; Kempf, James G

    2016-07-28

We detail the process of low-field thermal mixing (LFTM) between ^1H and ^13C nuclei in neat [1-^13C] pyruvic acid at cryogenic temperatures (4-15 K). Using fast-field-cycling NMR, ^1H nuclei in the molecule were polarized at modest high field (2 T) and then equilibrated with ^13C nuclei by fast cycling (∼300-400 ms) to a low field (0-300 G) that activates thermal mixing. The ^13C NMR spectrum was recorded after fast cycling back to 2 T. The ^13C signal derives from ^1H polarization via LFTM, in which the polarized ('cold') proton bath contacts the unpolarised ('hot') ^13C bath at a field so low that Zeeman and dipolar interactions are similar-sized and fluctuations in the latter drive ^1H-^13C equilibration. By varying mixing time (t_mix) and field (B_mix), we determined field-dependent rates of polarization transfer (1/τ) and decay (1/T_1m) during mixing. This defines conditions for effective mixing, as utilized in 'brute-force' hyperpolarization of low-γ nuclei like ^13C using Boltzmann polarization from nearby protons. For neat pyruvic acid, near-optimum mixing occurs for t_mix ∼ 100-300 ms and B_mix ∼ 30-60 G. Three forms of frozen neat pyruvic acid were tested: two glassy samples (one well-deoxygenated, the other O2-exposed) and one sample pre-treated by annealing (also well-deoxygenated). Both annealing and the presence of O2 are known to dramatically alter high-field longitudinal relaxation (T_1) of ^1H and ^13C (up to 10^2-10^3-fold effects). Here, we found smaller, but still critical factors of ∼(2-5)× on both τ and T_1m. Annealed, well-deoxygenated samples exhibit the longest time constants, e.g., τ ∼ 30-70 ms and T_1m ∼ 1-20 s, each growing vs. B_mix. Mixing 'turns off' for B_mix > ∼100 G. That T_1m ≫ τ is consistent with earlier success with polarization transfer from ^1H to ^13C by LFTM.

  4. SIMBAD : a sequence-independent molecular-replacement pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.

The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures.
Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.

  5. Faint Debris Detection by Particle Based Track-Before-Detect Method

    NASA Astrophysics Data System (ADS)

    Uetsuhara, M.; Ikoma, N.

    2014-09-01

    This study proposes a particle method to detect faint debris, which is hardly seen in single frame, from an image sequence based on the concept of track-before-detect (TBD). The most widely used detection method is detect-before-track (DBT), which firstly detects signals of targets from single frame by distinguishing difference of intensity between foreground and background then associate the signals for each target between frames. DBT is capable of tracking bright targets but limited. DBT is necessary to consider presence of false signals and is difficult to recover from false association. On the other hand, TBD methods try to track targets without explicitly detecting the signals followed by evaluation of goodness of each track and obtaining detection results. TBD has an advantage over DBT in detecting weak signals around background level in single frame. However, conventional TBD methods for debris detection apply brute-force search over candidate tracks then manually select true one from the candidates. To reduce those significant drawbacks of brute-force search and not-fully automated process, this study proposes a faint debris detection algorithm by a particle based TBD method consisting of sequential update of target state and heuristic search of initial state. The state consists of position, velocity direction and magnitude, and size of debris over the image at a single frame. The sequential update process is implemented by a particle filter (PF). PF is an optimal filtering technique that requires initial distribution of target state as a prior knowledge. An evolutional algorithm (EA) is utilized to search the initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles for the same image sequences and resulting set of particles is used as an initial distribution of PF. This paper describes the algorithm of the proposed faint debris detection method. 
The algorithm demonstrates performance on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which would contain a sufficient number of faint debris images. The results indicate the proposed method is capable of tracking faint debris with moderate computational costs at operational level.

  6. Calculating Free Energies Using Average Force

    NASA Technical Reports Server (NTRS)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
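The constrained-simulation variant mentioned above can be illustrated numerically: given the mean force at discrete points along the selected coordinate, the free-energy profile follows by integrating its negative. A toy sketch with an assumed harmonic profile F(q) = 0.5*k*q^2, whose noise-free mean force is -dF/dq (all parameters illustrative):

```python
# Reconstruct a free-energy profile from the mean force along a coordinate
# by trapezoidal integration: F(q) - F(q0) = -∫ <f(q')> dq'.
# The harmonic profile and spring constant below are illustrative stand-ins
# for the averaged forces one would measure in constrained simulations.

k_spring = 4.0

def mean_force(q):
    """Noise-free mean force -dF/dq for the assumed F(q) = 0.5*k*q^2."""
    return -k_spring * q

def integrate_free_energy(qs):
    """Trapezoidal reconstruction of F at each grid point, with F(qs[0]) = 0."""
    F = [0.0]
    for a, b in zip(qs, qs[1:]):
        F.append(F[-1] - 0.5 * (mean_force(a) + mean_force(b)) * (b - a))
    return F
```

In a real calculation mean_force(q) would come from time-averaging the instantaneous force in a simulation constrained (or binned) at q, and statistical noise, not quadrature error, would dominate the accuracy.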

  7. On the nature of unintentional action: a study of force/moment drifts during multifinger tasks.

    PubMed

    Parsa, Behnoosh; O'Shea, Daniel J; Zatsiorsky, Vladimir M; Latash, Mark L

    2016-08-01

We explored the origins of unintentional changes in performance during accurate force production in isometric conditions seen after turning visual feedback off. The idea of control with referent spatial coordinates suggests that these phenomena could result from drifts of the referent coordinate for the effector. Subjects performed accurate force/moment production tasks by pressing with the fingers of a hand on force sensors. Turning the visual feedback off resulted in slow drifts of both total force and total moment to lower magnitudes of these variables; these drifts were more pronounced in the right hand of the right-handed subjects. Drifts in individual finger forces could be in different directions; in particular, fingers that produced moments of force against the required total moment showed an increase in their forces. The force/moment drift was associated with a drop in the index of synergy stabilizing performance under visual feedback. The drifts in directions that changed performance (non-motor equivalent) and in directions that did not (motor equivalent) were of about the same magnitude. The results suggest that control with referent coordinates is associated with drifts of those referent coordinates toward the corresponding actual coordinates of the hand, a reflection of the natural tendency of physical systems to move toward a minimum of potential energy. The interaction between drifts of the hand referent coordinate and referent orientation leads to counterdirectional drifts in individual finger forces. The results also demonstrate that the sensory information used to create multifinger synergies is necessary for their presence over the task duration. Copyright © 2016 the American Physiological Society.

  8. The influence of asymmetric force requirements on a multi-frequency bimanual coordination task.

    PubMed

    Kennedy, Deanna M; Rhee, Joohyun; Jimenez, Judith; Shea, Charles H

    2017-01-01

An experiment was designed to determine the impact of the force requirements on the production of bimanual 1:2 coordination patterns requiring the same (symmetric) or different (asymmetric) forces when Lissajous displays and goal templates are provided. The Lissajous displays have been shown to minimize the influence of attentional and perceptual constraints, allowing constraints related to neural crosstalk to be more clearly observed. Participants (N=20) were randomly assigned to a force condition in which the left or right limb was required to produce more force than the contralateral limb. In each condition participants were required to rhythmically coordinate the pattern of isometric forces in a 1:2 coordination pattern. Participants performed 13 practice trials and 1 test trial per force level. The results indicated that participants were able to effectively coordinate the 1:2 multi-frequency goal patterns under both symmetric and asymmetric force requirements. However, consistent distortions in the force and force velocity time series were observed for one limb that appeared to be associated with the production of force in the contralateral limb. Distortions in the force produced by the left limb occurred regardless of the force requirements of the task (symmetric, asymmetric) or whether the left or right limb had to produce more force than the contralateral limb. However, distinct distortions in the right limb occurred only when the left limb was required to produce 5 times more force than the right limb. These results are consistent with the notion that neural crosstalk can influence both limbs, but may manifest differently for each limb depending on the force requirements of the task. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Force-independent distribution of correlated neural inputs to hand muscles during three-digit grasping.

    PubMed

    Poston, Brach; Danna-Dos Santos, Alessander; Jesunathadas, Mark; Hamm, Thomas M; Santello, Marco

    2010-08-01

    The ability to modulate digit forces during grasping relies on the coordination of multiple hand muscles. Because many muscles innervate each digit, the CNS can potentially choose from a large number of muscle coordination patterns to generate a given digit force. Studies of single-digit force production tasks have revealed that the electromyographic (EMG) activity scales uniformly across all muscles as a function of digit force. However, the extent to which this finding applies to the coordination of forces across multiple digits is unknown. We addressed this question by asking subjects (n = 8) to exert isometric forces using a three-digit grip (thumb, index, and middle fingers) that allowed for the quantification of hand muscle coordination within and across digits as a function of grasp force (5, 20, 40, 60, and 80% maximal voluntary force). We recorded EMG from 12 muscles (6 extrinsic and 6 intrinsic) of the three digits. Hand muscle coordination patterns were quantified in the amplitude and frequency domains (EMG-EMG coherence). EMG amplitude scaled uniformly across all hand muscles as a function of grasp force (muscle x force interaction: P = 0.997; cosines of angle between muscle activation pattern vector pairs: 0.897-0.997). Similarly, EMG-EMG coherence was not significantly affected by force (P = 0.324). However, coherence was stronger across extrinsic than that across intrinsic muscle pairs (P = 0.0039). These findings indicate that the distribution of neural drive to multiple hand muscles is force independent and may reflect the anatomical properties or functional roles of hand muscle groups.

  10. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes determining these base cases with a standard brute force approach on a typical computer difficult. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theories (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
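    For intuition, the brute-force baseline that this work tries to get around can be sketched directly: enumerate vertex subsets by increasing size and return the first one whose closed-neighborhood intersections are nonempty and pairwise distinct. A minimal sketch, with an adjacency-list graph format and a 4-cycle example chosen here for illustration (not taken from the paper):

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    return frozenset(adj[v]) | {v}

def is_identifying_code(adj, code):
    # Every vertex must have a nonempty, unique "identifying set"
    # N[v] & C (closed neighborhood intersected with the code C).
    seen = {}
    for v in adj:
        sig = closed_neighborhood(adj, v) & code
        if not sig or sig in seen.values():
            return False
        seen[v] = sig
    return True

def minimum_identifying_code(adj):
    # Exhaustive search by increasing subset size: exponential cost,
    # which is exactly why the paper turns to parallel, quantum
    # annealing, and SMT approaches for larger instances.
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            if is_identifying_code(adj, frozenset(cand)):
                return set(cand)
    return None  # graphs with "twin" vertices admit no identifying code

# 4-cycle 0-1-2-3: the minimum identifying code has size 3.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
code = minimum_identifying_code(c4)
```

    On the 4-cycle, no 2-vertex subset separates all four closed neighborhoods, so the search returns a 3-vertex code such as {0, 1, 2}.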

  11. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.

  12. Influence of temperature fluctuations on infrared limb radiance: a new simulation code

    NASA Astrophysics Data System (ADS)

    Rialland, Valérie; Chervet, Patrick

    2006-08-01

Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performances are limited by the inhomogeneous background in the sensor field of view, which strongly impacts target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density, or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of the small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.

  13. The collision forces and lower-extremity inter-joint coordination during running.

    PubMed

    Wang, Li-I; Gu, Chin-Yi; Wang, I-Lin; Siao, Sheng-Wun; Chen, Szu-Ting

    2018-06-01

The purpose of this study was to compare the lower extremity inter-joint coordination of runners with different collision forces during the braking phase of running. A dynamical system approach was used to analyse the inter-joint coordination parameters. Data were collected with six infra-red cameras and two force plates. According to the impact peak of the vertical ground reaction force, twenty habitually rearfoot-strike runners were categorised into three groups: high collision forces runners (HF group, n = 8), medium collision forces runners (MF group, n = 5), and low collision forces runners (LF group, n = 7). There were no significant differences among the three groups in the ankle and knee joint angle upon landing and in the running velocity (p > 0.05). The HF group produced a significantly smaller deviation phase (DP) of the hip flexion/extension-knee flexion/extension during the braking phase compared with the MF and LF groups (p < 0.05). The DP of the hip flexion/extension-knee flexion/extension during the braking phase correlated negatively with the collision force (p < 0.05). Disparities in the flexibility of lower extremity inter-joint coordination were thus found in high collision forces runners. The effects of inter-joint coordination on the risk of running injuries need to be clarified further.

  14. Impact and Estimation of Balance Coordinate System Rotations and Translations in Wind-Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Toro, Kenneth G.; Parker, Peter A.

    2017-01-01

Discrepancies between the model and balance coordinate systems lead to biases in the aerodynamic measurements during wind-tunnel testing. The reference coordinate system, relative to the calibration coordinate system at which the forces and moments are resolved, is crucial to the overall accuracy of force measurements. This paper discusses sources of discrepancies and estimates of coordinate system rotation and translation due to machining and assembly differences. A methodology for numerically estimating the coordinate system biases is developed and discussed. Two case studies are presented using this methodology to estimate the model alignment. Examples span from angle measurement system shifts on the calibration system to discrepancies in actual wind-tunnel data. The results from these case studies will help aerodynamic researchers and force balance engineers to better understand and identify potential differences in calibration systems due to coordinate system rotation and translation.

  15. Bimanual Force Coordination in Children with Spastic Unilateral Cerebral Palsy

    ERIC Educational Resources Information Center

    Smits-Engelsman, B. C. M.; Klingels, K.; Feys, H.

    2011-01-01

    In this study bimanual grip-force coordination was quantified using a novel "Gripper" system that records grip forces produced while holding a lower and upper unit, in combination with the lift force necessary to separate these units. Children with unilateral cerebral palsy (CP) (aged 5-14 years, n = 12) were compared to age matched typically…

  16. Some fundamentals regarding kinematics and generalized forces for multibody dynamics

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.

    1990-01-01

In order to illustrate the various forms in which generalized forces can arise from diverse subsystem analyses in multibody dynamics, intrinsic dynamical equations for the rotational dynamics of a rigid body are derived from Hamilton's principle. Two types of generalized forces are derived: (1) those associated with the virtual rotation vector in some orthogonal basis, and (2) those associated with variations of the generalized coordinates. Since a physical or kinematical result (such as a frequency or a specific direction cosine) cannot depend on this choice, a 'blind' coupling of two models in which the generalized forces are calculated in different ways would be incorrect. Both models should either express the virtual rotation in the same basis, as in method 1, or use common rotational coordinates and their variations, as in method 2. Alternatively, the generalized forces and coordinates of one model may be transformed to those of the other.

  17. Let the Force Be with Us: Dyads Exploit Haptic Coupling for Coordination

    ERIC Educational Resources Information Center

    van der Wel, Robrecht P. R. D.; Knoblich, Guenther; Sebanz, Natalie

    2011-01-01

    People often perform actions that involve a direct physical coupling with another person, such as when moving furniture together. Here, we examined how people successfully coordinate such actions with others. We tested the hypothesis that dyads amplify their forces to create haptic information to coordinate. Participants moved a pole (resembling a…

  18. The Falcon and the Trident: Air Force-Navy Airpower Coordination and the New MRC Model

    DTIC Science & Technology

    1994-06-01

other ships from Australia and New Zealand, quickly placed themselves at NavFE's disposal. At the same time, MacArthur received orders from the JCS... THE FALCON AND THE TRIDENT: AIR FORCE-NAVY AIRPOWER COORDINATION AND THE NEW MRC MODEL. MARK S. HOFFMAN, MAJ, USAF

  19. Coordination of precision grip in 2–6 years-old children with autism spectrum disorders compared to children developing typically and children with developmental disabilities

    PubMed Central

    David, Fabian J.; Baranek, Grace T.; Wiesen, Chris; Miao, Adrienne F.; Thorpe, Deborah E.

    2012-01-01

    Impaired motor coordination is prevalent in children with Autism Spectrum Disorders (ASD) and affects adaptive skills. Little is known about the development of motor patterns in young children with ASD between 2 and 6 years of age. The purpose of the current study was threefold: (1) to describe developmental correlates of motor coordination in children with ASD, (2) to identify the extent to which motor coordination deficits are unique to ASD by using a control group of children with other developmental disabilities (DD), and (3) to determine the association between motor coordination variables and functional fine motor skills. Twenty-four children with ASD were compared to 30 children with typical development (TD) and 11 children with DD. A precision grip task was used to quantify and analyze motor coordination. The motor coordination variables were two temporal variables (grip to load force onset latency and time to peak grip force) and two force variables (grip force at onset of load force and peak grip force). Functional motor skills were assessed using the Fine Motor Age Equivalents of the Vineland Adaptive Behavior Scale and the Mullen Scales of Early Learning. Mixed regression models were used for all analyses. Children with ASD presented with significant motor coordination deficits only on the two temporal variables, and these variables differentiated children with ASD from the children with TD, but not from children with DD. Fine motor functional skills had no statistically significant associations with any of the motor coordination variables. These findings suggest that subtle problems in the timing of motor actions, possibly related to maturational delays in anticipatory feed-forward mechanisms, may underlie some motor deficits reported in children with ASD, but that these issues are not unique to this population. 
Further research is needed to investigate how children with ASD or DD compensate for motor control deficits to establish functional skills. PMID:23293589

  20. Crystal nucleation and metastable bcc phase in charged colloids: A molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Ji, Xinqiang; Sun, Zhiwei; Ouyang, Wenze; Xu, Shenghua

    2018-05-01

    The dynamic process of homogenous nucleation in charged colloids is investigated by brute-force molecular dynamics simulation. To check if the liquid-solid transition will pass through metastable bcc, simulations are performed at the state points that definitely lie in the phase region of thermodynamically stable fcc. The simulation results confirm that, in all of these cases, the preordered precursors, acting as the seeds of nucleation, always have predominant bcc symmetry consistent with Ostwald's step rule and the Alexander-McTague mechanism. However, the polymorph selection is not straightforward because the crystal structures formed are not often determined by the symmetry of intermediate precursors but have different characters under different state points. The region of the state point where bcc crystal structures of large enough size are formed during crystallization is narrow, which gives a reasonable explanation as to why the metastable bcc phase in charged colloidal suspensions is rarely detected in macroscopic experiments.

  1. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We present experimental results on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
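    As a point of reference for the classical baseline, the weighted k-clique problem can be solved exactly on small instances by exhaustive search over all k-subsets; it is this combinatorial scaling that becomes infeasible at real-world sizes. A minimal sketch, with a toy graph and weights chosen for illustration (not the paper's constellation data):

```python
from itertools import combinations

def max_weight_k_clique(vertices, weights, edges, k):
    """Exhaustively search all k-subsets for the heaviest clique.

    vertices: iterable of vertex labels
    weights:  dict mapping vertex -> weight
    edges:    set of frozenset({u, v}) adjacency pairs
    """
    best, best_w = None, float("-inf")
    for cand in combinations(vertices, k):
        # A clique requires every pair in the subset to be adjacent.
        if all(frozenset(p) in edges for p in combinations(cand, 2)):
            w = sum(weights[v] for v in cand)
            if w > best_w:
                best, best_w = set(cand), w
    return best, best_w

verts = ["a", "b", "c", "d"]
weights = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 10.0}
edges = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]}
best, best_w = max_weight_k_clique(verts, weights, edges, k=3)
```

    Here {a, b, c} is the only 3-clique, so it wins despite the heavy vertex d, which is not adjacent to b. The search visits C(n, k) subsets, which is exactly the growth the QA-plus-machine-learning stack is meant to sidestep.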

  2. Ab Initio Effective Rovibrational Hamiltonians for Non-Rigid Molecules via Curvilinear VMP2

    NASA Astrophysics Data System (ADS)

    Changala, Bryan; Baraban, Joshua H.

    2017-06-01

    Accurate predictions of spectroscopic constants for non-rigid molecules are particularly challenging for ab initio theory. For all but the smallest systems, ``brute force'' diagonalization of the full rovibrational Hamiltonian is computationally prohibitive, leaving us at the mercy of perturbative approaches. However, standard perturbative techniques, such as second order vibrational perturbation theory (VPT2), are based on the approximation that a molecule makes small amplitude vibrations about a well defined equilibrium structure. Such assumptions are physically inappropriate for non-rigid systems. In this talk, we will describe extensions to curvilinear vibrational Møller-Plesset perturbation theory (VMP2) that account for rotational and rovibrational effects in the molecular Hamiltonian. Through several examples, we will show that this approach provides predictions to nearly microwave accuracy of molecular constants including rotational and centrifugal distortion parameters, Coriolis coupling constants, and anharmonic vibrational and tunneling frequencies.

  3. Bandwidth variable transceivers with artificial neural network-aided provisioning and capacity improvement capabilities in meshed optical networks with cascaded ROADM filtering

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Zhuge, Qunbi; Qiu, Meng; Xiang, Meng; Zhang, Fangyuan; Wu, Baojian; Qiu, Kun; Plant, David V.

    2018-02-01

    We investigate the capacity improvement achieved by bandwidth variable transceivers (BVT) in meshed optical networks with cascaded ROADM filtering at fixed channel spacing, and then propose an artificial neural network (ANN)-aided provisioning scheme to select optimal symbol rate and modulation format for the BVTs in this scenario. Compared with a fixed symbol rate transceiver with standard QAMs, it is shown by both experiments and simulations that BVTs can increase the average capacity by more than 17%. The ANN-aided BVT provisioning method uses parameters monitored from a coherent receiver and then employs a trained ANN to transform these parameters into the desired configuration. It is verified by simulation that the BVT with the proposed provisioning method can approach the upper limit of the system capacity obtained by brute-force search under various degrees of flexibilities.

  4. Chemical reaction mechanisms in solution from brute force computational Arrhenius plots.

    PubMed

    Kazemi, Masoud; Åqvist, Johan

    2015-06-01

Decomposition of activation free energies of chemical reactions, into enthalpic and entropic components, can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies.
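    The decomposition step itself is elementary once the activation free energy is available at several temperatures: since ΔG‡ = ΔH‡ − TΔS‡, a linear fit of ΔG‡ against T yields −ΔS‡ as the slope and ΔH‡ as the intercept. A minimal sketch with synthetic numbers (the temperatures and "true" parameters below are illustrative assumptions, not the paper's data; in the paper the ΔG‡ values come from EVB free energy profiles):

```python
import numpy as np

# Hypothetical activation free energies (kcal/mol) at five temperatures,
# generated from assumed "true" activation parameters.
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])  # K
dH_true, dS_true = 15.0, -0.010                    # kcal/mol, kcal/(mol K)
dG = dH_true - T * dS_true                         # dG = dH - T*dS

# Linear fit of dG(T): slope = -dS, intercept = dH.
slope, intercept = np.polyfit(T, dG, 1)
dS_fit, dH_fit = -slope, intercept
```

    With real simulation data the fitted ΔG‡ values carry statistical error bars, so the quality of the extracted ΔH‡ and ΔS‡ hinges on the sampling at each temperature, which is the central computational burden the paper addresses.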

  5. Chemical reaction mechanisms in solution from brute force computational Arrhenius plots

    PubMed Central

    Kazemi, Masoud; Åqvist, Johan

    2015-01-01

Decomposition of activation free energies of chemical reactions, into enthalpic and entropic components, can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies. PMID:26028237

  6. Simulation of linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.

    1993-01-01

    A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
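    The structural shortcut described here can be illustrated for a structure given in modal form: each mode contributes an uncoupled second-order term to the transfer function, so the frequency response is a sum over modes rather than an inversion of the full system matrix at every frequency point. A small Python sketch of that idea (the modal frequencies, damping ratios, and gains below are illustrative assumptions; the original routines were Pro-Matlab/FORTRAN):

```python
import numpy as np

# Assumed modal data for a 3-mode structure.
wn = np.array([2.0, 5.0, 11.0])      # modal frequencies (rad/s)
zeta = np.array([0.01, 0.02, 0.02])  # modal damping ratios
b = np.array([1.0, 0.5, 0.2])        # modal input gains
c = np.array([1.0, 0.3, 0.1])        # modal output gains

def freq_response(w):
    # Sum of uncoupled second-order terms: H(s) = sum_i c_i b_i /
    # (s^2 + 2 zeta_i wn_i s + wn_i^2), evaluated at s = j*w.
    # This costs O(modes) per frequency, versus solving the full
    # (s*I - A) system at every point in the "brute force" approach.
    s = 1j * w
    return np.sum(c * b / (s**2 + 2.0 * zeta * wn * s + wn**2))

w_grid = np.linspace(0.1, 20.0, 500)
H = np.array([freq_response(w) for w in w_grid])
```

    The same answer can be checked against the full state-space evaluation C(sI − A)⁻¹B for the block-diagonal modal realization; the per-mode sum is simply that inverse written out in closed form.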

  7. Unsteady flow sensing and optimal sensor placement using machine learning

    NASA Astrophysics Data System (ADS)

    Semaan, Richard

    2016-11-01

Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be utilized beyond its typical usage in fluid mechanics, estimating the flow state, to also determine the optimal sensor placement. The results are compared against the current de facto standard of maximum modal amplitude location and against a brute force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of flow and to determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.

  8. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, computational algorithms that reconstruct full haplotype patterns from a small amount of available data (the tag SNP selection problem) are convenient and attractive. This problem is proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
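    The exact brute-force baseline used for comparison can be sketched under a simplified problem statement: take the tag SNPs to be a smallest subset of SNP columns that keeps every haplotype row distinguishable from every other, and search subsets of increasing size exhaustively (exponential in the number of SNP sites, which is why the genetic algorithm is attractive for large instances). The 0/1 haplotype matrix below is an illustrative assumption, not data from the paper:

```python
from itertools import combinations

def min_tag_snps(haplotypes):
    """Exhaustive search: smallest set of SNP columns whose values
    still distinguish every haplotype (row) from every other."""
    n_snps = len(haplotypes[0])
    for k in range(1, n_snps + 1):
        for cols in combinations(range(n_snps), k):
            # Project each haplotype onto the candidate columns; the
            # subset works iff all projections remain distinct.
            projected = {tuple(h[c] for c in cols) for h in haplotypes}
            if len(projected) == len(haplotypes):
                return list(cols)
    return None  # duplicate haplotypes can never be distinguished

haps = [
    (0, 0, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 1),
    (1, 1, 0, 0),
]
tags = min_tag_snps(haps)
```

    For these four haplotypes, no single column separates all rows, but columns 0 and 1 together do, so two tag SNPs suffice.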

  9. Decision and function problems based on boson sampling

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Georgios M.; Brougham, Thomas

    2016-07-01

Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.

  10. An investigation of school violence through Turkish children's drawings.

    PubMed

    Yurtal, Filiz; Artut, Kazim

    2010-01-01

    This study investigates Turkish children's perception of violence in school as represented through drawings and narratives. In all, 66 students (12 to 13 years old) from the middle socioeconomic class participated. To elicit children's perception of violence, they were asked to draw a picture of a violent incident they had heard, experienced, or witnessed. Children mostly drew pictures of violent events among children (33 pictures). Also, there were pictures of violent incidents perpetrated by teachers and directors against children. It was observed that violence influenced children. Violence was mostly depicted in school gardens (38 pictures), but there were violent incidents everywhere, such as in classrooms, corridors, and school stores as well. Moreover, it was found that brute force was the most referred way of violence in the children's depictions (38 pictures). In conclusion, children clearly indicated that there was violence in schools and they were affected by it.

  11. Advances in atmospheric light scattering theory and remote-sensing techniques

    NASA Astrophysics Data System (ADS)

    Videen, Gorden; Sun, Wenbo; Gong, Wei

    2017-02-01

    This issue focuses especially on characterizing particles in the Earth-atmosphere system. The significant role of aerosol particles in this system was recognized in the mid-1970s [1]. Since that time, our appreciation for the role they play has only increased. It has been and continues to be one of the greatest unknown factors in the Earth-atmosphere system as evidenced by the most recent Intergovernmental Panel on Climate Change (IPCC) assessments [2]. With increased computational capabilities, in terms of both advanced algorithms and in brute-force computational power, more researchers have the tools available to address different aspects of the role of aerosols in the atmosphere. In this issue, we focus on recent advances in this topical area, especially the role of light scattering and remote sensing. This issue follows on the heels of four previous topical issues on this subject matter that have graced the pages of this journal [3-6].

  12. Competitive code-based fast palmprint identification using a set of cover trees

    NASA Astrophysics Data System (ADS)

    Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan

    2009-06-01

    A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor among all the templates in a database. When applied in a large-scale identification system, it is often necessary to speed up the nearest-neighbor search. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose using a set of cover trees to facilitate fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute-force searching.
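
The speed-up reported above is measured against a linear-scan baseline. A minimal sketch of that baseline, using Hamming distance on short bit tuples as a stand-in for the angular distance of competitive code (the data and distance here are illustrative, not the PolyU palmprint features):

```python
# Baseline brute-force nearest-neighbor search that cover trees accelerate.
# Hamming distance on bit tuples is a stand-in for the angular distance of
# competitive code; the templates are hypothetical toy data.

def hamming(a, b):
    """Count differing positions between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def brute_force_nn(query, templates):
    """Scan every template; O(N) distance evaluations per query."""
    best_idx, best_d = -1, float("inf")
    for i, t in enumerate(templates):
        d = hamming(query, t)
        if d < best_d:
            best_idx, best_d = i, d
    return best_idx, best_d

templates = [(0, 0, 1, 1), (1, 1, 1, 0), (0, 1, 0, 1)]
query = (0, 1, 1, 1)
idx, dist = brute_force_nn(query, templates)  # -> (0, 1)
```

A metric-tree index avoids most of these distance evaluations by pruning whole subtrees via the triangle inequality, which is why the decomposition of the angular distance into metrics matters.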

  13. Aspects of warped AdS3/CFT2 correspondence

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang

    2013-04-01

    In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole, and the spacelike and null warped black holes, in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with c_R = c_L = 24/(Gmβ²√(2(21 − 4β²))).

  14. Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.

    2017-03-01

    We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations for two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.

  15. Remote-sensing image encryption in hybrid domains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong

    2012-04-01

    Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical, and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
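
The spatial-domain diffusion step can be illustrated with a one-dimensional logistic map standing in for the paper's 2D Logistic map (the key parameters x0 and r below are hypothetical, and the DWT/PWLCM permutation stage is omitted):

```python
def logistic_keystream(x0, r, n):
    """Generate n bytes from a 1D logistic map x -> r*x*(1-x).
    This is a stand-in for the paper's 2D Logistic map; x0 and r
    are hypothetical key parameters, not values from the paper."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, x0=0.3141, r=3.9999):
    """XOR each pixel with the chaotic keystream. XOR is its own
    inverse, so applying the same key twice restores the image."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 45, 89, 255, 0]
cipher = xor_diffuse(plain)
assert xor_diffuse(cipher) == plain  # round-trip with the same key
```

The large key space claimed above comes from the sensitivity of the map to x0 and r: a tiny change in either produces an entirely different keystream.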

  16. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to replace the current brute-force parameter search with a heuristic. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, captures few features of the target; this observation motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant to rotation and translation and is capable of describing edge, topology, and shape information. Simulation results show that the Gaussian mass provides a promising heuristic boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which the Gaussian mass might help are also proposed at the end of the paper.

  17. A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao

    2018-04-01

    A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, 384-bit keystream hash values are obtained by applying SHA-3 to the plaintext. The sensitivity of the SHA-3 algorithm and the chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and the corresponding chaotic value to encrypt each pixel. Finally, the structure of alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
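
The one-time-pad keying idea can be sketched in a few lines: the 384-bit SHA-3 digest of the plaintext acts as a per-image key, so any change to the plaintext changes the entire key. This sketch omits the hyper-chaotic Lorenz and Chen systems and the permutation-diffusion stages described above:

```python
import hashlib

# Sketch of per-image keying with SHA3-384. In the paper the digest
# seeds two hyper-chaotic systems; here it is used directly as a key
# for illustration only.
plain = bytes([10, 20, 30, 40, 50])          # hypothetical pixel bytes
key = hashlib.sha3_384(plain).digest()       # 48-byte (384-bit) key
cipher = bytes(p ^ k for p, k in zip(plain, key))
recovered = bytes(c ^ k for c, k in zip(cipher, key))
assert recovered == plain
```

Because the key is derived from the plaintext itself, two different images never share a keystream, which is what gives the scheme its one-time-pad character.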

  18. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
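
The exhaustive brute-force search can be sketched as scoring every candidate subset against a sensitivity matrix (the sensor names and sensitivity values below are hypothetical, not the paper's PEM fuel cell model):

```python
from itertools import combinations

# Toy sensitivity matrix: rows = sensors, columns = health parameters.
# All values are hypothetical placeholders.
SENSITIVITY = {
    "V_cell":  (0.9, 0.1, 0.0),
    "T_stack": (0.2, 0.8, 0.1),
    "P_H2":    (0.0, 0.3, 0.7),
    "I_load":  (0.5, 0.4, 0.2),
}

def coverage(sensor_set):
    """Score a subset by the best sensitivity it offers per parameter."""
    return sum(max(SENSITIVITY[s][j] for s in sensor_set) for j in range(3))

def brute_force_select(k):
    """Exhaustively score every k-sensor subset and keep the best."""
    return max(combinations(SENSITIVITY, k), key=coverage)

best = brute_force_select(2)  # -> ('V_cell', 'P_H2') for this toy matrix
```

Exhaustive search is only feasible because the sensor count is small; its cost grows combinatorially, which is why the paper also considers the cheaper largest-gap heuristic.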

  19. Neural-network quantum state tomography

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe

    2018-05-01

    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].

  20. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
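
The brute-force enumeration baseline can be sketched as a grid search over candidate (p, q) orders; the scoring function below is a toy stand-in for the Kalman-filter-based AIC described above, not a real ARMA fit:

```python
from itertools import product

def mock_aic(p, q):
    """Stand-in for the AIC of an ARMA(p, q) fit. A real implementation
    would maximize the Kalman-filter likelihood for each order; this
    hypothetical surface is constructed to be minimized at (2, 1)."""
    return (p - 2) ** 2 + (q - 1) ** 2 + 0.1 * (p + q)

def brute_force_arma(max_p, max_q):
    """Enumerate every candidate order and return the AIC-minimizing one.
    Cost grows as (max_p+1)*(max_q+1) full model fits."""
    grid = product(range(max_p + 1), range(max_q + 1))
    return min(grid, key=lambda pq: mock_aic(*pq))

best_order = brute_force_arma(4, 4)  # -> (2, 1)
```

The MINLP solvers in the article avoid this full enumeration by treating (p, q) as integer decision variables, which matters once each candidate fit is expensive.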

  1. Optical image encryption system using nonlinear approach based on biometric authentication

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Sinha, Aloka

    2017-07-01

    A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside a face-biometric-based phase mask key, using the base-changing rule of logarithms, for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against brute-force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance, with the desired advantages in comparison to a linear cryptosystem.

  2. A linear-RBF multikernel SVM to classify big text corpora.

    PubMed

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for the classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data that provides automatic parameterization with low computational cost and improves results over SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than with several other SVM classifiers.
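
The multikernel idea rests on the fact that a weighted sum of valid (positive semi-definite) kernels is itself a valid kernel, so an SVM can consume the combination directly. A minimal sketch combining a linear and an RBF kernel (the weights and gamma below are hypothetical, not the paper's per-slice construction):

```python
import math

def linear_kernel(x, y):
    """Plain dot product."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def multikernel(x, y, weights=(0.6, 0.4), gamma=0.5):
    """Convex combination of two kernels; still a valid kernel.
    The weights here are illustrative placeholders."""
    return weights[0] * linear_kernel(x, y) + weights[1] * rbf_kernel(x, y, gamma)

k = multikernel((1.0, 0.0), (1.0, 0.0))  # -> 1.0 for identical unit vectors
```

In the paper's construction each term slice would get its own kernel term; the combination above shows only the algebraic mechanism.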

  3. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  4. Toward Determining ATPase Mechanism in ABC Transporters: Development of the Reaction Path–Force Matching QM/MM Method

    PubMed Central

    Zhou, Y.; Ojeda-May, P.; Nagaraju, M.; Pu, J.

    2016-01-01

    Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on QM/MM potential. A case study shows that brute force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path–force matching (RP–FM) has been developed. In RP–FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP–FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639

  5. Dynamics of neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
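
The exponential growth of the key space can be illustrated under the simplifying assumption that the N weights of the tree parity machine are uniform over {-L, ..., +L}; the per-weight entropy is then ln(2L+1), so the effective key count is (2L+1)^N (the network size below is a hypothetical example, not a value from the paper):

```python
import math

def effective_keys(n_weights, L):
    """Effective key count exp(N * S) for N weights assumed uniform over
    {-L, ..., +L}, where the per-weight entropy is S = ln(2L+1).
    Equals (2L+1)**N; grows exponentially with the system size."""
    entropy_per_weight = math.log(2 * L + 1)
    return math.exp(n_weights * entropy_per_weight)

# e.g. a hypothetical machine with 300 weights and L = 3: 7**300 keys,
# far beyond any brute-force enumeration.
n_keys = effective_keys(300, 3)
```

The actual weight distribution after synchronization is not exactly uniform, so this is an upper-bound sketch of the argument, not the paper's computed entropy.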

  6. Dynamics of neural cryptography.

    PubMed

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  7. Dynamics of neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-15

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  8. Transport and imaging of brute-force 13C hyperpolarization

    NASA Astrophysics Data System (ADS)

    Hirsch, Matthew L.; Smith, Bryce A.; Mattingly, Mark; Goloshevsky, Artem G.; Rosay, Melanie; Kempf, James G.

    2015-12-01

    We demonstrate transport of hyperpolarized frozen 1-13C pyruvic acid from its site of production to a nearby facility, where a time series of 13C images was acquired from the aqueous dissolution product. Transportability is tied to the hyperpolarization (HP) method we employ, which omits radical electron species used in other approaches that would otherwise relax away the HP before reaching the imaging center. In particular, we attained 13C HP by 'brute-force', i.e., using only low temperature and high-field (e.g., T < ∼2 K and B ∼ 14 T) to pre-polarize protons to a large Boltzmann value (∼0.4% 1H polarization). After polarizing the neat, frozen sample, ejection quickly (<1 s) passed it through a low field (B < 100 G) to establish the 1H pre-polarization spin temperature on 13C via the process known as low-field thermal mixing (yielding ∼0.1% 13C polarization). By avoiding polarization agents (a.k.a. relaxation agents) that are needed to hyperpolarize by the competing method of dissolution dynamic nuclear polarization (d-DNP), the 13C relaxation time was sufficient to transport the sample for ∼10 min before finally dissolving in warm water and obtaining a 13C image of the hyperpolarized, dilute, aqueous product (∼0.01% 13C polarization, a >100-fold gain over thermal signals in the 1 T scanner). An annealing step, prior to polarizing the sample, was also key for increasing T1 ∼ 30-fold during transport. In that time, HP was maintained using only modest cryogenics and field (T ∼ 60 K and B = 1.3 T), for T1(13C) near 5 min. Much greater time and distance (with much smaller losses) may be covered using more-complete annealing and only slight improvements on transport conditions (e.g., yielding T1 ∼ 5 h at 30 K, 2 T), whereas even intercity transfer is possible (T1 > 20 h) at reasonable conditions of 6 K and 2 T. 
Finally, it is possible to increase the overall enhancement to near d-DNP levels (i.e., 10²-fold more) by polarizing below 100 mK, where nanoparticle agents are known to hasten T1 buildup by 100-fold, and to yield very little impact on T1 losses at temperatures relevant to transport.
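
The transport-time arithmetic above can be checked with a simple exponential-decay model, using the abstract's own numbers (~10 min transport; T1 near 5 min at 60 K/1.3 T versus ~5 h after more-complete annealing at 30 K/2 T):

```python
import math

def surviving_fraction(t_min, T1_min):
    """Fraction of hyperpolarization remaining after time t, assuming
    simple exponential T1 relaxation: exp(-t / T1)."""
    return math.exp(-t_min / T1_min)

# 10-minute transport with the two T1 values quoted in the abstract.
short_T1 = surviving_fraction(10, 5)       # T1 ~ 5 min: most HP is lost
long_T1 = surviving_fraction(10, 5 * 60)   # T1 ~ 5 h: nearly lossless
```

A monoexponential decay is an idealization (real relaxation during transport varies with field and temperature along the route), but it shows why the ~30-fold T1 gain from annealing dominates the transport budget.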

  9. Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.

    2011-12-01

    Organizing online data presents a number of challenges, among which is keeping their inventories current. It is preferable to have these descriptions built and maintained by automated systems because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While building inventories for online data can be done using a brute-force algorithm to read information from each granule in the data set, that ignores some important aspects of these data sets, and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct-reads from selected granules and analysis of the URLs results in improvements of several to many orders of magnitude, depending on the data set organization. 
We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.
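
The URL-pattern grouping described above can be sketched by masking run-varying numeric fields (dates, granule indices) so that granules of one data set collapse to a common pattern (the URLs below are hypothetical examples, not actual server paths):

```python
import re

def url_pattern(url):
    """Mask every digit run so date/index variations collapse
    to one candidate data-set pattern."""
    return re.sub(r"\d+", "#", url)

def group_urls(urls):
    """Bucket granule URLs by their masked pattern."""
    groups = {}
    for u in urls:
        groups.setdefault(url_pattern(u), []).append(u)
    return groups

urls = [
    "http://example.org/sst/2013/day001.nc",
    "http://example.org/sst/2013/day002.nc",
    "http://example.org/chl/2013/day001.nc",
]
groups = group_urls(urls)  # two patterns: one per data set
```

With groups in hand, a crawler need only read metadata from one representative granule per pattern, which is the source of the orders-of-magnitude savings over reading every granule.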

  10. Using grasping tasks to evaluate hand force coordination in children with hemiplegic cerebral palsy.

    PubMed

    Mackenzie, Samuel J; Getchell, Nancy; Modlesky, Christopher M; Miller, Freeman; Jaric, Slobodan

    2009-08-01

    OBJECTIVE: To assess force coordination in children with hemiplegic cerebral palsy (CP) using a device that allows for testing both unimanual and bimanual manipulation tasks performed under static and dynamic conditions. DESIGN: Nonequivalent groups design. SETTING: University research laboratory for motor control. PARTICIPANTS: Six children with hemiplegic CP (age, mean +/- SD, 11.6+/-1.8 y) and 6 typically developing controls (11.6+/-1.6 y). INTERVENTIONS: Not applicable. METHODS: Children performed simple lifting and force-matching static ramp tasks by way of both unimanual and bimanual pulling using a device that measures grip force (force acting perpendicularly at the digits-device contact area) and load force (tangential force). MAIN OUTCOME MEASURES: Grip/load force ratios (grip force scaling) and correlation coefficients (force coupling). RESULTS: CP subjects showed significantly higher grip/load force ratios (P<.05) and slightly lower correlation coefficients than the control group, with more pronounced differences for most tasks when using their involved hand. For subjects with CP, switching from unimanual to bimanual conditions did not bring changes in scaling or coupling for the involved hand (P>.05). CONCLUSIONS: Compared with healthy children, impaired hand function in the hemiplegic CP pediatric population could be reflected in excessive grip force that is also decoupled from ongoing changes in load force. Therefore, the bimanual grip-load device used in this study could provide a sensitive measure of grip force coordination in CP, although nonmotor deficits should be taken into account when asking children to perform more complex tasks.

  11. Stability of Hand Force Production: II. Ascending and Descending Synergies.

    PubMed

    Reschechtko, Sasha; Latash, Mark L

    2018-06-06

    We combined the theory of neural control of movement with referent coordinates and the uncontrolled manifold hypothesis to investigate multi-finger coordination. We tested hypotheses related to stabilization of performance by co-varying control variables, translated into apparent stiffness and referent coordinate, at different levels of an assumed hierarchy of control. Subjects produced an accurate combination of total force and total moment of force with the four fingers under visual feedback on both variables and after feedback was partly or completely removed. The "inverse piano" device was used to estimate control variables. We observed strong synergies in the space of hypothetical control variables which stabilized total force and moment of force, as well as weaker synergies stabilizing individual finger forces; while the former were attenuated by alteration of visual feedback, the latter were much less affected. In addition, we investigated the organization of "ascending synergies" stabilizing task-level control variables by co-varied adjustments of finger-level control variables. We observed inter-trial co-variation of individual fingers' referent coordinates stabilizing hand-level referent coordinate, but observed no such co-variation for apparent stiffness. The observations suggest the existence of both descending and ascending synergies in a hierarchical control system. They confirm a trade-off between synergies at different levels of control and corroborate the hypothesis on specialization of different fingers for the control of force and moment. The results provide strong evidence for the importance of central back-coupling loops in ensuring stability of action.

  12. The Radiative Forcing Model Intercomparison Project (RFMIP): Assessment and characterization of forcing to enable feedback studies

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Stevens, B. B.; Forster, P.; Collins, W.; Ramaswamy, V.

    2014-12-01

    An enormous amount of attention has been paid to the diversity of responses in the CMIP and other multi-model ensembles. This diversity is normally interpreted as a distribution in climate sensitivity driven by some distribution of feedback mechanisms. Identification of these feedbacks relies on precise identification of the forcing to which each model is subject, including distinguishing true error from model diversity. The Radiative Forcing Model Intercomparison Project (RFMIP) aims to disentangle the role of forcing from model sensitivity as determinants of varying climate model response by carefully characterizing the radiative forcing to which such models are subject and by coordinating experiments in which it is specified. RFMIP consists of four activities: 1) an assessment of accuracy in flux and forcing calculations for greenhouse gases under past, present, and future climates, using off-line radiative transfer calculations in specified atmospheres with climate model parameterizations and reference models; 2) characterization and assessment of model-specific historical forcing by anthropogenic aerosols, based on coordinated diagnostic output from climate models and off-line radiative transfer calculations with reference models; 3) characterization of model-specific effective radiative forcing, including contributions of model climatology and rapid adjustments, using coordinated climate model integrations and off-line radiative transfer calculations with a single fast model; and 4) assessment of climate model response to precisely characterized radiative forcing over the historical record, including efforts to infer true historical forcing from patterns of response, by direct specification of non-greenhouse-gas forcing in a series of coordinated climate model integrations. This talk discusses the rationale for RFMIP, provides an overview of the four activities, and presents preliminary motivating results.

  13. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. 
We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well. The robustness of the algorithm with respect to the only free parameter it involves is ascertained. The method is then applied to perform tensile tests on gold nanopillars at strain rates as low as 100/s, bringing out the perils of high strain-rate molecular dynamics calculations. We also calculate the temperature and stress dependence of the activation free energy for surface nucleation of dislocations in pristine gold nanopillars under realistic loads. While maintaining fully atomistic resolution, we reach the fraction-of-a-second time scale regime. It is found that the activation free energy depends significantly and nonlinearly on the driving force (stress or strain) and temperature, leading to very high activation entropies for surface dislocation nucleation.

  14. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.
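
The key idea in this abstract, evaluating a Poisson-type integral as a convolution so an FFT brings the cost down to nearly linear, can be sketched in one dimension. This is an illustrative toy, not the authors' second-order polar-coordinate scheme: the uniform grid, the 1/r kernel with the self-term dropped, and the padding length are assumptions made here for brevity.

```python
import numpy as np

def direct_potential(rho, dx):
    # O(N^2) direct sum over all pairs of cells (self term excluded)
    N = len(rho)
    phi = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if i != j:
                phi[i] -= rho[j] / (abs(i - j) * dx)
    return phi

def fft_potential(rho, dx):
    # same sum written as a linear convolution with the 1/r kernel,
    # evaluated with zero-padded FFTs in O(N log N) operations
    N = len(rho)
    k = np.arange(-(N - 1), N)
    kernel = np.zeros(2 * N - 1)
    kernel[k != 0] = -1.0 / (np.abs(k[k != 0]) * dx)
    L = 3 * N - 2                      # pad so circular convolution == linear sum
    conv = np.fft.irfft(np.fft.rfft(rho, L) * np.fft.rfft(kernel, L), L)
    return conv[N - 1:2 * N - 1]

rho = np.linspace(1.0, 2.0, 16)        # toy density on a uniform grid
phi_fft = fft_potential(rho, 0.5)
phi_dir = direct_potential(rho, 0.5)
```

Because the padding makes the circular FFT convolution equal to the linear sum, both routines agree to machine precision while the FFT version scales as O(N log N).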

  15. Force Control and Its Relation to Timing. Cognitive Science Program, Technical Report No. 86-4.

    ERIC Educational Resources Information Center

    Keele, Steven W.; And Others

    Timing and speed are suggested to be the two general factors of coordination that differentiate people across a variety of motor movements. This study provides evidence for a third general factor of coordination, that of force control. Subjects that exhibit low variability in reproducing a target force with one effector, such as the finger, show…

  16. Linking initial microstructure and local response during quasistatic granular compaction

    DOE PAGES

    Hurley, R. C.; Lind, J.; Pagan, D. C.; ...

    2017-07-24

In this study, we performed experiments combining three-dimensional x-ray diffraction and x-ray computed tomography to explore the relationship between microstructure and local force and strain during quasistatic granular compaction. We found that initial void space around a grain and contact coordination number before compaction can be used to predict regions vulnerable to above-average local force and strain at later stages of compaction. We also found correlations between void space around a grain and coordination number, and between grain stress and maximum interparticle force, at all stages of compaction. Finally, we observed grains that fracture to have an above-average initial local void space and a below-average initial coordination number. In conclusion, our findings provide (1) a detailed description of microstructure evolution during quasistatic granular compaction, (2) an approach for identifying regions vulnerable to large values of strain and interparticle force, and (3) methods for identifying regions of a material with large interparticle forces and coordination numbers from measurements of grain stress and local porosity.

  17. BRIEF COMMUNICATION: A note on the Coulomb collision operator in curvilinear coordinates

    NASA Astrophysics Data System (ADS)

    Goncharov, P. R.

    2010-10-01

    The dynamic friction force, diffusion tensor, flux density in velocity space and Coulomb collision term are expressed in curvilinear coordinates via Trubnikov potential functions corresponding to each species of a background plasma. For comparison, explicit formulae are given for the dynamic friction force, diffusion tensor and collisional flux density in velocity space in curvilinear coordinates via Rosenbluth potential functions summed over all species of the background plasma.

  18. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45×10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT, requiring only user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
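
Why signature-based matching avoids the brute-force permutation search can be sketched with synthetic points: inter-point distances are invariant under rigid motion, so a sorted-distance "signature" identifies each tip's correspondent directly. Everything below (point layout, transform, signature definition) is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(6, 3))          # catheter tips in the CT frame

# unknown rigid motion and relabeling into the EMT frame
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
perm = rng.permutation(6)
emt = pts[perm] @ R.T + np.array([5.0, -3.0, 12.0])

def signature(points, i):
    # sorted distances from point i to the rest: invariant under rigid motion
    d = np.linalg.norm(points - points[i], axis=1)
    return np.sort(d)[1:]                            # drop the zero self-distance

# match each EMT tip to the CT tip with the closest signature;
# trying all 6! = 720 rigid registrations is never needed
match = [int(np.argmin([np.abs(signature(emt, i) - signature(pts, j)).max()
                        for j in range(6)]))
         for i in range(6)]
```

With generic (non-degenerate) geometry the recovered correspondence reproduces the hidden relabeling exactly.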

  19. Social forces for team coordination in ball possession game

    NASA Astrophysics Data System (ADS)

    Yokoyama, Keiko; Shima, Hiroyuki; Fujii, Keisuke; Tabuchi, Noriyuki; Yamamoto, Yuji

    2018-02-01

    Team coordination is a basic human behavioral trait observed in many real-life communities. To promote teamwork, it is important to cultivate social skills that elicit team coordination. In the present work, we consider which social skills are indispensable for individuals performing a ball possession game in soccer. We develop a simple social force model that describes the synchronized motion of offensive players. Comparing the simulation results with experimental observations, we uncovered that the cooperative social force, a measure of perception skill, has the most important role in reproducing the harmonized collective motion of experienced players in the task. We further developed an experimental tool that facilitates real players' perceptions of interpersonal distance, revealing that the tool improves novice players' motions as if the cooperative social force were imposed.
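
A minimal sketch of what a "cooperative social force" term can look like, under assumptions made here (a spring-like attraction toward a preferred interpersonal distance with velocity damping); the actual model, gains, and units in the paper may differ.

```python
import numpy as np

# assumed constants: gain k, preferred spacing d0, velocity damping, time step
k, d0, damp, dt = 2.0, 5.0, 1.0, 0.01
pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # two players, initially too close
vel = np.zeros_like(pos)

def spacing(p):
    return float(np.linalg.norm(p[0] - p[1]))

for _ in range(5000):
    r = pos[0] - pos[1]
    d = np.linalg.norm(r)
    f = -k * (d - d0) * (r / d)            # "cooperative" force on player 0
    acc = np.array([f, -f]) - damp * vel   # equal and opposite, plus damping
    vel += dt * acc                        # semi-implicit Euler step
    pos += dt * vel
```

The pair relaxes to the preferred interpersonal distance d0, the kind of synchronized spacing behavior the model is used to reproduce.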

  20. Asymmetric interlimb transfer of concurrent adaptation to opposing dynamic forces

    PubMed Central

    Miall, R. C.; Woolley, D. G.

    2007-01-01

    Interlimb transfer of a novel dynamic force has been well documented. It has also been shown that unimanual adaptation to opposing novel environments is possible if they are associated with different workspaces. The main aim of this study was to test if adaptation to opposing velocity dependent viscous forces with one arm could improve the initial performance of the other arm. The study also examined whether this interlimb transfer occurred across an extrinsic, spatial, coordinative system or an intrinsic, joint based, coordinative system. Subjects initially adapted to opposing viscous forces separated by target location. Our measure of performance was the correlation between the speed profiles of each movement within a force condition and an ‘average’ trajectory within null force conditions. Adaptation to the opposing forces was seen during initial acquisition with a significantly improved coefficient in epoch eight compared to epoch one. We then tested interlimb transfer from the dominant to non-dominant arm (D → ND) and vice-versa (ND → D) across either an extrinsic or intrinsic coordinative system. Interlimb transfer was only seen from the dominant to the non-dominant limb across an intrinsic coordinative system. These results support previous studies involving adaptation to a single dynamic force but also indicate that interlimb transfer of multiple opposing states is possible. This suggests that the information available at the level of representation allowing interlimb transfer can be more intricate than a general movement goal or a single perceived directional error. PMID:17703286

  1. The Effects of Forced Coordination on Organizational Interrelationships and Services to Clients.

    ERIC Educational Resources Information Center

    Mahoney, Kevin J.

    The mechanisms and sub-processes of forced coordination and their effects on interorganizational relationships and on services delivered to elderly and disabled clients in a rural community were examined. Participant observations gathered over 18 months and buttressed with information available from historical and case records from the…

  2. A chaotic cryptosystem for images based on Henon and Arnold cat map.

    PubMed

    Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
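
The pixel-permutation stage can be sketched with a standard generalized Arnold cat map (the paper's exact map parameters and key schedule are not reproduced here; the 8×8 test image is an illustration only):

```python
import numpy as np

def arnold_cat(img):
    # one iteration of a generalized Arnold cat map pixel permutation:
    # (x, y) -> (x + y, x + 2y) mod N; determinant 1 mod N, hence a bijection
    N = img.shape[0]
    out = np.empty_like(img)
    for x in range(N):
        for y in range(N):
            out[(x + y) % N, (x + 2 * y) % N] = img[x, y]
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
scrambled = arnold_cat(img)

# the map is periodic: iterating it eventually restores the original image,
# which is why real schemes combine it with a diffusion stage (here, Henon)
period, cur = None, img
for n in range(1, 200):
    cur = arnold_cat(cur)
    if np.array_equal(cur, img):
        period = n
        break
```

The scramble moves every pixel without changing the pixel-value histogram, which is exactly why a permutation alone is not secure and must be paired with value diffusion.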

  3. Method to measure efficiently rare fluctuations of turbulence intensity for turbulent-laminar transitions in pipe flows

    NASA Astrophysics Data System (ADS)

    Nemoto, Takahiro; Alexakis, Alexandros

    2018-02-01

The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We here propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk part of an accumulative probability function) and (ii) the measurement of rare fluctuations (the tail part of the probability function) by employing dynamics where a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for high Reynolds numbers up to Re = 2500, where getting enough statistics by brute-force calculations is difficult. The method uses a simple procedure of changing the Reynolds number that can be applied even to experiments.

  4. Diagnosing the decline in pharmaceutical R&D efficiency.

    PubMed

    Scannell, Jack W; Blanckley, Alex; Boldon, Helen; Warrington, Brian

    2012-03-01

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.

  5. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  6. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
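
The UQ step described here, a brute-force Monte Carlo estimate of the 95% data range about the median, is simple to sketch. The "simulation" below is an invented linear stand-in (its form and coefficients are illustrative assumptions, not PAGOSA output):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(c1, b1):
    # hypothetical stand-in for the jet-tip-velocity output of a hydrocode run;
    # the linear form and all coefficients are invented for illustration
    return 7.8 + 0.05 * c1 - 0.02 * b1

# brute-force MC: draw uncertain inputs, run the "simulation", collect outputs
c1 = rng.normal(5.0, 0.5, 10_000)
b1 = rng.normal(2.0, 0.3, 10_000)
out = simulate(c1, b1)

median = np.median(out)
lo, hi = np.percentile(out, [2.5, 97.5])   # 95% data range about the median
```

The brute-force part is simply that every sample requires a full simulation, which is why the DOE step to shrink the parameter space matters in practice.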

  7. Artificial consciousness and the consciousness-attention dissociation.

    PubMed

    Haladjian, Harry Haroutioun; Montemayor, Carlos

    2016-10-01

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems-these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2) while full hierarchy searches can be as expensive as O(N^3), making analysis highly inefficient for multiple data sets with N ≳ 10^4. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
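
The brute-force baseline the abstract compares against can be sketched directly: test every pair's two-body energy, an O(N^2) scan. The three-particle setup and code units below are assumptions for illustration, not the companion code itself.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def bound_pairs(pos, vel, mass):
    # brute-force O(N^2) scan: a pair is mutually bound if its two-body
    # energy E = (1/2) mu |dv|^2 - G m1 m2 / |dr| is negative
    pairs = []
    N = len(mass)
    for i in range(N):
        for j in range(i + 1, N):
            dr = np.linalg.norm(pos[i] - pos[j])
            dv = np.linalg.norm(vel[i] - vel[j])
            mu = mass[i] * mass[j] / (mass[i] + mass[j])
            if 0.5 * mu * dv**2 - G * mass[i] * mass[j] / dr < 0.0:
                pairs.append((i, j))
    return pairs

# a tight binary (particles 0 and 1) plus a fast, distant interloper (2)
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0], [10.0, 0.0, 0.0]])
mass = np.ones(3)
```

Tree-based neighbor search is what lets a code like companion avoid this all-pairs loop and reach O(N log N).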

  9. Verification Test of Automated Robotic Assembly of Space Truss Structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1995-01-01

A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level and continued development at an enhanced level is warranted.

  10. Development and verification testing of automation and robotics for assembly of space structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1993-01-01

A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.

  11. Three recipes for improving the image quality with optical long-baseline interferometers: BFMC, LFF, and DPSC

    NASA Astrophysics Data System (ADS)

    Millour, Florentin A.; Vannier, Martin; Meilland, Anthony

    2012-07-01

We present here three recipes for getting better images with optical interferometers. Two of them, Low-Frequencies Filling and Brute-Force Monte Carlo, were used in our participation in the Interferometry Beauty Contest this year and can be applied to classical imaging using V2 and closure phases. These two additions to image reconstruction provide a way of obtaining more reliable images. The last recipe is similar in principle to the self-calibration technique used in radio-interferometry. We also call it self-calibration, but it uses the wavelength-differential phase as a proxy of the object phase to build up a full-featured complex visibility set of the observed object. This technique needs a first image-reconstruction run with available software, using closure phases and squared visibilities only. We used it for two scientific papers with great success. We discuss here the pros and cons of this imaging technique.

  12. Load Balancing Strategies for Multiphase Flows on Structured Grids

    NASA Astrophysics Data System (ADS)

    Olshefski, Kristopher; Owkes, Mark

    2017-11-01

    The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are only designed for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies are tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.

  13. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.

  14. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
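
The brute-force exploration described here is, at its core, an exhaustive sweep over a parameter grid with the results stored for later queries. A minimal sketch, with an invented stand-in for the neuron simulation (the grids, the rate function, and the "functional" criterion are all illustrative assumptions):

```python
import itertools

# hypothetical toy model: three "conductance" parameters on coarse grids
g_na_vals = [0, 50, 100, 150]
g_k_vals = [0, 20, 40, 60]
g_leak_vals = [1, 2, 3]

def firing_rate(g_na, g_k, g_leak):
    # invented stand-in for a full neuron simulation returning a rate (Hz)
    return max(0.0, 0.2 * g_na - 0.3 * g_k - 5.0 * g_leak)

# brute-force sweep: simulate every parameter combination, store everything
database = [{"g_na": a, "g_k": b, "g_leak": c, "rate": firing_rate(a, b, c)}
            for a, b, c in itertools.product(g_na_vals, g_k_vals, g_leak_vals)]

# then query the database for "functional" models, e.g. rates of 5-15 Hz
functional = [m for m in database if 5.0 <= m["rate"] <= 15.0]
```

The size of such a database grows as the product of the grid sizes (here 4 × 4 × 3 = 48 models), which is the practical downside of the approach the chapter discusses.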

  15. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  16. A Chaotic Cryptosystem for Images Based on Henon and Arnold Cat Map

    PubMed Central

    Sundararajan, Elankovan

    2014-01-01

    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications. PMID:25258724

  17. Free energy surface of an intrinsically disordered protein: comparison between temperature replica exchange molecular dynamics and bias-exchange metadynamics.

    PubMed

    Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain

    2015-06-09

    Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
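
The exchange move at the heart of temperature replica exchange is the standard Metropolis criterion between neighboring replicas; a minimal sketch in units where kB = 1 (an assumption here, not a detail from the paper):

```python
import math

def swap_probability(E_i, E_j, T_i, T_j, kB=1.0):
    # Metropolis acceptance for exchanging configurations between replicas
    # i and j: p = min(1, exp[(beta_i - beta_j) * (E_i - E_j)])
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))

# if the hotter replica (j) holds the lower energy, the swap is always accepted
p_always = swap_probability(E_i=-50.0, E_j=-100.0, T_i=300.0, T_j=400.0)
# otherwise it is accepted only with probability < 1
p_some = swap_probability(E_i=-100.0, E_j=-50.0, T_i=300.0, T_j=400.0)
```

Accepted swaps let configurations trapped at low temperature escape over barriers via high-temperature replicas, which is how the method beats brute-force MD at sampling the disordered ensemble.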

  18. Conference on Standards for the Interoperability of Defense Simulations (2nd) Held in Orlando, Florida on 15-17 January 1990. Volume 3. Position Papers

    DTIC Science & Technology

    1990-01-01

…major part of Europe and include participation by Army, Air Force and Navy forces. We recommend adoption of a Cartesian geocentric coordinate system. The coordinate system chosen is the World Geodetic System, an Earth-centered (geocentric), Earth-fixed system… For example, a location on a beach may be… Figure 4. Universal Polar Stereographic (UPS) Projection… 3.0 Assumptions: 1. Geocentric coordinates: Earth geodetic centered, Earth-fixed

  19. Development of a coordinate measuring machine (CMM) touch probe using a multi-axis force sensor

    NASA Astrophysics Data System (ADS)

    Park, Jae-jun; Kwon, Kihwan; Cho, Nahmgyoo

    2006-09-01

    Traditional touch trigger probes are widely used on most commercial coordinate measuring machines (CMMs). However, the CMMs with these probes have a systematic error due to the shape of the probe tip and elastic deformation of the stylus resulting from contact pressure with the specimen. In this paper, a new touch probe with a three degrees-of-freedom force sensor is proposed. From relationships between an obtained contact force vector and the geometric shape of the probe, it is possible to calculate the coordinates of the exact probe-specimen contact points. An empirical model of the probe is applied to calculate the coordinates of the contact points and the amount of pretravel. With the proposed probing system, the measuring error induced by the indeterminateness of the probe-specimen contact point and the pretravel can be estimated and compensated for successfully.
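The core geometric idea, recovering the contact point from the measured force direction, can be sketched for the simplest case: a spherical tip and frictionless contact, so the contact normal is antiparallel to the force acting on the probe. A hypothetical illustration; the paper's empirical model additionally calibrates stylus deflection and pretravel:

```python
import math

def contact_point(tip_center, force, tip_radius):
    """Estimate the probe-specimen contact point on a spherical tip,
    assuming frictionless contact so the surface normal at the contact
    is antiparallel to the measured force vector on the probe.
    tip_center and force are (x, y, z) tuples; tip_radius is a scalar."""
    mag = math.sqrt(sum(f * f for f in force))
    if mag == 0.0:
        raise ValueError("no contact force measured")
    # The contact lies one tip radius from the center, opposite the force.
    return tuple(c - tip_radius * f / mag for c, f in zip(tip_center, force))
```

With a calibrated tip radius, this removes the systematic offset that a trigger probe would otherwise attribute to the tip center.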

  20. Archimedes' Principle in General Coordinates

    ERIC Educational Resources Information Center

    Ridgely, Charles T.

    2010-01-01

    Archimedes' principle is well known to state that a body submerged in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the body. Herein, Archimedes' principle is derived from first principles by using conservation of the stress-energy-momentum tensor in general coordinates. The resulting expression for the force is…

  1. The force pyramid: a spatial analysis of force application during virtual reality brain tumor resection.

    PubMed

    Azarnoush, Hamed; Siar, Samaneh; Sawaya, Robin; Zhrani, Gmaan Al; Winkler-Schwartz, Alexander; Alotaibi, Fahad Eid; Bugdadi, Abdulgadir; Bajunaid, Khalid; Marwa, Ibrahim; Sabbagh, Abdulrahman Jafar; Del Maestro, Rolando F

    2017-07-01

OBJECTIVE Virtual reality simulators allow development of novel methods to analyze neurosurgical performance. The concept of a force pyramid is introduced as a Tier 3 metric with the ability to provide visual and spatial analysis of 3D force application by any instrument used during simulated tumor resection. This study was designed to answer 3 questions: 1) Do study groups have distinct force pyramids? 2) Do handedness and ergonomics influence force pyramid structure? 3) Are force pyramids dependent on the visual and haptic characteristics of simulated tumors? METHODS Using a virtual reality simulator, NeuroVR (formerly NeuroTouch), ultrasonic aspirator force application was continually assessed during resection of simulated brain tumors by neurosurgeons, residents, and medical students. The participants performed simulated resections of 18 simulated brain tumors with different visual and haptic characteristics. The raw data, namely, coordinates of the instrument tip as well as contact force values, were collected by the simulator. To provide a visual and qualitative spatial analysis of forces, the authors created a graph, called a force pyramid, representing force sum along the z-coordinate for different xy coordinates of the tool tip. RESULTS Sixteen neurosurgeons, 15 residents, and 84 medical students participated in the study. Neurosurgeon, resident, and medical student groups displayed easily distinguishable 3D "force pyramid fingerprints." Neurosurgeons had the lowest force pyramids, indicating application of the lowest forces, followed by resident and medical student groups. Handedness, ergonomics, and visual and haptic tumor characteristics resulted in distinct, well-defined 3D force pyramid patterns. CONCLUSIONS Force pyramid fingerprints provide 3D spatial assessment displays of instrument force application during simulated tumor resection. Neurosurgeon force utilization and ergonomic data form a basis for understanding and modulating resident force application and improving patient safety during tumor resection.

  2. Proximal arm kinematics affect grip force-load force coordination

    PubMed Central

    Vermillion, Billy C.; Lum, Peter S.

    2015-01-01

During object manipulation, grip force is coordinated with load force, which is primarily determined by object kinematics. Proximal arm kinematics may affect grip force control, as proximal segment motion could affect control of distal hand muscles via biomechanical and/or neural pathways. The aim of this study was to investigate the impact of proximal kinematics on grip force modulation during object manipulation. Fifteen subjects performed three vertical lifting tasks that involved distinct proximal kinematics (elbow/shoulder), but resulted in similar end-point (hand) trajectories. While temporal coordination of grip and load forces remained similar across the tasks, proximal kinematics significantly affected the grip force-to-load force ratio (P = 0.042), intrinsic finger muscle activation (P = 0.045), and the flexor-extensor ratio (P < 0.001). Biomechanical coupling between extrinsic hand muscles and the elbow joint cannot fully explain the observed changes, as task-related changes in intrinsic hand muscle activation were greater than in extrinsic hand muscles. Rather, between-task variation in grip force (highest during task 3) appears to contrast with that in shoulder joint velocity/acceleration (lowest during task 3). These results suggest that complex neural coupling between the distal and proximal upper extremity musculature may affect grip force control during movements, as also indicated by task-related changes in intermuscular coherence of muscle pairs, including intrinsic finger muscles. Furthermore, examination of the fingertip force showed that the human motor system may attempt to reduce variability in task-relevant motor output (grip force-to-load force ratio), while allowing larger fluctuations in output less relevant to the task goal (shear force-to-grip force ratio). PMID:26289460
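The central quantity in this kind of analysis, the grip force-to-load force ratio, is simple to extract from synchronized force recordings. A hypothetical helper sketched below; the slip-ratio form of the safety margin is one common formulation in the grasping literature, not necessarily the exact measure used in this study:

```python
def grip_to_load_ratio(grip, load, slip_ratio):
    """Grip-to-load force ratio and a safety margin relative to the slip
    ratio (the minimum grip/load that keeps the object from slipping).
    grip, load: simultaneous force samples in the same units."""
    ratio = grip / load
    # Fraction of the applied grip force in excess of the slip threshold.
    safety_margin = (grip - slip_ratio * load) / grip
    return ratio, safety_margin
```

Applied sample-by-sample to a lifting trial, the ratio trace shows how tightly grip force tracks the kinematically imposed load.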

  3. Changes in Muscle and Joint Coordination in Learning to Direct Forces

    PubMed Central

    Hasson, Christopher J.; Caldwell, Graham E.; van Emmerik, Richard E.A.

    2008-01-01

    While it has been suggested that biarticular muscles have a specialized role in directing external reaction forces, it is unclear how humans learn to coordinate mono- and bi-articular muscles to perform force-directing tasks. Subjects were asked to direct pedal forces in a specified target direction during one-legged cycling. We expected that with practice, performance improvement would be associated with specific changes in joint torque patterns and mono- and bi-articular muscular coordination. Nine male subjects practiced pedaling an ergometer with only their left leg, and were instructed to always direct their applied pedal force perpendicular to the crank arm (target direction) and to maintain a constant pedaling speed. After a single practice session, the mean error between the applied and target pedal force directions decreased significantly. This improved performance was accompanied by a significant decrease in the amount of ankle angular motion and a smaller increase in knee and hip angular motion. This coincided with a re-organization of lower extremity joint torques, with a decrease in ankle plantarflexor torque and an increase in knee and hip flexor torques. Changes were seen in both mono- and bi-articular muscle activity patterns. The monoarticular muscles exhibited greater alterations, and appeared to contribute to both mechanical work and force directing. With practice, a loosening of the coupling between biarticular thigh muscle activation and joint torque co-regulation was observed. The results demonstrated that subjects were able to learn a complex and dynamic force-directing task by changing the direction of their applied pedal forces through re-organization of joint torque patterns and mono- and bi-articular muscle coordination. PMID:18405988

  4. Changes in muscle and joint coordination in learning to direct forces.

    PubMed

    Hasson, Christopher J; Caldwell, Graham E; van Emmerik, Richard E A

    2008-08-01

    While it has been suggested that bi-articular muscles have a specialized role in directing external reaction forces, it is unclear how humans learn to coordinate mono- and bi-articular muscles to perform force-directing tasks. Participants were asked to direct pedal forces in a specified target direction during one-legged cycling. We expected that with practice, performance improvement would be associated with specific changes in joint torque patterns and mono- and bi-articular muscular coordination. Nine male participants practiced pedaling an ergometer with only their left leg, and were instructed to always direct their applied pedal force perpendicular to the crank arm (target direction) and to maintain a constant pedaling speed. After a single practice session, the mean error between the applied and target pedal force directions decreased significantly. This improved performance was accompanied by a significant decrease in the amount of ankle angular motion and a smaller increase in knee and hip angular motion. This coincided with a re-organization of lower extremity joint torques, with a decrease in ankle plantarflexor torque and an increase in knee and hip flexor torques. Changes were seen in both mono- and bi-articular muscle activity patterns. The mono-articular muscles exhibited greater alterations, and appeared to contribute to both mechanical work and force-directing. With practice, a loosening of the coupling between bi-articular thigh muscle activation and joint torque co-regulation was observed. The results demonstrated that participants were able to learn a complex and dynamic force-directing task by changing the direction of their applied pedal forces through re-organization of joint torque patterns and mono- and bi-articular muscle coordination.

  5. Grip force coordination during bimanual tasks in unilateral cerebral palsy.

    PubMed

    Islam, Mominul; Gordon, Andrew M; Sköld, Annika; Forssberg, Hans; Eliasson, Ann-Christin

    2011-10-01

The aim of the study was to investigate coordination of fingertip forces during an asymmetrical bimanual task in children with unilateral cerebral palsy (CP). Twelve participants (six males, six females; mean age 14y 4mo, SD 3.3y; range 9-20y) with unilateral CP (eight right-sided, four left-sided) and 15 age-matched typically developing participants (five males, 10 females; mean age 14y 3mo, SD 2.9y; range 9-18y) were included. Participants were instructed to hold custom-made grip devices in each hand and place one device on top of the other. The grip force and load force were recorded simultaneously in both hands. Temporal coordination between the two hands was impaired in the participants with CP (compared with that in typically developing participants); that is, they initiated the task by decreasing grip force in the releasing hand before increasing the force in the holding hand. The grip force increase in the holding hand was also smaller in participants with CP (involved hand/non-dominant hand releasing, p<0.001; non-involved hand/dominant hand releasing, p=0.007), indicating deficient scaling of force amplitude. The impairment was greater when participants with CP used their non-involved hand as the holding hand. Temporal coordination and scaling of fingertip forces were impaired in both hands in participants with CP. The non-involved hand was strongly affected by activity in the involved hand, which may explain why children with unilateral CP prefer to use only one hand during tasks that are typically performed with both hands. © The Authors. Developmental Medicine & Child Neurology © 2011 Mac Keith Press.

  6. The Sensorimotor System Can Sculpt Behaviorally Relevant Representations for Motor Learning

    PubMed Central

    2016-01-01

The coordinate system in which humans learn novel motor skills is controversial. The representation of sensorimotor skills has been extensively studied by examining generalization after learning perturbations specifically designed to be ambiguous as to their coordinate system. Recent studies have found that learning is not represented in any simple coordinate system and can potentially be accounted for by a mixed representation. Here, instead of probing generalization, which has led to conflicting results, we examine whether novel dynamics can be learned when explicitly and unambiguously presented in particular coordinate systems. Subjects performed center–out reaches to targets in the presence of a force field, while varying the orientation of their hand (i.e., the wrist angle) across trials. Different groups of subjects experienced force fields that were explicitly presented either in Cartesian coordinates (field independent of hand orientation), in object coordinates (field rotated with hand orientation), or in anti-object coordinates (field rotated counter to hand orientation). Subjects learned to represent the dynamics when presented in either Cartesian or object coordinates, learning these as well as an ambiguous force field. However, learning was slower for the object-based dynamics and substantially impaired for the anti-object presentation. Our results show that the motor system is able to tune its representation to at least two natural coordinate systems but is impaired when the representation of the task does not correspond to a behaviorally relevant coordinate system. Our results show that the motor system can sculpt its representation through experience to match those of natural tasks. PMID:27588304

  7. Coordination of hand shape.

    PubMed

    Pesyna, Colin; Pundi, Krishna; Flanders, Martha

    2011-03-09

    The neural control of hand movement involves coordination of the sensory, motor, and memory systems. Recent studies have documented the motor coordinates for hand shape, but less is known about the corresponding patterns of somatosensory activity. To initiate this line of investigation, the present study characterized the sense of hand shape by evaluating the influence of differences in the amount of grasping or twisting force, and differences in forearm orientation. Human subjects were asked to use the left hand to report the perceived shape of the right hand. In the first experiment, six commonly grasped items were arranged on the table in front of the subject: bottle, doorknob, egg, notebook, carton, and pan. With eyes closed, subjects used the right hand to lightly touch, forcefully support, or imagine holding each object, while 15 joint angles were measured in each hand with a pair of wired gloves. The forces introduced by supporting or twisting did not influence the perceptual report of hand shape, but for most objects, the report was distorted in a consistent manner by differences in forearm orientation. Subjects appeared to adjust the intrinsic joint angles of the left hand, as well as the left wrist posture, so as to maintain the imagined object in its proper spatial orientation. In a second experiment, this result was largely replicated with unfamiliar objects. Thus, somatosensory and motor information appear to be coordinated in an object-based, spatial-coordinate system, sensitive to orientation relative to gravitational forces, but invariant to grasp forcefulness.

  8. Impact-Actuated Digging Tool for Lunar Excavation

    NASA Technical Reports Server (NTRS)

    Wilson, Jak; Chu, Philip; Craft, Jack; Zacny, Kris; Santoro, Chris

    2013-01-01

NASA's plans for a lunar outpost require extensive excavation. The Lunar Surface Systems Project Office projects that thousands of tons of lunar soil will need to be moved. Conventional excavators dig through soil by brute force, and depend upon their substantial weight to react to the forces generated. This approach will not be feasible on the Moon for two reasons: (1) gravity is 1/6th that on Earth, which means that a kilogram on the Moon will supply 1/6 the down force that it does on Earth, and (2) transportation costs (at the time of this reporting) of $50K to $100K per kg make massive excavators economically unattractive. A percussive excavation system was developed for use in vacuum or near-vacuum environments. It reduces the down force needed for excavation by an order of magnitude by using percussion to assist in soil penetration and digging. The novelty of this excavator is that it incorporates a percussive mechanism suited to sustained operation in a vacuum environment. A percussive digger breadboard was designed, built, and successfully tested under both ambient and vacuum conditions. The breadboard was run in vacuum to more than 2 times the lifetime of the Apollo Lunar Surface Drill, throughout which the mechanism performed and held up well. The percussive digger was demonstrated to reduce the force necessary for digging in lunar soil simulant by an order of magnitude, providing reductions as high as 45:1. This is an enabling technology for lunar site preparation and ISRU (In Situ Resource Utilization) mining activities. At transportation costs of $50K to $100K per kg, reducing digging forces by an order of magnitude translates into billions of dollars saved by not launching heavier systems to accomplish excavation tasks necessary to the establishment of a lunar outpost. Applications on the lunar surface include excavation for habitats, construction of roads, landing pads, berms, foundations, habitat shielding, and ISRU.

  9. Structure, initial excited-state relaxation, and energy storage of rhodopsin resolved at the multiconfigurational perturbation theory level

    PubMed Central

    Andruniów, Tadeusz; Ferré, Nicolas; Olivucci, Massimo

    2004-01-01

    We demonstrate that a “brute force” quantum chemical calculation based on an ab initio multiconfigurational second order perturbation theory approach implemented in a quantum mechanics/molecular mechanics strategy can be applied to the investigation of the excited state of the visual pigment rhodopsin (Rh) with a computational error <5 kcal·mol-1. As a consequence, the simulation of the absorption and fluorescence of Rh and its retinal chromophore in solution allows for a nearly quantitative analysis of the factors determining the properties of the protein environment. More specifically, we demonstrate that the Rh environment is more similar to the “gas phase” than to the solution environment and that the so-called “opsin shift” originates from the inability of the solvent to effectively “shield” the chromophore from its counterion. The same strategy is used to investigate three transient structures involved in the photoisomerization of Rh under the assumption that the protein cavity does not change shape during the reaction. Accordingly, the analysis of the initially relaxed excited-state structure, the conical intersection driving the excited-state decay, and the primary isolable bathorhodopsin intermediate supports a mechanism where the photoisomerization coordinate involves a “motion” reminiscent of the so-called bicycle-pedal reaction coordinate. Most importantly, it is shown that the mechanism of the ∼30 kcal·mol-1 photon energy storage observed for Rh is not consistent with a model based exclusively on the change of the electrostatic interaction of the chromophore with the protein/counterion environment. PMID:15604139

  10. Linear magnetic spring and spring/motor combination

    NASA Technical Reports Server (NTRS)

    Patt, Paul J. (Inventor); Stolfi, Fred R. (Inventor)

    1991-01-01

    A magnetic spring, or a spring and motor combination, providing a linear spring force characteristic in each direction from a neutral position, in which the spring action may occur for any desired coordinate of a typical orthogonal coordinate system. A set of magnets are disposed, preferably symmetrically about a coordinate axis, poled orthogonally to the desired force direction. A second set of magnets, respectively poled opposite the first set, are arranged on the sprung article. The magnets of one of the sets are spaced a greater distance apart than those of the other, such that an end magnet from each set forms a pair having preferably planar faces parallel to the direction of spring force, the faces being offset so that in a neutral position the outer edge of the closer spaced magnet set is aligned with the inner edge of the greater spaced magnet set. For use as a motor, a coil can be arranged with conductors orthogonal to both the magnet pole directions and the direction of desired spring force, located across from the magnets of one set and fixed with respect to the magnets of the other set. In a cylindrical coordinate system having axial spring force, the magnets are radially poled and motor coils are concentric with the cylinder axis.

  11. Toward an Increased Understanding of the Singularity Expansion Method.

    DTIC Science & Technology

    1980-12-01

Air Force Weapons Laboratory, Air Force Systems Command, Kirtland Air Force Base, NM 87117. ...EMP community. This report does not address the applications of the method to EMP system effects; rather, we elaborate on the contribution of SEM to an... with θ, φ, and r being spherical coordinates defined in a coordinate system having

  12. Autonomous navigation system. [gyroscopic pendulum for air navigation

    NASA Technical Reports Server (NTRS)

    Merhav, S. J. (Inventor)

    1981-01-01

    An inertial navigation system utilizing a servo-controlled two degree of freedom pendulum to obtain specific force components in the locally level coordinate system is described. The pendulum includes a leveling gyroscope and an azimuth gyroscope supported on a two gimbal system. The specific force components in the locally level coordinate system are converted to components in the geographical coordinate system by means of a single Euler transformation. The standard navigation equations are solved to determine longitudinal and lateral velocities. Finally, vehicle position is determined by a further integration.
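The single Euler transformation mentioned above reduces, for the horizontal channels, to a rotation about the local vertical through the platform azimuth angle. A simplified sketch assuming that geometry; the patent's full mechanization also covers gimbal geometry and gyro torquing:

```python
import math

def level_to_geographic(f_x, f_y, f_z, azimuth):
    """Rotate specific-force components from a locally level platform
    frame into the geographic (north, east, down) frame by a single
    rotation about the shared vertical axis through azimuth (radians)."""
    c, s = math.cos(azimuth), math.sin(azimuth)
    f_north = c * f_x - s * f_y
    f_east = s * f_x + c * f_y
    f_down = f_z  # the vertical axis is common to both frames
    return f_north, f_east, f_down
```

The transformed components then feed the standard navigation equations, whose integration yields velocity and, after a further integration, position.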

  13. Closed Analytic Solution for the Potential and Equations of Motion in the Presence of a Gravitating Oblate Spheroid

    NASA Astrophysics Data System (ADS)

    Atkinson, William

    2008-10-01

A closed analytic solution for the potential due to a gravitating solid oblate spheroid, derived in oblate spheroidal coordinates in this paper, is shown to be much simpler than those obtained either in cylindrical coordinates (MacMillan) or in spherical coordinates (McCullough). The derivation in oblate spheroidal coordinates is also much simpler to follow than those of MacMillan or McCullough. The potential solution is applied in extracting a closed solution for the equations of motion for an object rolling on the surface of the spheroid, subjected only to the gravitational force component tangential to the surface of the spheroid. The exact solution was made possible by the fact that the force can be represented as separable functions of the coordinates only in oblate spheroidal coordinates. The derivation is a good demonstration of the use of curvilinear coordinates in problems in classical mechanics, potential theory, and mathematical physics for both undergraduate and graduate students.
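Why oblate spheroidal coordinates simplify this problem is easiest to see from the coordinate map itself: surfaces of constant mu are confocal oblate spheroids, so the body's surface is itself a coordinate surface. A small sketch using one common convention (mu, nu, phi with focal parameter a); conventions vary between texts:

```python
import math

def oblate_to_cartesian(mu, nu, phi, a=1.0):
    """Convert oblate spheroidal coordinates (mu, nu, phi) with focal
    parameter a to Cartesian (x, y, z), using the convention
    x = a cosh(mu) cos(nu) cos(phi), z = a sinh(mu) sin(nu).
    Surfaces of constant mu are confocal oblate spheroids."""
    rho = a * math.cosh(mu) * math.cos(nu)  # distance from the z-axis
    return (rho * math.cos(phi),
            rho * math.sin(phi),
            a * math.sinh(mu) * math.sin(nu))
```

On any surface mu = const the semi-axes are a cosh(mu) (equatorial) and a sinh(mu) (polar), which is what lets the potential and the tangential force separate in these coordinates.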

  14. Effects of Visual Feedback and Memory on Unintentional Drifts in Performance During Finger Pressing Tasks

    PubMed Central

    Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.

    2017-01-01

    This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinate for the fingertips was tested using manipulations of visual feedback: Young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, sharing ratio, both, or none. Predictions based on the memory hypothesis have been falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift or drifts to higher forces were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates have been confirmed. In particular, we observed superposition of two drift processes: A drift of total force to lower magnitudes and a drift of the sharing ratio to 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396

  15. Coordination strategies for limb forces during weight-bearing locomotion in normal rats, and in rats spinalized as neonates

    PubMed Central

    Giszter, Simon F; Davies, Michelle R; Graziani, Virginia

    2010-01-01

Some rats spinally transected as neonates (ST rats) achieve weight-supporting independent locomotion. The mechanisms of coordinated hindlimb weight support in such rats are not well understood. To examine these mechanisms, ST rats and normal rats with better than 60% weight-supported steps on a treadmill as adults were trained to cross an instrumented runway. Ground reaction forces, coordination of hindlimb and forelimb forces, and the motions of the center of pressure were assessed. Normal rats crossed the runway with a diagonal trot. On average, the hindlimbs bore about 80% of the vertical load carried by the forelimbs, although this varied. Forelimbs and hindlimbs acted synergistically to generate decelerative and propulsive rostrocaudal forces, which averaged 15% of body weight with maximums of 50%. Lateral forces were very small (<8% of body weight). Center of pressure progressed in jumps along a straight line with mean lateral deviations <1 cm. ST rats' hindlimbs bore about 60% of the vertical load of the forelimbs, significantly less than in intact rats (p<0.05). ST rats showed similar mean rostrocaudal forces, but with significantly larger maximum fluctuations of up to 80% of body weight (p<0.05). Joint force-plate recordings showed that forelimb and hindlimb rostrocaudal forces in ST rats were opposing and significantly different from those of intact rats (p<0.05). Lateral forces were ~20% of body weight and significantly larger than in normal rats (p<0.05). Center of pressure zig-zagged, with mean lateral deviations of ~2 cm and a significantly larger range (p<0.05). The haunches were also observed to roll more than in normal rats. The locomotor strategy of injured rats, using limbs in opposition, was presumably less efficient, but their complex gait was statically stable. Because forelimbs and hindlimbs acted in opposition, the trunk was held compressed. Force coordination was likely managed largely by voluntary control of the forelimbs and trunk. PMID:18612631

  16. Free energy from molecular dynamics with multiple constraints

    NASA Astrophysics Data System (ADS)

    den Otter, W. K.; Briels, W. J.

In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively, the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force (PMCF) method for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
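The relation the method rests on, that the free energy derivative equals the (metric-corrected) mean constraint force, turns into a free energy profile by simple quadrature. A minimal sketch, assuming the mean forces have already been corrected for the metric tensor effect:

```python
def integrate_mean_force(xi, mean_force):
    """Trapezoidal integration of mean constraint forces along a reaction
    coordinate, returning the free energy profile F(xi) - F(xi[0]).
    xi: grid of coordinate values; mean_force: dF/dxi sampled at xi."""
    F = [0.0]
    for k in range(1, len(xi)):
        step = 0.5 * (mean_force[k] + mean_force[k - 1]) * (xi[k] - xi[k - 1])
        F.append(F[-1] + step)
    return F
```

In practice each mean_force entry comes from a separate constrained simulation at fixed xi, averaged over the sampled Lagrange-multiplier forces.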

  17. Grip Force Coordination during Bimanual Tasks in Unilateral Cerebral Palsy

    ERIC Educational Resources Information Center

    Islam, Mominul; Gordon, Andrew M.; Skold, Annika; Forssberg, Hans; Eliasson, Ann-Christin

    2011-01-01

Aim: The aim of the study was to investigate coordination of fingertip forces during an asymmetrical bimanual task in children with unilateral cerebral palsy (CP). Method: Twelve participants (six males, six females; mean age 14y 4mo, SD 3.3y; range 9-20y) with unilateral CP (eight right-sided, four left-sided) and 15 age-matched typically…

  18. Grip Force Control Is Dependent on Task Constraints in Children with and without Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Law, Sui-Heung; Lo, Sing Kai; Chow, Susanna; Cheing, Gladys L.Y.

    2011-01-01

    Excessive grip force (GF) is often found in children with developmental coordination disorder (DCD). However, their GF control may vary when task constraints are imposed upon their motor performance. This study aimed to investigate how their GF control changes in response to task demands, and to examine their tactile sensitivity. Twenty-one…

  19. Coordinating Military Response to Disasters

    DTIC Science & Technology

    2016-01-22

of two noted natural disasters. Section four analyzes the two options of the affected-area National Guard forces and the tailored regionally located... recommendations and conclusions. Thesis: Military response to natural disasters is a critical aspect... National Guard forces in response to natural disasters and man-made emergencies such as riots or terrorist attacks. The third role is federal

  20. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    EPA Science Inventory

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  1. Internal force corrections with machine learning for quantum mechanics/molecular mechanics simulations.

    PubMed

    Wu, Jingheng; Shen, Lin; Yang, Weitao

    2017-10-28

Ab initio quantum mechanics/molecular mechanics (QM/MM) molecular dynamics simulation is a useful tool for calculating thermodynamic properties such as the potential of mean force for chemical reactions, but it is intensely time-consuming. In this paper, we developed a new method using an internal force correction for low-level semiempirical QM/MM molecular dynamics samplings with a predefined reaction coordinate. As a correction term, the internal force was predicted with a machine learning scheme, which provides a sophisticated force field, and added to the atomic forces on the reaction-coordinate-related atoms at each integration step. We applied this method to two reactions in aqueous solution and reproduced potentials of mean force at the ab initio QM/MM level. The saving in computational cost is about two orders of magnitude. The present work reveals great potential for machine learning in QM/MM simulations to study complex chemical processes.
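The correction scheme can be caricatured in a few lines: learn the gap between high-level and low-level forces along the reaction coordinate, then add the learned gap to the cheap forces during sampling. A toy sketch using a linear fit as a stand-in for the paper's machine-learning model; all names and data below are hypothetical:

```python
def fit_force_correction(rc_values, low_level_forces, high_level_forces):
    """Fit a linear model dF(rc) ~ w*rc + b to the difference between
    high-level (ab initio) and low-level (semiempirical) forces along a
    reaction coordinate rc. Returns a callable that applies the learned
    correction to a low-level force at a given rc value."""
    n = len(rc_values)
    dF = [h - l for h, l in zip(high_level_forces, low_level_forces)]
    mean_x = sum(rc_values) / n
    mean_y = sum(dF) / n
    sxx = sum((x - mean_x) ** 2 for x in rc_values)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(rc_values, dF))
    w = sxy / sxx                 # slope of the least-squares fit
    b = mean_y - w * mean_x       # intercept
    return lambda rc, f_low: f_low + w * rc + b
```

The real method replaces the linear fit with a flexible regressor trained on many-atom features, but the workflow (train on a few expensive calculations, correct every cheap integration step) is the same.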

  2. Learning to combine high variability with high precision: lack of transfer to a different task.

    PubMed

    Wu, Yen-Hsun; Truglio, Thomas S; Zatsiorsky, Vladimir M; Latash, Mark L

    2015-01-01

    The authors studied effects of practicing a 4-finger accurate force production task on multifinger coordination quantified within the uncontrolled manifold hypothesis. During practice, task instability was modified by changing visual feedback gain based on accuracy of performance. The authors also explored the retention of these effects, and their transfer to a prehensile task. Subjects practiced the force production task for 2 days. After the practice, total force variability decreased and performance became more accurate. In contrast, variance of finger forces showed a tendency to increase during the first practice session, while in the space of finger modes (hypothetical commands to fingers) the increase did not reach significance. These effects were retained for 2 weeks. No transfer of these effects to the prehensile task was seen, suggesting high specificity of coordination changes. The retention of practice effects without transfer to a different task suggests that further studies on a more practical method of improving coordination are needed.

  3. Exhaustively sampling peptide adsorption with metadynamics.

    PubMed

    Deighan, Michael; Pfaendtner, Jim

    2013-06-25

    Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption including: binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.

  4. Press touch code: A finger press based screen size independent authentication scheme for smart devices.

    PubMed

    Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z

    2017-01-01

    Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing rapidly. In parallel, security-related threats and attacks on these devices are increasing at an even greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent; whereas, smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full-sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)-a.k.a., Force Touch in Apple's MacBook, Apple Watch, ZTE's Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on-is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme.

  5. Press touch code: A finger press based screen size independent authentication scheme for smart devices

    PubMed Central

    Ranak, M. S. A. Noman; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z.

    2017-01-01

    Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing rapidly. In parallel, security-related threats and attacks on these devices are increasing at an even greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent; whereas, smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full-sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)—a.k.a., Force Touch in Apple’s MacBook, Apple Watch, ZTE’s Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on—is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme. PMID:29084262

  6. Effects of Ving Tsun Chinese Martial Art Training on Upper Extremity Muscle Strength and Eye-Hand Coordination in Community-Dwelling Middle-Aged and Older Adults: A Pilot Study.

    PubMed

    Fong, Shirley S M; Ng, Shamay S M; Cheng, Yoyo T Y; Wong, Janet Y H; Yu, Esther Y T; Chow, Gary C C; Chak, Yvonne T C; Chan, Ivy K Y; Zhang, Joni; Macfarlane, Duncan; Chung, Louisa M Y

    2016-01-01

    Objectives. To evaluate the effects of Ving Tsun (VT) martial art training on the upper extremity muscle strength and eye-hand coordination of middle-aged and older adults. Methods. This study used a nonequivalent pretest-posttest control group design. Forty-two community-dwelling healthy adults participated in the study; 24 (mean age ± SD = 68.5 ± 6.7 years) underwent VT training for 4 weeks (a supervised VT session twice a week, plus daily home practice), and 18 (mean age ± SD = 72.0 ± 6.7 years) received no VT training and acted as controls. Shoulder and elbow isometric muscle strength and eye-hand coordination were evaluated using the Lafayette Manual Muscle Test System and a computerized finger-pointing test, respectively. Results. Elbow extensor peak force increased by 13.9% (P = 0.007) in the VT group and the time to reach peak force decreased (9.9%) differentially in the VT group compared to the control group (P = 0.033). For the eye-hand coordination assessment outcomes, reaction time increased by 2.9% in the VT group and decreased by 5.3% in the control group (P = 0.002). Conclusions. Four weeks of VT training could improve elbow extensor isometric peak force and the time to reach peak force but not eye-hand coordination in community-dwelling middle-aged and older adults.

  7. Effects of Ving Tsun Chinese Martial Art Training on Upper Extremity Muscle Strength and Eye-Hand Coordination in Community-Dwelling Middle-Aged and Older Adults: A Pilot Study

    PubMed Central

    Ng, Shamay S. M.; Cheng, Yoyo T. Y.; Yu, Esther Y. T.; Chow, Gary C. C.; Chak, Yvonne T. C.; Chan, Ivy K. Y.; Zhang, Joni; Macfarlane, Duncan

    2016-01-01

    Objectives. To evaluate the effects of Ving Tsun (VT) martial art training on the upper extremity muscle strength and eye-hand coordination of middle-aged and older adults. Methods. This study used a nonequivalent pretest-posttest control group design. Forty-two community-dwelling healthy adults participated in the study; 24 (mean age ± SD = 68.5 ± 6.7 years) underwent VT training for 4 weeks (a supervised VT session twice a week, plus daily home practice), and 18 (mean age ± SD = 72.0 ± 6.7 years) received no VT training and acted as controls. Shoulder and elbow isometric muscle strength and eye-hand coordination were evaluated using the Lafayette Manual Muscle Test System and a computerized finger-pointing test, respectively. Results. Elbow extensor peak force increased by 13.9% (P = 0.007) in the VT group and the time to reach peak force decreased (9.9%) differentially in the VT group compared to the control group (P = 0.033). For the eye-hand coordination assessment outcomes, reaction time increased by 2.9% in the VT group and decreased by 5.3% in the control group (P = 0.002). Conclusions. Four weeks of VT training could improve elbow extensor isometric peak force and the time to reach peak force but not eye-hand coordination in community-dwelling middle-aged and older adults. PMID:27525020

  8. Force coordination in static manipulation tasks performed using standard and non-standard grasping techniques.

    PubMed

    de Freitas, Paulo B; Jaric, Slobodan

    2009-04-01

    We evaluated coordination of the hand grip force (GF; normal component of the force acting at the hand-object contact area) and load force (LF; the tangential component) in a variety of grasping techniques and two LF directions. Thirteen participants exerted a continuous sinusoidal LF pattern against externally fixed handles applying both standard (i.e., using either the tips of the digits or the palms; the precision and palm grasps, respectively) and non-standard grasping techniques (using wrists and the dorsal finger areas; the wrist and fist grasp). We hypothesized (1) that the non-standard grasping techniques would provide deteriorated indices of force coordination when compared with the standard ones, and (2) that the nervous system would be able to adjust GF to the differences in friction coefficients of various skin areas used for grasping. However, most of the indices of force coordination remained similar across the tested grasping techniques, while the GF adjustments for the differences in friction coefficients (highest in the palm and the lowest in the fist and wrist grasp) provided inconclusive results. As hypothesized, GF relative to the skin friction was lowest in the precision grasp, but highest in the palm grasp. Therefore, we conclude that (1) the elaborate coordination of GF and LF consistently seen across the standard grasping techniques could be generalized to the non-standard ones, while (2) the ability to adjust GF using the same grasping technique to the differences in friction of various objects cannot be fully generalized to the GF adjustment when different grasps (i.e., hand segments) are used to manipulate the same object. Due to the importance of the studied phenomena for understanding both the functional and neural control aspects of manipulation, future studies should extend the current research to the transient and dynamic tasks, as well as to the general role of friction in our mechanical interactions with the environment.

  9. Force illusions and drifts observed during muscle vibration.

    PubMed

    Reschechtko, Sasha; Cuadra, Cristian; Latash, Mark L

    2018-01-01

    We explored predictions of a scheme that views position and force perception as a result of measuring proprioceptive signals within a reference frame set by ongoing efferent process. In particular, this hypothesis predicts force illusions caused by muscle vibration and mediated via changes in both afferent and efferent components of kinesthesia. Healthy subjects performed accurate steady force production tasks by pressing with the four fingers of one hand (the task hand) on individual force sensors with and without visual feedback. At various times during the trials, subjects matched the perceived force using the other hand. High-frequency vibration was applied to one or both of the forearms (over the hand and finger extensors). Without visual feedback, subjects showed a drop in the task hand force, which was significantly smaller under the vibration of that forearm. Force production by the matching hand was consistently higher than that of the task hand. Vibrating one of the forearms affected the matching hand in a manner consistent with the perception of higher magnitude of force produced by the vibrated hand. The findings were consistent between the dominant and nondominant hands. The effects of vibration on both force drift and force mismatching suggest that vibration led to shifts in both signals from proprioceptors and the efferent component of perception, the referent coordinate and/or coactivation command. The observations fit the hypothesis on combined perception of kinematic-kinetic variables with little specificity of different groups of peripheral receptors that all contribute to perception of forces and coordinates. NEW & NOTEWORTHY We show that vibration of hand/finger extensors produces consistent errors in finger force perception. Without visual feedback, finger force drifted to lower values without a drift in the matching force produced by the other hand; hand extensor vibration led to smaller finger force drift. 
The findings fit the scheme with combined perception of kinematic-kinetic variables and suggest that vibration leads to consistent shifts of the referent coordinate and, possibly, of coactivation command to the effector.

  10. On the Minimal Accuracy Required for Simulating Self-gravitating Systems by Means of Direct N-body Methods

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-01

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
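
The energy bookkeeping described above can be sketched with a fixed-precision integrator. This is not the arbitrary-precision brute-force method of the paper; the leapfrog scheme, step size, and the standard figure-eight three-body initial conditions are choices for illustration, with the paper's dE/E criterion checked at the end:

```python
import numpy as np

def total_energy(pos, vel, mass, G=1.0):
    """Kinetic plus pairwise gravitational potential energy."""
    ke = 0.5 * np.sum(mass * np.sum(vel ** 2, axis=1))
    pe = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pe -= G * mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return ke + pe

def accelerations(pos, mass, G=1.0):
    """Direct O(N^2) summation of gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * mass[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Symplectic kick-drift-kick integration."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# standard figure-eight initial conditions for three equal masses
mass = np.ones(3)
pos = np.array([[0.97000436, -0.24308753, 0.0],
                [-0.97000436, 0.24308753, 0.0],
                [0.0, 0.0, 0.0]])
vel = np.array([[0.466203685, 0.43236573, 0.0],
                [0.466203685, 0.43236573, 0.0],
                [-0.93240737, -0.86473146, 0.0]])

e0 = total_energy(pos, vel, mass)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, steps=2000)
rel_err = abs((total_energy(pos, vel, mass) - e0) / e0)  # the dE/E diagnostic
```

Because leapfrog is symplectic, the relative energy error stays bounded far below the 1/10 threshold the paper identifies as sufficient for statistically trustworthy ensembles.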

  11. Automatic Design of Digital Synthetic Gene Circuits

    PubMed Central

    Marchisio, Mario A.; Stelling, Jörg

    2011-01-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700

  12. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.

  13. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
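
The random-field simulation described above can be sketched minimally. This assumes a simple Ising-type neighbour coupling; the paper's actual field model, grey-level parameters, and estimation procedure are richer than this toy:

```python
import numpy as np

def gibbs_ising(shape=(32, 32), beta=0.7, sweeps=20, seed=0):
    """Evolve random background noise into a correlated binary field by
    Gibbs sampling: each pixel is resampled conditional on its four
    neighbours (periodic boundaries). Larger beta gives coarser texture."""
    rng = np.random.default_rng(seed)
    field = rng.choice(np.array([-1, 1]), size=shape)
    H, W = shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nb = (field[(i - 1) % H, j] + field[(i + 1) % H, j] +
                      field[i, (j - 1) % W] + field[i, (j + 1) % W])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                field[i, j] = 1 if rng.random() < p_plus else -1
    return field

noise = gibbs_ising()
# neighbouring pixels are now positively correlated, unlike i.i.d. noise
corr = float(np.mean(noise * np.roll(noise, 1, axis=0)))
```

The correlated blotches are what make segmentation hard for bots while remaining readable texture for humans.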

  14. The physics of bat biosonar

    NASA Astrophysics Data System (ADS)

    Müller, Rolf

    2011-10-01

    Bats have evolved one of the most capable and at the same time parsimonious sensory systems found in nature. Using active and passive biosonar as a major - and often sufficient - far sense, different bat species are able to master a wide variety of sensory tasks under very dissimilar sets of constraints. Given the limited computational resources of the bat's brain, this performance is unlikely to be explained as the result of brute-force, black-box-style computations. Instead, the animals must rely heavily on in-built physics knowledge in order to ensure that all required information is encoded reliably into the acoustic signals received at the ear drum. To this end, bats can manipulate the emitted and received signals in the physical domain: By diffracting the outgoing and incoming ultrasonic waves with intricate baffle shapes (i.e., noseleaves and outer ears), the animals can generate selectivity filters that are joint functions of space and frequency. To achieve this, bats employ structural features such as resonance cavities and diffracting ridges. In addition, some bat species can dynamically adjust the shape of their selectivity filters through muscular actuation.

  15. A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender

    2018-03-01

    As communication systems enable us to transmit large chunks of data, both in the form of texts and images, there is a need to explore algorithms which can provide higher security without increasing the time complexity significantly. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is highly practical, since it has the lowest time complexity among its counterparts.
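
The key-from-chaos pipeline can be sketched as follows. The plain logistic map here is only a stand-in for the paper's intertwining chaotic maps; the RC4 stage itself is the standard cipher:

```python
def logistic_key(x0=0.3567, r=3.99, nbytes=16):
    """Derive a byte key from iterates of the logistic map x -> r*x*(1-x).
    (Illustrative stand-in for the paper's intertwining chaotic maps.)"""
    x, key = x0, []
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        key.append(int(x * 256) % 256)
    return bytes(key)

def rc4(key, data):
    """Textbook RC4: key-scheduling (KSA) then keystream generation (PRGA);
    the keystream is XORed with the data, so the same call decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # PRGA + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = logistic_key()
ciphertext = rc4(key, b"pixel data")
plaintext = rc4(key, ciphertext)   # the same operation inverts the cipher
```

Note that raw RC4 has known keystream biases; schemes like the one in this record lean on the chaotic confusion/diffusion stages, not RC4 alone, for their security claims.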

  16. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

    Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practise. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
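
The reciprocal best alignment heuristic at the core of this approach can be sketched minimally. The toy scores stand in for real alignment output, and Proteinortho's extended version adds further filtering and co-ortholog detection beyond this sketch:

```python
def reciprocal_best_hits(scores_ab, scores_ba):
    """Pair (a, b) is called orthologous when b is a's best-scoring hit in
    genome B *and* a is b's best hit in genome A.
    scores_ab maps each gene of A to its {gene of B: alignment score} hits."""
    def best(hits):
        return max(hits, key=hits.get) if hits else None
    pairs = set()
    for a, hits in scores_ab.items():
        b = best(hits)
        if b is not None and best(scores_ba.get(b, {})) == a:
            pairs.add((a, b))
    return pairs

# toy alignment scores between two genomes A and B
scores_ab = {"a1": {"b1": 95, "b2": 40}, "a2": {"b2": 88}}
scores_ba = {"b1": {"a1": 95}, "b2": {"a1": 40, "a2": 88}}
orthologs = reciprocal_best_hits(scores_ab, scores_ba)
```

Here a1↔b1 and a2↔b2 are reciprocal best hits; a1's weaker hit to b2 is discarded because b2's best hit points back to a2.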

  17. Uncovering molecular processes in crystal nucleation and growth by using molecular simulation.

    PubMed

    Anwar, Jamshed; Zahn, Dirk

    2011-02-25

    Exploring nucleation processes by molecular simulation provides a mechanistic understanding at the atomic level and also enables kinetic and thermodynamic quantities to be estimated. However, whilst the potential for modeling crystal nucleation and growth processes is immense, there are specific technical challenges to modeling. In general, rare events such as nucleation cannot be simulated using a direct "brute force" molecular dynamics approach. The limited time and length scales that are accessible by conventional molecular dynamics simulations have inspired a number of advances to tackle problems that were considered outside the scope of molecular simulation. While general insights and features could be explored from efficient generic models, new methods paved the way to realistic crystal nucleation scenarios. The association of single ions in solvent environments, the mechanisms of motif formation, ripening reactions, and the self-organization of nanocrystals can now be investigated at the molecular level. The analysis of interactions with growth-controlling additives gives a new understanding of functionalized nanocrystals and the precipitation of composite materials. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Clustering biomolecular complexes by residue contacts similarity.

    PubMed

    Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2012-07-01

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
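
The fraction-of-common-contacts measure lends itself to a compact sketch. The distance cutoff, the toy coordinates, and the normalization by the first model's contacts (which makes the measure asymmetric) are illustrative assumptions:

```python
import math

def residue_contacts(coords, cutoff=8.0):
    """Contact map of a model: pairs (i, j) of residue positions (e.g.
    C-alpha coordinates, in angstroms) closer than the cutoff."""
    contacts = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                contacts.add((i, j))
    return contacts

def fraction_common_contacts(c1, c2):
    """Fraction of c1's contacts also present in c2. Clustering then groups
    models whose pairwise similarity exceeds a chosen threshold."""
    return len(c1 & c2) / len(c1) if c1 else 0.0

# two toy 3-residue "models" laid out on a line
c1 = residue_contacts([(0, 0, 0), (5, 0, 0), (20, 0, 0)])  # only residues 0-1 touch
c2 = residue_contacts([(0, 0, 0), (5, 0, 0), (9, 0, 0)])   # 0-1 and 1-2 touch
```

Unlike RMSD, this comparison needs no structural superposition or choice of fitting region, which is what makes it fast and permutation-friendly for multicomponent assemblies.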

  19. Towards computational materials design from first principles using alchemical changes and derivatives.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Lilienfeld-Toal, Otto Anatole

    2010-11-01

    The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.

  20. Can genetic algorithms help virus writers reshape their creations and avoid detection?

    NASA Astrophysics Data System (ADS)

    Abu Doush, Iyad; Al-Saleh, Mohammed I.

    2017-11-01

    Different attack and defence techniques have evolved over time as actions and reactions between the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The Antivirus (AV) is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures. Basically, it claims a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to the virus so that it will appear normal when scanned by the AV. Brute-force search through all possible changes can be a computationally expensive task. Alternatively, this paper tries to apply a Genetic Algorithm to solve such a problem. Our proposed algorithm is tested on seven different malware instances. The results show that in all the tested malware instances only a small change in each instance was good enough to bypass the AV.

  1. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    NASA Astrophysics Data System (ADS)

    Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan

    2017-12-01

    As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
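
The inquiry-engine loop above, reduced to a toy 1-D problem (the real system localizes a 2-D beam footprint on a DMD; the grid size, beam radius, and noiseless detector here are assumptions for illustration):

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def adaptive_beam_search(true_center, radius=3, grid=64, queries=8):
    """Probe mirror positions one at a time; each measurement reports only
    whether the probed position is lit (within `radius` of the unknown
    centre). The next probe maximizes the entropy of its predicted outcome,
    which for noiseless binary data equals the expected information gain."""
    centers = np.arange(grid)
    post = np.full(grid, 1.0 / grid)                 # uniform prior on centre
    lit = np.abs(centers[None, :] - centers[:, None]) <= radius  # lit[x, c]
    for _ in range(queries):
        p_lit = lit @ post                           # predictive p(lit at x)
        x = int(np.argmax(binary_entropy(p_lit)))    # most informative probe
        y = abs(x - true_center) <= radius           # simulated measurement
        post = post * (lit[x] if y else ~lit[x])     # Bayes update
        post = post / post.sum()
    return int(np.argmax(post))

estimate = adaptive_beam_search(true_center=41)
```

A handful of entropy-guided probes narrows the centre to within the beam radius, whereas a raster scan of the same grid would need up to 64 measurements.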

  2. Stability analysis of multiple-robot control systems

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz, Kenneth

    1989-01-01

    In a space telerobotic service scenario, cooperative motion and force control of multiple robot arms are of fundamental importance. Three paradigms to study this problem are proposed. They are distinguished by the set of variables used for control design. They are joint torques, arm tip force vectors, and an accelerated generalized coordinate set. Control issues related to each case are discussed. The latter two choices require complete model information, which presents practical modeling, computational, and robustness problems. Therefore, focus is on the joint torque control case to develop relatively model independent motion and internal force control laws. The rigid body assumption allows the motion and force control problems to be independently addressed. By using an energy motivated Lyapunov function, a simple proportional derivative plus gravity compensation type of motion control law is always shown to be stabilizing. The asymptotic convergence of the tracking error to zero requires the use of a generalized coordinate with the contact constraints taken into account. If a non-generalized coordinate is used, only convergence to a steady state manifold can be concluded. For the force control, both feedforward and feedback schemes are analyzed. The feedback control, if proper care has been taken, exhibits better robustness and transient performance.

  3. ASEAN Combined Forces Command

    DTIC Science & Technology

    1992-04-03

    …The stability in the region is a consequence of the stability, coordination and team work of ASEAN. With a reduction of U.S. forces presence in the area, the key to securing…fastest-growing regions in the world today…

  4. Military Geodesy and Geospace Science Unit One

    DTIC Science & Technology

    1981-02-01

    present section. The Coordinate Systems - The two fundamental planes for the definition of stellar and earth-fixed coordinate systems are the…night are of equal length. The vernal equinox is taken as the fundamental direction (x-axis) for the space-fixed system. The plane of the equator is…AIR FORCE GEOPHYSICS LABORATORY, AIR FORCE SYSTEMS COMMAND, UNITED STATES AIR FORCE, HANSCOM AFB, MASSACHUSETTS 01731

  5. Optimized coordination of brakes and active steering for a 4WS passenger car.

    PubMed

    Tavasoli, Ali; Naraghi, Mahyar; Shakeri, Heman

    2012-09-01

    Optimum coordination of individual brakes and front/rear steering subsystems is presented. The integrated control strategy consists of three modules. A coordinated high-level control determines the body forces/moment required to achieve vehicle motion objectives. The body forces/moment are allocated to braking and steering subsystems through an intermediate unit, which optimally integrates the available subsystems based on the phase-plane notion. To this end, an optimization problem including several equality and inequality constraints is defined and solved analytically, such that a real-time implementation can be realized without the use of numeric optimization software. A low-level slip-ratio controller works to generate the desired longitudinal forces at small longitudinal slip-ratios, while averting wheel locking at large slip-ratios. The efficiency of the suggested approach is demonstrated through computer simulations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
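The allocation step can be illustrated with a minimum-norm force allocation, a simpler stand-in for the paper's constrained analytic optimization: distribute a desired total longitudinal force and yaw moment over four wheel forces using the closed-form pseudo-inverse f = Aᵀ(AAᵀ)⁻¹b. Geometry and targets below are made-up numbers:

```python
# Toy control allocation: four wheel forces [fl, fr, rl, rr] must satisfy
# two equality constraints (total force Fx, yaw moment Mz) while minimizing
# sum(f_i^2). Minimum-norm solution via a hand-coded 2x2 inverse.
half_track = 0.75                                   # illustrative geometry [m]
A = [[1.0, 1.0, 1.0, 1.0],                          # sum of forces  = Fx
     [-half_track, half_track, -half_track, half_track]]  # yaw moment = Mz
b = [3000.0, 450.0]                                 # desired [Fx (N), Mz (N*m)]

# M = A A^T (2x2), inverted analytically.
M = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det, M[0][0] / det]]
lam = [sum(Minv[i][j] * b[j] for j in range(2)) for i in range(2)]
f = [sum(A[i][k] * lam[i] for i in range(2)) for k in range(4)]
```

The resulting forces meet both constraints exactly; the paper's scheme additionally handles inequality (friction-circle) constraints analytically.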

  6. Probing-error compensation using 5 degree of freedom force/moment sensor for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Cho, Nahm-Gyoo

    2013-09-01

    A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy of 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation and actual contact point estimation errors that are the major error components of shape measurement using touch probes. The relationships between the measuring force and the estimation accuracy of the actual contact point and stylus deformation errors are examined for practical use of the proposed method. An appropriate measuring-force condition is presented for precision measurement.

  7. Effects of hand configuration on muscle force coordination, co-contraction and concomitant intermuscular coupling during maximal isometric flexion of the fingers.

    PubMed

    Charissou, Camille; Amarantini, David; Baurès, Robin; Berton, Eric; Vigouroux, Laurent

    2017-11-01

    The mechanisms governing the control of musculoskeletal redundancy remain to be fully understood. The hand is highly redundant, and the functional role of its extensors differs with hand configuration even for the same functional task of finger flexion. Through intermuscular coherence analysis combined with hand musculoskeletal modelling during maximal isometric hand contractions, our aim was to better understand the neural mechanisms underlying the control of muscle force coordination and agonist-antagonist co-contraction. Thirteen participants performed maximal isometric flexions of the fingers in two configurations: power grip (Power) and finger-pressing on a surface (Press). Hand kinematics and force/moment measurements were used as inputs in a musculoskeletal model of the hand to determine muscular tensions and co-contraction. EMG-EMG coherence analysis was performed between wrist and finger flexor and extensor muscle pairs in the alpha, beta and gamma frequency bands. Concomitantly with tailored muscle force coordination and increased co-contraction between Press and Power (mean difference: 48.08%; p < 0.05), our results showed muscle-pair-specific modulation of intermuscular coupling, characterized by pair-specific modulation of EMG-EMG coherence between Power and Press (p < 0.05), and a negative linear association between co-contraction and intermuscular coupling for the ECR/FCR agonist-antagonist muscle pair (r = -0.65; p < 0.05). This study brings new evidence that pair-specific modulation of EMG-EMG coherence is related to modulation of muscle force coordination during hand contractions. Our results highlight the functional importance of intermuscular coupling as a mechanism contributing to the control of muscle force synergies and agonist-antagonist co-contraction.
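The quantity behind this analysis is magnitude-squared coherence between two EMG channels. A self-contained sketch on synthetic signals sharing a 20 Hz component, using Welch-style spectrum averaging with a naive DFT (all signal parameters are illustrative, not the study's data):

```python
import math, cmath, random

random.seed(1)
fs, n, seg = 256, 2048, 64
shared = [math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]
x = [s + random.gauss(0, 1) for s in shared]     # synthetic "EMG" channel 1
y = [s + random.gauss(0, 1) for s in shared]     # synthetic "EMG" channel 2

def dft(v):
    N = len(v)
    return [sum(v[t] * cmath.exp(-2j * math.pi * k * t / N) for t in range(N))
            for k in range(N // 2)]

# Welch-style averaging of auto/cross spectra over non-overlapping segments.
Sxx = [0.0] * (seg // 2); Syy = [0.0] * (seg // 2); Sxy = [0j] * (seg // 2)
for s0 in range(0, n, seg):
    X, Y = dft(x[s0:s0 + seg]), dft(y[s0:s0 + seg])
    for k in range(seg // 2):
        Sxx[k] += abs(X[k]) ** 2
        Syy[k] += abs(Y[k]) ** 2
        Sxy[k] += X[k] * Y[k].conjugate()

# Magnitude-squared coherence, 0..1 per frequency bin.
coh = [abs(Sxy[k]) ** 2 / (Sxx[k] * Syy[k]) for k in range(seg // 2)]
freqs = [k * fs / seg for k in range(seg // 2)]
```

The coherence peaks at the shared 20 Hz component and stays near the noise floor elsewhere, which is how common neural drive to a muscle pair shows up in EMG-EMG coherence.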

  8. Coordinated, multi-joint, fatigue-resistant feline stance produced with intrafascicular hind limb nerve stimulation.

    PubMed

    Normann, R A; Dowden, B R; Frankel, M A; Wilder, A M; Hiatt, S D; Ledbetter, N M; Warren, D A; Clark, G A

    2012-04-01

    The production of graceful skeletal movements requires coordinated activation of multiple muscles that produce torques around multiple joints. The work described herein is focused on one such movement, stance, that requires coordinated activation of extensor muscles acting around the hip, knee and ankle joints. The forces evoked in these muscles by external stimulation all have a complex dependence on muscle length and shortening velocities, and some of these muscles are biarticular. In order to recreate sit-to-stand maneuvers in the anesthetized feline, we excited the hind limb musculature using intrafascicular multielectrode stimulation (IFMS) of the muscular branch of the sciatic nerve, the femoral nerve and the main branch of the sciatic nerve. Stimulation was achieved with either acutely or chronically implanted Utah Slanted Electrode Arrays (USEAs) via subsets of electrodes (1) that activated motor units in the extensor muscles of the hip, knee and ankle joints, (2) that were able to evoke large extension forces and (3) that manifested minimal coactivation of the targeted motor units. Three hind limb force-generation strategies were investigated, including sequential activation of independent motor units to increase force, and interleaved or simultaneous IFMS of three sets of six or more USEA electrodes that excited the hip, knee and ankle extensors. All force-generation strategies evoked stance, but the interleaved IFMS strategy also reduced muscle fatigue produced by repeated sit-to-stand maneuvers compared with fatigue produced by simultaneous activation of different motor neuron pools. These results demonstrate the use of interleaved IFMS as a means to recreate coordinated, fatigue-resistant multi-joint muscle forces in the unilateral hind limb. This muscle activation paradigm could provide a promising neuroprosthetic approach for the restoration of sit-to-stand transitions in individuals who are paralyzed by spinal cord injury, stroke or disease.
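The interleaving idea is, at bottom, a schedule. In this round-robin sketch (set names and rates are illustrative, not the study's stimulation parameters) three electrode sets share a high aggregate stimulation rate while each set, and hence each motor-unit pool, fires at only a third of it, which is the fatigue-reducing mechanism described above:

```python
# Round-robin interleaving of three electrode sets: the summed pulse rate
# stays high while each set runs at one-third the rate (illustrative names).
sets = ["hip_ext", "knee_ext", "ankle_ext"]
aggregate_hz = 90.0
period = 1.0 / aggregate_hz

# First nine pulse times and the set each pulse goes to.
schedule = [(round(i * period, 6), sets[i % len(sets)]) for i in range(9)]
per_set_rate = aggregate_hz / len(sets)   # each pool sees 30 Hz, not 90 Hz
```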

  9. Coordinated, multi-joint, fatigue-resistant feline stance produced with intrafascicular hind limb nerve stimulation

    NASA Astrophysics Data System (ADS)

    Normann, R. A.; Dowden, B. R.; Frankel, M. A.; Wilder, A. M.; Hiatt, S. D.; Ledbetter, N. M.; Warren, D. A.; Clark, G. A.

    2012-04-01

    The production of graceful skeletal movements requires coordinated activation of multiple muscles that produce torques around multiple joints. The work described herein is focused on one such movement, stance, that requires coordinated activation of extensor muscles acting around the hip, knee and ankle joints. The forces evoked in these muscles by external stimulation all have a complex dependence on muscle length and shortening velocities, and some of these muscles are biarticular. In order to recreate sit-to-stand maneuvers in the anesthetized feline, we excited the hind limb musculature using intrafascicular multielectrode stimulation (IFMS) of the muscular branch of the sciatic nerve, the femoral nerve and the main branch of the sciatic nerve. Stimulation was achieved with either acutely or chronically implanted Utah Slanted Electrode Arrays (USEAs) via subsets of electrodes (1) that activated motor units in the extensor muscles of the hip, knee and ankle joints, (2) that were able to evoke large extension forces and (3) that manifested minimal coactivation of the targeted motor units. Three hind limb force-generation strategies were investigated, including sequential activation of independent motor units to increase force, and interleaved or simultaneous IFMS of three sets of six or more USEA electrodes that excited the hip, knee and ankle extensors. All force-generation strategies evoked stance, but the interleaved IFMS strategy also reduced muscle fatigue produced by repeated sit-to-stand maneuvers compared with fatigue produced by simultaneous activation of different motor neuron pools. These results demonstrate the use of interleaved IFMS as a means to recreate coordinated, fatigue-resistant multi-joint muscle forces in the unilateral hind limb. This muscle activation paradigm could provide a promising neuroprosthetic approach for the restoration of sit-to-stand transitions in individuals who are paralyzed by spinal cord injury, stroke or disease.

  10. The effects of instruction and hand dominance on grip-to-load force coordination in manipulation tasks.

    PubMed

    Jin, Xin; Uygur, Mehmet; Getchell, Nancy; Hall, Susan J; Jaric, Slobodan

    2011-10-31

    The force applied upon a vertically oriented hand-held object can be decomposed into two orthogonal and highly coordinated components: the grip force (GF; the component perpendicular to the hand-object contact area that provides friction) and the load force (LF; the parallel component that can move the object or support the body). The aim of this study was to investigate the underexplored effects of task instruction and hand dominance on GF-LF coordination. Sixteen right-handed subjects performed bimanual manipulation against a horizontally oriented instrumented device under different sets of instructions. The tasks involved exertion of ramp-and-hold or oscillation patterns of LF performed symmetrically with the two hands, while the instructions regarding the individual actions were either similar (pull with both hands) or dissimilar (pull with one hand and hold with the other). The results revealed that the instruction "to pull" leads to higher indices of GF-LF coordination than the instruction "to hold", as evidenced by a lower GF-LF ratio, higher GF-LF coupling, and higher GF modulation. The only effect of hand dominance was a moderate time lag of GF relative to LF changes observed in the non-dominant hand. We conclude that the instructions could play an important role in GF-LF coordination and, therefore, they should be taken into account when exploring or routinely testing hand function. Additionally, the results suggest that the neural control of GF of the non-dominant hand could involve some feedback mechanisms. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
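One simple index of the GF-LF time lag reported above is the position of the cross-correlation peak between the two force traces. A toy sketch with synthetic grip and load oscillations; the signal shapes and the 25 ms lag are made up for illustration:

```python
import math

# Synthetic load force and a grip force that tracks it with a small lag.
fs, f_hz = 200, 2.0          # sample rate [Hz], oscillation frequency [Hz]
n = 640
lag_true = 5                 # grip lags load by 5 samples (25 ms at 200 Hz)
LF = [2.0 + math.sin(2 * math.pi * f_hz * t / fs) for t in range(n)]
GF = [1.5 + 0.8 * math.sin(2 * math.pi * f_hz * (t - lag_true) / fs) for t in range(n)]

def xcorr_lag(a, b, max_lag=20):
    """Lag l maximizing sum_t a~(t) * b~(t+l) (mean-removed, valid overlap).
    A positive result means b lags a by that many samples."""
    am = sum(a) / len(a)
    bm = sum(b) / len(b)
    az = [v - am for v in a]
    bz = [v - bm for v in b]
    def r(l):
        return sum(az[t] * bz[t + l] for t in range(max_lag, len(a) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=r)

lag_est = xcorr_lag(LF, GF)   # recovers the built-in 5-sample lag
```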

  11. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements, such as the Sun, the center of the Earth, star, and magnetic field directions, to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in three-dimensional space. These measurements are usually processed by an extended Kalman filter (EKF), which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation have been presented in the literature, namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude corresponding to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion, and the updated quaternion estimate is then obtained as the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements, and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in faster convergence; however, when the attitude changed considerably between measurements, the quaternion estimate diverged without filter tuning or normalization. When the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium, we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
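The brute-force normalization mentioned above is simply scaling the quaternion by the inverse of its Euclidean norm after the additive update. A minimal sketch with illustrative numbers (not the flight filter):

```python
import math

def normalize(q):
    """Brute-force quaternion normalization: divide by the Euclidean norm.
    An AEKF's additive update can leave the estimate off the unit sphere;
    this projects it back before the next propagation step."""
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

# An AEKF-style additive update (illustrative numbers): a near-unit quaternion
# plus a small estimated correction is no longer unit-norm.
q_est = [0.7071, 0.0, 0.7071, 0.0]
dq    = [0.01, -0.02, 0.005, 0.0]
q_upd = [a + b for a, b in zip(q_est, dq)]
q_n   = normalize(q_upd)      # back on the unit sphere
```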

  12. Survey on Early Childhood Advisory Councils. NGA Center for Best Practices Backgrounder

    ERIC Educational Resources Information Center

    National Governors Association, 2007

    2007-01-01

    During fall 2007, the NGA Center surveyed states regarding the presence and nature of state early childhood coordinating councils, which may exist as Early Learning Councils, Task Forces, Children's Cabinets, Interagency Coordinating Councils, etc. For brevity, these coordinating entities are referred to below as Early Childhood Advisory Councils…

  13. Review and Evaluation of Hand-Arm Coordinate Systems for Measuring Vibration Exposure, Biodynamic Responses, and Hand Forces.

    PubMed

    Dong, Ren G; Sinsel, Erik W; Welcome, Daniel E; Warren, Christopher; Xu, Xueyan S; McDowell, Thomas W; Wu, John Z

    2015-09-01

    The hand coordinate systems for measuring vibration exposures and biodynamic responses have been standardized, but they are not actually used in many studies. This contradicts the purpose of the standardization. The objectives of this study were to identify the major sources of this problem, and to help define or identify better coordinate systems for the standardization. This study systematically reviewed the principles and definition methods, and evaluated typical hand coordinate systems. This study confirms that, as accelerometers remain the major technology for vibration measurement, it is reasonable to standardize two types of coordinate systems: a tool-based basicentric (BC) system and an anatomically based biodynamic (BD) system. However, these coordinate systems are not well defined in the current standard. Definition of the standard BC system is confusing, and it can be interpreted differently; as a result, it has been inconsistently applied in various standards and studies. The standard hand BD system is defined using the orientation of the third metacarpal bone. It is neither convenient nor defined based on important biological or biodynamic features. This explains why it is rarely used in practice. To resolve these inconsistencies and deficiencies, we proposed a revised method for defining the realistic handle BC system and an alternative method for defining the hand BD system. A fingertip-based BD system for measuring the principal grip force is also proposed based on an important feature of the grip force confirmed in this study.

  14. Review and Evaluation of Hand–Arm Coordinate Systems for Measuring Vibration Exposure, Biodynamic Responses, and Hand Forces

    PubMed Central

    Dong, Ren G.; Sinsel, Erik W.; Welcome, Daniel E.; Warren, Christopher; Xu, Xueyan S.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    The hand coordinate systems for measuring vibration exposures and biodynamic responses have been standardized, but they are not actually used in many studies. This contradicts the purpose of the standardization. The objectives of this study were to identify the major sources of this problem, and to help define or identify better coordinate systems for the standardization. This study systematically reviewed the principles and definition methods, and evaluated typical hand coordinate systems. This study confirms that, as accelerometers remain the major technology for vibration measurement, it is reasonable to standardize two types of coordinate systems: a tool-based basicentric (BC) system and an anatomically based biodynamic (BD) system. However, these coordinate systems are not well defined in the current standard. Definition of the standard BC system is confusing, and it can be interpreted differently; as a result, it has been inconsistently applied in various standards and studies. The standard hand BD system is defined using the orientation of the third metacarpal bone. It is neither convenient nor defined based on important biological or biodynamic features. This explains why it is rarely used in practice. To resolve these inconsistencies and deficiencies, we proposed a revised method for defining the realistic handle BC system and an alternative method for defining the hand BD system. A fingertip-based BD system for measuring the principal grip force is also proposed based on an important feature of the grip force confirmed in this study. PMID:26929824

  15. Improving the Agility of the NATO Response Force (NRF)

    DTIC Science & Technology

    2010-04-01

    …the MCCE and the MIH helicopter task force.…agility through unified efforts. Initiatives such as the MIH helicopter task force and the Movement Coordination Centre Europe (MCCE) are positive…

  16. Ultrasonic Attenuation in Normal and Superconducting Indium.

    DTIC Science & Technology

    1980-05-22

    x, y, z space coordinates; dislocation displacement…The driving force on the dislocation is given by F = bσ (2.7). In general, the dislocation displacement will be a function of three space coordinates…mm diameter, 50 Ω impedance coaxial conductors made of stainless steel and Teflon. The cavity button is soldered directly to the rigid…

  17. Enhancing Coordination Among the U.S. Preventive Services Task Force, Agency for Healthcare Research and Quality, and National Institutes of Health.

    PubMed

    Murray, David M; Kaplan, Robert M; Ngo-Metzger, Quyen; Portnoy, Barry; Olkkola, Susanne; Stredrick, Denise; Kuczmarski, Robert J; Goldstein, Amy B; Perl, Harold I; O'Connell, Mary E

    2015-09-01

    This paper focuses on the relationships among the U.S. Preventive Services Task Force (USPSTF); Agency for Healthcare Research and Quality (AHRQ); and NIH. After a brief description of the Task Force, AHRQ, NIH, and an example of how they interact, we describe the steps that have been taken recently by NIH to enhance their coordination. We also discuss several challenges that remain and consider potential remedies that NIH, AHRQ, and investigators can take to provide the USPSTF with the data it needs to make recommendations, particularly those pertaining to behavioral interventions. Published by Elsevier Inc.

  18. Predicting climate change: Uncertainties and prospects for surmounting them

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2008-03-01

    General circulation models (GCMs) are among the most detailed and sophisticated models of natural phenomena in existence. Still, the lack of robust and efficient subgrid-scale parametrizations for GCMs, along with the inherent sensitivity to initial data and the complex nonlinearities involved, present a major and persistent obstacle to narrowing the range of estimates for end-of-century warming. Estimating future changes in the distribution of climatic extrema is even more difficult. Brute-force tuning of the large number of GCM parameters does not appear to help reduce the uncertainties. Andronov and Pontryagin (1937) proposed structural stability as a way to evaluate model robustness. Unfortunately, many real-world systems proved to be structurally unstable. We illustrate these concepts with a very simple model for the El Niño–Southern Oscillation (ENSO). Our model is governed by a differential delay equation with a single delay and periodic (seasonal) forcing. Like many of its more or less detailed and realistic precursors, this model exhibits a Devil's staircase. We study the model's structural stability, describe the mechanisms of the observed instabilities, and connect our findings to ENSO phenomenology. In the model's phase-parameter space, regions of smooth dependence on parameters alternate with rough, fractal ones. We then apply the tools of random dynamical systems and stochastic structural stability to the circle map and a torus map. The effect of noise with compact support on these maps is fairly intuitive: it is the most robust structures in phase-parameter space that survive the smoothing introduced by the noise. The nature of the stochastic forcing matters, thus suggesting that certain types of stochastic parametrizations might be better than others in achieving GCM robustness. This talk represents joint work with M. Chekroun, E. Simonnet and I. Zaliapin.
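A minimal delay model of this flavor can be integrated directly. The sketch below uses an illustrative equation form and parameter set (not the authors' model): a saturating delayed feedback plus periodic seasonal forcing, integrated by forward Euler with a ring buffer holding the delayed state:

```python
import math

# Toy delay oscillator with seasonal forcing (illustrative form/parameters):
#   dh/dt = -a * tanh(kappa * h(t - tau)) + b * cos(2*pi*t),  t in years
a, kappa, b, tau = 1.0, 10.0, 1.0, 0.5
dt = 0.001
lag = int(tau / dt)
hist = [0.1] * lag            # constant initial history on [-tau, 0]
h, t = 0.1, 0.0
traj = []
for step in range(20000):     # 20 model years
    h_delay = hist[step % lag]        # h(t - tau) from the ring buffer
    dh = -a * math.tanh(kappa * h_delay) + b * math.cos(2 * math.pi * t)
    hist[step % lag] = h              # this slot becomes the delayed value in one lag
    h += dh * dt
    t += dt
    traj.append(h)
```

The strong delayed feedback makes the resting state oscillatory while the saturation keeps the trajectory bounded; the interplay of the internal delay period with the annual forcing is what produces the frequency-locked steps of the Devil's staircase in richer versions of such models.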

  19. Chemical reactions induced by oscillating external fields in weak thermal environments

    NASA Astrophysics Data System (ADS)

    Craven, Galen T.; Bartsch, Thomas; Hernandez, Rigoberto

    2015-02-01

    Chemical reaction rates must increasingly be determined in systems that evolve under the control of external stimuli. In these systems, when a reactant population is induced to cross an energy barrier through forcing from a temporally varying external field, the transition state that the reaction must pass through during the transformation from reactant to product is no longer a fixed geometric structure, but is instead time-dependent. For a periodically forced model reaction, we develop a recrossing-free dividing surface that is attached to a transition state trajectory [T. Bartsch, R. Hernandez, and T. Uzer, Phys. Rev. Lett. 95, 058301 (2005)]. We have previously shown that for single-mode sinusoidal driving, the stability of the time-varying transition state directly determines the reaction rate [G. T. Craven, T. Bartsch, and R. Hernandez, J. Chem. Phys. 141, 041106 (2014)]. Here, we extend our previous work to the case of multi-mode driving waveforms. Excellent agreement is observed between the rates predicted by stability analysis and rates obtained through numerical calculation of the reactive flux. We also show that the optimal dividing surface and the resulting reaction rate for a reactive system driven by weak thermal noise can be approximated well using the transition state geometry of the underlying deterministic system. This agreement persists as long as the thermal driving strength is less than the order of that of the periodic driving. The power of this result is its simplicity. The surprising accuracy of the time-dependent noise-free geometry for obtaining transition state theory rates in chemical reactions driven by periodic fields reveals the dynamics without requiring the cost of brute-force calculations.

  20. The force synergy of human digits in static and dynamic cylindrical grasps.

    PubMed

    Kuo, Li-Chieh; Chen, Shih-Wei; Lin, Chien-Ju; Lin, Wei-Jr; Lin, Sheng-Che; Su, Fong-Chin

    2013-01-01

    This study explores the force synergy of human digits in both static and dynamic cylindrical grasping conditions. The patterns of digit force distribution, error compensation, and the relationships among digit forces are examined to quantify the synergetic patterns and coordination of multi-finger movements. This study recruited 24 healthy participants to perform cylindrical grasps using a glass simulator under normal grasping and one-finger restricted conditions. Parameters such as the grasping force, patterns of digit force distribution, and the force coefficient of variation are determined. Correlation coefficients and principal component analysis (PCA) are used to estimate the synergy strength under the dynamic grasping condition. Specific distribution patterns of digit forces are identified for various conditions. The compensation of adjacent fingers for the force in the normal direction of an absent finger agrees with the principle of error compensation. For digit forces in anti-gravity directions, the distribution patterns vary significantly by participant. The forces exerted by the thumb are closely related to those exerted by other fingers under all conditions. The index-middle and middle-ring finger pairs demonstrate a significant relationship. The PCA results show that the normal forces of digits are highly coordinated. This study reveals that normal force synergy exists under both static and dynamic cylindrical grasping conditions.
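The PCA step can be reproduced on synthetic grasp data: four finger-force channels driven by one shared signal plus independent noise (a crude stand-in for a grasp synergy; weights and noise level are made up), with the covariance matrix built by hand and the leading eigenpair found by power iteration:

```python
import random

random.seed(2)
# Synthetic normal-force traces for four fingers sharing one drive signal.
n = 500
drive = [random.gauss(0, 1) for _ in range(n)]
weights = [0.9, 1.0, 0.8, 0.5]        # index, middle, ring, little (illustrative)
data = [[w * d + random.gauss(0, 0.2) for w in weights] for d in drive]

# Sample covariance matrix of the 4 force channels.
means = [sum(row[j] for row in data) / n for j in range(4)]
cov = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n - 1)
        for j in range(4)] for i in range(4)]

# Leading eigenpair by power iteration -> first principal component.
v = [1.0, 0.0, 0.0, 0.0]
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(4)) for i in range(4)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
lam1 = sum(v[i] * sum(cov[i][j] * v[j] for j in range(4)) for i in range(4))
explained = lam1 / sum(cov[i][i] for i in range(4))   # variance explained by PC1
```

Because one drive signal dominates all four channels, the first component captures most of the variance, the signature of a strong force synergy.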

  1. The Force Synergy of Human Digits in Static and Dynamic Cylindrical Grasps

    PubMed Central

    Kuo, Li-Chieh; Chen, Shih-Wei; Lin, Chien-Ju; Lin, Wei-Jr; Lin, Sheng-Che; Su, Fong-Chin

    2013-01-01

    This study explores the force synergy of human digits in both static and dynamic cylindrical grasping conditions. The patterns of digit force distribution, error compensation, and the relationships among digit forces are examined to quantify the synergetic patterns and coordination of multi-finger movements. This study recruited 24 healthy participants to perform cylindrical grasps using a glass simulator under normal grasping and one-finger restricted conditions. Parameters such as the grasping force, patterns of digit force distribution, and the force coefficient of variation are determined. Correlation coefficients and principal component analysis (PCA) are used to estimate the synergy strength under the dynamic grasping condition. Specific distribution patterns of digit forces are identified for various conditions. The compensation of adjacent fingers for the force in the normal direction of an absent finger agrees with the principle of error compensation. For digit forces in anti-gravity directions, the distribution patterns vary significantly by participant. The forces exerted by the thumb are closely related to those exerted by other fingers under all conditions. The index-middle and middle-ring finger pairs demonstrate a significant relationship. The PCA results show that the normal forces of digits are highly coordinated. This study reveals that normal force synergy exists under both static and dynamic cylindrical grasping conditions. PMID:23544151

  2. Loud and Clear

    ERIC Educational Resources Information Center

    Meier, Deborah

    2009-01-01

    In this article, the author talks about Ted Sizer and describes him as a "schoolman," a Mr. Chips figure with all the romance that surrounded that image. Accustomed to models of brute power, parents, teachers, bureaucrats, and even politicians were attracted to his message of common decency. There's a way of talking about, and to, school people…

  3. Individual Choice and Unequal Participation in Higher Education

    ERIC Educational Resources Information Center

    Voigt, Kristin

    2007-01-01

    Does the unequal participation of non-traditional students in higher education indicate social injustice, even if it can be traced back to individuals' choices? Drawing on luck egalitarian approaches, this article suggests that an answer to this question must take into account the effects of unequal brute luck on educational choices. I use a…

  4. One Digit Interruption: The Altered Force Patterns during Functionally Cylindrical Grasping Tasks in Patients with Trigger Digits

    PubMed Central

    Chen, Po-Tsun; Lin, Chien-Ju; Jou, I-Ming; Chieh, Hsiao-Feng; Su, Fong-Chin; Kuo, Li-Chieh

    2013-01-01

    Most trigger digit (TD) patients complain that they have problems using their hand in daily or occupational tasks due to single or multiple digits being affected. Unfortunately, clinicians do not know much about how this disease affects the subtle force coordination among digits during manipulation. Thus, this study examined the differences in force patterns during cylindrical grasping between TD patients and healthy subjects. Forty-two TD patients with single-digit involvement were included and sorted into four groups based on the involved digit: thumb, index, middle, or ring finger. Twelve healthy subjects volunteered as controls. Two testing tasks, holding and drinking, were performed by natural grasping with minimal forces. The relations between the force of the thumb and that of each finger were examined with Pearson correlation coefficients. The force amount and contribution of each digit were compared between healthy controls and each TD group by the independent t test. The results showed that all TD groups demonstrated altered correlation patterns of the thumb relative to each finger. Larger forces and higher contributions of the index finger were found during holding by patients with the index finger involved, and also during drinking by patients with the thumb affected and by patients with the middle finger affected. Although no triggering symptom occurred during grasping, the patients showed altered force patterns that may be related to the role of the affected digit in natural grasping function. In conclusion, even if only one digit was affected, the subtle force coordination of all the digits was altered during simple tasks among the TD patients. This study provides information for future studies to further examine the possible injuries secondary to altered finger coordination and to adopt suitable treatment strategies. PMID:24391799

  5. One digit interruption: the altered force patterns during functionally cylindrical grasping tasks in patients with trigger digits.

    PubMed

    Chen, Po-Tsun; Lin, Chien-Ju; Jou, I-Ming; Chieh, Hsiao-Feng; Su, Fong-Chin; Kuo, Li-Chieh

    2013-01-01

    Most trigger digit (TD) patients complain that they have problems using their hand in daily or occupational tasks due to single or multiple digits being affected. Unfortunately, clinicians do not know much about how this disease affects the subtle force coordination among digits during manipulation. Thus, this study examined the differences in force patterns during cylindrical grasping between TD patients and healthy subjects. Forty-two TD patients with single-digit involvement were included and sorted into four groups based on the involved digit: thumb, index, middle, or ring finger. Twelve healthy subjects volunteered as controls. Two testing tasks, holding and drinking, were performed by natural grasping with minimal forces. The relations between the force of the thumb and that of each finger were examined with Pearson correlation coefficients. The force amount and contribution of each digit were compared between healthy controls and each TD group by the independent t test. The results showed that all TD groups demonstrated altered correlation patterns of the thumb relative to each finger. Larger forces and higher contributions of the index finger were found during holding by patients with the index finger involved, and also during drinking by patients with the thumb affected and by patients with the middle finger affected. Although no triggering symptom occurred during grasping, the patients showed altered force patterns that may be related to the role of the affected digit in natural grasping function. In conclusion, even if only one digit was affected, the subtle force coordination of all the digits was altered during simple tasks among the TD patients. This study provides information for future studies to further examine the possible injuries secondary to altered finger coordination and to adopt suitable treatment strategies.

  6. The Role of Molecular Dynamics Potential of Mean Force Calculations in the Investigation of Enzyme Catalysis.

    PubMed

    Yang, Y; Pan, L; Lightstone, F C; Merz, K M

    2016-01-01

    Potential of mean force simulations, widely applied in Monte Carlo and molecular dynamics studies, are useful tools for examining the free energy variation as a function of one or more specific reaction coordinates for a given system. Implementation of the potential of mean force in simulations of biological processes, such as enzyme catalysis, can help overcome the difficulties of sampling specific regions of the energy landscape and provide useful insights for understanding the catalytic mechanism. Potential of mean force simulations usually require many, possibly parallelizable, short simulations instead of a few extremely long simulations and, therefore, are fairly manageable for most research facilities. In this chapter, we provide detailed protocols for applying potential of mean force simulations to investigate enzymatic mechanisms for several different enzyme systems. © 2016 Elsevier Inc. All rights reserved.
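
    The core idea of a potential of mean force can be illustrated in its simplest form: sample a reaction coordinate and Boltzmann-invert its distribution, F(x) = -kT ln P(x). The sketch below draws directly from a known double-well potential so the recovered PMF can be checked; production PMF work of the kind described in the record would instead use umbrella sampling with WHAM or MBAR.

```python
# Minimal PMF illustration by Boltzmann inversion of a sampled
# distribution. The double-well potential and all parameters are
# invented for illustration; this is not the chapter's protocol.
import numpy as np

kT = 0.593                                    # kcal/mol at ~298 K

def U(x):                                     # double well, minima at x = +/-1
    return 2.0 * (x * x - 1.0) ** 2

rng = np.random.default_rng(1)
x, samples = 1.0, []
for _ in range(200_000):                      # Metropolis sampling of exp(-U/kT)
    trial = x + rng.normal(0.0, 0.3)
    if rng.random() < np.exp(-(U(trial) - U(x)) / kT):
        x = trial
    samples.append(x)

hist, edges = np.histogram(samples[5_000:], bins=60, range=(-1.6, 1.6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
pmf = -kT * np.log(hist[mask])                # F(x) = -kT ln P(x)
pmf -= pmf.min()                              # put the global minimum at zero
barrier = pmf[np.abs(centers[mask]).argmin()]  # PMF height near x = 0
```

    For this 1-D example the PMF should reproduce U(x) up to an additive constant, so the recovered barrier should be close to U(0) = 2 kcal/mol.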

  7. Effects of walking speed on asymmetry and bilateral coordination of gait

    PubMed Central

    Plotnik, Meir; Bartsch, Ronny P.; Zeev, Aviva; Giladi, Nir; Hausdorff, Jeffery M.

    2013-01-01

    The mechanisms regulating the bilateral coordination of gait in humans are largely unknown. Our objective was to study how bilateral coordination changes as a result of gait speed modifications during overground walking. Fifteen young adults wore force sensitive insoles that measured vertical forces used to determine the timing of the gait cycle events under three walking conditions (i.e., usual-walking, fast and slow). Ground reaction force impact (GRFI) associated with heel-strikes was also quantified, representing the potential contribution of sensory feedback to the regulation of gait. Gait asymmetry (GA) was quantified based on the differences between right and left swing times, and the bilateral coordination of gait was assessed using the phase coordination index (PCI), a metric that quantifies the consistency and accuracy of the anti-phase stepping pattern. GA was preserved across the three gait speeds. PCI was higher (reduced coordination) in the slow gait condition, compared to usual-walking (3.51% vs. 2.47%, respectively, p=0.002), but was not significantly affected in the fast condition. GRFI values were lower in the slow walking as compared to usual-walking and higher in the fast walking condition (p<0.001). Stepwise regression revealed that slowed gait related changes in PCI were not associated with the slowed gait related changes in GRFI. The present findings suggest that left-right anti-phase stepping is similar in normal and fast walking, but altered during slowed walking. This behavior might reflect a relative increase in attention resources required to regulate a slow gait speed, consistent with the possibility that cortical function and supraspinal input influence the bilateral coordination of gait. PMID:23680424
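
    The GA and PCI metrics in this record can be sketched from heel-strike and swing-time data. The formulas below follow Plotnik and colleagues' definitions as we understand them (phase of each left heel strike within the right gait cycle; PCI as the sum of a variability term and a normalized accuracy term), and the stride timings are synthetic.

```python
# Hedged sketch of gait asymmetry (GA) and the phase coordination
# index (PCI). Heel-strike times and swing times are invented.
import numpy as np

def pci(right_hs, left_hs):
    """PCI from right/left heel-strike times (assumed definition)."""
    phases = []
    for i in range(len(right_hs) - 1):
        cycle = right_hs[i + 1] - right_hs[i]
        left = left_hs[(left_hs > right_hs[i]) & (left_hs < right_hs[i + 1])]
        if left.size:                       # left strike phase in degrees
            phases.append(360.0 * (left[0] - right_hs[i]) / cycle)
    phases = np.asarray(phases)
    p_phi_abs = 100.0 * np.mean(np.abs(phases - 180.0)) / 180.0  # accuracy
    phi_cv = 100.0 * np.std(phases) / np.mean(phases)            # consistency
    return phi_cv + p_phi_abs

def gait_asymmetry(swing_left, swing_right):
    short, long_ = sorted([np.mean(swing_left), np.mean(swing_right)])
    return 100.0 * abs(np.log(short / long_))

# Synthetic strides: 1.1 s cycles, left strikes near anti-phase (180 deg)
rng = np.random.default_rng(2)
right = np.arange(0.0, 60.0, 1.1)
left = right + 0.55 + rng.normal(0.0, 0.01, size=right.size)
pci_val = pci(right, left)
ga = gait_asymmetry([0.40, 0.41], [0.42, 0.43])
```

    Perfect anti-phase stepping with no variability would give PCI = 0; larger values mean poorer bilateral coordination, matching the record's slow-walking result.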

  8. Toledo Area Private Industry Council SDA #9. Welfare Coordination Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Area Private Industry Council, OH.

    The Toledo Area Welfare Coordination Task Force, coordinated by the Private Industry Council and funded by the Job Training Partnership Act, brought together more than 20 community leaders representing private and public organizations that have a role to play in implementing the Job Opportunities and Basic Skills (JOBS) program in Lucas and Wood…

  9. 78 FR 27126 - East Bay, St. Andrews Bay and the Gulf of Mexico at Tyndall Air Force Base, Florida; Restricted...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... a line connecting the following coordinates: Commencing from the mean high water line at... defined at 33 CFR part 329 within the area bounded by a line connecting the following coordinates... bounded by a line connecting the following coordinates: Commencing from the mean high water line at...

  10. Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.

    PubMed

    Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai

    2014-07-02

    We develop here the methodology for dramatically accelerating the ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost concept (BB), which we validate here for describing hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is, the detailed sequences of reactive intermediates and their subsequent transformation to others) for H2 oxidation obtained from aARRDyn agree well with the brute force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values, extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ∼538 s, whereas the computational cost was just 1289 ps, a speed increase of ∼0.42 trillion (10^12) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit to a quantum mechanics (QM) level, leading to the ReaxFF-OH2014 force field that was used in the simulations.

  11. Computing the binding affinity of a ligand buried deep inside a protein with the hybrid steered molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villarreal, Oscar D.; Yu, Lili; Department of Laboratory Medicine, Yancheng Vocational Institute of Health Sciences, Yancheng, Jiangsu 224006

    Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting the small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing back the cavity, we can avoid the problem of error amplification by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.

  12. Biomechanics and muscle coordination of human walking. Part I: introduction to concepts, power transfer, dynamics and simulations.

    PubMed

    Zajac, Felix E; Neptune, Richard R; Kautz, Steven A

    2002-12-01

    Current understanding of how muscles coordinate walking in humans is derived from analyses of body motion, ground reaction force and EMG measurements. This is Part I of a two-part review that emphasizes how muscle-driven dynamics-based simulations assist in the understanding of individual muscle function in walking, especially the causal relationships between muscle force generation and walking kinematics and kinetics. Part I reviews the strengths and limitations of Newton-Euler inverse dynamics and dynamical simulations, including the ability of each to find the contributions of individual muscles to the acceleration/deceleration of the body segments. We caution against using the concept of biarticular muscles transferring power from one joint to another to infer muscle coordination principles because energy flow among segments, even the adjacent segments associated with the joints, cannot be inferred from computation of joint powers and segmental angular velocities alone. Rather, we encourage the use of dynamical simulations to perform muscle-induced segmental acceleration and power analyses. Such analyses have shown that the exchange of segmental energy caused by the forces or accelerations induced by a muscle can be fundamentally invariant to whether the muscle is shortening, lengthening, or neither. How simulation analyses lead to understanding the coordination of seated pedaling, rather than walking, is discussed in this first part because the dynamics of pedaling are much simpler, allowing important concepts to be revealed. We elucidate how energy produced by muscles is delivered to the crank through the synergistic action of other non-energy producing muscles; specifically, that a major function performed by a muscle arises from the instantaneous segmental accelerations and redistribution of segmental energy throughout the body caused by its force generation. Part II reviews how dynamical simulations provide insight into muscle coordination of walking.

  13. 76 FR 52318 - U.S. Coral Reef Task Force Public Meeting and Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration U.S. Coral Reef Task Force... of the U.S. Coral Reef Task Force. The meeting will be held in Ft. Lauderdale, Florida. This meeting, the 26th bi-annual meeting of the U.S. Coral Reef Task Force, provides a forum for coordinated...

  14. Prediction of Bubble Diameter at Detachment from a Wall Orifice in Liquid Cross Flow Under Reduced and Normal Gravity Conditions

    NASA Technical Reports Server (NTRS)

    Nahra, Henry K.; Kamotani, Y.

    2003-01-01

    Bubble formation and detachment is an integral part of the two-phase flow science. The objective of the present work is to theoretically investigate the effects of liquid cross-flow velocity, gas flow rate embodied in the momentum flux force, and orifice diameter on bubble formation in a wall-bubble injection configuration. A two-dimensional one-stage theoretical model based on a global force balance on the bubble evolving from a wall orifice in a cross liquid flow is presented in this work. In this model, relevant forces acting on the evolving bubble are expressed in terms of the bubble center of mass coordinates and solved simultaneously. Relevant forces in low gravity included the momentum flux, shear-lift, surface tension, drag and inertia forces. Under normal gravity conditions, the buoyancy force, which is dominant under such conditions, can be added to the force balance. Two detachment criteria were applicable depending on the gas to liquid momentum force ratio. For low ratios, the time when the bubble acceleration in the direction of the detachment angle is greater or equal to zero is calculated from the bubble x and y coordinates. This time is taken as the time at which all the detaching forces that are acting on the bubble are greater or equal to the attaching forces. For high gas to liquid momentum force ratios, the time at which the y coordinate less the bubble radius equals zero is calculated. The bubble diameter is evaluated at this time as the diameter at detachment from the fact that the bubble volume is simply given by the product of the gas flow rate and time elapsed. Comparison of the model's predictions was also made with predictions from a two-dimensional normal gravity model based on Kumar-Kuloor formulation and such a comparison is presented in this work.
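
    The record's model balances momentum-flux, shear-lift, surface tension, drag, inertia and (in normal gravity) buoyancy forces on the growing bubble. As a much-simplified illustration of such a balance, the classic quasi-static Fritz/Tate equilibrium of buoyancy against surface tension at a wall orifice gives a detachment diameter; the fluid properties below are assumed values for air in water.

```python
# Hedged sketch: quasi-static bubble detachment diameter from
# buoyancy = surface tension (not the record's full 2-D dynamic model).
import math

sigma = 0.072        # N/m, air-water surface tension (assumed)
d_orifice = 1.0e-3   # m, orifice diameter (assumed)
delta_rho = 998.0    # kg/m^3, liquid-gas density difference
g = 9.81             # m/s^2

# delta_rho * g * (pi/6) * d^3 = pi * d_orifice * sigma
d_detach = (6.0 * d_orifice * sigma / (delta_rho * g)) ** (1.0 / 3.0)
```

    For these values the balance predicts a detachment diameter of roughly 3.5 mm; the full model in the record adds the cross-flow and gas-momentum forces that shrink or distort this estimate.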

  15. Coordination of contractility, adhesion and flow in migrating Physarum amoebae.

    PubMed

    Lewis, Owen L; Zhang, Shun; Guy, Robert D; del Álamo, Juan C

    2015-05-06

    This work examines the relationship between spatio-temporal coordination of intracellular flow and traction stress and the speed of amoeboid locomotion of microplasmodia of Physarum polycephalum. We simultaneously perform particle image velocimetry and traction stress microscopy to measure the velocity of cytoplasmic flow and the stresses applied to the substrate by migrating Physarum microamoebae. In parallel, we develop a mathematical model of a motile cell which includes forces from the viscous cytosol, a poro-elastic, contractile cytoskeleton and adhesive interactions with the substrate. Our experiments show that flow and traction stress exhibit back-to-front-directed waves with a distinct phase difference. The model demonstrates that the direction and speed of locomotion are determined by this coordination between contraction, flow and adhesion. Using the model, we identify forms of coordination that generate model predictions consistent with experiments. We demonstrate that this coordination produces near optimal migration speed and is insensitive to heterogeneity in substrate adhesiveness. While it is generally thought that amoeboid motility is robust to changes in extracellular geometry and the nature of extracellular adhesion, our results demonstrate that coordination of adhesive forces is essential to producing robust migration. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Practical approach to subject-specific estimation of knee joint contact force.

    PubMed

    Knarr, Brian A; Higginson, Jill S

    2015-08-20

    Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
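
    The "standard static optimization" baseline in this record distributes a required joint moment across muscles by minimizing a cost such as the sum of squared activations, subject to moment equilibrium. A minimal sketch, with invented moment arms and maximum isometric forces (the record's models use OpenSim, not this toy problem):

```python
# Hedged sketch of static optimization: minimize sum of squared
# activations subject to a joint-moment constraint. All muscle
# parameters are hypothetical.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.05, 0.03])            # moment arms (m), hypothetical
f_max = np.array([3000.0, 1500.0, 2000.0])  # max isometric forces (N), hypothetical
M_req = 80.0                                # required joint moment (N*m)

res = minimize(lambda a: np.sum(a ** 2),    # activation-squared cost
               x0=np.full(3, 0.3),
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq",
                             "fun": lambda a: r @ (a * f_max) - M_req}])
activations = res.x
muscle_forces = activations * f_max         # per-muscle force estimates
```

    Subject-specific strength adjustment, as recommended in the record, would amount to scaling `f_max` per muscle from clinical strength assessments before solving.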

  17. Practical approach to subject-specific estimation of knee joint contact force

    PubMed Central

    Knarr, Brian A.; Higginson, Jill S.

    2015-01-01

    Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data, however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models’ predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. PMID:25952546

  18. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance.

    PubMed

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L

    2017-02-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.

  19. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance

    PubMed Central

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force/moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task. PMID:27785549

  20. Confronting the Neo-Liberal Brute: Reflections of a Higher Education Middle-Level Manager

    ERIC Educational Resources Information Center

    Maistry, S. M.

    2012-01-01

    The higher education scenario in South Africa is fraught with tensions and contradictions. Publicly funded Higher Education Institutions (HEIs) face a particular dilemma. They are expected to fulfill a social mandate which requires a considered response to the needs of the communities in which they are located while simultaneously aspiring for…

  1. Functional connectivity in the neuromuscular system underlying bimanual coordination

    PubMed Central

    de Vries, Ingmar E. J.; Daffertshofer, Andreas; Stegeman, Dick F.

    2016-01-01

    Neural synchrony has been suggested as a mechanism for integrating distributed sensorimotor systems involved in coordinated movement. To test the role of corticomuscular and intermuscular coherence in bimanual coordination, we experimentally manipulated the degree of coordination between hand muscles by varying the sensitivity of the visual feedback to differences in bilateral force. In 16 healthy participants, cortical activity was measured using EEG and muscle activity of the flexor pollicis brevis of both hands using high-density electromyography (HDsEMG). Using the uncontrolled manifold framework, coordination between bilateral forces was quantified by the synergy index RV in the time and frequency domain. Functional connectivity was assessed using corticomuscular coherence between muscle activity and cortical source activity and intermuscular coherence between bilateral EMG activity. The synergy index increased in the high coordination condition. RV was higher in the high coordination condition in frequencies between 0 and 0.5 Hz; for the 0.5- to 2-Hz frequency band, this pattern was inverted. Corticomuscular coherence in the beta band (16–30 Hz) was maximal in the contralateral motor cortex and was reduced in the high coordination condition. In contrast, intermuscular coherence was observed at 5–12 Hz and increased with bimanual coordination. Within-subject comparisons revealed a negative correlation between RV and corticomuscular coherence and a positive correlation between RV and intermuscular coherence. Our findings suggest two distinct neural pathways: 1) corticomuscular coherence reflects direct corticospinal projections involved in controlling individual muscles; and 2) intermuscular coherence reflects diverging pathways involved in the coordination of multiple muscles. PMID:27628205
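
    The corticomuscular and intermuscular measures in this record are magnitude-squared coherence estimates between band-limited signals. The sketch below computes coherence between two synthetic channels sharing a common 20 Hz (beta-band) component plus independent noise; it stands in for the EEG/HDsEMG pipeline, which is not reproduced here.

```python
# Hedged sketch: Welch magnitude-squared coherence between two
# synthetic signals with a shared beta-band component.
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(3)
common = np.sin(2 * np.pi * 20.0 * t)          # shared 20 Hz drive
eeg = common + rng.standard_normal(t.size)     # "cortical" channel
emg = 0.5 * common + rng.standard_normal(t.size)  # "muscle" channel

f, coh = signal.coherence(eeg, emg, fs=fs, nperseg=1024)
beta_peak = coh[(f >= 16) & (f <= 30)].max()   # beta-band coherence
floor = coh[(f > 40) & (f < 100)].mean()       # off-band noise floor
```

    The shared component produces high coherence only in the beta band; elsewhere coherence sits near the bias floor of roughly 1/(number of Welch segments).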

  2. Effectiveness of Healthcare Coordination in Patients with Chronic Respiratory Diseases.

    PubMed

    Kurpas, Donata; Szwamel, Katarzyna; Lenarcik, Dorota; Guzek, Marika; Prusaczyk, Artur; Żuk, Paweł; Michalowska, Jolanta; Grzeda, Agnieszka; Mroczek, Bożena

    2017-08-12

    Coordination of healthcare effectively prevents exacerbations and reduces the number of hospitalizations, emergency visits, and the mortality rate in patients with chronic respiratory diseases. The purpose of this study was to determine clinical effectiveness of ambulatory healthcare coordination in chronic respiratory patients and its effect on the level of healthcare services as an indicator of direct medical costs. We conducted a retrospective health record survey, using an online database of 550 patients with chronic respiratory diseases. There were decreases in breathing rate, heart rate, and the number of cigarettes smoked per day, and forced vital capacity (FVC) and forced expired volume in 1 s (FEV1) increased after the implementation of the coordinated healthcare structure. These benefits were accompanied by increases in the number of visits to the pulmonary outpatient clinic (p < 0.001), diagnostic costs (p < 0.001), and referrals to other outpatient clinics (p < 0.003) and hospitals (p < 0.001). The advantageous effects of healthcare coordination on clinical status of respiratory patients above outlined persisted over a 3-year period being reviewed.

  3. Multi-muscle synergies in an unusual postural task: quick shear force production.

    PubMed

    Robert, Thomas; Zatsiorsky, Vladimir M; Latash, Mark L

    2008-05-01

    We considered a hypothetical two-level hierarchy participating in the control of vertical posture. The framework of the uncontrolled manifold (UCM) hypothesis was used to explore the muscle groupings (M-modes) and multi-M-mode synergies involved in the stabilization of a time profile of the shear force in the anterior-posterior direction. Standing subjects were asked to produce pulses of shear force into a target using visual feedback while trying to minimize the shift of the center of pressure (COP). Principal component analysis applied to integrated muscle activation indices identified three M-modes. The composition of the M-modes was similar across subjects and the two directions of the shear force pulse. It differed from the composition of M-modes described in earlier studies of more natural actions associated with large COP shifts. Further, the trial-to-trial M-mode variance was partitioned into two components: one component that does not affect a particular performance variable (V(UCM)), and its orthogonal component (V(ORT)). We argued that there is a multi-M-mode synergy stabilizing this particular performance variable if V(UCM) is higher than V(ORT). Overall, we found a multi-M-mode synergy stabilizing both shear force and COP coordinate. For the shear force, this synergy was strong for the backward force pulses and nonsignificant for the forward pulses. An opposite result was found for the COP coordinate: the synergy was stronger for the forward force pulses. The study shows that M-mode composition can change in a task-specific way and that two different performance variables can be stabilized using the same set of elemental variables (M-modes). The different dependences of the ΔV indices for the shear force and COP coordinate on the force pulse direction supports applicability of the principle of superposition (separate controllers for different performance variables) to the control of different mechanical variables in postural tasks. The M-mode composition allows a natural mechanical interpretation.
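
    The V(UCM)/V(ORT) partition used in this record can be sketched directly: project trial-to-trial deviations of the elemental variables (M-mode gains) onto the null space of the Jacobian of the performance variable, and onto its orthogonal complement, normalizing each by its dimensionality. The Jacobian and trial data below are synthetic.

```python
# Hedged sketch of the UCM variance decomposition with an invented
# Jacobian (performance variable = sum of three M-mode gains).
import numpy as np
from scipy.linalg import null_space

J = np.array([[1.0, 1.0, 1.0]])     # Jacobian of the performance variable
rng = np.random.default_rng(4)
base = rng.standard_normal((200, 3))
# Trials: large compensated variability (sum ~ constant) + small uncompensated part
trials = base - base.mean(axis=1, keepdims=True) + 0.1 * rng.standard_normal((200, 1))

dev = trials - trials.mean(axis=0)  # trial-to-trial deviations
N = null_space(J)                   # 3x2 orthonormal basis of the UCM
v_ucm = np.sum((dev @ N) ** 2) / (N.shape[1] * len(dev))
ort = J.T / np.linalg.norm(J)       # direction that changes the performance variable
v_ort = np.sum((dev @ ort) ** 2) / (1 * len(dev))
v_tot = (N.shape[1] * v_ucm + v_ort) / 3.0
delta_v = (v_ucm - v_ort) / v_tot   # > 0 indicates a synergy
```

    Because the synthetic trials keep the sum nearly constant while the individual gains vary, V(UCM) far exceeds V(ORT), the signature of a synergy stabilizing the performance variable.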

  4. Correlations of psycho-physiological parameters influencing the physical fitness of aged women.

    PubMed

    Bretz, É; Kóbor-Nyakas, D É; Bretz, K J; Hrehuss, N; Radák, Z; Nyakas, Csaba

    2014-12-01

    Regular assessment of psycho-physiological parameters in aged subjects helps to clarify the physical and mental conditions that are important in the prevention of health-endangering events and in assuring healthy aging. Thirty female residents of elder-care facilities voluntarily consented to participate in the study. The somatic and psycho-physiological parameters recorded were handgrip force, disjunctive reaction time, balance control and whole body movement coordination, the electrocardiogram and heart rate variability. Significant correlations were found between (a) reaction time and balance control efficiency (r = -0.567, p < 0.009), (b) reaction time and movement coordination accuracy (r = -0.453, p < 0.045), (c) cardiac state and movement coordination accuracy (r = 0.545, p < 0.016), (d) cardiac stress and cardiac state (r = -0.495, p < 0.031), and (e) cardiac stress and force (r = -0.822, p < 0.045). In conclusion, with the aim of establishing a basic test battery for assessing the psycho-physiological components of physical fitness, our results emphasize the importance of systematic physical activity and of endurance and strength training that support muscle force, balance control and whole-body movement coordination, in addition to improving the cardiac stress index. The strong interrelation among these parameters allows the drawing of a more complete view regarding the health condition of aged individuals.

  5. Measurements of Aerodynamic Damping in the MIT Transonic Rotor

    NASA Technical Reports Server (NTRS)

    Crawley, E. F.

    1981-01-01

    A method was developed and demonstrated for the direct measurement of aerodynamic forcing and aerodynamic damping of a transonic compressor. The method is based on the inverse solution of the structural dynamic equations of motion of the blade disk system in order to determine the forces acting on the system. The disturbing and damping forces acting on a given blade are determined if the equations of motion are expressed in individual blade coordinates. If the structural dynamic equations are transformed to multiblade coordinates, the damping can be measured for blade disk modes, and related to a reduced frequency and interblade phase angle. In order to measure the aerodynamic damping in this way, the free response to a known excitation is studied.
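
    The inverse idea in this record is that, given the measured response of the blade-disk system, the forcing follows by substituting the response into the structural equations of motion. A single-mode sketch, with assumed modal parameters and an analytically manufactured response so the recovered force can be checked:

```python
# Hedged sketch: recover the aerodynamic forcing from a "measured"
# response of m*q'' + c*q' + k*q = F(t). Parameters are assumed.
import numpy as np

m, c, k = 1.0, 0.2, 400.0            # modal mass, damping, stiffness (assumed)
w, F0 = 18.0, 5.0                    # forcing frequency (rad/s) and amplitude
t = np.linspace(0.0, 2.0, 4001)

# Closed-form steady-state response to F(t) = F0 cos(w t)
H = 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
phi = np.arctan2(c * w, k - m * w**2)
q = F0 * H * np.cos(w * t - phi)
qd = -F0 * H * w * np.sin(w * t - phi)
qdd = -F0 * H * w**2 * np.cos(w * t - phi)

# Inverse solution: substitute the response into the equation of motion
F_recovered = m * qdd + c * qd + k * q
err = np.max(np.abs(F_recovered - F0 * np.cos(w * t)))
```

    With experimental data, `qd` and `qdd` would come from differentiated (and filtered) measurements, and the same substitution in multiblade coordinates yields per-mode damping estimates.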

  6. Evolution of Logistics: Supporting NATO’s Multinational Corps

    DTIC Science & Technology

    1991-02-15

    French Munitions Council was formed to coordinate the pooling of common use items (ammunition, petroleum products, and food). Problems remained... recipient countries provided the U.S. forces food, housing, transportation, training facilities, etc. This was particularly true for the U.S. forces in... Commonwealth forces with perishable foods and petroleum products, and the ROK forces with war materiel. The forces of each nation arrived in Korea

  7. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.

  8. Asynchronous partial contact motion due to internal resonance in multiple degree-of-freedom rotordynamics

    NASA Astrophysics Data System (ADS)

    Shaw, A. D.; Champneys, A. R.; Friswell, M. I.

    2016-08-01

    Sudden onset of violent chattering or whirling rotor-stator contact motion in rotational machines can cause significant damage in many industrial applications. It is shown that internal resonance can lead to the onset of bouncing-type partial contact motion away from primary resonances. These partial contact limit cycles can involve any two modes of an arbitrarily high degree-of-freedom system, and can be seen as an extension of a synchronization condition previously reported for a single disc system. The synchronization formula predicts multiple drivespeeds, corresponding to different forms of mode-locked bouncing orbits. These results are backed up by a brute-force bifurcation analysis which reveals numerical existence of the corresponding family of bouncing orbits at supercritical drivespeeds, provided the damping is sufficiently low. The numerics reveal many overlapping families of solutions, which leads to significant multi-stability of the response at given drive speeds. Further, secondary bifurcations can also occur within each family, altering the nature of the response and ultimately leading to chaos. It is illustrated how stiffness and damping of the stator have a large effect on the number and nature of the partial contact solutions, illustrating the extreme sensitivity that would be observed in practice.

  9. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth its economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can thus perform CEA on large problems that cannot be analyzed with decision trees.
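
The threshold behaviour the evaluation returns can be illustrated with the standard net-monetary-benefit decision rule; the interventions and figures below are hypothetical, and the sketch is generic CEA arithmetic, not the paper's ID-evaluation algorithm:

```python
def optimal_intervention(interventions, wtp):
    """Return the (name, cost, effectiveness) triple maximizing the
    net monetary benefit NMB = wtp * effectiveness - cost."""
    return max(interventions, key=lambda iv: wtp * iv[2] - iv[1])

# hypothetical interventions: (name, cost, effectiveness in QALYs)
options = [("no treatment", 0.0, 0.0), ("drug A", 1000.0, 0.8)]
```

With these numbers the cost-effectiveness threshold lies at Δcost/Δeffectiveness = 1250 per QALY: below it "no treatment" is optimal, above it "drug A" is, which is exactly the kind of willingness-to-pay interval the ID evaluation enumerates.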

  10. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective metadata-based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  11. Assessment of short-term PM2.5-related mortality due to different emission sources in the Yangtze River Delta, China

    NASA Astrophysics Data System (ADS)

    Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming

    2015-12-01

    Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated by using a PC-based human health benefits software. The economic loss is assessed by using the willingness to pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than that of primary PM2.5 in summer.

  12. Large-scale detection of repetitions

    PubMed Central

    Smyth, W. F.

    2014-01-01

    Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
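
For contrast with the linear-time runs machinery, a brute-force scan for maximal periodic substrings can be sketched as follows. This is illustrative only: it runs in roughly O(n^2) time, and it reports a maximal periodicity for every period p, including non-primitive ones:

```python
def find_repetitions(x):
    """Brute force: for each candidate period p, slide through x and
    record every maximal segment (i, j, p) where x[i:j] has period p
    and spans at least two full periods (j - i >= 2p)."""
    n = len(x)
    runs = set()
    for p in range(1, n // 2 + 1):
        i = 0
        while i + 2 * p <= n:
            j = i
            # extend the match of period p as far as possible
            while j + p < n and x[j] == x[j + p]:
                j += 1
            if j + p - i >= 2 * p:
                runs.add((i, j + p, p))  # periodic segment x[i : j+p]
                i = j + 1
            else:
                i += 1
    return runs
```

For example, the segment CGACGA is reported as one repetition of period 3, and TTT as one run of period 1.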

  13. Towards identification of relevant variables in the observed aerosol optical depth bias between MODIS and AERONET observations

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Lary, D. J.; Gencaga, D.; Albayrak, A.; Wei, J.

    2013-08-01

    Measurements made by the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the aerosol optical depth values in the two datasets shows that there are biases between the two data products. In this paper, we present a general framework for identifying the set of variables responsible for the observed bias, which might be associated with the measurement conditions such as the solar and sensor zenith angles, the solar and sensor azimuth angles, scattering angles, and surface reflectivity at the various measured wavelengths. Specifically, we performed the analysis for the remote sensing Aqua-Land data set, and used a machine learning technique, in this case a neural network, to perform multivariate regression between the ground truth and the training data sets. Finally, we used the mutual information between the observed and predicted values as the measure of similarity to identify the most relevant set of variables. The search is a brute-force method, as we have to consider all possible combinations. The computation involves a huge number-crunching exercise, which we implemented as a job-parallel program.
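
The exhaustive subset search described above can be sketched as follows. The scoring function here is a hypothetical stand-in for the paper's mutual-information criterion, and the variable names are illustrative:

```python
from itertools import combinations

def best_subset(score, variables, k_max=None):
    """Brute force: score every non-empty subset (2^n - 1 candidates)
    and return the highest-scoring one."""
    k_max = k_max if k_max is not None else len(variables)
    best, best_score = None, float("-inf")
    for k in range(1, k_max + 1):
        for subset in combinations(variables, k):
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
    return best, best_score

# toy stand-in for the mutual-information score: rewards two "relevant"
# variables and penalizes subset size (hypothetical, for illustration)
relevant = {"sensor_zenith", "surface_reflectivity"}
toy_score = lambda s: len(set(s) & relevant) - 0.1 * len(s)
```

Because the candidate count grows as 2^n, each subset can be scored independently, which is why the computation parallelizes naturally across jobs.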

  14. Security and matching of partial fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.

    2004-08-01

    Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute-force attacks. The described matching approach has been tested on FVC2002's DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).

  15. Saturn Apollo Program

    NASA Image and Video Library

    1967-07-28

    This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at the rate of 320,000 gallons per minute.

  16. Saturn Apollo Program

    NASA Image and Video Library

    1965-05-01

    This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at the rate of 320,000 gallons per minute.

  17. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
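
The pairing task is the classical assignment problem. A minimal brute-force solver, the slow baseline the abstract contrasts against, simply searches all n! pairings; the Hungarian algorithm returns the same optimum in O(n^3) time (available, for example, as scipy.optimize.linear_sum_assignment). A sketch with squared-distance move costs:

```python
from itertools import permutations

def total_move_cost(atoms, targets, assignment):
    """Sum of squared Euclidean distances for a given atom->target pairing."""
    return sum((ax - targets[j][0]) ** 2 + (ay - targets[j][1]) ** 2
               for (ax, ay), j in zip(atoms, assignment))

def brute_force_assignment(atoms, targets):
    """Try every permutation: optimal but O(n!), usable only for tiny n.
    The Hungarian algorithm solves the same problem in O(n^3)."""
    best = min(permutations(range(len(targets))),
               key=lambda a: total_move_cost(atoms, targets, a))
    return best, total_move_cost(atoms, targets, best)
```

For two atoms already sitting on each other's target sites, the solver correctly swaps the labels and reports zero cost.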

  18. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, this idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model that is derived for the problem at hand, coming up with a set of partial differential equations and boundary value conditions to solve these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute force look-up table based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
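
A look-up-table PID scheme of the kind described can be sketched as follows; the controller is a textbook discrete PID, and the regime names and gain values in the table are hypothetical:

```python
def make_pid(kp, ki, kd, dt=0.01):
    """Discrete PID controller; returns a stateful step function that
    maps a tracking error to a control output."""
    state = {"integral": 0.0, "prev_error": None}
    def step(error):
        state["integral"] += error * dt
        deriv = 0.0 if state["prev_error"] is None \
            else (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * deriv
    return step

# brute-force lookup table: gains pre-tuned offline per flight regime
# (regime names and gain values are hypothetical, for illustration)
GAIN_TABLE = {"hover": (1.2, 0.5, 0.05), "cruise": (0.8, 0.2, 0.02)}
controller = make_pid(*GAIN_TABLE["hover"])
```

At run time the vehicle simply selects the pre-computed gain triple for its current regime, trading memory for the online cost of solving the nonlinear optimization.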

  19. Artificial immune system algorithm in VLSI circuit configuration

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving many constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasized the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system algorithm (HNN-3SATAIS) with the brute force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results depict that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.
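
The brute-force baseline amounts to enumerating all 2^n truth assignments of the 3-SAT instance. A minimal sketch, independent of the Hopfield-network machinery the paper couples it with:

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Enumerate all 2^n truth assignments. A clause is a tuple of
    literals: literal v > 0 means variable v is True, v < 0 means it is
    False. Returns the first satisfying assignment, or None."""
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None
```

The exponential cost of this enumeration is exactly what makes the heuristic AIS search attractive as the clause count grows.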

  20. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    NASA Astrophysics Data System (ADS)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

    In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure as compared to intensity images because of non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. It has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and the analysis based on correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results have indicated that the proposed encryption scheme possesses a high level of security.
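
The paper's scheme combines the Lorenz system with double random phase encoding; the sketch below illustrates only the chaotic ingredient, i.e. the extreme sensitivity to the Lorenz encryption parameters and initial conditions. The Euler step size, transient length, and byte quantization are illustrative choices, not the authors':

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the 3D Lorenz system (classical
    parameters sigma=10, rho=28, beta=8/3)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def keystream(seed, n, transient=5000):
    """Discard a transient, then quantize the x-coordinate into bytes:
    a toy stand-in for deriving key material from a chaotic trajectory."""
    s = seed
    for _ in range(transient):
        s = lorenz_step(s)
    out = []
    for _ in range(n):
        s = lorenz_step(s)
        out.append(int(abs(s[0]) * 1e6) % 256)
    return out
```

A 1e-6 perturbation of the initial condition yields a completely different byte stream, which is the property that makes exhaustive key search over the continuous parameters impractical.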

  1. Mapping PDB chains to UniProtKB entries.

    PubMed

    Martin, Andrew C R

    2005-12-01

    UniProtKB/SwissProt is the main resource for detailed annotations of protein sequences. This database provides a jumping-off point to many other resources through the links it provides. Among others, these include other primary databases, secondary databases, the Gene Ontology and OMIM. While a large number of links are provided to Protein Data Bank (PDB) files, obtaining a regularly updated mapping between UniProtKB entries and PDB entries at the chain or residue level is not straightforward. In particular, there is no regularly updated resource which allows a UniProtKB/SwissProt entry to be identified for a given residue of a PDB file. We have created a completely automatically maintained database which maps PDB residues to residues in UniProtKB/SwissProt and UniProtKB/trEMBL entries. The protocol uses links from PDB to UniProtKB, from UniProtKB to PDB and a brute-force sequence scan to resolve PDB chains for which no annotated link is available. Finally the sequences from PDB and UniProtKB are aligned to obtain a residue-level mapping. The resource may be queried interactively or downloaded from http://www.bioinf.org.uk/pdbsws/.

  2. Proteinortho: Detection of (Co-)orthologs in large-scale analysis

    PubMed Central

    2011-01-01

    Background Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
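
Proteinortho implements an extended version of the reciprocal best alignment heuristic; the unextended core of that heuristic can be sketched as follows, with hypothetical gene names and scores:

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """hits_ab[a] maps genes of genome B to alignment scores for query a
    (and vice versa for hits_ba). Putative orthologs are the pairs that
    are each other's best-scoring hit in both directions."""
    best_ab = {a: max(scores, key=scores.get) for a, scores in hits_ab.items()}
    best_ba = {b: max(scores, key=scores.get) for b, scores in hits_ba.items()}
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}
```

Only the per-gene best hits need to be kept, which is why this style of analysis can avoid the quadratic all-vs-all memory footprint.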

  3. Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca

    2017-06-01

    Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate incertitude through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution would make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
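
The multilevel MC idea rests on the telescoping sum E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l - P_{l-1}], with most samples drawn at the cheap coarse level and only a few at the expensive fine levels. A toy sketch (the model is hypothetical, and its level-dependent error is made deterministic purely to keep the example short; in a real solver the corrections are random and must be sampled with coupled inputs, as below):

```python
import random

def toy_model(level, u):
    """Hypothetical quantity of interest: exact value u**2 plus a
    discretization bias that halves with each refinement level."""
    return u * u + 0.1 * (0.5 ** level)

def mlmc(max_level, n_per_level, seed=0):
    """Multilevel MC estimate of E[P_L] via the telescoping sum;
    each correction uses the SAME random input at both fidelities."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(max_level + 1):
        total = 0.0
        for _ in range(n_per_level[level]):
            u = rng.random()
            if level == 0:
                total += toy_model(0, u)
            else:
                total += toy_model(level, u) - toy_model(level - 1, u)
        estimate += total / n_per_level[level]
    return estimate
```

With 4000 coarse samples and only 100 per correction level, the estimate lands near the fine-level mean 1/3 + 0.1·0.5^L, which is the cost saving the abstract is after.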

  4. Thirty years since diffuse sound reflection by maximum length

    NASA Astrophysics Data System (ADS)

    Cox, Trevor J.; D'Antonio, Peter

    2005-09-01

    This year celebrates the 30th anniversary of Schroeder's seminal paper on sound scattering from maximum length sequences. This paper, along with Schroeder's subsequent publication on quadratic residue diffusers, broke new ground, because they contained simple recipes for designing diffusers with known acoustic performance. So, what has happened in the intervening years? As with most areas of engineering, the room acoustic diffuser has been greatly influenced by the rise of digital computing technologies. Numerical methods have become much more powerful, and this has enabled predictions of surface scattering to greater accuracy and for larger scale surfaces than previously possible. Architecture has also gone through a revolution where the forms of buildings have become more extreme and sculptural. Acoustic diffuser designs have had to keep pace with this to produce shapes and forms that are desirable to architects. To achieve this, design methodologies have moved away from Schroeder's simple equations to brute force optimization algorithms. This paper will look back at the past development of the modern diffuser, explaining how the principles of diffuser design have been devised and revised over the decades. The paper will also look at the present state-of-the art, and dreams for the future.

  5. Expert system for on-board satellite scheduling and control

    NASA Technical Reports Server (NTRS)

    Barry, John M.; Sary, Charisse

    1988-01-01

    An Expert System is described which Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities. This expert system is the Satellite Controller. The resources to be scheduled include power, propellant and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor intensive and time consuming ground operations task. Developing a schedule requires extensive knowledge of the system and subsystems operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, examining cryptic mnemonic data off line to interpret the health and status of the satellite. Then schedules are formulated either as the result of practical operator experience or heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life cycle costs and decrease dependence on ground control. This reduction is required to increase autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.

  6. Using listener-based perceptual features as intermediate representations in music information retrieval.

    PubMed

    Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders

    2014-10-01

    The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.

  7. Ordering effects of conjugate thermal fields in simulations of molecular liquids: Carbon dioxide and water

    NASA Astrophysics Data System (ADS)

    Dittmar, Harro R.; Kusalik, Peter G.

    2016-10-01

    As shown previously, it is possible to apply configurational and kinetic thermostats simultaneously in order to induce a steady thermal flux in molecular dynamics simulations of many-particle systems. This flux appears to promote motion along potential gradients and can be utilized to enhance the sampling of ordered arrangements, i.e., it can facilitate the formation of a critical nucleus. Here we demonstrate that the same approach can be applied to molecular systems, and report a significant enhancement of the homogeneous crystal nucleation of a carbon dioxide (EPM2 model) system. Quantitative ordering effects and reduction of the particle mobilities were observed in water (TIP4P-2005 model) and carbon dioxide systems. The enhancement of the crystal nucleation of carbon dioxide was achieved with relatively small conjugate thermal fields. The effect is many orders of magnitude bigger at milder supercooling, where the forward flux sampling method was employed, than at a lower temperature that enabled brute force simulations of nucleation events. The behaviour exhibited implies that the effective free energy barrier of nucleation must have been reduced by the conjugate thermal field in line with our interpretation of previous results for atomic systems.

  8. Rational design of DNA sequences for nanotechnology, microarrays and molecular computers using Eulerian graphs.

    PubMed

    Pancoska, Petr; Moravek, Zdenek; Moll, Ute M

    2004-01-01

    Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. Melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.

  9. Full counting statistics of conductance for disordered systems

    NASA Astrophysics Data System (ADS)

    Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian

    2017-09-01

    Quantum transport is a stochastic process in nature. As a result, the conductance is fully characterized by its average value and fluctuations, i.e., characterized by full counting statistics (FCS). Since disorders are inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach dealing with fluctuations or cumulants of conductance uses diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated especially for high order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average on the generating function of FCS of conductance within CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is no longer needed. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application of our theory, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorders. Our numerical results of cumulants of conductance show remarkable agreement with those obtained by brute force calculation.
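
The link between a generating function and the cumulants it encodes is the standard moment-cumulant recursion kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) kappa_k m_{n-k}; the sketch below is generic statistics, not the paper's CPA formalism:

```python
from math import comb

def cumulants_from_moments(m):
    """m[n] is the n-th raw moment (with m[0] = 1). Returns a list where
    index n holds the n-th cumulant, via the standard recursion
    kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) * kappa_k * m_{n-k}."""
    kappa = [0.0] * len(m)
    for n in range(1, len(m)):
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa
```

As a sanity check, a constant has zero cumulants beyond the first, and a fair Bernoulli variable has variance 1/4 and vanishing third cumulant.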

  10. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
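
    A direct ("brute-force") N-body integration with an explicit energy-conservation check can be sketched as below; this is a minimal leapfrog illustration of the idea, not the authors' arbitrary-precision code:

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Direct-summation O(N^2) gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def total_energy(pos, vel, mass, G=1.0):
    """Kinetic plus pairwise potential energy, used to monitor the error."""
    ke = 0.5 * np.sum(mass * np.sum(vel ** 2, axis=1))
    pe = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pe -= G * mass[i] * mass[j] / np.linalg.norm(pos[j] - pos[i])
    return ke + pe

def leapfrog(pos, vel, mass, dt, steps):
    """Symplectic kick-drift-kick integration; energy drift stays bounded."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos, mass)
        vel = vel + 0.5 * dt * acc
    return pos, vel
```

    Monitoring `total_energy` before and after the run gives the relative error criterion (here, "conserved to better than 1/10") against which an ensemble member would be accepted.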

  11. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e., the gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite-difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with experimental data. It is shown that the method is more efficient than the traditional methods.
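
    The "brute-force" alternative the abstract refers to, finite-difference sensitivities, can be sketched generically: each design variable costs two extra evaluations of the objective with central differences. Here a hypothetical cheap function `f` stands in for the full CFD analysis:

```python
import numpy as np

def finite_difference_gradient(f, x, h=1e-6):
    """Central-difference sensitivities: 2*len(x) evaluations of f."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad
```

    When each evaluation of `f` is a flow solve, this linear growth in cost with the number of design variables is what motivates the quasi-analytical sensitivities used in the paper.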

  12. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
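
    The nested-grid brute-force idea can be sketched generically: evaluate the objective on a coarse grid, then shrink the search box around the best point and repeat. A minimal sketch (not the GBKFIT implementation, which parallelises the grid evaluations on the GPU):

```python
import numpy as np

def nested_grid_search(f, bounds, n=11, levels=5):
    """Brute-force minimisation on successively refined parameter grids."""
    best = None
    for _ in range(levels):
        axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
        mesh = np.meshgrid(*axes, indexing="ij")
        pts = np.stack([m.ravel() for m in mesh], axis=1)
        vals = np.array([f(p) for p in pts])
        best = pts[np.argmin(vals)]
        # refine: new bounds span one old grid cell either side of the best point
        bounds = [(b - (hi - lo) / (n - 1), b + (hi - lo) / (n - 1))
                  for b, (lo, hi) in zip(best, bounds)]
    return best
```

    The grid evaluations at each level are independent of one another, which is what makes this approach embarrassingly parallel and thus a good fit for GPUs.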

  13. A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns.

    PubMed

    Xu, W; LeBeau, J M

    2018-05-01

    We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Robust computation of dipole electromagnetic fields in arbitrarily anisotropic, planar-stratified environments.

    PubMed

    Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay

    2014-01-01

    We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.

  15. A new feedback image encryption scheme based on perturbation with dynamical compound chaotic sequence cipher generator

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Cui, Minggen; Wang, Zhu

    2009-07-01

    A new compound two-dimensional chaotic function is designed by exploiting two one-dimensional chaotic functions that switch randomly; the construction is used as a chaotic sequence generator and is proven chaotic by Devaney's definition. The properties of the compound chaotic functions are also proved rigorously. To improve robustness against differential cryptanalysis and to produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos, selecting one of the two one-dimensional chaotic functions at random; a new method of image-pixel permutation and substitution is designed in detail via random row and column control of the array, based on the compound chaos. Entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; moreover, it accelerates encryption and achieves a higher level of security. By means of the dynamical compound chaos and perturbation technology, the paper mitigates the finite-precision degradation of one-dimensional chaotic functions on computers.
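
    The keystream-with-feedback idea can be illustrated with a much simpler chaotic map; the sketch below uses a single logistic map (not the authors' compound two-dimensional function), with each ciphertext byte perturbing the chaotic state so that errors avalanche:

```python
def _next(x, r=3.99):
    """One logistic-map iteration; chaotic for r near 4 and x in (0, 1)."""
    return r * x * (1.0 - x)

def encrypt(data: bytes, x0: float) -> bytes:
    x, out = x0, bytearray()
    for p in data:
        x = _next(x)
        c = p ^ int(x * 256) & 0xFF     # XOR with a keystream byte
        out.append(c)
        x = (x + c / 1021.0) % 1.0      # feedback: ciphertext perturbs the state
    return bytes(out)

def decrypt(data: bytes, x0: float) -> bytes:
    x, out = x0, bytearray()
    for c in data:
        x = _next(x)
        out.append(c ^ int(x * 256) & 0xFF)
        x = (x + c / 1021.0) % 1.0      # replay the same feedback
    return bytes(out)
```

    Because decryption replays the identical state updates, the scheme is symmetric; two nearby keys diverge exponentially, which is the sensitivity property such ciphers rely on.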

  16. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy, and the exchanged data are therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio, so a data-securing method is needed that is both robust and fast. One method that matches these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM passes all 15 NIST tests, which means that the key stream generated using this CQM is proven to be random. In addition, samples of the encrypted digital audio are shown to be uniform by a goodness-of-fit test, so securing digital audio using this method is not vulnerable to frequency-analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, encryption and decryption both run on average about 450 times faster than real time, i.e., than the digital audio's duration.

  17. Accelerated Time-Domain Modeling of Electromagnetic Pulse Excitation of Finite-Length Dissipative Conductors over a Ground Plane via Function Fitting and Recursive Convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh

    In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, and then incorporate this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
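
    The recursion trick can be sketched generically: once a kernel is fitted by a sum of decaying exponentials h(t) = Σ_k a_k·e^(−b_k·t), each term's contribution to the convolution obeys a one-step recurrence, so only the latest state is kept instead of the full history. A minimal sketch with illustrative (not report-specific) coefficients:

```python
import numpy as np

def brute_force_conv(x, a, b, dt):
    """O(N^2) discrete convolution with kernel h(t) = sum_k a_k*exp(-b_k*t)."""
    n = len(x)
    t = np.arange(n) * dt
    h = sum(ak * np.exp(-bk * t) for ak, bk in zip(a, b))
    return dt * np.convolve(x, h)[:n]

def recursive_conv(x, a, b, dt):
    """Same result in O(N): one scalar state per exponential term."""
    a = np.asarray(a, dtype=float)
    decay = np.exp(-np.asarray(b, dtype=float) * dt)
    s = np.zeros(len(a))
    y = np.empty(len(x))
    for i, xi in enumerate(x):
        s = decay * s + xi          # update only the most recent history
        y[i] = dt * np.dot(a, s)
    return y
```

    The two routines are algebraically identical for an exponential-sum kernel; the recursion simply exploits e^(−b(m+1)dt) = e^(−b·dt)·e^(−b·m·dt), which is why the exponential basis-fit of the impedance function is the enabling step.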

  18. A Site Density Functional Theory for Water: Application to Solvation of Amino Acid Side Chains.

    PubMed

    Liu, Yu; Zhao, Shuangliang; Wu, Jianzhong

    2013-04-09

    We report a site density functional theory (SDFT) based on the conventional atomistic models of water and the universality ansatz of the bridge functional. The excess Helmholtz energy functional is formulated in terms of a quadratic expansion with respect to the local density deviation from that of a uniform system and a universal functional for all higher-order terms approximated by that of a reference hard-sphere system. With the atomistic pair direct correlation functions of the uniform system calculated from MD simulation and an analytical expression for the bridge functional from the modified fundamental measure theory, the SDFT can be used to predict the structure and thermodynamic properties of water under inhomogeneous conditions with a computational cost negligible in comparison to that of brute-force simulations. The numerical performance of the SDFT has been demonstrated with the predictions of the solvation free energies of 15 molecular analogs of amino acid side chains in water represented by SPC/E, SPC, and TIP3P models. For the TIP3P model, a comparison of the theoretical predictions with MD simulation and experimental data shows agreement within 0.64 and 1.09 kcal/mol on average, respectively.

  19. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.

  20. Multibeam GPU Transient Pipeline for the Medicina BEST-2 Array

    NASA Astrophysics Data System (ADS)

    Magro, A.; Hickish, J.; Adami, K. Z.

    2013-09-01

    Radio transient discovery using next generation radio telescopes will pose several digital signal processing and data transfer challenges, requiring specialized high-performance backends. Several accelerator technologies are being considered as prototyping platforms, including Graphics Processing Units (GPUs). In this paper we present a real-time pipeline prototype capable of processing multiple beams concurrently, performing Radio Frequency Interference (RFI) rejection through thresholding, correcting for the delay in signal arrival times across the frequency band using brute-force dedispersion, event detection and clustering, and finally candidate filtering, with the capability of persisting data buffers containing interesting signals to disk. This setup was deployed at the BEST-2 SKA pathfinder in Medicina, Italy, where several benchmarks and test observations of astrophysical transients were conducted. These tests show that on the deployed hardware eight 20 MHz beams can be processed simultaneously for 640 Dispersion Measure (DM) values. Furthermore, the clustering and candidate filtering algorithms employed prove to be good candidates for online event detection techniques. The number of beams which can be processed increases proportionally to the number of servers deployed and number of GPUs, making it a viable architecture for current and future radio telescopes.
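
    Brute-force dedispersion shifts each frequency channel by the cold-plasma dispersion delay and sums, once per trial DM. A minimal numpy sketch (with an illustrative channel layout, not the BEST-2 configuration):

```python
import numpy as np

def brute_force_dedisperse(data, freqs_mhz, dms, tsamp):
    """data: (nchan, nsamp) filterbank; returns one time series per trial DM."""
    fref = freqs_mhz.max()
    out = np.zeros((len(dms), data.shape[1]))
    for i, dm in enumerate(dms):
        # cold-plasma delay relative to the highest frequency, in seconds
        delays = 4.148808e3 * dm * (freqs_mhz ** -2.0 - fref ** -2.0)
        shifts = np.round(delays / tsamp).astype(int)
        for ch, s in enumerate(shifts):
            out[i] += np.roll(data[ch], -s)
    return out
```

    Each (DM, channel) shift-and-add is independent, which is why this "naive" algorithm maps so well onto GPUs despite its O(ndm × nchan × nsamp) cost.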

  1. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    NASA Astrophysics Data System (ADS)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme based on Substitution-boxes (S-boxes) constructed by a chaotic system and a Linear Fractional Transform (LFT) is proposed. It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys are then used, instead of the fixed keys used in other approaches, to control the encryption process and thwart attacks. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid of the S-box and chaotic systems strengthens the whole encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for use in cryptographic applications.

  2. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov (JFNK) algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we advance the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
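
    The core of JFNK is that Krylov solvers need only Jacobian-vector products, which can be approximated by a finite difference of the residual, so the Jacobian is never formed. A toy sketch follows, with a simple minimal-residual inner iteration standing in for a production Krylov solver such as GMRES:

```python
import numpy as np

def jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u)v ~ (F(u + e*v) - F(u)) / e."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(u)
    e = eps / nv                    # scale so the perturbation has norm eps
    return (F(u + e * v) - F(u)) / e

def jfnk_solve(F, u, newton_iters=20, krylov_iters=50, tol=1e-9):
    """Newton outer loop; the inner linear solve uses only jv(), never J itself."""
    for _ in range(newton_iters):
        r0 = -F(u)
        du = np.zeros_like(u)
        for _ in range(krylov_iters):
            r = r0 - jv(F, u, du)           # current linear residual
            Jr = jv(F, u, r)
            denom = Jr @ Jr
            if denom == 0.0:
                break
            du = du + (r @ Jr) / denom * r  # minimal-residual step
        u = u + du
        if np.linalg.norm(F(u)) < tol:
            break
    return u
```

    The point of the sketch is structural: nothing in the inner loop ever assembles a matrix, which is what keeps phase space out of the linear algebra problem.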

  3. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
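
    The characteristic exponent α of a heavy tail is commonly estimated from the k largest order statistics, e.g. with the Hill estimator; the order-selection problem is then the choice of k. A minimal sketch, with a plain brute-force scan over candidate orders (the paper's simulation-based selector is more elaborate):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail exponent alpha from the k largest values."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

def scan_orders(x, ks):
    """Brute-force scan: one alpha estimate per candidate order k."""
    return {k: hill_estimator(x, k) for k in ks}
```

    Too small a k inflates the variance of the estimate, while too large a k biases it with non-tail data; that trade-off is precisely what a data-adaptive order selector must resolve.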

  4. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carries no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, by mounting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement is a key advantage for augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, produced by projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
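
    Rendering such a synthetic image amounts to a pinhole projection of the point cloud with the camera's assumed exterior orientation (R, t) and intrinsics K; a minimal sketch of that projection step:

```python
import numpy as np

def project_points(points, K, R, t):
    """Pinhole projection u = K (R X + t), returned in pixel coordinates."""
    cam = points @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T                    # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective division
```

    In the paper's setting the projected points would be rasterised into the synthetic image that is then brute-force matched against the real camera shot.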

  5. Report of the Task Force on Human Rights.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC.

    The NEA Task Force was instructed to "recommend to the Executive Committee a structure and program for the coordination and expansion of the human rights activities of the NEA and of the departments, divisions, commissions, and committees." Their recommendations and a discussion of the forces in American society that make them necessary comprise…

  6. Final Environmental Assessment for the Proposed Antenna Construction at the Existing ADF Remote Terminal Facility, Buckley, Air Force Base, Colorado

    DTIC Science & Technology

    2004-09-01

    November. Buckley Air Force Base (BAFB). 2003a. Electronic mail correspondence from Janet Wade, Base Population. July . Buckley Air Force Base...please contact Amy Pallante , our Section 106 Compliance Coordinator, at (303) 866-4678. Sincerely, ~~=i~::t State Historic Preservation Officer

  7. Child Protection, Public Services and the Chimera of Market Force Efficiency.

    ERIC Educational Resources Information Center

    Barker, Richard W.

    1996-01-01

    Describes child protection systems in England and ongoing changes in their services. Considers effects of a market force approach on the organization of child protection services in relation to coordination versus fragmentation and profit versus professionalism. Concludes that the idea that a market force approach to child protection will lead to…

  8. Nonlinear normal vibration modes in the dynamics of nonlinear elastic systems

    NASA Astrophysics Data System (ADS)

    Mikhlin, Yu V.; Perepelkin, N. V.; Klimenko, A. A.; Harutyunyan, E.

    2012-08-01

    Nonlinear normal modes (NNMs) are a generalization of linear normal vibrations. In the Kauderer-Rosenberg concept, in the regime of an NNM all position coordinates are single-valued functions of some selected position coordinate. In the Shaw-Pierre concept, an NNM is a regime in which all generalized coordinates and velocities are single-valued functions of a pair of dominant (active) phase variables. The NNM approach is used in several applied problems. In particular, the Kauderer-Rosenberg NNMs are analyzed in the dynamics of some pendulum systems. The NNMs of forced vibrations are investigated in a rotor system with an isotropic elastic shaft. A combination of the Shaw-Pierre NNMs and the Rauscher method is used to construct the forced NNMs and the frequency responses in the rotor dynamics.

  9. The free energy of a reaction coordinate at multiple constraints: a concise formulation

    NASA Astrophysics Data System (ADS)

    Schlitter, Jürgen; Klähn, Marco

    The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem is the calculation of the mean force for given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc to compute the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints of other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form, which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.

  10. Real-time deformations of organ based on structural mechanics for surgical simulators

    NASA Astrophysics Data System (ADS)

    Nakaguchi, Toshiya; Tagaya, Masashi; Tamura, Nobuhiko; Tsumura, Norimichi; Miyake, Yoichi

    2006-03-01

    This research proposes a deformation model of organs for the development of medical training systems using Virtual Reality (VR) technology. First, the proposed model calculates the strains of the coordinate axes. Second, the deformation is obtained by mapping the coordinates of the object to the strained coordinate system. We place beams in the coordinate space to calculate the strain of the coordinate axes. The forces acting on the object are converted to forces applied to the beams. The bend and twist of the beams are calculated based on the theory of structural mechanics; the bend is derived by the finite element method. We propose two deformation methods that differ in the position of the beams in the coordinate space. One method locates the beams along the three orthogonal axes (x, y, z); the other locates the beams in the area where the deformation is largest. In addition, the strain of the coordinate axes is attenuated in proportion to the distance from the point of action, to account for the attenuation of stress, a viscoelastic feature of organs. The proposed model needs less computational cost than conventional deformation methods, since it does not need to divide the object into elastic elements. The proposed model was implemented in a laparoscopic surgery training system, and real-time deformation was realized.

  11. Cavity method for force transmission in jammed disordered packings of hard particles.

    PubMed

    Bo, Lin; Mari, Romain; Song, Chaoming; Makse, Hernán A

    2014-10-07

    The force distribution of jammed disordered packings has always been considered a central object in the physics of granular materials. However, many of its features are poorly understood. In particular, analytic relations to other key macroscopic properties of jammed matter, such as the contact network and its coordination number, are still lacking. Here we develop a mean-field theory for this problem, based on the consideration of the contact network as a random graph where the force transmission becomes a constraint satisfaction problem. We can thus use the cavity method developed in the past few decades within the statistical physics of spin glasses and hard computer science problems. This method allows us to compute the force distribution P(f) for random packings of hard particles of any shape, with or without friction. We find a new signature of jamming in the small-force behavior P(f) ∼ f^θ, whose exponent has attracted recent active interest: we find a finite value for P(f = 0), along with θ = 0. Furthermore, we relate the force distribution to a lower bound of the average coordination number z̄(μ) of jammed packings of frictional spheres with friction coefficient μ. This bridges the gap between the two known isostatic limits z̄c(μ = 0) = 2D (in dimension D) and z̄c(μ → ∞) = D + 1 by extending the naive Maxwell counting argument to frictional spheres. The theoretical framework describes different types of systems, such as non-spherical objects in arbitrary dimensions, providing a common mean-field scenario to investigate force transmission, contact networks and coordination numbers of jammed disordered packings.

  12. Human-Human Interaction Forces and Interlimb Coordination During Side-by-Side Walking With Hand Contact.

    PubMed

    Sylos-Labini, Francesca; d'Avella, Andrea; Lacquaniti, Francesco; Ivanenko, Yury

    2018-01-01

    Handholding can naturally occur between two walkers. When people walk side-by-side, either with or without hand contact, they often synchronize their steps. However, despite the importance of haptic interaction in general and the natural use of hand contact between humans during walking, few studies have investigated forces arising from physical interactions. Eight pairs of adult subjects participated in this study. They walked on side-by-side treadmills at 4 km/h independently and with hand contact. Only hand contact-related sensory information was available for unintentional synchronization, while visual and auditory communication was obstructed. Subjects walked at their natural cadences or following a metronome. Limb kinematics, hand contact 3D interaction forces and EMG activity of 12 upper limb muscles were recorded. Overall, unintentional step frequency locking was observed during about 40% of time in 88% of pairs walking with hand contact. On average, the amplitude of contact arm oscillations decreased while the contralateral (free) arm oscillated in the same way as during normal walking. Interestingly, EMG activity of the shoulder muscles of the contact arm did not decrease, and their synergistic pattern remained similar. The amplitude of interaction forces and of trunk oscillations was similar for synchronized and non-synchronized steps, though the synchronized steps were characterized by significantly more regular orientations of interaction forces. Our results further support the notion that gait synchronization during natural walking is common, and that it may occur through interaction forces. Conservation of the proximal muscle activity of the contact (not oscillating) arm is consistent with neural coupling between cervical and lumbosacral pattern generation circuitries ("quadrupedal" arm-leg coordination) during human gait. Overall, the findings suggest that individuals might integrate force interaction cues to communicate and coordinate steps during walking.

  13. Human-Human Interaction Forces and Interlimb Coordination During Side-by-Side Walking With Hand Contact

    PubMed Central

    Sylos-Labini, Francesca; d'Avella, Andrea; Lacquaniti, Francesco; Ivanenko, Yury

    2018-01-01

    Handholding can naturally occur between two walkers. When people walk side-by-side, either with or without hand contact, they often synchronize their steps. However, despite the importance of haptic interaction in general and the natural use of hand contact between humans during walking, few studies have investigated forces arising from physical interactions. Eight pairs of adult subjects participated in this study. They walked on side-by-side treadmills at 4 km/h independently and with hand contact. Only hand contact-related sensory information was available for unintentional synchronization, while visual and auditory communication was obstructed. Subjects walked at their natural cadences or following a metronome. Limb kinematics, hand contact 3D interaction forces and EMG activity of 12 upper limb muscles were recorded. Overall, unintentional step frequency locking was observed during about 40% of time in 88% of pairs walking with hand contact. On average, the amplitude of contact arm oscillations decreased while the contralateral (free) arm oscillated in the same way as during normal walking. Interestingly, EMG activity of the shoulder muscles of the contact arm did not decrease, and their synergistic pattern remained similar. The amplitude of interaction forces and of trunk oscillations was similar for synchronized and non-synchronized steps, though the synchronized steps were characterized by significantly more regular orientations of interaction forces. Our results further support the notion that gait synchronization during natural walking is common, and that it may occur through interaction forces. Conservation of the proximal muscle activity of the contact (not oscillating) arm is consistent with neural coupling between cervical and lumbosacral pattern generation circuitries ("quadrupedal" arm-leg coordination) during human gait. Overall, the findings suggest that individuals might integrate force interaction cues to communicate and coordinate steps during walking. PMID:29563883

  14. New coordination features; A bridging pyridine and the forced shortest non-covalent distance between two CO 3 2- species

    DOE PAGES

    Velasco, V.; Aguilà, D.; Barrios, L. A.; ...

    2014-09-29

The aerobic reaction of the multidentate ligand 2,6-bis-(3-oxo-3-(2-hydroxyphenyl)-propionyl)-pyridine, H 4L, with Co(II) salts under strongly basic conditions produces the clusters [Co 4(L) 2(OH)(py) 7]NO 3 (1) and [Co 8Na 4(L) 4(OH) 2(CO 3) 2(py) 10](BF 4) 2 (2). Analysis of their structure unveils unusual coordination features, including a very rare bridging pyridine ligand and two trapped carbonate anions within one coordination cage, forced to stay at an extremely close distance (d O···O = 1.946 Å). This unprecedented non-bonding proximity represents a meeting point between long covalent interactions and “intermolecular” contacts. These original motifs have been analysed here through DFT calculations, which have yielded interaction energies and the reduced repulsion energy experienced by both CO 3 2- anions when located in close proximity inside the coordination cage.

  15. A comparison of force fields and calculation methods for vibration intervals of isotopic H3(+) molecules

    NASA Astrophysics Data System (ADS)

    Carney, G. D.; Adler-Golden, S. M.; Lesseski, D. C.

    1986-04-01

    This paper reports (1) improved values for low-lying vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) calculated using the variational method and Simons-Parr-Finlan (1973) representations of the Carney-Porter (1976) and Dykstra-Swope (1979) ab initio H3(+) potential energy surfaces, (2) quartic normal coordinate force fields for isotopic H3(+) molecules, (3) comparisons of variational and second-order perturbation theory, and (4) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) for these potential surfaces are 6.9 (Carney-Porter) and 1.2/cm (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10/cm for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H2D(+).

  16. Stability of steady hand force production explored across spaces and methods of analysis.

    PubMed

    de Freitas, Paulo B; Freitas, Sandra M S F; Lewis, Mechelle M; Huang, Xuemei; Latash, Mark L

    2018-06-01

    We used the framework of the uncontrolled manifold (UCM) hypothesis and explored the reliability of several outcome variables across different spaces of analysis during a very simple four-finger accurate force production task. Fourteen healthy, young adults performed the accurate force production task with each hand on 3 days. Small spatial finger perturbations were generated by the "inverse piano" device three times per trial (lifting the fingers 1 cm/0.5 s and lowering them). The data were analyzed using the following main methods: (1) computation of indices of the structure of inter-trial variance and motor equivalence in the space of finger forces and finger modes, and (2) analysis of referent coordinates and apparent stiffness values for the hand. Maximal voluntary force and the index of enslaving (unintentional finger force production) showed good to excellent reliability. Strong synergies stabilizing total force were reflected in both structure of variance and motor equivalence indices. Variance within the UCM and the index of motor equivalent motion dropped over the trial duration and showed good to excellent reliability. Variance orthogonal to the UCM and the index of non-motor equivalent motion dropped over the 3 days and showed poor to moderate reliability. Referent coordinate and apparent stiffness indices co-varied strongly and both showed good reliability. In contrast, the computed index of force stabilization showed poor reliability. The findings are interpreted within the scheme of neural control with referent coordinates involving the hierarchy of two basic commands, the r-command and c-command. The data suggest natural drifts in the finger force space, particularly within the UCM. We interpret these drifts as reflections of a trade-off between stability and optimization of action. The implications of these findings for the UCM framework and future clinical applications are explored in the discussion. 
Indices of the structure of variance and motor equivalence show good reliability and can be recommended for applied studies.
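The UCM variance decomposition used above can be sketched numerically. This is an illustrative computation on synthetic data, not the study's analysis pipeline: for a four-finger total-force task, trial-to-trial deviations along (1,1,1,1)/2 change the total force ("orthogonal" variance), while deviations in its null space leave the total unchanged ("UCM" variance).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trials (illustrative numbers): large mutually compensated finger
# variation plus a small total-force error, mimicking a force-stabilizing synergy.
n_trials, n_fingers, target = 50, 4, 20.0
compensated = rng.normal(0.0, 1.0, (n_trials, n_fingers))
compensated -= compensated.mean(axis=1, keepdims=True)   # leaves the force sum unchanged
sum_error = rng.normal(0.0, 0.2, (n_trials, 1))          # small total-force error
forces = target / n_fingers + compensated + sum_error / n_fingers

demeaned = forces - forces.mean(axis=0)
e_ort = np.ones(n_fingers) / np.sqrt(n_fingers)          # direction that changes the sum
proj = demeaned @ e_ort
v_ort = np.mean(proj**2)                                 # variance per d.o.f., 1 dimension
v_ucm = np.mean((demeaned**2).sum(axis=1) - proj**2) / (n_fingers - 1)
delta_v = (v_ucm - v_ort) / ((v_ucm * (n_fingers - 1) + v_ort) / n_fingers)
```

A positive `delta_v` (V_UCM exceeding V_ORT per degree of freedom) is the signature of a total-force-stabilizing synergy.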

  17. Chaos as an intermittently forced linear system.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
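The delay-embedding step of this kind of analysis can be sketched as follows. This is a minimal illustration, not the authors' HAVOK code: a logistic-map signal stands in for Lorenz data, and the SVD of a Hankel matrix yields the delay coordinates on which the linear model would be regressed.

```python
import numpy as np

def hankel_delay_coordinates(x, q, r):
    """Stack q delayed copies of x into a Hankel matrix and return the
    leading r delay coordinates (rows of Vh scaled by singular values)."""
    n = len(x) - q + 1
    H = np.column_stack([x[i:i + n] for i in range(q)]).T  # shape (q, n)
    U, S, Vh = np.linalg.svd(H, full_matrices=False)
    return S[:r, None] * Vh[:r]                            # r coordinates over time

# Toy chaotic signal (logistic map) as a stand-in for a Lorenz trajectory
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])

V = hankel_delay_coordinates(x, q=100, r=15)
```

In a HAVOK-style analysis, one would then fit a linear model on the first r-1 coordinates and treat the last, low-energy coordinate as the intermittent forcing term.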

  18. The Air Force Interactive Meteorological System: A Research Tool for Satellite Meteorology

    DTIC Science & Technology

    1992-12-02

NFARnet itself is a subnet to the global computer network INTERNET that links nearly all U.S. government research facilities and universities along...required input to a generalized mathematical solution to the satellite/earth coordinate transform used for earth location of GOES sensor data. A direct...capability also exists to convert absolute coordinates to relative coordinates for transformations associated with gridded fields. 3. Spatial objective

  19. Learning and coordinating in a multilayer network

    PubMed Central

    Lugo, Haydée; Miguel, Maxi San

    2015-01-01

We introduce a two layer network model for social coordination incorporating two relevant ingredients: a) different networks of interaction to learn and to obtain a pay-off, and b) decision making processes based both on social and strategic motivations. Two populations of agents are distributed in two layers with intralayer learning processes, playing an interlayer coordination game. We find that skepticism about the wisdom of the crowd and local connectivity are the driving forces to accomplish full coordination of the two populations, while polarized coordinated layers are only possible for all-to-all interactions. Local interactions also allow for full coordination in the socially efficient Pareto-dominant strategy in spite of being the riskier one. PMID:25585934
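The interlayer coordination game at the core of this model can be sketched with a best-response rule. The payoff matrix below is illustrative, not taken from the paper: agents earn a positive payoff only when their action matches the other layer's, and one action is Pareto-dominant.

```python
import numpy as np

# Illustrative 2x2 pure coordination game: matching pays, mismatching pays 0,
# and action 0 is the Pareto-dominant (higher-paying) strategy.
payoff = np.array([[2.0, 0.0],   # rows: own action; columns: other layer's action
                   [0.0, 1.0]])

def best_response(other_layer_freq):
    """Best reply against the distribution of actions in the other layer."""
    expected = payoff @ np.asarray(other_layer_freq, dtype=float)
    return int(np.argmax(expected))
```

With these numbers, action 0 remains the best reply until the other layer plays action 1 with frequency above 2/3, illustrating how the risk/efficiency trade-off depends on what the other population does.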

  20. Joint Space Forces in Theater: Coordination is No Longer Sufficient

    DTIC Science & Technology

    2007-04-01

importantly, I am indebted to my wife, Caroline, and twin boys Joshua and Justin for their encouragement and constant reminders about what is truly...of GPS guided JDAMs fundamentally changed the American way of war. Lieutenant General Daniel Leaf, Air Component Coordination Element Commander

  1. 32 CFR 903.6 - Reassignment of Air Force members to become cadet candidates at the preparatory school.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... members at technical training schools remain there in casual status until the earliest reporting date for the HQ USAFA/PL. Students must not leave their training school without coordinating with HQ USAFA/RR. ... (Continued) DEPARTMENT OF THE AIR FORCE MILITARY TRAINING AND SCHOOLS AIR FORCE ACADEMY PREPARATORY SCHOOL...

  2. Reliability of Metrics Associated with a Counter-Movement Jump Performed on a Force Plate

    ERIC Educational Resources Information Center

    Lombard, Wayne; Reid, Sorrel; Pearson, Keagan; Lambert, Michael

    2017-01-01

    The counter-movement jump is a consequence of maximal force, rate of force developed, and neuromuscular coordination. Thus, the counter-movement jump has been used to monitor various training adaptations. However, the smallest detectable difference of counter-movement jump metrics has yet to be established. The objective of the present study was…

  3. Position And Force Control For Multiple-Arm Robots

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.

    1988-01-01

    Number of arms increased without introducing undue complexity. Strategy and computer architecture developed for simultaneous control of positions of number of robot arms manipulating same object and of forces and torques that arms exert on object. Scheme enables coordinated manipulation of object, causing it to move along assigned trajectory and be subjected to assigned internal forces and torques.

  4. Report of the Task Force on Continuing Education and Non-Credit Instruction.

    ERIC Educational Resources Information Center

    Ernest, Richard J.; And Others

    The Task Force on Continuing Education and Non-Credit Instruction was appointed to develop specific strategies for expanding lifelong learning and non-credit instruction in the Virginia community colleges. The task force reviewed a report on the state funding of non-credit instruction; wrote to the community college coordinating offices in 17…

  5. Comparison of radiated noise from shrouded and unshrouded propellers

    NASA Technical Reports Server (NTRS)

    Eversman, Walter

    1992-01-01

The ducted propeller in a free field is modeled using the finite element method. The generation, propagation, and radiation of sound from a ducted fan are described by the convected wave equation with volumetric body forces. Body forces are used to introduce the blade loading for rotating blades and stationary exit guide vanes. For an axisymmetric nacelle or shroud, the problem is formulated in cylindrical coordinates. For a specified angular harmonic, the angular coordinate is eliminated, resulting in a two-dimensional representation. A finite element discretization based on nine-node quadratic isoparametric elements is used.

  6. Understanding chemical binding using the Berlin function and the reaction force

    NASA Astrophysics Data System (ADS)

    Chakraborty, Debajit; Cárdenas, Carlos; Echegaray, Eleonora; Toro-Labbe, Alejandro; Ayers, Paul W.

    2012-06-01

    We use the derivative of the electron density with respect to the reaction coordinate, interpreted through the Berlin binding function, to identify portions of the reaction path where chemical bonds are breaking and forming. The results agree with the conventional description for SN2 reactions, but they are much more general and can be used to elucidate other types of reactions also. Our analysis offers support for, and detailed information about, the use of the reaction force profile to separate the reaction coordinates into intervals, each with characteristic extents of geometry change and electronic rearrangement.

  7. On the Use of Quartic Force Fields in Variational Calculations

    NASA Technical Reports Server (NTRS)

    Fortenberry, Ryan C.; Huang, Xinchuan; Yachmenev, Andrey; Thiel, Walter; Lee, Timothy J.

    2013-01-01

    The use of quartic force fields (QFFs) has been shown to be one of the most effective ways to efficiently compute vibrational frequencies for small molecules. In this paper we outline and discuss how the simple-internal or bond-length bond-angle (BLBA) coordinates can be transformed into Morse-cosine(-sine) coordinates which produce potential energy surfaces from QFFs that possess proper limiting behavior and can effectively describe the vibrational (or rovibrational) energy levels of an arbitrary molecular system. We investigate parameter scaling in the Morse coordinate, symmetry considerations, and examples of transformed QFFs making use of the MULTIMODE, TROVE, and VTET variational vibrational methods. Cases are referenced where variational computations coupled with transformed QFFs produce accuracies compared to experiment for fundamental frequencies on the order of 5 cm(exp -1) and often as good as 1 cm(exp -1).
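The Morse-type coordinate transform discussed above is commonly written y = 1 − exp(−a(r − r_e)); the sketch below uses that standard textbook form with illustrative parameter values, not the paper's fitted QFF parameters. The point is the limiting behavior: a polynomial expansion in y stays bounded as the bond dissociates, unlike one in the raw displacement (r − r_e).

```python
import numpy as np

def morse_coordinate(r, r_e, a):
    """Standard Morse coordinate y = 1 - exp(-a (r - r_e)):
    y = 0 at the equilibrium bond length r_e, and y -> 1 as r -> infinity,
    so a quartic force field in y has a proper dissociation limit."""
    return 1.0 - np.exp(-a * (r - r_e))

# Compare behavior over a stretch of bond lengths (values illustrative)
r = np.linspace(0.5, 10.0, 200)
y = morse_coordinate(r, r_e=1.0, a=1.5)
```

The `a` parameter sets how quickly y saturates, which is the "parameter scaling" knob one would tune when transforming a bond-length coordinate.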

  8. A General Pressure Gradient Formulation for Ocean Models, Part 1: Scheme Design and Diagnostic Analysis, Part II: Energy, Momentum, and Bottom Torque Consistency

    NASA Technical Reports Server (NTRS)

    Song, Y. T.

    1998-01-01

    A Jacobian formulation of the pressure gradient force for use in models with topography following coordinates is proposed. It can be used in conjunction with any vertical coordinate system and is easily implemented.

  9. Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example

    NASA Technical Reports Server (NTRS)

    Smart, D. F.; Shea, M. A.

    2001-01-01

    The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of the cosmic radiation. The early cosmic ray measurements found that cosmic ray intensity was ordered by the magnetic latitude and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. This requires that many of the trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out the cutoff rigidity was not a simple quantity but had many unanticipated complexities that required many hundreds if not thousands of individual trajectory calculations to solve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report is documentation of a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
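The "brute force" technique described above, numerical integration of a single charged-particle trajectory through a magnetic field model, can be sketched as follows. This is a toy illustration, not the report's FORTRAN program: the dipole moment, initial conditions, and step size are illustrative values, and a simple RK4 integrator replaces the report's tracing machinery. Since the magnetic force does no work, the particle's speed should be conserved, which makes a convenient accuracy check.

```python
import numpy as np

def dipole_B(r, m=np.array([0.0, 0.0, -8.0e15])):
    """Magnetic field of a dipole with moment m (mu0/4pi folded in; illustrative units)."""
    rn = np.linalg.norm(r)
    return 3.0 * r * np.dot(m, r) / rn**5 - m / rn**3

def lorentz_rhs(state, q_over_m):
    """Time derivative of [position, velocity] under the magnetic Lorentz force."""
    r, v = state[:3], state[3:]
    a = q_over_m * np.cross(v, dipole_B(r))
    return np.concatenate([v, a])

def rk4_step(state, dt, q_over_m):
    k1 = lorentz_rhs(state, q_over_m)
    k2 = lorentz_rhs(state + 0.5 * dt * k1, q_over_m)
    k3 = lorentz_rhs(state + 0.5 * dt * k2, q_over_m)
    k4 = lorentz_rhs(state + dt * k3, q_over_m)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Proton-like test particle started outside an Earth-sized dipole (toy values)
state = np.concatenate([np.array([2.0e7, 0.0, 0.0]),    # position, m
                        np.array([0.0, 1.0e7, 0.0])])   # velocity, m/s
q_over_m = 9.58e7                                       # C/kg, proton
for _ in range(1000):
    state = rk4_step(state, 1.0e-4, q_over_m)
speed = np.linalg.norm(state[3:])
```

A cutoff-rigidity calculation repeats this kind of tracing for many rigidities and arrival directions, classifying each trajectory as allowed or forbidden.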

  10. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. 
Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.

  11. Finger force changes in the absence of visual feedback in patients with Parkinson’s disease

    PubMed Central

    Jo, Hang Jin; Ambike, Satyajit; Lewis, Mechelle M.; Huang, Xuemei; Latash, Mark L.

    2015-01-01

    Objectives We investigated the unintentional drift in total force and in sharing of the force between fingers in two-finger accurate force production tasks performed without visual feedback by patients with Parkinson’s disease (PD) and healthy controls. In particular, we were testing a hypothesis that adaptation to the documented loss of action stability could lead to faster force drop in PD. Methods PD patients and healthy controls performed accurate constant force production tasks without visual feedback by different finger pairs, starting with different force levels and different sharing patterns of force between the two fingers. Results Both groups showed an exponential force drop with time and a drift of the sharing pattern towards 50:50. The PD group showed a significantly faster force drop without a change in speed of the sharing drift. These results were consistent across initial force levels, sharing patterns, and finger pairs. A pilot test of four subjects, two PD and two controls, showed no consistent effects of memory on the force drop. Conclusions We interpret the force drop as a consequence of back-coupling between the actual and referent finger coordinates that draws the referent coordinate towards the actual one. The faster force drop in the PD group is interpreted as adaptive to the loss of action stability in PD. The lack of group differences in the sharing drift suggests two potentially independent physiological mechanisms contributing to the force and sharing drifts. Significance The hypothesis on adaptive changes in PD with the purpose to ensure stability of steady states may have important implications for treatment of PD. The speed of force drop may turn into a useful tool to quantify such adaptive changes. PMID:26072437

  12. Air Force Health Study. An Epidemiologic Investigation of Health Effects in Air Force Personnel Following Exposure to Herbicides. Volume 1

    DTIC Science & Technology

    1991-03-01

found to be significantly associated with coordination and a central nervous system index, but cranial nerve function and peripheral nerve status...AD-A237 516. Air Force Health Study: An Epidemiologic Investigation of Health Effects in Air Force Personnel Following Exposure to Herbicides. SAIC...Smeda. SCIENCE APPLICATIONS INTERNATIONAL CORPORATION, 8400 Westpark Drive; EPIDEMIOLOGY RESEARCH DIVISION, ARMSTRONG LABORATORY, HUMAN SYSTEMS DIVISION

  13. Vibrational quasi-degenerate perturbation theory with optimized coordinates: applications to ethylene and trans-1,3-butadiene.

    PubMed

    Yagi, Kiyoshi; Otaki, Hiroki

    2014-02-28

    A perturbative extension to optimized coordinate vibrational self-consistent field (oc-VSCF) is proposed based on the quasi-degenerate perturbation theory (QDPT). A scheme to construct the degenerate space (P space) is developed, which incorporates degenerate configurations and alleviates the divergence of perturbative expansion due to localized coordinates in oc-VSCF (e.g., local O-H stretching modes of water). An efficient configuration selection scheme is also implemented, which screens out the Hamiltonian matrix element between the P space configuration (p) and the complementary Q space configuration (q) based on a difference in their quantum numbers (λpq = ∑s|ps - qs|). It is demonstrated that the second-order vibrational QDPT based on optimized coordinates (oc-VQDPT2) smoothly converges with respect to the order of the mode coupling, and outperforms the conventional one based on normal coordinates. Furthermore, an improved, fast algorithm is developed for optimizing the coordinates. First, the minimization of the VSCF energy is conducted in a restricted parameter space, in which only a portion of pairs of coordinates is selectively transformed. A rational index is devised for this purpose, which identifies the important coordinate pairs to mix from others that may remain unchanged based on the magnitude of harmonic coupling induced by the transformation. Second, a cubic force field (CFF) is employed in place of a quartic force field, which bypasses intensive procedures that arise due to the presence of the fourth-order force constants. It is found that oc-VSCF based on CFF together with the pair selection scheme yields the coordinates similar in character to the conventional ones such that the final vibrational energy is affected very little while gaining an order of magnitude acceleration. The proposed method is applied to ethylene and trans-1,3-butadiene. 
An accurate, multi-resolution potential, which combines the MP2 and coupled-cluster with singles, doubles, and perturbative triples level of electronic structure theory, is generated and employed in the oc-VQDPT2 calculation to obtain the fundamental tones as well as selected overtones/combination tones coupled to the fundamentals through the Fermi resonance. The calculated frequencies of ethylene and trans-1,3-butadiene are found to be in excellent agreement with the experimental values with a mean absolute error of 8 and 9 cm(-1), respectively.
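The configuration-screening index described in this abstract, λpq = Σs |ps − qs|, is simple enough to sketch directly. This is an illustrative helper under the assumption that configurations are tuples of vibrational quantum numbers; the threshold value is a hypothetical parameter, not one from the paper.

```python
def screen(p, q, lam_max):
    """Keep the Hamiltonian matrix element between configurations p and q
    only if lambda_pq = sum_s |p_s - q_s| does not exceed lam_max."""
    lam = sum(abs(ps - qs) for ps, qs in zip(p, q))
    return lam <= lam_max
```

Elements between configurations differing in many quantum numbers (large λpq) are typically small, so screening them out shrinks the Q-space work with little loss of accuracy.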

  14. 26 CFR 301.7654-1 - Coordination of U.S. and Guam individual income taxes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... taxpayer shall be determined by taking into account any compensation of any member of the Armed Forces for... any member of the Armed Forces described in paragraph (a)(2) of this section which is paid to Guam... income of members of the Armed Forces shall not be taken into account. For purposes of this subparagraph...

  15. 26 CFR 301.7654-1 - Coordination of U.S. and Guam individual income taxes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Armed Forces of the United States, the special procedure agreed upon with the Department of Defense in... taxpayer shall be determined by taking into account any compensation of any member of the Armed Forces for... any member of the Armed Forces described in paragraph (a)(2) of this section which is paid to Guam...

  16. Adaptation of multijoint coordination during standing balance in healthy young and healthy old individuals

    PubMed Central

    Pasma, J. H.; Schouten, A. C.; Aarts, R. G. K. M.; Meskers, C. G. M.; Maier, A. B.; van der Kooij, H.

    2015-01-01

    Standing balance requires multijoint coordination between the ankles and hips. We investigated how humans adapt their multijoint coordination to adjust to various conditions and whether the adaptation differed between healthy young participants and healthy elderly. Balance was disturbed by push/pull rods, applying two continuous and independent force disturbances at the level of the hip and between the shoulder blades. In addition, external force fields were applied, represented by an external stiffness at the hip, either stabilizing or destabilizing the participants' balance. Multivariate closed-loop system-identification techniques were used to describe the neuromuscular control mechanisms by quantifying the corrective joint torques as a response to body sway, represented by frequency response functions (FRFs). Model fits on the FRFs resulted in an estimation of time delays, intrinsic stiffness, reflexive stiffness, and reflexive damping of both the ankle and hip joint. The elderly generated similar corrective joint torques but had reduced body sway compared with the young participants, corresponding to the increased FRF magnitude with age. When a stabilizing or destabilizing external force field was applied at the hip, both young and elderly participants adapted their multijoint coordination by lowering or respectively increasing their neuromuscular control actions around the ankles, expressed in a change of FRF magnitude. However, the elderly adapted less compared with the young participants. Model fits on the FRFs showed that elderly had higher intrinsic and reflexive stiffness of the ankle, together with higher time delays of the hip. Furthermore, the elderly adapted their reflexive stiffness around the ankle joint less compared with young participants. These results imply that elderly were stiffer and were less able to adapt to external force fields. PMID:26719084

  17. Crossover from layering to island formation in Langmuir-Blodgett growth: Role of long-range intermolecular forces

    NASA Astrophysics Data System (ADS)

    Mukherjee, Smita; Datta, Alokmay

    2011-04-01

    Combined studies by atomic force microscopy, x-ray reflectivity, and Fourier transform infrared spectroscopy on transition-metal stearate (M-St, M = Mn, Co, Zn, and Cd) Langmuir-Blodgett films clearly indicate association of bidentate coordination of the metal-carboxylate head group to layer-by-layer growth as observed in MnSt and CoSt and partially in ZnSt. Crossover to islandlike growth, as observed in CdSt and ZnSt, is associated with the presence of unidentate coordination in the head group. Morphological evolutions as obtained from one, three, and nine monolayers (MLs) of M-St films are consistent with Frank van der Merwe, Stranski-Krastanov, and Volmer Weber growth modes for M=Mn/Co, Zn, and Cd, respectively, as previously assigned, and are found to vary with number (n) of metal atoms per head group, viz. n=1 (Mn/Co), n=0.75 (Zn), and n=0.5 (Cd). The parameter n is found to decide head-group coordination such that n=1.0 corresponds to bidentate and n=0.5 corresponds to unidentate coordination; the intermediate value in Zn corresponds to a mixture of both. The dependence of the growth mode on head-group structure is explained by the fact that in bidentate head groups, with the in-plane dipole moment being zero, intermolecular forces between adjacent molecules are absent and hence growth proceeds via layering. On the other hand, in unidentate head groups, the existence of a nonzero in-plane dipole moment results in the development of weak in-plane intermolecular forces between adjacent molecules causing in-plane clustering leading to islandlike growth.

  18. Complexation Enhancement Drives Water-to-Oil Ion Transport: A Simulation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Baofu; Ferru, Geoffroy; Ellis, Ross J.

We address the structures and energetics of ion solvation in aqueous and organic solutions to understand liquid-liquid ion transport. Atomistic molecular dynamics (MD) simulations with a polarizable force field are performed to study the coordination transformations driving lanthanide (Ln(III)) and nitrate ion transport between aqueous and an alkylamide-oil solution. An enhancement of the coordination behavior in the organic phase is achieved in contrast with the aqueous solution. In particular, the coordination number of Ce3+ increases from 8.9 in the aqueous to 9.9 in the organic solutions (from 8 in the aqueous to 8.8 in the organic systems for Yb3+). Moreover, the local coordination environment changes dramatically. Potential of mean force calculations show that the Ln(III)-ligand coordination interaction strengths follow the order Ln(III)-nitrate > Ln(III)-water > Ln(III)-DMDBTDMA. They increase 2-fold in the lipophilic environment in comparison to the aqueous phase, and we attribute this to the shedding of the outer solvation shell. Our findings highlight the importance of outer sphere interactions on the competitive solvation energetics that cause ions to migrate between immiscible phases; an essential ingredient for advancing important applications such as rare earth metal separations. Some open questions in simulating the coordination behavior of heavy metals are also addressed.

  19. The lanthanide contraction beyond coordination chemistry

    DOE PAGES

    Ferru, Geoffroy; Reinhart, Benjamin; Bera, Mrinal K.; ...

    2016-04-06

    Lanthanide chemistry is dominated by the ‘lanthanide contraction’, which is conceptualized traditionally through coordination chemistry. Here we break this mold, presenting evidence that the lanthanide contraction manifests outside of the coordination sphere, influencing weak interactions between groups of molecules that drive mesoscale-assembly and emergent behavior in an amphiphile solution. Furthermore, changes in these weak interactions correlate with differences in lanthanide ion transport properties, suggesting new forces to leverage rare earth separation and refining. Our results show that the lanthanide contraction paradigm extends beyond the coordination sphere, influencing structure and properties usually associated with soft matter science.

  20. The lanthanide contraction beyond coordination chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferru, Geoffroy; Reinhart, Benjamin; Bera, Mrinal K.

    Lanthanide chemistry is dominated by the ‘lanthanide contraction’, which is conceptualized traditionally through coordination chemistry. Here we break this mold, presenting evidence that the lanthanide contraction manifests outside of the coordination sphere, influencing weak interactions between groups of molecules that drive mesoscale-assembly and emergent behavior in an amphiphile solution. Furthermore, changes in these weak interactions correlate with differences in lanthanide ion transport properties, suggesting new forces to leverage rare earth separation and refining. Our results show that the lanthanide contraction paradigm extends beyond the coordination sphere, influencing structure and properties usually associated with soft matter science.

  1. Tailoring Programs for Better Fit. The Key to Coordination.

    ERIC Educational Resources Information Center

    Ferrero, Lee

    1994-01-01

    The most serious problem with the current work force preparation system is that many employment and training programs operate today to serve roughly the same people. Instead, these programs should be coordinated better to lower costs in the face of lowered funding. The General Accounting Office reports that about 125 federal programs do…

  2. Information Literacy: A Story of Collaboration and Cooperation between the Writing Program Coordinator and Colleagues 2003-2010

    ERIC Educational Resources Information Center

    Corso, Gail S.; Weiss, Sandra; McGregor, Tiffany

    2010-01-01

    This narrative describes collaboration among librarians, writing program coordinator, and professors on an information literacy task force. Their attempts to infuse the University's curriculum with information literacy are described. Authors define the term, explain its history with three professional organizations, and describe processes for…

  3. Overcoming free energy barriers using unconstrained molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hénin, Jérôme; Chipot, Christophe

    2004-08-01

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method.
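
    The adaptive biasing force (ABF) scheme described above can be sketched in a few lines: accumulate the running mean of the force along the reaction coordinate in bins, and apply its negative as a bias so the system diffuses across free energy barriers. The toy version below (our illustration, not the authors' code) uses a 1D overdamped Langevin walker in a double-well potential standing in for ξ; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Double-well potential U(x) = (x^2 - 1)^2; wells at x = ±1, barrier at x = 0
    return 4.0 * x * (x**2 - 1.0)

# ABF bookkeeping: accumulate the instantaneous force along the reaction
# coordinate in bins, then apply the negated running mean as a bias.
bins = np.linspace(-2.0, 2.0, 41)
force_sum = np.zeros(len(bins) - 1)
count = np.zeros(len(bins) - 1)

x, dt, gamma, kT = -1.0, 1e-3, 1.0, 1.0
for _ in range(100_000):
    f = -grad_U(x)
    i = int(np.clip(np.digitize(x, bins) - 1, 0, len(count) - 1))
    force_sum[i] += f
    count[i] += 1
    bias = -force_sum[i] / max(count[i], 1)   # cancels the estimated mean force
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
    x += (f + bias) * dt / gamma + noise

# With the adaptive bias the walker should have sampled both wells.
print(f"samples near left well: {count[5:15].sum():.0f}, "
      f"near right well: {count[25:35].sum():.0f}")
```

    The bias converges to minus the free energy derivative in each bin, so the walker's motion along ξ becomes nearly diffusive, as the abstract describes.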

  4. Overcoming free energy barriers using unconstrained molecular dynamics simulations.

    PubMed

    Hénin, Jérôme; Chipot, Christophe

    2004-08-15

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate xi is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of xi is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method. (c) 2004 American Institute of Physics.

  5. New nonlinear control algorithms for multiple robot arms

    NASA Technical Reports Server (NTRS)

    Tarn, T. J.; Bejczy, A. K.; Yun, X.

    1988-01-01

    Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. The linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that by choosing a general output equation it became possible simultaneously to superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
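
    The feedback-linearization idea behind these controllers can be illustrated on the simplest possible plant. The sketch below (our one-link cartoon, not the paper's coupled multi-arm model) applies a computed-torque law to a pendulum: the input cancels the nonlinear gravity term, leaving a linear double integrator that an ordinary PD law stabilizes.

```python
import numpy as np

# Plant: M*qdd + c(q, qd) = u for a single pendulum (mass, lengths arbitrary).
M, g, L = 1.0, 9.81, 1.0

def c(q, qd):
    return M * g * L * np.sin(q)          # gravity torque (no damping)

q, qd, q_ref = 2.0, 0.0, 0.5
kp, kd, dt = 25.0, 10.0, 1e-3
for _ in range(10_000):
    v = -kp * (q - q_ref) - kd * qd       # linear PD law on the new input v
    u = M * v + c(q, qd)                  # feedback linearization: cancel c(q, qd)
    qdd = (u - c(q, qd)) / M              # plant dynamics; closed loop: qdd = v
    qd += qdd * dt
    q += qd * dt

print(f"final angle {q:.4f} rad (target {q_ref})")
```

    With the nonlinearity cancelled exactly, gains can be chosen with linear control theory, which is the step the paper carries out in the task space for the coupled arms.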

  6. Coordinated gripping of substrate by subunits of a AAA+ proteolytic machine

    PubMed Central

    Iosefson, Ohad; Nager, Andrew R.; Baker, Tania A.; Sauer, Robert T.

    2014-01-01

    Hexameric AAA+ unfoldases of ATP-dependent proteases and protein-remodeling machines use conserved loops that line the axial pore to apply force to substrates during the mechanical processes of protein unfolding and translocation. Whether loops from multiple subunits act independently or coordinately in these processes is a critical aspect of mechanism but is currently unknown for any AAA+ machine. By studying covalently linked hexamers of the E. coli ClpX unfoldase bearing different numbers and configurations of wild-type and mutant pore loops, we show that loops function synergistically, with the number of wild-type loops required for efficient degradation depending upon the stability of the protein substrate. Our results support a mechanism in which a power stroke initiated in one subunit of the ClpX hexamer results in the concurrent movement of all six pore loops, which coordinately grip and apply force to the substrate. PMID:25599533

  7. The Motor and the Brake of the Trailing Leg in Human Walking: Leg Force Control Through Ankle Modulation and Knee Covariance

    PubMed Central

    Toney, Megan E.; Chang, Young-Hui

    2016-01-01

    Human walking is a complex task, and we lack a complete understanding of how the neuromuscular system organizes its numerous muscles and joints to achieve consistent and efficient walking mechanics. Focused control of select influential task-level variables may simplify the higher-level control of steady state walking and reduce demand on the neuromuscular system. As trailing leg power generation and force application can affect the mechanical efficiency of step-to-step transitions, we investigated how joint torques are organized to control leg force and leg power during human walking. We tested whether timing of trailing leg force control corresponded with timing of peak leg power generation. We also applied a modified uncontrolled manifold analysis to test whether individual or coordinated joint torque strategies most contributed to leg force control. We found that leg force magnitude was adjusted from step-to-step to maintain consistent leg power generation. Leg force modulation was primarily determined by adjustments in the timing of peak ankle plantar-flexion torque, while knee torque was simultaneously covaried to dampen the effect of ankle torque on leg force. We propose a coordinated joint torque control strategy in which the trailing leg ankle acts as a motor to drive leg power production while trailing leg knee torque acts as a brake to refine leg power production. PMID:27334888

  8. Quantum mechanics in noninertial reference frames: Relativistic accelerations and fictitious forces

    NASA Astrophysics Data System (ADS)

    Klink, W. H.; Wickramasekara, S.

    2016-06-01

    One-particle systems in relativistically accelerating reference frames can be associated with a class of unitary representations of the group of arbitrary coordinate transformations, an extension of the Wigner-Bargmann definition of particles as the physical realization of unitary irreducible representations of the Poincaré group. Representations of the group of arbitrary coordinate transformations become necessary to define unitary operators implementing relativistic acceleration transformations in quantum theory because, unlike in the Galilean case, the relativistic acceleration transformations do not themselves form a group. The momentum operators that follow from these representations show how the fictitious forces in noninertial reference frames are generated in quantum theory.

  9. A Force-Velocity Relationship and Coordination Patterns in Overarm Throwing

    PubMed Central

    van den Tillaar, Roland; Ettema, Gertjan

    2004-01-01

    A force-velocity relationship in overarm throwing was determined using ball weights varying from 0.2 to 0.8 kg. Seven experienced handball players were filmed at 240 frames per second. Velocity of joints of the upper extremity and ball together with the force on the ball were derived from the data. A statistically significant negative relationship between force and maximal ball velocity, as well as between ball weight and maximal ball velocity was observed. Also, with increase of ball weight the total throwing movement time increased. No significant change in relative timing of the different joints was demonstrated, suggesting that the subjects did not change their “global” coordination pattern (kinematics) within the tested range of ball weights. A simple model revealed that 67% of ball velocity at ball release was explained by the summation of effects from the velocity of elbow extension and internal rotation of the shoulder. With regard to the upper extremity the internal rotation of the shoulder and elbow extension are two important contributors to the total ball velocity at release. Key Points An inverse relationship between load and velocity and a linear force-velocity relationship exist in overarm throwing with ball weights varying from 0.2 to 0.8 kg. Qualitatively, no changes in coordination pattern (relative timing) occur with increasing ball weight within the tested range of ball weights. The absolute throwing movement time increased with ball weight. Quantitatively, with regard to the upper extremity, the internal rotation of the shoulder and elbow extension are two important contributors to the total ball velocity at release. PMID:24624005
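
    Determining such a force-velocity relationship amounts to a straight-line fit. A minimal sketch with made-up numbers (illustrative only, not the paper's data), showing the reported negative slope:

```python
import numpy as np

# Illustrative (synthetic) data: peak force on the ball (N) and release
# velocity (m/s) for increasing ball weights, mimicking the reported
# negative force-velocity relationship.
force = np.array([28.0, 41.0, 55.0, 68.0, 80.0])      # heavier balls: more force
velocity = np.array([21.5, 19.8, 18.4, 17.1, 15.9])   # ...but lower release speed

# Least-squares line v = a*F + b and the correlation coefficient
a, b = np.polyfit(force, velocity, 1)
r = np.corrcoef(force, velocity)[0, 1]
print(f"slope = {a:.3f} (m/s)/N, r = {r:.2f}")
```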

  10. Interpolation schemes for peptide rearrangements.

    PubMed

    Bauer, Marianne S; Strodel, Birgit; Fejer, Szilard N; Koslover, Elena F; Wales, David J

    2010-02-07

    A variety of methods (in total seven) comprising different combinations of internal and Cartesian coordinates are tested for interpolation and alignment in connection attempts for polypeptide rearrangements. We consider Cartesian coordinates, the internal coordinates used in CHARMM, and natural internal coordinates, each of which has been interfaced to the OPTIM code and compared with the corresponding results for united-atom force fields. We show that aligning the methylene hydrogens to preserve the sign of a local dihedral angle, rather than minimizing a distance metric, provides significant improvements with respect to connection times and failures. We also demonstrate the superiority of natural coordinate methods in conjunction with internal alignment. Checking the potential energy of the interpolated structures can act as a criterion for the choice of the interpolation coordinate system, which reduces failures and connection times significantly.
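
    The advantage of interpolating in internal rather than Cartesian coordinates can be seen in a toy case: for an atom rotating about a bond, linear Cartesian interpolation cuts across the chord and shrinks the bond length, while interpolating the angle preserves it. A minimal illustration (ours, not taken from the paper):

```python
import numpy as np

r = 1.5                                    # bond length (arbitrary units)
th0, th1 = 0.0, np.radians(150.0)          # rotation of 150 degrees about the bond

def cartesian_interp(t):
    # Straight-line interpolation between the two Cartesian endpoints
    p0 = r * np.array([np.cos(th0), np.sin(th0)])
    p1 = r * np.array([np.cos(th1), np.sin(th1)])
    return (1 - t) * p0 + t * p1

def internal_interp(t):
    # Interpolate the internal coordinate (the angle) instead
    th = (1 - t) * th0 + t * th1
    return r * np.array([np.cos(th), np.sin(th)])

mid_cart = np.linalg.norm(cartesian_interp(0.5))
mid_int = np.linalg.norm(internal_interp(0.5))
print(f"bond length at midpoint: Cartesian {mid_cart:.3f}, internal {mid_int:.3f}")
```

    The Cartesian midpoint has a severely compressed bond, which is exactly the kind of unphysical interpolated structure the potential-energy check in the paper screens for.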

  11. A Meta-Model Architecture for Fusing Battlefield Information

    DTIC Science & Technology

    2005-05-01

    that a body of force acts as a (possibly loosely) coordinated organization. The totality of actions motivated by force intent define an operational...assume that deception and operational errors represent a minority proportion of the total evidence present on the battlefield based on the principles of

  12. 32 CFR 806.12 - Record availability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... manager, in coordination with the functional OPR, or the owner of the records, determines qualifying... clearance of these records with the PAO before posting on the WWW. (b) Normally, if the FOIA office or OPR... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ADMINISTRATION AIR FORCE...

  13. 32 CFR 806.12 - Record availability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... manager, in coordination with the functional OPR, or the owner of the records, determines qualifying... clearance of these records with the PAO before posting on the WWW. (b) Normally, if the FOIA office or OPR... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ADMINISTRATION AIR FORCE...

  14. 32 CFR 806.12 - Record availability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... manager, in coordination with the functional OPR, or the owner of the records, determines qualifying... clearance of these records with the PAO before posting on the WWW. (b) Normally, if the FOIA office or OPR... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ADMINISTRATION AIR FORCE...

  15. 32 CFR 806.12 - Record availability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... manager, in coordination with the functional OPR, or the owner of the records, determines qualifying... clearance of these records with the PAO before posting on the WWW. (b) Normally, if the FOIA office or OPR... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ADMINISTRATION AIR FORCE...

  16. 32 CFR 806.12 - Record availability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... manager, in coordination with the functional OPR, or the owner of the records, determines qualifying... clearance of these records with the PAO before posting on the WWW. (b) Normally, if the FOIA office or OPR... National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ADMINISTRATION AIR FORCE...

  17. Modifying upper-limb inter-joint coordination in healthy subjects by training with a robotic exoskeleton.

    PubMed

    Proietti, Tommaso; Guigon, Emmanuel; Roby-Brami, Agnès; Jarrassé, Nathanaël

    2017-06-12

    The possibility of modifying the usually pathological patterns of upper-limb coordination in stroke survivors remains a central issue and an open question for neurorehabilitation. Although robot-led physical training could potentially improve the motor recovery of hemiparetic patients, most state-of-the-art studies addressing motor control learning with artificial virtual force fields have focused only on end-effector kinematic adaptation, using planar devices. An interesting aspect of studying 3D movements with a robotic exoskeleton is the possibility of investigating how the human central nervous system deals with the natural upper-limb redundancy in common activities like pointing or tracking tasks. We asked twenty healthy participants to perform 3D pointing or tracking tasks under the effect of inter-joint velocity-dependent perturbing force fields, applied directly at the joint level by a 4-DOF robotic arm exoskeleton. These fields perturbed the natural human inter-joint coordination but did not directly constrain the end-effector movements, and thus the subjects' capability to perform the tasks. As a consequence, while the participants focused on achieving the task, we implicitly modified their natural upper-limb coordination strategy. We studied the direct effect of the force fields on pointing movements towards 8 targets placed in the 3D peripersonal space, and we also considered potential generalization to 4 distinct other targets. Post-effects were studied after the removal of the force fields (wash-out and follow-up). These effects were quantified by a kinematic analysis of the pointing movements at both end-point and joint levels, and by a measure of the final postures. At the same time, we analysed the natural inter-joint coordination through PCA. 
During exposure to the perturbing fields, we observed modifications of the subjects' movement kinematics at every level (joints, end-effector, and inter-joint coordination). Adaptation was evidenced by a partial decrease of the movement deviations due to the fields over the repetitions, but it occurred in only 21% of the motions. Nonetheless, post-effects were observed in 86% of cases during the wash-out and follow-up periods (right after the removal of the perturbation and after 30 minutes of being detached from the exoskeleton). Considerable inter-individual differences were observed, but with small variability within subjects. In particular, a group of subjects showed an overshoot with respect to the original unexposed trajectories (in 30% of cases), but the most frequent consequence (in 55% of cases) was the partial persistence of the modified upper-limb coordination adopted at the time of the perturbation. Temporal and spatial generalization was also evidenced by the deviation of the movement trajectories, both at the end-effector and at the intermediate joints, and by the modification of the final pointing postures towards targets that were never exposed to any field. These results are the first quantified characterization of the effects of modifying upper-limb coordination in healthy subjects by imposing viscous force fields distributed at the joint level, and could pave the way towards rehabilitating pathological arm synergies with robots.

  18. Unintentional force changes in cyclical tasks performed by an abundant system: Empirical observations and a dynamical model.

    PubMed

    Reschechtko, Sasha; Hasanbarani, Fariba; Akulin, Vladimir M; Latash, Mark L

    2017-05-14

    The study explored unintentional force changes elicited by removing visual feedback during cyclical, two-finger isometric force production tasks. Subjects performed two types of tasks at 1 Hz, paced by an auditory metronome. One - Force task - required cyclical changes in total force while maintaining the sharing, defined as relative contribution of a finger to total force. The other task - Share task - required cyclical changes in sharing while keeping total force unchanged. Each trial started under full visual feedback on both force and sharing; subsequently, feedback on the variable that was instructed to stay constant was frozen, and finally feedback on the other variable was also removed. In both tasks, turning off visual feedback on total force elicited a drop in the mid-point of the force cycle and an increase in the peak-to-peak force amplitude. Turning off visual feedback on sharing led to a drift of mean share toward 50:50 across both tasks. Without visual feedback there was consistent deviation of the two force time series from the in-phase pattern (typical of the Force task) and from the out-of-phase pattern (typical of the Share task). This finding is in contrast to most earlier studies that demonstrated only two stable patterns, in-phase and out-of-phase. We interpret the results as consequences of drifts of parameters in a dynamical system leading in particular to drifts in the referent finger coordinates toward their actual coordinates. The relative phase desynchronization is caused by the right-left differences in the hypothesized drift processes, consistent with the dynamic dominance hypothesis. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
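
    The hypothesized drift of referent coordinates toward actual coordinates can be caricatured with a first-order model. In the sketch below (our assumption-laden illustration, not the authors' model), the referent coordinate relaxes exponentially toward the fixed actual finger coordinate, producing an unintentional force decline; the stiffness and time constant are hypothetical.

```python
import numpy as np

# Minimal sketch: once visual feedback is removed, the referent coordinate
# x_ref drifts exponentially toward the actual coordinate x, lowering the
# produced force F = k * (x_ref - x) in an isometric task (x fixed).
k = 40.0          # apparent stiffness, N per unit length (hypothetical)
x = 0.0           # actual finger coordinate (isometric: constant)
x_ref = 0.25      # initial referent coordinate -> initial force 10 N
tau = 8.0         # drift time constant, s (hypothetical)

dt, T = 0.01, 15.0
forces = []
for _ in range(int(T / dt)):
    x_ref += (x - x_ref) * dt / tau   # first-order drift toward the actual coordinate
    forces.append(k * (x_ref - x))

print(f"force: {forces[0]:.2f} N -> {forces[-1]:.2f} N")
```

    Giving the left and right (or here, the two-finger) effectors slightly different time constants would desynchronize their force cycles, which is the mechanism the abstract invokes for the relative-phase deviation.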

  19. Unintentional force changes in cyclical tasks performed by an abundant system: Empirical observations and a dynamical model

    PubMed Central

    Reschechtko, Sasha; Hasanbarani, Fariba; Akulin, Vladimir M.; Latash, Mark L.

    2017-01-01

    The study explored unintentional force changes elicited by removing visual feedback during cyclical, two-finger isometric force production tasks. Subjects performed two types of tasks at 1 Hz, paced by an auditory metronome. One – Force task – required cyclical changes in total force while maintaining the sharing, defined as relative contribution of a finger to total force. The other task – Share task – required cyclical changes in sharing while keeping total force unchanged. Each trial started under full visual feedback on both force and sharing; subsequently, feedback on the variable that was instructed to stay constant was frozen, and finally feedback on the other variable was also removed. In both tasks, turning off visual feedback on total force elicited a drop in the mid-point of the force cycle and an increase in the peak-to-peak force amplitude. Turning off visual feedback on sharing led to a drift of mean share toward 50:50 across both tasks. Without visual feedback there was consistent deviation of the two force time series from the in-phase pattern (typical of the Force task) and from the out-of-phase pattern (typical of the Share task). This finding is in contrast to most earlier studies that demonstrated only two stable patterns, in-phase and out-of-phase. We interpret the results as consequences of drifts of parameters in a dynamical system leading in particular to drifts in the referent finger coordinates toward their actual coordinates. The relative phase desynchronization is caused by the right-left differences in the hypothesized drift processes, consistent with the dynamic dominance hypothesis. PMID:28344070

  20. Nuclear spin relaxation in ligands outside of the first coordination sphere in a gadolinium (III) complex: Effects of intermolecular forces

    NASA Astrophysics Data System (ADS)

    Kruk, Danuta; Kowalewski, Jozef

    2002-07-01

    This article describes paramagnetic relaxation enhancement (PRE) in systems with high electron spin, S, where there is molecular interaction between a paramagnetic ion and a ligand outside of the first coordination sphere. The new feature of our treatment is an improved handling of the electron-spin relaxation, making use of the Redfield theory. Following a common approach, a well-defined second coordination sphere is assumed, and the PRE contribution from these more distant and shorter-lived ligands is treated in a way similar to that used for the first coordination sphere. This model is called "ordered second sphere," OSS. In addition, we develop here a formalism similar to that of Hwang and Freed [J. Chem. Phys. 63, 4017 (1975)], but accounting for the electron-spin relaxation effects. We denote this formalism "diffuse second sphere," DSS. The description of the dynamics of the intermolecular dipole-dipole interaction is based on the Smoluchowski equation, with a potential of mean force related to the radial distribution function. We have used a finite-difference method to calculate numerically a correlation function for translational motion, taking into account the intermolecular forces leading to an arbitrary radial distribution of the ligand protons. The OSS and DSS models, including the Redfield description of the electron-spin relaxation, were used to interpret the PRE in an aqueous solution of a slowly rotating gadolinium (III) complex (S=7/2) bound to a protein.

  1. The synergic control of multi-finger force production: Stability of explicit and implicit task components

    PubMed Central

    Reschechtko, Sasha; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    Manipulating objects with the hands requires the accurate production of resultant forces including shear forces; effective control of these shear forces also requires the production of internal forces normal to the surface of the object(s) being manipulated. In the present study, we investigated multi-finger synergies stabilizing shear and normal components of force, as well as drifts in both components of force, during isometric pressing tasks requiring a specific magnitude of shear force production. We hypothesized that shear and normal forces would evolve similarly in time, and also show similar stability properties as assessed by the decomposition of inter-trial variance within the uncontrolled manifold hypothesis. Healthy subjects were required to accurately produce total shear and total normal forces with four fingers of the hand during a steady-state force task (with and without visual feedback) and a self-paced force pulse task. The two force components showed similar time profiles during both shear force pulse production and unintentional drift induced by turning the visual feedback off. Only the explicitly instructed components of force, however, were stabilized with multi-finger synergies. No force-stabilizing synergies and no anticipatory synergy adjustments were seen for the normal force in shear force production trials. These unexpected qualitative differences in the control of the two force components – which are produced by some of the same muscles and show high degree of temporal coupling – are interpreted within the theory of control with referent coordinates for salient variables. These observations suggest the existence of two classes of neural variables: one that translates into shifts of referent coordinates and defines changes in magnitude of salient variables, and the other controlling gains in back-coupling loops that define stability of the salient variables. Only the former are shared between the explicit and implicit task components. 
PMID:27601252
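
    The inter-trial variance decomposition behind the uncontrolled-manifold (UCM) analysis can be sketched for a four-finger total-force task, where the Jacobian of total force with respect to the finger forces is simply J = [1, 1, 1, 1]. The synthetic data below (our illustration, not the study's recordings) build in synergy-like structure, so variance within the null space of J exceeds variance orthogonal to it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Jacobian of total force w.r.t. the four finger forces
J = np.ones((1, 4))

# Synthetic trials: a shared target pattern plus noise that mostly
# redistributes force among fingers without changing the total (our assumption).
n_trials = 200
base = np.array([4.0, 6.0, 5.0, 5.0])
redistrib = rng.standard_normal((n_trials, 4))
redistrib -= redistrib.mean(axis=1, keepdims=True)        # zero net force change
trials = base + redistrib + 0.1 * rng.standard_normal((n_trials, 4))

dev = trials - trials.mean(axis=0)
# Orthonormal bases of the row space and null space of J via SVD
_, _, vt = np.linalg.svd(J)
ucm_basis = vt[1:].T                                       # 4 x 3 null-space basis
v_ucm = np.sum((dev @ ucm_basis) ** 2) / (3 * n_trials)    # per-DOF variance in UCM
v_ort = np.sum((dev @ vt[:1].T) ** 2) / (1 * n_trials)     # per-DOF variance orthogonal

print(f"V_UCM = {v_ucm:.3f}, V_ORT = {v_ort:.3f}")         # V_UCM >> V_ORT: synergy
```

    A force-stabilizing synergy is declared when the per-degree-of-freedom variance within the UCM exceeds the orthogonal variance, as it does for these synthetic data.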

  2. Normal coordinate analysis of the vibrational spectrum of benzil molecule

    NASA Astrophysics Data System (ADS)

    Volovšek, V.; Colombo, L.

    1993-03-01

    Normal coordinate analysis is performed for the benzil molecule. Force constants of the phenyl rings are transferred from earlier studies on binuclear aromatic molecules. The existence of some low-frequency internal modes has been proved, thus eliminating the earlier explanations of the excess of bands observed in the low-frequency Raman and FIR spectra of the benzil crystal.

  3. 32 CFR 700.202 - Mission of the Department of the Navy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of the Navy, shall be organized, trained, and equipped primarily for prompt and sustained combat... organized, trained, and equipped to provide fleet marine forces of combined arms, together with supporting... organized. (d) The Marine Corps shall develop, in coordination with the Army and the Air Force, those phases...

  4. Sustainability Research Supporting Gulf of Mexico Ecosystem Restoration: EPA’s Office of Research and Development

    EPA Science Inventory

    The Gulf Ecosystem Restoration Task Force was formed by Executive Order, October 2010. The Task Force leads and coordinates research in support of ecosystem restoration planning and decision-making in the Gulf Coast region. In support of a comprehensive restoration strategy, re...

  5. Important Competencies for Future Health and Wellness Professionals: An Investigation of Employer Desired Skills

    ERIC Educational Resources Information Center

    Becker, Craig; Loy, Marty

    2004-01-01

    This study was designed to investigate the validity of the professional competencies developed by the Association of Worksite Health Promotion (AWHP) Professional Standards Task Force. The Task Force identified a competency framework that included business skills, program coordination skills, and human resource skills with corresponding…

  6. A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere

    NASA Astrophysics Data System (ADS)

    Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael

    2017-05-01

    We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
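
    The Optimal Spectral Sampling idea, approximating a channel-averaged radiance (an integral over many monochromatic points) by a weighted sum over a few selected points, can be sketched with a toy greedy selection. The training "spectra" below are synthetic and the selection rule is a simplification of the real OSS method:

```python
import numpy as np

rng = np.random.default_rng(2)

n_nu, n_train = 200, 60
nu = np.linspace(0.0, 1.0, n_nu)
# Synthetic monochromatic radiances: smooth random spectra per training case
coefs = rng.standard_normal((n_train, 4))
spectra = sum(c[:, None] * np.sin((k + 1) * np.pi * nu)
              for k, c in enumerate(coefs.T))
spectra += 2.0
channel = spectra.mean(axis=1)             # exact channel average per case

# Greedy selection: repeatedly add the spectral point that most reduces
# the least-squares error of the weighted-sum approximation.
selected, weights = [], None
for _ in range(6):
    best = None
    for j in range(n_nu):
        if j in selected:
            continue
        A = spectra[:, selected + [j]]
        w, *_ = np.linalg.lstsq(A, channel, rcond=None)
        err = np.linalg.norm(A @ w - channel)
        if best is None or err < best[0]:
            best = (err, j, w)
    selected.append(best[1])
    weights = best[2]

approx = spectra[:, selected] @ weights
rel_err = np.max(np.abs(approx - channel) / np.abs(channel))
print(f"{len(selected)} of {n_nu} points, max relative error {rel_err:.2e}")
```

    A handful of points with fitted weights reproduces the channel average across all training cases, which is the source of the speed-up over brute-force line-by-line integration.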

  7. The evolution of parental care in insects: A test of current hypotheses

    PubMed Central

    Gilbert, James D J; Manica, Andrea

    2015-01-01

    Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced—for example, by caring for multiple broods—but rejecting the “enhanced fecundity” hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in general trajectory. Integrating these findings with the wealth of life history and ecological data in insects will allow testing of a rich vein of existing hypotheses. PMID:25825047
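
    The brute-force treatment of phylogenetic uncertainty, repeating the analysis over many random resolutions of each uncertain node, reduces to randomly collapsing a polytomy into bifurcations. A minimal sketch with nested-tuple trees and made-up taxon names:

```python
import random

def resolve(children, rng):
    """Randomly resolve a polytomy into a bifurcating subtree."""
    nodes = list(children)
    while len(nodes) > 2:
        # Pick two subtrees at random and merge them into a new internal node
        a, b = sorted(rng.sample(range(len(nodes)), 2))
        merged = (nodes[a], nodes[b])
        nodes = [n for i, n in enumerate(nodes) if i not in (a, b)]
        nodes.append(merged)
    return tuple(nodes)

rng = random.Random(3)
polytomy = ("ant", "bee", "wasp", "moth")     # one unresolved node, 4 children
resolutions = [resolve(polytomy, rng) for _ in range(5)]
for t in resolutions:
    print(t)
```

    In the paper's approach each comparative analysis is rerun on many such random resolutions and the results are pooled, so conclusions do not hinge on any single arbitrary resolution.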

  8. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
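
    As a point of reference for the speed-up claimed above, the brute-force forward Monte Carlo estimator of a runaway probability can be sketched on a schematic 1D drift-diffusion model (our toy, not the bounce-averaged runaway-electron physics): count the fraction of paths that exit through the high-momentum boundary first.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Feynman-Kac setting: momentum p follows dp = (p - 1) dt + sigma dW,
# with an unstable point at p = 1 (decelerating below it, "running away"
# above it). Estimate the probability of hitting p_high before p_low.
def runaway_prob(p0, n_paths=10_000, dt=1e-3, sigma=0.5,
                 p_low=0.0, p_high=2.0, max_steps=8_000):
    p = np.full(n_paths, p0)
    alive = np.ones(n_paths, dtype=bool)
    ran_away = np.zeros(n_paths, dtype=bool)
    for _ in range(max_steps):
        if not alive.any():
            break
        dW = rng.standard_normal(alive.sum()) * np.sqrt(dt)
        p[alive] += (p[alive] - 1.0) * dt + sigma * dW
        up = alive & (p >= p_high)
        down = alive & (p <= p_low)
        ran_away |= up
        alive &= ~(up | down)
    return ran_away.mean()

r_low = runaway_prob(0.8)    # starts below the unstable point
r_high = runaway_prob(1.2)   # starts above it
print(f"P_runaway(0.8) ~ {r_low:.3f}, P_runaway(1.2) ~ {r_high:.3f}")
```

    The statistical error of this estimator shrinks only as the square root of the path count, which is exactly the cost the backward/BSDE formulation is designed to reduce.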

  9. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
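
    A minimal particle swarm optimizer, validated against a brute-force grid search in the same spirit as the thesis (the objective below is a stand-in test function, not the finite-area-combustion-chamber Isp model):

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x, y):
    # Stand-in smooth objective with its maximum (value 1.0) at (0.3, -0.7)
    return -((x - 0.3) ** 2 + (y + 0.7) ** 2) + 1.0

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-2.0, hi=2.0):
    pos = rng.uniform(lo, hi, (n, 2))
    vel = np.zeros((n, 2))
    pbest = pos.copy()
    pbest_val = objective(pos[:, 0], pos[:, 1])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, 2))
        # Inertia plus cognitive (personal-best) and social (global-best) pulls
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos[:, 0], pos[:, 1])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# Brute force: evaluate the objective on a dense grid and take the best.
g = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(g, g)
brute_val = objective(X, Y).max()

(gx, gy), pso_val = pso()
print(f"PSO optimum ({gx:.3f}, {gy:.3f}), value {pso_val:.4f} vs brute-force {brute_val:.4f}")
```

    The brute-force grid evaluates 401 x 401 points; the swarm reaches essentially the same optimum with a few thousand evaluations, which is the trade-off exploited for the engine model.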

  10. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  11. In search of robust flood risk management alternatives for the Netherlands

    NASA Astrophysics Data System (ADS)

    Klijn, F.; Knoop, J. M.; Ligtvoet, W.; Mens, M. J. P.

    2012-05-01

    The Netherlands' policy for flood risk management is being revised in view of a sustainable development against a background of climate change, sea level rise and increasing socio-economic vulnerability to floods. This calls for a thorough policy analysis, which can only be adequate when there is agreement about the "framing" of the problem and about the strategic alternatives that should be taken into account. In support of this framing, we performed an exploratory policy analysis, applying future climate and socio-economic scenarios to account for the autonomous development of flood risks, and defined a number of different strategic alternatives for flood risk management at the national level. These alternatives, ranging from flood protection by brute force to reduction of the vulnerability by spatial planning only, were compared with continuation of the current policy on a number of criteria, comprising costs, the reduction of fatality risk and economic risk, and their robustness in relation to uncertainties. We found that a change of policy away from conventional embankments towards gaining control over the flooding process by making the embankments unbreachable is attractive. By thus influencing exposure to flooding, the fatality risk can be effectively reduced at even lower net societal costs than by continuation of the present policy or by raising the protection standards where cost-effective.

  12. Security analysis and improvements to the PsychoPass method.

    PubMed

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method, we used brute-force and dictionary-attack analyses of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 key distances apart. The improved PsychoPass method yields passwords that can be broken only in hundreds of years given current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
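
    The gap between password length and strength comes from adjacency constraints: if each successive key must be near the previous one, the keyspace grows far more slowly than alphabet_size ** length. A rough entropy sketch (the 30-key alphabet and ~8-neighbour count are illustrative assumptions, not figures from the paper):

```python
import math

def entropy_bits(choices_first, choices_next, length):
    """Entropy in bits of a password whose first key has choices_first
    options and every subsequent key choices_next options."""
    return math.log2(choices_first) + (length - 1) * math.log2(choices_next)

naive = 24 * math.log2(30)           # if all 24 keys were independent
weak = entropy_bits(30, 8, 24)       # adjacency-constrained 24-key path
strong = entropy_bits(94, 94, 10)    # 10 unconstrained printable characters
```

The constrained 24-key password carries far less entropy than its length suggests, which is the essence of the weakness the analysis identifies.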

  13. Challenges in the development of very high resolution Earth System Models for climate science

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun

    2017-04-01

    The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in the development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high-resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute force long climate simulations.

  14. High Performance Analytics with the R3-Cache

    NASA Astrophysics Data System (ADS)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. The R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.

  15. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party of the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
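
    The variance advantage of Latin Hypercube sampling over the brute-force procedure can be sketched in one dimension, where it reduces to stratified sampling with one draw per stratum (the integrand is a toy stand-in for an outage-cost curve, not the production model):

```python
import random

def plain_mc(f, n, rng):
    """Brute-force Monte Carlo mean of f over uniform draws."""
    return sum(f(rng.random()) for _ in range(n)) / n

def latin_hypercube_mc(f, n, rng):
    """1-D Latin hypercube sampling: exactly one draw in each of n strata."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def estimator_std(estimator, f, n, n_trials=200, seed=0):
    """Empirical standard deviation of an estimator over repeated trials."""
    rng = random.Random(seed)
    vals = [estimator(f, n, rng) for _ in range(n_trials)]
    mean = sum(vals) / n_trials
    return (sum((v - mean) ** 2 for v in vals) / n_trials) ** 0.5

# Smooth toy "cost" curve of a uniform outage draw.
f = lambda u: u * u
s_plain = estimator_std(plain_mc, f, 100)
s_lhs = estimator_std(latin_hypercube_mc, f, 100)
```

For smooth integrands the stratified estimator's spread is dramatically smaller at the same sample count, which is the efficiency the paper's experiments measure.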

  16. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force search for the parameters that produce the global minimum error. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side and light-side edge brightness following a certain global trend, which is similar across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model by measuring the root mean squared error of fitting each model to each observed edge profile. The comparison results suggest that the proposed edge model outperforms the one-blur-parameter edge model in most cases where edges have varying brightness combinations.
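
    The brute-force estimation step can be sketched as an exhaustive grid search over the two blur parameters; the edge model below is a simplified stand-in (a step blurred with side-dependent Gaussian widths), not the paper's exact profile:

```python
import math

def edge_model(x, s_dark, s_light, low=0.0, high=1.0):
    """Two-parameter edge profile: a step blurred with a different
    Gaussian width on the dark (x < 0) and light (x >= 0) sides."""
    s = s_dark if x < 0 else s_light
    t = 0.5 * (1.0 + math.erf(x / (s * math.sqrt(2.0))))
    return low + (high - low) * t

def brute_force_fit(xs, ys, grid):
    """Exhaustively search the parameter grid for the globally minimal
    sum-of-squares fit error."""
    best = None
    for s_dark in grid:
        for s_light in grid:
            err = sum((edge_model(x, s_dark, s_light) - y) ** 2
                      for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, s_dark, s_light)
    return best

xs = [i / 4.0 for i in range(-20, 21)]
ys = [edge_model(x, 0.8, 1.6) for x in xs]   # synthetic observed profile
grid = [0.4 + 0.2 * k for k in range(9)]     # candidate widths 0.4 .. 2.0
err, s_d, s_l = brute_force_fit(xs, ys, grid)
```

Because every grid cell is evaluated, the search is guaranteed to find the global minimum on the grid, at the cost of exhaustive evaluation.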

  17. Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data

    PubMed Central

    Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick

    2017-01-01

    Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, in which the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and bat algorithm, and apply them to empower the effects of synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods are not scalable to larger data scales. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets, where the former approach fails. We also find the latter approach more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible performance of the classifier and a shorter run time compared to the brute-force method. PMID:28753613
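
    The SMOTE pre-processing step the paper builds on can be sketched in a few lines: synthetic minority samples are interpolated between a minority point and one of its k nearest minority neighbours (a minimal stdlib version for illustration; real use would rely on an optimized library implementation):

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE: create synthetic minority samples by interpolating
    between a random minority point and one of its k nearest minority
    neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()   # interpolation fraction in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]
new_points = smote(minority, 6)
```

The two key SMOTE parameters the paper tunes with meta-heuristics correspond to the amount of over-sampling (n_synthetic) and the neighbourhood size (k) here.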

  18. Automated detection and cataloging of global explosive volcanism using the International Monitoring System infrasound network

    NASA Astrophysics Data System (ADS)

    Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars

    2017-04-01

    We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
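
    A brute-force, grid-search, cross-bearings locator can be sketched on a flat plane: score every candidate grid node by its summed squared back-azimuth misfit across stations and keep the minimum (toy geometry; the real algorithm works on a global grid with a clutter-rate correction):

```python
import math

def bearing(station, point):
    """Azimuth in radians from a station to a point on a flat plane."""
    return math.atan2(point[0] - station[0], point[1] - station[1])

def locate(stations, observed_bearings, grid_pts):
    """Brute-force grid search: evaluate every candidate grid point and
    return the one with minimal summed squared bearing misfit."""
    def ang_diff(a, b):
        d = a - b
        return math.atan2(math.sin(d), math.cos(d))   # wrap to [-pi, pi]

    best = None
    for p in grid_pts:
        err = sum(ang_diff(bearing(s, p), ob) ** 2
                  for s, ob in zip(stations, observed_bearings))
        if best is None or err < best[0]:
            best = (err, p)
    return best[1]

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_src = (4.0, 6.0)
obs = [bearing(s, true_src) for s in stations]
grid = [(x * 0.5, y * 0.5) for x in range(21) for y in range(21)]
src = locate(stations, obs, grid)
```

With noise-free bearings the search recovers the true grid node exactly; with real data the misfit surface is broadened by measurement error and clutter, which is why the statistical correction in the paper matters.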

  19. Axicons, prisms and integrators: searching for simple laser beam shaping solutions

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd

    2010-08-01

    Over the last thirty-five years, many papers on laser beam shaping have been presented at numerous conferences and published within a host of optical journals. What is presented is in many cases either too exotic or too technically challenging for practical application, though both qualities are testaments to the imagination of engineers and researchers. For many brute force laser processing applications, such as paint stripping, large-area ablation or general skiving of flex circuits, an inexpensive beam shaper is a welcome tool. Shaping the laser beam for less demanding applications provides a more uniform removal rate and increases the overall quality of the part being processed. It is a well-known fact that customers like their parts to look good. Many times, complex optical beam shaping techniques are considered because no one is aware of the historical solutions that have been lost to the ages. These complex solutions can range in price from $10,000 to $60,000 and require many months to design and fabricate. This paper will provide an overview of various beam shaping techniques that are both elegant and simple in concept and design. Optical techniques using axicons, prisms and reflective integrators will be discussed in an overview format.

  20. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
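
    The low-rank compression idea can be illustrated with a truncated SVD of a discretized two-variable function: when the function is (nearly) separable, a handful of singular vectors reproduce it at a small fraction of the full-grid storage (a toy one-coordinate-per-electron function, not the actual six-dimensional wave function):

```python
import numpy as np

# Discretize a smooth two-variable function f(x1, x2) on a grid and
# compress it with a truncated SVD.
n = 200
x = np.linspace(-5.0, 5.0, n)
f = (np.exp(-np.abs(x[:, None]) - np.abs(x[None, :]))
     * (1.0 + 0.1 * x[:, None] * x[None, :]))   # separable: rank 2 exactly

u, s, vt = np.linalg.svd(f, full_matrices=False)
rank = 2
approx = (u[:, :rank] * s[:rank]) @ vt[:rank]

rel_err = np.linalg.norm(f - approx) / np.linalg.norm(f)
compression = (2 * n * rank + rank) / (n * n)   # stored numbers vs full grid
```

Storing the factors instead of the full grid is the kind of saving that, in six dimensions, turns an intractable representation into a feasible one.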

  1. Security Analysis and Improvements to the PsychoPass Method

    PubMed Central

    2013-01-01

    Background In a recent paper, Pietro Cipresso et al. proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute-force and dictionary-attack analyses of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 key distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years given current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458

  2. The Taming of the Shrew

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, M.

    1996-11-01

    Considering the extreme complexity of the turbulence problem in general and the unattainability of first-principles analytical solutions in particular, it is not surprising that controlling a turbulent flow remains a challenging task, mired in empiricism and unfulfilled promises and aspirations. Brute force suppression, or taming, of turbulence via active control strategies is always possible, but the penalty for doing so often exceeds any potential savings. The artifice is to achieve a desired effect with minimum energy expenditure. Spurred by the recent developments in chaos control, microfabrication and neural networks, efficient reactive control of turbulent flows, where the control input is optimally adjusted based on feedforward or feedback measurements, is now in the realm of the possible for future practical devices. But regardless of how the problem is approached, combating turbulence is always as arduous as the taming of the shrew. The former task will be emphasized during the oral presentation, but for this abstract we reflect on a short verse from the latter. From William Shakespeare's The Taming of the Shrew. Curtis (Petruchio's servant, in charge of his country house): Is she so hot a shrew as she's reported? Grumio (Petruchio's personal lackey): She was, good Curtis, before this frost. But thou know'st winter tames man, woman, and beast; for it hath tamed my old master, and my new mistress, and myself, fellow Curtis.

  3. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
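
    Brute-force Monte-Carlo estimation of a Bayes factor can be sketched by averaging the likelihood over prior draws for each model; this is the generic scheme, not the LBA-specific GPU code the paper provides (a toy Gaussian-mean example with hypothetical models M0 and M1):

```python
import math
import random

def log_marginal_mc(data, log_lik, prior_sampler, n=20000, seed=2):
    """Monte-Carlo marginal likelihood: average the likelihood over
    draws from the prior (brute-force integration over parameters)."""
    rng = random.Random(seed)
    liks = [math.exp(log_lik(data, prior_sampler(rng))) for _ in range(n)]
    return math.log(sum(liks) / n)

def log_lik(data, mu):
    """Gaussian log-likelihood with unit variance and mean mu."""
    return sum(-0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in data)

data = [0.8, 1.1, 0.9, 1.3, 1.0]
# M1: mu ~ Uniform(0, 2); M0: mu fixed at 0 (no free parameter).
m1 = log_marginal_mc(data, log_lik, lambda rng: rng.uniform(0.0, 2.0))
m0 = log_lik(data, 0.0)
log_bayes_factor = m1 - m0
```

Since the data centre near 1, the model that places prior mass there is favoured; no parameter estimation is needed, only integration, which is the property the paper exploits.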

  4. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    The computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy and efficiency problems, previous algorithm designers typically adopted only one of them when assembling fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques: kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform a BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing efficiency at running time.
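
    The brute-force BF whose cost the paper's framework removes can be sketched in 1-D: every output sample is a spatial-and-range weighted average over a neighbourhood, so the work per sample grows with the kernel radius:

```python
import math

def bilateral_filter_1d(signal, sigma_s, sigma_r, radius):
    """Brute-force bilateral filter: each output sample is a spatially-
    and range-weighted average of its neighbourhood; cost per sample is
    O(radius), unlike the constant-time schemes in the paper."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))          # spatial
                 * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2)))  # range
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noisy step: smoothing flattens the noise but preserves the edge,
# because the range kernel suppresses contributions across the step.
step = [0.0, 0.05, -0.03, 0.02, 1.0, 0.97, 1.04, 1.0]
smoothed = bilateral_filter_1d(step, sigma_s=2.0, sigma_r=0.2, radius=3)
```

The edge-preserving behaviour comes from the range kernel: samples on the far side of the step receive near-zero weight, so the two plateaus are smoothed independently.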

  5. Unsteady steady-states: Central causes of unintentional force drift

    PubMed Central

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely, the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback, and attempted to maintain this force for 15 s after the feedback was removed. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that force drop was mostly due to a drift in RFT towards the actual fingertip position. Three analysis techniques, namely, hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong co-variation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system towards a low-energy state, and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly-drifting equilibrium point. PMID:27540726

  6. Unsteady steady-states: central causes of unintentional force drift.

    PubMed

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M; Latash, Mark L

    2016-12-01

    We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback and attempted to maintain this force for 15 s after the feedback was removed. We used the "inverse piano" apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that force drop was mostly due to a drift in RFT toward the actual fingertip position. Three analysis techniques, namely hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong covariation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole, the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system toward a low-energy state and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly drifting equilibrium point.

  7. Navier-Stokes predictions of pitch damping for axisymmetric shell using steady coning motion

    NASA Technical Reports Server (NTRS)

    Weinacht, Paul; Sturek, Walter B.; Schiff, Lewis B.

    1991-01-01

    Previous theoretical investigations have proposed that the side force and moment acting on a body of revolution in steady coning motion could be related to the pitch-damping force and moment. In the current research effort, this approach is applied to produce predictions of the pitch damping for axisymmetric shell. The flow fields about these projectiles undergoing steady coning motion are successfully computed using a parabolized Navier-Stokes computational approach which makes use of a rotating coordinate frame. The governing equations are modified to include the centrifugal and Coriolis force terms due to the rotating coordinate frame. From the computed flow field, the side moments due to coning motion, spinning motion, and combined spinning and coning motion are used to determine the pitch-damping coefficients. Computations are performed for two generic shell configurations, a secant-ogive-cylinder and a secant-ogive-cylinder-boattail.

  8. Mass effects and internal space geometry in triatomic reaction dynamics

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang S.; Marsden, Jerrold E.

    2006-05-01

    The effect of the distribution of mass in triatomic reaction dynamics is analyzed using the geometry of the associated internal space. Atomic masses are appropriately incorporated into internal coordinates as well as the associated non-Euclidean internal space metric tensor after a separation of the rotational degrees of freedom. Because of the non-Euclidean nature of the metric in the internal space, terms such as connection coefficients arise in the internal equations of motion, which act as velocity-dependent forces in a coordinate chart. By statistically averaging these terms, an effective force field is deduced, which accounts for the statistical tendency of geodesics in the internal space. This force field is shown to play a crucial role in determining mass-related branching ratios of isomerization and dissociation dynamics of a triatomic molecule. The methodology presented can be useful for qualitatively predicting branching ratios in general triatomic reactions, and may be applied to the study of isotope effects.
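
    The velocity-dependent terms the abstract refers to are the standard connection-coefficient (Christoffel) terms of the non-Euclidean internal metric; schematically, for internal coordinates $q^i$ with metric $g_{ij}$ (a textbook form, not reproduced from the paper):

```latex
\ddot{q}^{\,i} + \Gamma^{i}_{jk}\,\dot{q}^{j}\dot{q}^{k} = F^{i},
\qquad
\Gamma^{i}_{jk} = \tfrac{1}{2}\, g^{il}
\left( \partial_{j} g_{lk} + \partial_{k} g_{jl} - \partial_{l} g_{jk} \right),
```

where $F^i$ collects the generalized forces. Statistically averaging the $\Gamma$-terms over an ensemble of trajectories yields the effective force field that biases geodesics in the internal space.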

  9. On position/force tracking control problem of cooperative robot manipulators using adaptive fuzzy backstepping approach.

    PubMed

    Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz

    2017-09-01

    In this paper, the position and force tracking control problem of a cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of the coordinated robot manipulators is constructed. Subsequently, by defining an appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to address the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures that all the signals of the closed-loop system are uniformly ultimately bounded, and that tracking errors of both positions and forces converge to small desired values by proper selection of the design parameters. Finally, the theoretical achievements are tested on two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Tension (re)builds: Biophysical mechanisms of embryonic wound repair.

    PubMed

    Zulueta-Coarasa, Teresa; Fernandez-Gonzalez, Rodrigo

    2017-04-01

    Embryonic tissues display an outstanding ability to rapidly repair wounds. Epithelia, in particular, serve as protective layers that line internal organs and form the skin. Thus, maintenance of epithelial integrity is of utmost importance for animal survival, particularly at embryonic stages, when an immune system has not yet fully developed. Rapid embryonic repair of epithelial tissues is conserved across species, and involves the collective migration of the cells around the wound. The migratory cell behaviours associated with wound repair require the generation and transmission of mechanical forces, not only for the cells to move, but also to coordinate their movements. Here, we review the forces involved in embryonic wound repair. We discuss how different force-generating structures are assembled at the molecular level, and the mechanisms that maintain the balance between force-generating structures as wounds close. Finally, we describe the mechanisms that cells use to coordinate the generation of mechanical forces around the wound. Collective cell movements and their misregulation have been associated with defective tissue repair, developmental abnormalities and cancer metastasis. Thus, we propose that understanding the role of mechanical forces during embryonic wound closure will be crucial to develop therapeutic interventions that promote or prevent collective cell movements under pathological conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    PubMed

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
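    The thermodynamic-integration step described above recovers the potential of mean force (PMF) by integrating the average force sampled along the reaction coordinate. A minimal numerical sketch of that step, using a synthetic mean-force profile rather than any data from the paper:

```python
import numpy as np

# Thermodynamic-integration sketch: the PMF along a reaction coordinate z
# is the cumulative integral of -<F(z)>, the negated mean force measured at
# a series of z-values. All numbers below are synthetic placeholders.
def pmf_from_mean_forces(z, mean_force):
    """Integrate -<F(z)> over z (trapezoidal rule); PMF(z[0]) is set to 0."""
    z = np.asarray(z, dtype=float)
    f = np.asarray(mean_force, dtype=float)
    increments = 0.5 * (f[1:] + f[:-1]) * np.diff(z)
    return np.concatenate(([0.0], -np.cumsum(increments)))

# Synthetic mean-force profile derived from a single Gaussian well U(z):
z = np.linspace(0.0, 10.0, 101)
potential = -5.0 * np.exp(-((z - 3.0) ** 2))
force = -np.gradient(potential, z)          # F = -dU/dz
pmf = pmf_from_mean_forces(z, force)        # recovers U(z) - U(z[0])
```

    Integrating the sampled forces this way reproduces the well depth of the underlying potential up to discretization error.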

  12. Calculation of absolute protein-ligand binding free energy using distributed replica sampling

    NASA Astrophysics Data System (ADS)

    Rodinger, Tomas; Howell, P. Lynne; Pomès, Régis

    2008-10-01

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.

  13. Anticipatory planning and control of grasp positions and forces for dexterous two-digit manipulation.

    PubMed

    Fu, Qiushi; Zhang, Wei; Santello, Marco

    2010-07-07

    Dexterous object manipulation requires anticipatory control of digit positions and forces. Despite extensive studies on sensorimotor learning of digit forces, how humans learn to coordinate digit positions and forces has never been addressed. Furthermore, the functional role of anticipatory modulation of digit placement to object properties remains to be investigated. We addressed these questions by asking human subjects (12 females, 12 males) to grasp and lift an inverted T-shaped object using precision grip at constrained or self-chosen locations. The task requirement was to minimize object roll during lift. When digit position was not constrained, subjects could have implemented many equally valid digit position-force coordination patterns. However, choice of digit placement might also have resulted in large trial-to-trial variability of digit position, hence challenging the extent to which the CNS could have relied on sensorimotor memories for anticipatory control of digit forces. We hypothesized that subjects would modulate digit placement for optimal force distribution and digit forces as a function of variable digit positions. All subjects learned to minimize object roll within the first three trials, and the unconstrained device was associated with significantly smaller grip forces but larger variability of digit positions. Importantly, however, digit load force modulation compensated for position variability, thus ensuring consistent object roll minimization on each trial. This indicates that subjects learned object manipulation by integrating sensorimotor memories with sensory feedback about digit positions. These results are discussed in the context of motor equivalence and sensorimotor integration of grasp kinematics and kinetics.
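    The compensation the authors describe, load forces redistributing to wherever the digits land so the net moment (object roll) stays near zero, is at bottom a statics constraint. A sketch of that trade-off with hypothetical moment arms and loads (nothing here is taken from the study):

```python
# Statics sketch of digit position-force compensation: to keep a grasped
# object from rolling, the two digits' load forces must redistribute
# whenever their placement relative to the object's center changes.
# Numbers and names are illustrative, not the study's.
def load_forces_for_zero_roll(total_load, d_thumb, d_index):
    """Split a total load between thumb and index so the net moment about
    the object's center vanishes. d_thumb/d_index are signed moment arms."""
    # Solve F_t + F_i = total_load with F_t*d_thumb + F_i*d_index = 0.
    f_thumb = total_load * d_index / (d_index - d_thumb)
    f_index = total_load - f_thumb
    return f_thumb, f_index

# Example: 4 N load, thumb 2 cm below center, index 3 cm above center.
f_thumb, f_index = load_forces_for_zero_roll(4.0, -0.02, 0.03)
```

    Whatever the trial-to-trial variability in placement, choosing the force split this way keeps the roll-inducing moment at zero, which is the compensation the study observed.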

  14. Vibrational quasi-degenerate perturbation theory with optimized coordinates: Applications to ethylene and trans-1,3-butadiene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yagi, Kiyoshi, E-mail: kiyoshi.yagi@riken.jp; Otaki, Hiroki

    A perturbative extension to optimized coordinate vibrational self-consistent field (oc-VSCF) is proposed based on the quasi-degenerate perturbation theory (QDPT). A scheme to construct the degenerate space (P space) is developed, which incorporates degenerate configurations and alleviates the divergence of the perturbative expansion due to localized coordinates in oc-VSCF (e.g., local O–H stretching modes of water). An efficient configuration selection scheme is also implemented, which screens out the Hamiltonian matrix element between the P space configuration (p) and the complementary Q space configuration (q) based on a difference in their quantum numbers (λ_pq = ∑_s |p_s − q_s|). It is demonstrated that the second-order vibrational QDPT based on optimized coordinates (oc-VQDPT2) smoothly converges with respect to the order of the mode coupling, and outperforms the conventional one based on normal coordinates. Furthermore, an improved, fast algorithm is developed for optimizing the coordinates. First, the minimization of the VSCF energy is conducted in a restricted parameter space, in which only a portion of the pairs of coordinates is selectively transformed. A rational index is devised for this purpose, which identifies the important coordinate pairs to mix, distinguishing them from others that may remain unchanged, based on the magnitude of harmonic coupling induced by the transformation. Second, a cubic force field (CFF) is employed in place of a quartic force field, which bypasses intensive procedures that arise due to the presence of the fourth-order force constants. It is found that oc-VSCF based on CFF together with the pair selection scheme yields coordinates similar in character to the conventional ones, such that the final vibrational energy is affected very little while gaining an order of magnitude acceleration. The proposed method is applied to ethylene and trans-1,3-butadiene. An accurate, multi-resolution potential, which combines the MP2 and coupled-cluster with singles, doubles, and perturbative triples levels of electronic structure theory, is generated and employed in the oc-VQDPT2 calculation to obtain the fundamental tones as well as selected overtones/combination tones coupled to the fundamentals through Fermi resonance. The calculated frequencies of ethylene and trans-1,3-butadiene are found to be in excellent agreement with the experimental values, with mean absolute errors of 8 and 9 cm^−1, respectively.
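    The configuration-screening rule quoted above (discard couplings whose quantum-number distance λ_pq = ∑_s |p_s − q_s| is large) is simple to state in code. A minimal sketch with an illustrative threshold, not the paper's implementation:

```python
# Screening sketch: a Q-space configuration q is kept as a coupling partner
# of a P-space configuration p only if the quantum-number distance
# lambda_pq = sum_s |p_s - q_s| is below a threshold. The threshold value
# here is illustrative only.
def lambda_pq(p, q):
    """Quantum-number distance between two configurations (tuples of ints)."""
    return sum(abs(ps - qs) for ps, qs in zip(p, q))

def screen_pairs(p_space, q_space, max_lambda=4):
    """Keep only (p, q) pairs whose distance is at most max_lambda."""
    return [(p, q) for p in p_space for q in q_space
            if lambda_pq(p, q) <= max_lambda]
```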

  15. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  16. Operational Risk Preparedness: General George H. Thomas and the Franklin-Nashville Campaign

    DTIC Science & Technology

    2014-05-22

    monograph analyzes and compares thoughts on risk from multiple disciplines and viewpoints to develop a suitable definition and corresponding principles...sounds similar to Sun Tzu: "from the enemy's character, from his institutions, the state of his affairs and his general situation, each side, using...changes through brute strength, but do not gain from change, they merely continue to exist. He therefore introduced the term antifragile—a system that

  17. Keep meaning in conversational coordination

    PubMed Central

    Cuffari, Elena C.

    2014-01-01

    Coordination is a widely employed term across recent quantitative and qualitative approaches to intersubjectivity, particularly approaches that give embodiment and enaction central explanatory roles. With a focus on linguistic and bodily coordination in conversational contexts, I review the operational meaning of coordination in recent empirical research and related theorizing of embodied intersubjectivity. This discussion articulates what must be involved in treating linguistic meaning as dynamic processes of coordination. The coordination approach presents languaging as a set of dynamic self-organizing processes and actions on multiple timescales and across multiple modalities that come about and work in certain domains (those jointly constructed in social, interactive, high-order sense-making). These processes go beyond meaning at the level that is available to first-person experience. I take one crucial consequence of this to be the ubiquitously moral nature of languaging with others. Languaging coordinates experience, among other levels of behavior and event. Ethical effort is called for by the automatic autonomy-influencing forces of languaging as coordination. PMID:25520693

  18. Report of the Task Force for Improved Coordination of the DoD Science and Technology Program. Volume 2. Reports of the Working Groups. Working Group A: Strategic Planning. Working Group B: Program Coordination. Working Group C: Advocacy

    DTIC Science & Technology

    1988-08-01

    Operability 19 Technology Area Summaries 20 Major Technology Thrusts 21 Air Force S&T Investment Summary 25 Program Objectives 28 Glossary 30 ... TRI-TAC ... JSTARS ... Social Sciences 5001 Eisenhower Avenue Alexandria VA 22333-5600 Col. Harry G. Dangerfield Telephone: (301) 663-7443 Executive Assistant to the PEO for

  19. Quantum mechanics in noninertial reference frames: Relativistic accelerations and fictitious forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klink, W.H., E-mail: william-klink@uiowa.edu; Wickramasekara, S., E-mail: wickrama@grinnell.edu

    2016-06-15

    One-particle systems in relativistically accelerating reference frames can be associated with a class of unitary representations of the group of arbitrary coordinate transformations, an extension of the Wigner–Bargmann definition of particles as the physical realization of unitary irreducible representations of the Poincaré group. Representations of the group of arbitrary coordinate transformations become necessary to define unitary operators implementing relativistic acceleration transformations in quantum theory because, unlike in the Galilean case, the relativistic acceleration transformations do not themselves form a group. The momentum operators that follow from these representations show how the fictitious forces in noninertial reference frames are generated in quantum theory.

  20. Fast equilibration protocol for million atom systems of highly entangled linear polyethylene chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sliozberg, Yelena R.; TKC Global, Inc., Aberdeen Proving Ground, Maryland 21005; Kröger, Martin

    Equilibrated systems of entangled polymer melts cannot be produced using direct brute force equilibration due to the slow reptation dynamics exhibited by high molecular weight chains. Instead, these dense systems are produced using computational techniques such as Monte Carlo-Molecular Dynamics hybrid algorithms, though the use of soft potentials has also shown promise, mainly for coarse-grained polymeric systems. Through the use of soft potentials, the melt can be equilibrated via molecular dynamics at intermediate and long length scales prior to switching to a Lennard-Jones potential. We outline two different equilibration protocols, which use various degrees of information to produce the starting configurations. In one protocol, we use only the equilibrium bond angle, bond length, and target density during the construction of the simulation cell, where the information is obtained from available experimental data and extracted from the force field without performing any prior simulation. In the second protocol, we moreover utilize the equilibrium radial distribution function and dihedral angle distribution. This information can be obtained from experimental data or from a simulation of short unentangled chains. Both methods can be used to prepare equilibrated and highly entangled systems, but the second protocol is much more computationally efficient. These systems can be strictly monodisperse or optionally polydisperse depending on the starting chain distribution. Our protocols, which utilize a soft-core harmonic potential, are applied for the first time to equilibrate a million-particle system of polyethylene chains consisting of 1000 united atoms at various temperatures. Calculations of structural and entanglement properties demonstrate that this method can be used as an alternative towards the generation of entangled equilibrium structures.
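    The key property of the soft-core harmonic potential mentioned above is that it stays finite at zero separation, so overlapping chains can pass through one another during pre-equilibration before the switch to Lennard-Jones. A sketch of one such functional form; the form and parameters are illustrative, not the paper's force field:

```python
import numpy as np

# Soft-core harmonic pair potential: finite (U = a) at r = 0, so overlapping
# beads are pushed apart gently instead of diverging, and zero beyond the
# cutoff rc. Parameters a and rc are illustrative placeholders.
def soft_core_harmonic(r, a=100.0, rc=1.0):
    """U(r) = a * (1 - r/rc)^2 for r < rc, else 0."""
    r = np.asarray(r, dtype=float)
    return np.where(r < rc, a * (1.0 - r / rc) ** 2, 0.0)
```

    Because the energy penalty for full overlap is bounded by `a`, molecular dynamics with this potential relaxes intermediate and long length scales quickly; the hard excluded volume is reintroduced only afterwards.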

  1. Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc

    Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are to either develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or coarse-grained methodologies that recast the problem in reduced spaces that can be exhaustively searched. This thesis will begin with a description of work carried out in the vein of the second approach, where a Smoluchowski diffusion equation model was developed that accurately reproduces the rate vs. force relationship observed in the mechano-catalytic disulphide bond cleavage observed in thioredoxin-catalyzed reduction of disulphide bonds. Next, three different novel enhanced sampling methods developed in the vein of the first approach will be described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.
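    The Smoluchowski diffusion model mentioned above has a standard one-dimensional form, shown here for reference with generic notation (the thesis's specific coordinates and parameters are not reproduced):

```latex
% One-dimensional Smoluchowski (overdamped diffusion) equation for the
% probability density p(x,t) on a potential U(x) tilted by an applied
% force F; D is the diffusion coefficient and beta = 1/(k_B T):
\frac{\partial p}{\partial t}
  = \frac{\partial}{\partial x}\, D \left[ \frac{\partial p}{\partial x}
  + \beta\, p\, \frac{\partial}{\partial x}\bigl( U(x) - F x \bigr) \right]
```

    The force-dependent tilt $-Fx$ is what lets such a model reproduce a rate-versus-force relationship like the one fitted for mechano-catalytic bond cleavage.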

  2. Telerehabilitation for Veterans with Combat Related TBI/PTSD

    DTIC Science & Technology

    2011-04-01

    unnecessary shearing forces and edema. Some patients with TBI also have SCI that may result in pressure ulcers. 1.1 Conceptual Model The conceptual...appointments with specialists, medication management and compliance, counseling, education and monitoring outcomes. The ARNP coordinates care for TBI...coordination Drug therapy: Drugs are frequently used in the management of common complications of polytrauma such as TBI, particularly for mood

  3. The United States Special Operations Command Civil Military Engagement Program - A Model for Military-Interagency Low Cost / Small Footprint Activities

    DTIC Science & Technology

    2014-05-02

    Interagency Coordination Centers (JIACs), Interagency Task Forces (IATFs) are found within GCCs and subordinate military units in an attempt to bridge...Interagency Task Forces (IATFs) that exist at each Geographic Combatant Command (GCC). Rather, this chapter serves to highlight the Civil Military

  4. Air Force Air Refueling for Naval Operations: History, Practice, and Recommendations

    DTIC Science & Technology

    1990-08-01

    Air Force Air Refueling for Naval Operations: History, Practice, and Recommendations ... Lt Col...as three hose reels provide redundancy over just one. 13. Be used in coordination with carrier-launched buddy tankers, thereby providing the

  5. Object Representation in Infants' Coordination of Manipulative Force

    ERIC Educational Resources Information Center

    Mash, Clay

    2007-01-01

    This study examined infants' use of object knowledge for scaling the manipulative force of object-directed actions. Infants 9, 12, and 15 months of age were outfitted with motion-analysis sensors on their arms and then presented with stimulus objects to examine individually over a series of familiarization trials. Two stimulus objects were used in…

  6. Application of Virtual World Technologies to Undersea Warfare Learning

    DTIC Science & Technology

    2009-08-20

    Virtual World Technologies (VWTs) "Using Virtual Worlds To Shape the Future" by Dr. Susan U. Stucky, IBM Almaden Research...JUAN HSI US MIL ANDS AIR FORCE USS SKIPJACK11 Beavertail Lighthouse iTP TP ObsF ObsE Coordinated Military Presence NAVY AIR FORCE ARMY MARINES & OTHER

  7. Defining College-Level Skills. Report of the Task Force on Definition of College-Level Skills.

    ERIC Educational Resources Information Center

    Minnesota Higher Education Coordinating Board, St. Paul.

    Recommendations concerning the reading, writing, and mathematics skills that are needed by students entering degree programs in Minnesota postsecondary institutions are offered by a Minnesota Higher Education Coordinating Board task force. In addition to describing reading skills that students need for most college degree programs, conditions…

  8. 3 CFR 13540 - Executive Order 13540 of April 26, 2010. Interagency Task Force on Veterans Small Business...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of America, including section 102 of title I of the Military Reservist and Veteran Small Business...) the General Services Administration; and (b) four representatives from a veterans' service or military... and military organizations in performing the duties of the Task Force; (b) coordinate administrative...

  9. Assessing the California Transfer Function: The Transfer Rate and Its Measurement. Conclusions of the Data Needs Task Force.

    ERIC Educational Resources Information Center

    Intersegmental Coordinating Council, Sacramento, CA.

    In the fall of 1989, the Intersegmental Coordinating Council organized the Data Needs Task Force (DNTF) to determine the feasibility of establishing a transfer rate definition. Specifically, the DNTF was charged with defining the information needed to strengthen intersegmental transfer programs, establishing common definitions (including…

  10. Characteristic Boundary Conditions for ARO-1

    DTIC Science & Technology

    1983-05-01

    As shown in Fig. 3, the point designated II is the interior point that was used to define the barred coordinate system, evaluated at time t=. All...L. Jacocks, Calspan Field Services, Inc. May 1983 Final Report for Period October 1981 - September 1982. Approved for public release; distribution unlimited. ARNOLD ENGINEERING DEVELOPMENT CENTER, ARNOLD AIR FORCE STATION, TENNESSEE, AIR FORCE SYSTEMS COMMAND, UNITED STATES AIR FORCE

  11. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to a lack of information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty.
    The usually high computational costs of accurate filtering become affordable within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.
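    Bootstrap filtering as invoked above is, at its core, importance reweighting of prior samples by the data likelihood followed by resampling. A toy sketch with a one-parameter linear forward model and synthetic numbers (nothing here is the paper's reservoir model):

```python
import numpy as np

# Minimal bootstrap-filter update: prior parameter samples (here scalar
# "permeability multipliers") are weighted by a Gaussian likelihood of an
# observed datum and then resampled in proportion to those weights.
# All numbers are synthetic.
rng = np.random.default_rng(0)

def bootstrap_update(prior_samples, predict, observed, noise_std):
    """Reweight prior samples by the likelihood of `observed`, then resample."""
    predictions = np.array([predict(s) for s in prior_samples])
    loglik = -0.5 * ((predictions - observed) / noise_std) ** 2
    w = np.exp(loglik - loglik.max())       # stabilized Gaussian weights
    w /= w.sum()
    idx = rng.choice(len(prior_samples), size=len(prior_samples), p=w)
    return np.asarray(prior_samples)[idx]

# Toy forward model: observed pressure is 10x the multiplier.
prior = rng.normal(1.0, 0.5, size=5000)
posterior = bootstrap_update(prior, lambda m: 10.0 * m,
                             observed=12.0, noise_std=1.0)
```

    In the aPC framework, the expensive `predict` call would be replaced by the cheap polynomial-chaos surrogate, which is what makes this brute-force update affordable.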

  12. Is muscle coordination affected by loading condition in ballistic movements?

    PubMed

    Giroux, Caroline; Guilhem, Gaël; Couturier, Antoine; Chollet, Didier; Rabita, Giuseppe

    2015-02-01

    This study aimed to investigate the effect of loading on lower limb muscle coordination involved during ballistic squat jumps. Twenty athletes performed ballistic squat jumps on a force platform. Vertical force, velocity, power and electromyographic (EMG) activity of lower limb muscles were recorded during the push-off phase and compared between seven loading conditions (0-60% of the concentric-only maximal repetition). The increase in external load increased vertical force (from 1962 N to 2559 N; P=0.0001), while movement velocity decreased (from 2.5 to 1.6 ms(-1); P=0.0001). EMG activity of tibialis anterior first peaked at 5% of the push-off phase, followed by gluteus maximus (35%), vastus lateralis and soleus (45%), rectus femoris (55%), gastrocnemius lateralis (65%) and semitendinosus (75%). This sequence of activation (P=0.67) and the amplitude of muscle activity (P=0.41) of each muscle were not affected by loading condition. However, a main effect of muscle was observed on these parameters (peak value: P<0.001; peak occurrence: P=0.02) illustrating the specific role of each muscle during the push-off phase. Our findings suggest that muscle coordination is not influenced by external load during a ballistic squat jump. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. On the correct representation of bending and axial deformation in the absolute nodal coordinate formulation with an elastic line approach

    NASA Astrophysics Data System (ADS)

    Gerstmayr, Johannes; Irschik, Hans

    2008-12-01

    In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been verified sufficiently against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more seriously, that the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant–Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be gained when applying the proposed modifications. The numerical examples show very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages over classical beam elements regarding energy-momentum conservation.
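    For reference, the strain energy of the planar extensible elastica that such corrections build on has the standard form below (generic notation, not quoted from the paper):

```latex
% Planar extensible elastica strain energy (standard form):
% epsilon = axial stretch of the centerline, kappa = material curvature,
% EA = axial stiffness, EI = bending stiffness, s = arc length.
W = \int_0^L \left( \tfrac{1}{2}\, EA\, \varepsilon^2
    + \tfrac{1}{2}\, EI\, \kappa^2 \right) \mathrm{d}s
```

    The separation of the axial and bending contributions is what guarantees a correct normal force, which the original curvature-proportional formulation lacked.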

  14. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
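    The VCI machinery described above (force-field expansion in a harmonic-oscillator basis, screening of negligible Hamiltonian elements, then diagonalization) can be shown in miniature for a single mode. The sketch below is an illustration of the technique with made-up parameters, not PyVCI's implementation:

```python
import numpy as np

# One-mode VCI sketch: a quartic force field k*x^4 added to a harmonic
# oscillator (unit mass, hbar = 1), expanded in the HO basis via the ladder
# matrix elements of x, with a crude magnitude screen before diagonalizing.
# omega, k, nbasis, and the screen threshold are illustrative values.
def vci_1d(omega=1.0, k=0.01, nbasis=30, screen=1e-12):
    n = np.arange(nbasis)
    # Position operator in the HO basis: x[n, n+1] = sqrt((n+1)/(2*omega)).
    x = np.zeros((nbasis, nbasis))
    off = np.sqrt((n[:-1] + 1) / (2.0 * omega))
    x[n[:-1], n[:-1] + 1] = off
    x[n[:-1] + 1, n[:-1]] = off
    h = np.diag(omega * (n + 0.5)) + k * np.linalg.matrix_power(x, 4)
    h[np.abs(h) < screen] = 0.0   # screen negligible matrix elements
    return np.linalg.eigvalsh(h)  # sorted vibrational energies

energies = vci_1d()
```

    For this weak perturbation the ground state lands near the first-order estimate 0.5 + 3k/4; real VCI codes apply the same pattern mode-by-mode with sparse storage because the basis grows combinatorially.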

  15. Computation of Relative Magnetic Helicity in Spherical Coordinates

    NASA Astrophysics Data System (ADS)

    Moraitis, Kostas; Pariat, Étienne; Savcheva, Antonia; Valori, Gherardo

    2018-06-01

    Magnetic helicity is a quantity of great importance in solar studies because it is conserved in ideal magnetohydrodynamics. While many methods for computing magnetic helicity in Cartesian finite volumes exist, in spherical coordinates, the natural coordinate system for solar applications, helicity is only treated approximately. We present here a method for properly computing the relative magnetic helicity in spherical geometry. The volumes considered are finite, of shell or wedge shape, and the three-dimensional magnetic field is considered to be fully known throughout the studied domain. Testing of the method with well-known, semi-analytic, force-free magnetic-field models reveals that it has excellent accuracy. Further application to a set of nonlinear force-free reconstructions of the magnetic field of solar active regions and comparison with an approximate method used in the past indicates that the proposed method can be significantly more accurate, thus making our method a promising tool in helicity studies that employ spherical geometry. Additionally, we determine and discuss the applicability range of the approximate method.
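    The relative magnetic helicity computed in finite-volume methods of this kind is usually the gauge-invariant Finn–Antonsen expression, reproduced here in standard notation for reference (not quoted from the paper):

```latex
% Gauge-invariant relative magnetic helicity (Finn & Antonsen 1985):
% B = curl A is the studied field; B_p = curl A_p is the reference
% (potential) field with the same normal component on the boundary of V.
H_R = \int_V \left( \mathbf{A} + \mathbf{A}_p \right) \cdot
      \left( \mathbf{B} - \mathbf{B}_p \right) \mathrm{d}V
```

    The practical difficulty the paper addresses is constructing the vector potentials A and A_p consistently in spherical shell or wedge volumes.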

  16. Haptic communication between humans is tuned by the hard or soft mechanics of interaction

    PubMed Central

    Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne

    2018-01-01

    To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966

  17. The Battle of Sukchon-Sunchon: Defensive, Encircled Forces; Allied Forces: 187th Airborne, RCT. Enemy Forces: North Korean, 239th RGT, 20-25 October 1950

    DTIC Science & Technology

    1984-05-23

    morale of the troops. 2. Leadership. The commander, executive officer, and fire support coordinator jumped from the first plane of the first serial. On...evening of 21 October 1950, from 2400 hours until 0400 hours, the mortar platoon of the Support Company was forced to cease firing because of low...only one fatality, which was caused by enemy fire. The Medical Company immediately started collecting medical bundles and caring for those injured in

  18. Force transmissibility versus displacement transmissibility

    NASA Astrophysics Data System (ADS)

    Lage, Y. E.; Neves, M. M.; Maia, N. M. M.; Tcherniak, D.

    2014-10-01

    It is well-known that when a single-degree-of-freedom (sdof) system is excited by a continuous motion of the foundation, the force transmissibility, relating the force transmitted to the foundation to the applied force, equals the displacement transmissibility. Recent developments in the generalization of the transmissibility to multiple-degree-of-freedom (mdof) systems have shown that similar simple and direct relations between both types of transmissibility do not appear naturally from the definitions, as happens in the sdof case. In this paper, the authors present their studies on the conditions under which it is possible to establish a relation between force transmissibility and displacement transmissibility for mdof systems. As far as the authors are aware, such a relation is not currently found in the literature, which is justified by being based on recent developments in the transmissibility concept for mdof systems. Indeed, it does not appear naturally, but the authors observed that the needed link is present when the displacement transmissibility is obtained between the same coordinates where the applied and reaction forces are considered in the force transmissibility case; this implies that the boundary conditions are not exactly the same and instead follow some rules. This work presents a formal derivation of the explicit relation between the force and displacement transmissibilities for mdof systems, and discusses its potential and limitations. The authors show that it is possible to obtain the displacement transmissibility from measured forces, and the force transmissibility from measured displacements, opening new perspectives, for example, in the identification of applied or transmitted forces. With this novel relation, it becomes possible, for example, to estimate the force transmissibility matrix with the structure off its supports, in free boundary conditions, and without measuring the forces. 
As far as force identification is concerned, this novel approach significantly decreases the computational effort when compared to conventional approaches, as it requires only local information of the sets of coordinates involved. Numerical simulations and experimental examples are presented and discussed, to illustrate the proposed developments.
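    For the sdof case invoked at the start of the abstract, the equality of force and displacement transmissibility reduces to one classical expression, which can be checked numerically (a generic textbook illustration, not the paper's mdof derivation):

```python
import numpy as np

def transmissibility(r, zeta):
    """Classical sdof transmissibility: ratio of transmitted to applied
    force (equivalently, of response to base displacement), with
    r = omega/omega_n and damping ratio zeta."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

# every damping curve crosses T = 1 at r = sqrt(2); isolation only beyond it
assert np.isclose(transmissibility(np.sqrt(2.0), 0.1), 1.0)
assert np.isclose(transmissibility(np.sqrt(2.0), 0.5), 1.0)
assert transmissibility(1.0, 0.1) > 1.0   # amplification near resonance
assert transmissibility(3.0, 0.1) < 1.0   # isolation well above resonance
```

    The point of the paper is precisely that no such single closed form carries over to mdof systems unless the displacement transmissibility is taken between the same coordinates used for the forces.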

  19. Multi-finger synergies and the muscular apparatus of the hand.

    PubMed

    Cuadra, Cristian; Bartsch, Angelo; Tiemann, Paula; Reschechtko, Sasha; Latash, Mark L

    2018-05-01

    We explored whether the synergic control of the hand during multi-finger force production tasks depends on the hand muscles involved. Healthy subjects performed accurate force production tasks and targeted force pulses while pressing against loops positioned at the level of fingertips, middle phalanges, and proximal phalanges. This varied the involvement of the extrinsic and intrinsic finger flexors. The framework of the uncontrolled manifold (UCM) hypothesis was used to analyze the structure of inter-trial variance, motor equivalence, and anticipatory synergy adjustments prior to the force pulse in the spaces of finger forces and finger modes (hypothetical finger-specific control signals). Subjects showed larger maximal force magnitudes at the proximal site of force production. There were synergies stabilizing total force during steady-state phases across all three sites of force production; no differences were seen across the sites in indices of structure of variance, motor equivalence, or anticipatory synergy adjustments. Indices of variance, which did not affect the task (within the UCM), correlated with motor equivalent motion between the steady states prior to and after the force pulse; in contrast, variance affecting task performance did not correlate with non-motor equivalent motion. The observations are discussed within the framework of hierarchical control with referent coordinates for salient effectors at each level. The findings suggest that multi-finger synergies are defined at the level of abundant transformation between the low-dimensional hand level and higher dimensional finger level while being relatively immune to transformations between the finger level and muscle level. The results also support the scheme of control with two classes of neural variables that define referent coordinates and gains in back-coupling loops between hierarchical control levels.
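    The uncontrolled-manifold variance partition behind these synergy indices can be sketched generically (a toy total-force task with a hypothetical Jacobian and synthetic trials, not the study's analysis code):

```python
import numpy as np

rng = np.random.default_rng(42)

def ucm_variance(deviations, J):
    """Partition per-trial deviations (trials x n effectors) into variance
    within the null space of the task Jacobian J (leaves the task variable
    unchanged) and variance orthogonal to it, each per dimension."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T                      # n x (n - rank) null-space basis
    dev_ucm = deviations @ N @ N.T       # component within the UCM
    dev_ort = deviations - dev_ucm
    trials, n = deviations.shape
    v_ucm = np.sum(dev_ucm ** 2) / (trials * (n - rank))
    v_ort = np.sum(dev_ort ** 2) / (trials * rank)
    return v_ucm, v_ort

# total-force task for four fingers: J = [1 1 1 1]; synthetic trials whose
# finger forces co-vary negatively, i.e. stabilize the sum
J = np.ones((1, 4))
base = rng.standard_normal((300, 4))
dev = base - 0.9 * base.mean(axis=1, keepdims=True)
dev -= dev.mean(axis=0)
v_ucm, v_ort = ucm_variance(dev, J)
assert v_ucm > v_ort   # force-stabilizing synergy: more variance in the UCM
```

    A synergy index is then typically a normalized difference of the two per-dimension variances; a positive index means most variability leaves total force unchanged.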

  20. Modular Organization of Exploratory Force Development Under Isometric Conditions in the Human Arm.

    PubMed

    Roh, Jinsook; Lee, Sang Wook; Wilger, Kevin D

    2018-01-31

    Muscle coordination of isometric force production can be explained by a smaller number of modules. Variability in force output, however, is higher during the exploratory/transient force development phases than during the force maintenance phase, and it is not clear whether the same modular structure underlies both phases. In this study, eight neurologically intact adults isometrically performed target-force matches in 54 directions at the hand, and electromyographic (EMG) data from eight muscles were parsed into four sequential phases. Despite the varying degree of motor complexity across phases (significant between-phase differences in EMG-force correlation, angular errors, and between-force correlations), the number and composition of motor modules were found to be equivalent across phases, suggesting that the CNS systematically modulated activation of the same set of motor modules throughout sequential force development.

  1. From brute luck to option luck? On genetics, justice, and moral responsibility in reproduction.

    PubMed

    Denier, Yvonne

    2010-04-01

    The structure of our ethical experience depends, crucially, on a fundamental distinction between what we are responsible for doing or deciding and what is given to us. As such, the boundary between chance and choice is the spine of our conventional morality, and any serious shift in that boundary is thoroughly dislocating. Against this background, I analyze the way in which techniques of prenatal genetic diagnosis (PGD) pose such a fundamental challenge to our conventional ideas of justice and moral responsibility. After a short description of the situation, I first examine the influential luck egalitarian theory of justice, which is based on the distinction between choice and luck or, more specifically, between option luck and brute luck, and the way in which it would approach PGD (section II), followed by an analysis of the conceptual incoherencies (in section III) and moral problems (in section IV) that come with such an approach. In short, the case of PGD shows that the luck egalitarian approach fails to express equal respect for the individual choices of people. The paradox of the matter is that by overemphasizing the fact of choice as such, without regard for the social framework in which choices are made, or for the fundamental and existential nature of particular choices, like choosing to have children and not to undergo PGD or not to abort a handicapped fetus, such choices actually become impossible.

  2. Prehension of Half-Full and Half-Empty Glasses: Time and History Effects on Multi-Digit Coordination

    PubMed Central

    Sun, Yao; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2011-01-01

    We explored how digit forces and indices of digit coordination depend on the history of reaching a particular set of task parameters during static prehension tasks. The participants held in the right hand an instrumented handle with a light-weight container attached on top of the handle. At the beginning of each trial, the container could be empty, half-filled with water (0.4 l), or filled to the top (0.8 l). The water was pumped in/out of the container at a constant, slow rate over 10 s. At the end of each trial, the participants always held a half-filled container that had just been filled (Empty-Half), emptied (Full-Half), or stayed half-filled throughout the trial (Half-Only). Indices of co-variation (synergy indices) of elemental variables (forces and moments of force produced by individual digits) stabilizing such performance variables as total normal force, total tangential force, and total moment of force were computed at two levels of an assumed control hierarchy. At the upper level, the task is shared between the thumb and virtual finger (an imagined digit with the mechanical action equal to that of the four fingers), while at the lower level, the action of the virtual finger is shared among the actual four fingers. Filling or emptying the container led to a drop in the safety margin (proportion of grip force over the slipping threshold) below the values observed in the Half-Only condition. Synergy indices at both levels of the hierarchy showed changes over the Full-Half and Empty-Half conditions. These changes could be monotonic (typical of moment of force and normal force) or non-monotonic (typical of tangential force). For both normal and tangential forces, higher synergy indices at the higher level of the hierarchy corresponded to lower indices at the lower level. Significant differences in synergy indices across conditions were seen at the final steady state, showing that digit coordination during steady holding of an object is history-dependent.
The observations support an earlier hypothesis on a trade-off between synergies at the two levels of a hierarchy. They also suggest that, when a change in task parameters is expected, the neural strategy may involve producing less stable (easier to change) actions. The results suggest that synergy indices may be highly sensitive to changes in a task variable and that effects of such changes persist after the changes are over. PMID:21331525
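    The safety margin admits a simple arithmetic illustration. On our reading of the definition (the fraction of grip force in excess of the slipping threshold, with hypothetical numbers, not the study's data):

```python
def safety_margin(f_normal, f_tangential, mu):
    """Safety margin: fraction of the grip (normal) force exceeding the
    minimum normal force needed to prevent slip, |f_tangential| / mu."""
    f_slip = abs(f_tangential) / mu   # slipping threshold for friction mu
    return (f_normal - f_slip) / f_normal

# hypothetical example: 10 N grip, 4 N load force, friction coefficient 0.8
sm = safety_margin(f_normal=10.0, f_tangential=4.0, mu=0.8)
assert abs(sm - 0.5) < 1e-12   # half the grip force is "spare"
```

    A filling or emptying container changes the tangential (load) force continuously, so a drop in this margin means grip force was not fully re-scaled to track the changing load.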

  3. Coordination and Data Management of the International Arctic Buoy Programme (IABP)

    DTIC Science & Technology

    2002-09-30

    for forcing, validation and assimilation into numerical climate models, and for forecasting weather and ice conditions. TRANSITIONS Using IABP...Coordination and Data Management of the International Arctic Buoy Programme (IABP), Ignatius G. Rigor, 1013 NE 40th Street, Polar Science Center...analyzed geophysical fields. APPROACH The IABP is a collaboration between 25 different institutions from 8 different countries, which work together

  4. U. S. GODAE: Global Ocean Prediction with the HYbrid Coordinate Ocean Model

    DTIC Science & Technology

    2009-01-01

    2008). There are three major contributors to the strength of the Gulf Stream, (1) the wind forcing, (2) the Atlantic meridional overturning ...Smith, 2007. Resolution convergence and sensitivity studies with North Atlantic circulation models. Part I. The western boundary current system...σ-z coordinates, and (3) a baroclinic version of ADvanced CIRCulation (ADCIRC), the latter an unstructured grid model for baroclinic coastal

  5. Is the Unfolding of the Group Discussion Off-Pattern? Improving Coordination Support in Educational Forums Using Mobile Devices

    ERIC Educational Resources Information Center

    Gerosa, Marco Aurelio; Filippo, Denise; Pimentel, Mariano; Fuks, Hugo; Lucena, Carlos J. P.

    2010-01-01

    A forum is a valuable tool to foster reflection in an in-depth discussion; however, it forces the course mediator to continually pay close attention in order to coordinate learners' activities. Moreover, monitoring a forum is time consuming given that it is impossible to know in advance when new messages are going to be posted. Additionally, a…

  6. Assessment of DoD Wounded Warrior Matters -- Camp Lejeune

    DTIC Science & Technology

    2012-03-30

    steadfast to serve the total Wounded, Ill and Injured (WII) force: active duty, reserve, retired, and veteran Marines." Wounded Warrior...to a Physical Evaluation Board. During our site visit, we observed a 9-Block meeting, which was chaired by the WWBn-East Executive...Support Coordinator, Medical Case Managers (Naval Hospital), Recovery Care Coordinators, Medical Board Clerk. The Medical Case Management Advisor

  7. A Simulation Study of the Overdetermined Geodetic Boundary Value Problem Using Collocation

    DTIC Science & Technology

    1989-03-01

    APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED. GEOPHYSICS LABORATORY, AIR FORCE SYSTEMS COMMAND, UNITED STATES AIR FORCE, HANSCOM AIR FORCE BASE...linearized integral equation is obtained through an infinite system of integral equations which is solved step by step by means of Stokes’ function. The...computed. Since g and W = W(g) are known on the boundary, then the boundary is known in the new coordinate system. The serious disadvantage of this

  8. A Women-Only Comparison of the U.S. Air Force Fitness Test and the Marine Combat Fitness Test

    DTIC Science & Technology

    2012-03-01

    Air Force established the Fitness Assessment Cell to conduct fitness assessments for all Air Force members and to encourage standardization in...objective. “The MCFT was specifically designed to evaluate strength, stamina, agility and coordination as well as overall anaerobic capacity” (Department...1308.1, “Service members must possess stamina and strength to perform, successfully, any mission,” and that “…each service develops a quality

  9. Sample positioning in microgravity

    NASA Technical Reports Server (NTRS)

    Sridharan, Govind (Inventor)

    1991-01-01

    Repulsion forces arising from laser beams are provided to produce mild positioning forces on a sample in microgravity vacuum environments. The system of the preferred embodiment positions samples using a plurality of pulsed lasers providing opposing repulsion forces. The lasers are positioned around the periphery of a confinement area and expanded to create a confinement zone. The grouped laser configuration, in coordination with position sensing devices, creates a feedback servo whereby stable position control of a sample within microgravity environment can be achieved.
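    The feedback servo described in the patent abstract can be caricatured as a one-axis proportional-derivative loop, in which the two opposing beams supply a net restoring force scaled by the sensed position error (purely illustrative gains and dynamics, not the patented design):

```python
def settle(x0, v0, kp=4.0, kd=3.0, dt=0.01, steps=5000):
    """Drive a unit-mass sample toward the confinement-zone center with a
    PD law: the net of the two opposing laser repulsion forces is modeled
    as -kp*x - kd*v (hypothetical gains)."""
    x, v = x0, v0
    for _ in range(steps):
        force = -kp * x - kd * v   # net repulsion from the opposing beams
        v += force * dt            # simple explicit integration
        x += v * dt
    return x, v

# sample released 1 unit off-center comes to rest at the center
x, v = settle(x0=1.0, v0=0.0)
assert abs(x) < 1e-3 and abs(v) < 1e-3
```

    With position sensing closing the loop, even weak photon-pressure forces suffice in microgravity because there is no gravitational bias to fight, only residual drift.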

  10. Sample positioning in microgravity

    NASA Technical Reports Server (NTRS)

    Sridharan, Govind (Inventor)

    1993-01-01

    Repulsion forces arising from laser beams are provided to produce mild positioning forces on a sample in microgravity vacuum environments. The system of the preferred embodiment positions samples using a plurality of pulsed lasers providing opposing repulsion forces. The lasers are positioned around the periphery of a confinement area and expanded to create a confinement zone. The grouped laser configuration, in coordination with position sensing devices, creates a feedback servo whereby stable position control of a sample within microgravity environment can be achieved.

  11. Determination of Ammunition Training Rates for Marine Forces Study. Volume 2.

    DTIC Science & Technology

    1983-09-17

    Determination of Ammunition Training Rates for Marine Forces Study, Volume 2 (U), Marine Corps Development and Education Command...Study - Vol II, Lieutenant Colonel R. J. Yeoman, Deputy Chief of Staff for Developmental Coordination, Development Center, Marine Corps...Determination of Ammunition Training Rates for Marine Forces Study, Decision Memorandum. Author: LtCol R. J. Yeoman

  12. Sensor Prototype to Evaluate the Contact Force in Measuring with Coordinate Measuring Arms

    PubMed Central

    Cuesta, Eduardo; Telenti, Alejandro; Patiño, Hector; González-Madruga, Daniel; Martínez-Pellitero, Susana

    2015-01-01

    This paper describes the design, development and evaluation tests of an integrated force sensor prototype for portable Coordinate Measuring Arms (CMAs or AACMMs). The development is based on the use of strain gauges located on the surface of the CMAs’ hard probe. The strain gauges as well as their cables and connectors have been protected with a custom case, made by Additive Manufacturing techniques (Polyjet 3D). The same method has been selected to manufacture an ergonomic handle that includes trigger mechanics and the electronic components required for synchronizing the trigger signal when probing occurs. The paper also describes the monitoring software that reads the signals in real time, the calibration procedure of the prototype, and the validation tests oriented towards increasing knowledge of the forces employed in manual probing. Several experiments read and record the force in real time, comparing different ways of probing (discontinuous and continuous contact) and measuring different types of geometric features, from single planes to exterior cylinders, cones, or spheres, through interior features. The probing force is separated into two components, allowing the influence of these strategies on probe deformation to be known. The final goal of this research is to improve the probing technique, for example by using an operator training programme, allowing extra-force peaks and bad contacts to be minimized or simply to avoid bad measurements. PMID:26057038
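    Separating the probing force into two components presumably amounts to a normal/tangential decomposition against the probed surface; a generic sketch (hypothetical readings, not the authors' calibration code):

```python
import numpy as np

def split_force(f, n):
    """Split a measured contact force f into its component along the unit
    surface normal n and the tangential remainder."""
    n = n / np.linalg.norm(n)
    f_n = np.dot(f, n) * n    # normal component (projection onto n)
    f_t = f - f_n             # tangential component (in the surface plane)
    return f_n, f_t

f = np.array([1.0, 2.0, 2.0])   # hypothetical force reading, newtons
n = np.array([0.0, 0.0, 1.0])   # probed plane's normal
f_n, f_t = split_force(f, n)
assert np.allclose(f_n, [0.0, 0.0, 2.0])
assert np.allclose(f_t, [1.0, 2.0, 0.0])
# the two components are orthogonal, so magnitudes obey Pythagoras
assert np.isclose(np.linalg.norm(f) ** 2,
                  np.linalg.norm(f_n) ** 2 + np.linalg.norm(f_t) ** 2)
```

    Tracking the tangential component separately is what reveals dragging during continuous-contact scanning, which the normal component alone would hide.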

  13. Robustness of muscle synergies underlying three-dimensional force generation at the hand in healthy humans

    PubMed Central

    Rymer, William Z.; Beer, Randall F.

    2012-01-01

    Previous studies using advanced matrix factorization techniques have shown that the coordination of human voluntary limb movements may be accomplished using combinations of a small number of intermuscular coordination patterns, or muscle synergies. However, the potential use of muscle synergies for isometric force generation has been evaluated mostly using correlational methods. The results of such studies suggest that fixed relationships between the activations of pairs of muscles are relatively rare. There is also emerging evidence that the nervous system uses independent strategies to control movement and force generation, which suggests that one cannot conclude a priori that isometric force generation is accomplished by combining muscle synergies, as shown in movement control. In this study, we used non-negative matrix factorization to evaluate the ability of a few muscle synergies to reconstruct the activation patterns of human arm muscles underlying the generation of three-dimensional (3-D) isometric forces at the hand. Surface electromyographic (EMG) data were recorded from eight key elbow and shoulder muscles during 3-D force target-matching protocols performed across a range of load levels and hand positions. Four synergies were sufficient to explain, on average, 95% of the variance in EMG datasets. Furthermore, we found that muscle synergy composition was conserved across biomechanical task conditions, experimental protocols, and subjects. Our findings are consistent with the view that the nervous system can generate isometric forces by assembling a combination of a small number of muscle synergies, differentially weighted according to task constraints. PMID:22279190
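    The non-negative matrix factorization used here can be sketched with the classic Lee-Seung multiplicative updates (a generic implementation on synthetic rank-4 data, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=500, eps=1e-9):
    """Multiplicative-update NMF minimizing ||V - W H||_F. Rows of H play
    the role of muscle synergies; W holds their activation coefficients."""
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# synthetic "EMG": 200 samples of 8 muscles built from 4 synergies
W_true = rng.random((200, 4))
H_true = rng.random((4, 8))
V = W_true @ H_true
W, H = nmf(V, k=4)
vaf = 1.0 - np.linalg.norm(V - W @ H) ** 2 / np.linalg.norm(V) ** 2
assert vaf > 0.9          # four synergies reconstruct most of the variance
assert W.min() >= 0.0 and H.min() >= 0.0   # updates preserve nonnegativity
```

    In the study, the analogous variance-accounted-for criterion (about 95% with four synergies) is what fixes the number of modules.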

  14. The Relationship Between Sports Participation and Managerial Behavior: An Exploratory Study

    DTIC Science & Technology

    1986-09-01

    Response...Analysis...T-Test Decision Criteria...magnitude and the limited availability of Air Force resources managed by its officer corps. Air Force officers are charged with the responsibility and...successful organization, the SPO requires careful definition of authority and responsibility as well as strenuous efforts toward coordination, teamwork and

  15. 75 FR 51287 - Agreements in Force as of December 31, 2009, Between the American Institute in Taiwan and the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-19

    ... concerning the change in the name of the Coordination Council for North American Affairs (CCNAA) to the... Taipei American School, with annex. Signed at Taipei February 3, 1983. Entered into force February 3... implementation of the 1969 international convention on tonnage measurement. Effected by exchange of letters at...

  16. Projectile Motion with a Drag Force: Were the Medievals Right After All?

    ERIC Educational Resources Information Center

    La Rocca, Paola; Riggi, Francesco

    2009-01-01

    An educational and historical study of the projectile motion with drag forces dependent on speed shows, by simple results, that trajectories quite similar to those depicted before the Galilean era may be obtained with a realistic choice of quantities involved. Numerical simulations of the trajectory in space and velocity coordinates help us to…
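    A drag force proportional to speed squared already produces the asymmetric, "medieval-looking" trajectories the abstract describes; a minimal numerical sketch (hypothetical parameters):

```python
import math

def horizontal_range(v0, angle_deg, k=0.02, dt=1e-3, g=9.81):
    """Euler integration of projectile motion with drag F = -k |v| v
    (k per unit mass, 1/m); returns the horizontal range."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:                      # integrate until landing
        speed = math.hypot(vx, vy)
        vx += -k * speed * vx * dt
        vy += (-g - k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

x_drag = horizontal_range(50.0, 45.0, k=0.02)
x_vac = horizontal_range(50.0, 45.0, k=0.0)
assert x_drag < x_vac                          # drag shortens the range
assert abs(x_vac - 50.0 ** 2 / 9.81) < 2.0     # matches v0^2/g in vacuum
```

    Plotting x and y from such a run shows the steep, nearly vertical descent leg that pre-Galilean sketches exaggerated but did not wholly invent.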

  17. Dispersion forces play a role in (Me2IPr)Fe(NAd)R2 (Ad = adamantyl; R = neoPe, 1-nor) insertions and Fe–R bond dissociation enthalpies (BDEs)

    DOE PAGES

    Cundari, Thomas R.; Jacobs, Brian P.; MacMillan, Samantha N.; ...

    2018-01-01

    Calculations show that dispersion forces in four-coordinate (Me2IPr)Fe(NAd)(1-nor)2 (2b) contribute to greater D(Fe–R) and subtly slow its migratory insertion relative to the neopentyl analogue.

  18. Computer Solution of the Two-Dimensional Tether Ball: Problem to Illustrate Newton's Second Law.

    ERIC Educational Resources Information Center

    Zimmerman, W. Bruce

    Force diagrams involving angular velocity, linear velocity, centripetal force, work, and kinetic energy are given with related equations of motion expressed in polar coordinates. The computer is used to solve differential equations, thus reducing the mathematical requirements of the students. An experiment is conducted using an air table to check…

  19. Dispersion forces play a role in (Me2IPr)Fe(NAd)R2 (Ad = adamantyl; R = neoPe, 1-nor) insertions and Fe–R bond dissociation enthalpies (BDEs)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cundari, Thomas R.; Jacobs, Brian P.; MacMillan, Samantha N.

    Calculations show that dispersion forces in four-coordinate (Me2IPr)Fe(NAd)(1-nor)2 (2b) contribute to greater D(Fe–R) and subtly slow its migratory insertion relative to the neopentyl analogue.

  20. Toward a New Definition of Employability. Report by the North Central Indiana Workforce Literacy Task Force.

    ERIC Educational Resources Information Center

    Center for Remediation Design, Washington, DC.

    The North Central Indiana Workplace Literacy Initiative seeks to develop a curriculum management system addressing work force literacy needs and a coordinated human resource investment system meeting individual economic self-sufficiency needs and labor market needs. The workplace of the future will contain six key changes: employers will require…

  1. Alignment of cellular motility forces with tissue flow as a mechanism for efficient wound healing

    PubMed Central

    Basan, Markus; Elgeti, Jens; Hannezo, Edouard; Rappel, Wouter-Jan; Levine, Herbert

    2013-01-01

    Recent experiments have shown that spreading epithelial sheets exhibit a long-range coordination of motility forces that leads to a buildup of tension in the tissue, which may enhance cell division and the speed of wound healing. Furthermore, the edges of these epithelial sheets commonly show finger-like protrusions whereas the bulk often displays spontaneous swirls of motile cells. To explain these experimental observations, we propose a simple flocking-type mechanism, in which cells tend to align their motility forces with their velocity. Implementing this idea in a mechanical tissue simulation, the proposed model gives rise to efficient spreading and can explain the experimentally observed long-range alignment of motility forces in highly disordered patterns, as well as the buildup of tensile stress throughout the tissue. Our model also qualitatively reproduces the dependence of swirl size and swirl velocity on cell density reported in experiments and exhibits an undulation instability at the edge of the spreading tissue commonly observed in vivo. Finally, we study the dependence of colony spreading speed on important physical and biological parameters and derive simple scaling relations that show that coordination of motility forces leads to an improvement of the wound healing process for realistic tissue parameters. PMID:23345440
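    The proposed flocking-type mechanism, cells turning their motility force toward their velocity, can be sketched with a toy angular relaxation (illustrative rates and noise, not the paper's mechanical tissue simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def align_step(theta_force, theta_vel, rate=0.2, noise=0.05):
    """One relaxation step: each cell turns its motility-force direction
    toward its current velocity direction, plus a little angular noise."""
    dtheta = np.angle(np.exp(1j * (theta_vel - theta_force)))  # wrapped diff
    return theta_force + rate * dtheta + noise * rng.standard_normal(theta_force.shape)

def order(theta):
    """Polar order parameter: 1 for perfect alignment, ~0 for random."""
    return abs(np.exp(1j * theta).mean())

n = 500
theta_f = rng.uniform(-np.pi, np.pi, n)   # random initial force directions
theta_v = np.zeros(n)                     # tissue flowing toward +x
before = order(theta_f)
for _ in range(100):
    theta_f = align_step(theta_f, theta_v)
after = order(theta_f)
assert after > 0.9 > before   # motility forces align with the flow
```

    In the full model, the velocity field itself responds to the aligned forces, which is what builds up the long-range tension and the edge fingering described above.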

  2. Task-specific stability in muscle activation space during unintentional movements.

    PubMed

    Falaki, Ali; Towhidkhah, Farzad; Zhou, Tao; Latash, Mark L

    2014-11-01

    We used robot-generated perturbations applied during position-holding tasks to explore the stability of induced unintentional movements in a multidimensional space of muscle activations. Healthy subjects held the handle of a robot against a constant bias force and were instructed not to interfere with hand movements produced by changes in the external force. Transient force changes were applied, leading to handle displacement away from the initial position and then back toward the initial position. Intertrial variance in the space of muscle modes (eigenvectors in the muscle activation space) was quantified within two subspaces, corresponding to an unchanged handle coordinate and to changes in the handle coordinate. Most variance was confined to the former subspace in each of the three phases of movement: the initial steady state, the intermediate position, and the final steady state. The same result was found when the changes in muscle activation were analyzed between the initial and final steady states. Changes in the dwell time between the perturbation force application and removal led to different final hand locations undershooting the initial position. The magnitude of the undershoot scaled with the dwell time, while the structure of variance in the muscle activation space did not depend on the dwell time. We conclude that stability of the hand coordinate is ensured during both intentional and unintentional actions via similar mechanisms. Relative equifinality in the external space after transient perturbations may be associated with varying states in the redundant space of muscle activations. The results fit a hierarchical scheme for the control of voluntary movements with referent configurations and redundant mapping between the levels of the hierarchy.

  3. Intermanual transfer characteristics of dynamic learning: direction, coordinate frame, and consolidation of interlimb generalization

    PubMed Central

    Thürer, Benjamin; Focke, Anne; Stein, Thorsten

    2015-01-01

    Intermanual transfer, i.e., generalization of motor learning across hands, is a well-accepted phenomenon of motor learning. Yet, there are open questions regarding the characteristics of this transfer, particularly the intermanual transfer of dynamic learning. In this study, we investigated intermanual transfer in a force field adaptation task concerning the direction and the coordinate frame of transfer as well as the influence of a 24-h consolidation period on the transfer. We tested 48 healthy human subjects for transfer from dominant to nondominant hand, and vice versa. We considered two features of transfer. First, we examined transfer to the untrained hand using force channel trials that suppress error feedback and learning mechanisms to assess intermanual transfer in the form of a practice-dependent bias. Second, we considered transfer by exposing the subjects to the force field with the untrained hand to check for faster learning of the dynamics (interlimb savings). Half of the subjects were tested for transfer immediately after adaptation, whereas the other half were tested after a 24-h consolidation period. Our results showed intermanual transfer both from dominant to nondominant hand and vice versa in extrinsic coordinates. After the consolidation period, transfer effects were weakened. Moreover, the transfer effects were negligible compared with the subjects' ability to rapidly adapt to the force field condition. We conclude that intermanual transfer is a bidirectional phenomenon that vanishes with time. However, the ability to transfer motor learning seems to play a minor role compared with the rapid adaptation processes. PMID:26424581

  4. Cell-cell and cell-extracellular matrix adhesions cooperate to organize actomyosin networks and maintain force transmission during dorsal closure.

    PubMed

    Goodwin, Katharine; Lostchuck, Emily E; Cramb, Kaitlyn M L; Zulueta-Coarasa, Teresa; Fernandez-Gonzalez, Rodrigo; Tanentzapf, Guy

    2017-05-15

    Tissue morphogenesis relies on the coordinated action of actin networks, cell-cell adhesions, and cell-extracellular matrix (ECM) adhesions. Such coordination can be achieved through cross-talk between cell-cell and cell-ECM adhesions. Drosophila dorsal closure (DC), a morphogenetic process in which an extraembryonic tissue called the amnioserosa contracts and ingresses to close a discontinuity in the dorsal epidermis of the embryo, requires both cell-cell and cell-ECM adhesions. However, whether the functions of these two types of adhesions are coordinated during DC is not known. Here we analyzed possible interdependence between cell-cell and cell-ECM adhesions during DC and its effect on the actomyosin network. We find that loss of cell-ECM adhesion results in aberrant distributions of cadherin-mediated adhesions and actin networks in the amnioserosa and subsequent disruption of myosin recruitment and dynamics. Moreover, loss of cell-cell adhesion caused up-regulation of cell-ECM adhesion, leading to reduced cell deformation and force transmission across amnioserosa cells. Our results show how interdependence between cell-cell and cell-ECM adhesions is important in regulating cell behaviors, force generation, and force transmission critical for tissue morphogenesis. © 2017 Goodwin, Lostchuck, et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  5. Use of DoD Architectural Framework in Support of JFIIT Assessments

    DTIC Science & Technology

    2007-06-12

    ACCA ASCA Div FSCA BCT/Regt FSCA Bn FSCA TACP TACP TACP JFO/Observer Friendly Forces Air RCA OV-1 for TA 3.2.2 Conduct Close Air Support OV-1 for TA...3.2.2 Conduct Close Air Support Ground RCA ISR FSCA/ ACCA CAS Aircraft FAC(A) Indirect Surface Fires Hostile Targets WOC TACP GLO Legend ACCA Air...FAC(A)/CAS Aircrew A3.1.4 Control CAS A3.2.1 Coordinate with WOC/ ACCA /ASCA/ACA A3.2.2 Coordinate with JTAC A3.2.3 Provide CAS A3.3.1 Coordinate with

  6. A single particle model to simulate the dynamics of entangled polymer melts.

    PubMed

    Kindt, P; Briels, W J

    2007-10-07

    We present a computer simulation model of polymer melts representing each chain as one single particle. Besides the position coordinate of each particle, we introduce a parameter n(ij) for each pair of particles i and j within a specified distance from each other. These numbers, called entanglement numbers, describe the deviation of the system of ignored coordinates from its equilibrium state for the given configuration of the centers of mass of the polymers. The deviations of the entanglement numbers from their equilibrium values give rise to transient forces, which, together with the conservative forces derived from the potential of mean force, govern the displacements of the particles. We have applied our model to a melt of C(800)H(1602) chains at 450 K and have found good agreement with experiments and more detailed simulations. Properties addressed in this paper are radial distribution functions, dynamic structure factors, and linear as well as nonlinear rheological properties.

  7. Prediction of muscle activation for an eye movement with finite element modeling.

    PubMed

    Karami, Abbas; Eghtesad, Mohammad; Haghpanah, Seyyed Arash

    2017-10-01

In this paper, a 3D finite element (FE) model is employed to predict extraocular muscles' activation and to investigate force coordination in various motions of the eye orbit. A continuum constitutive hyperelastic model is employed for the material description in dynamic modeling of the extraocular muscles (EOMs). Two significant features of this model are accurate mass modeling with the FE method and stimulation of the EOMs through a muscle activation parameter. In order to validate the eye model, a forward dynamics simulation of the eye motion is carried out by varying the muscle activation. Furthermore, to predict muscle activation in various eye motions, two different tracking-based inverse controllers are proposed. The performance of these two inverse controllers is assessed according to the resulting muscle force magnitudes and muscle force coordination. The simulation results are compared with the available experimental data and well-known existing neurological laws. The comparison supports both the validation and the prediction results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Operational Alignment in Predator Training Research

    DTIC Science & Technology

    2014-04-21

    to the SO and, down the road, the Mission Intelligence Coordinator ( MIC ). 2. X-Plane 9 is COTS software and serves as the aeronautical simulation...uses to communicate with ground forces, command and control (C2) elements, and the Mission Intelligence Coordinator ( MIC ), among others. 10. LNCS...because key events can be scripted and initiated as appropriate, while distractor events like a villager walking around a market can be controlled by

  9. Multi-limbed locomotion systems for space construction and maintenance

    NASA Technical Reports Server (NTRS)

    Waldron, K. J.; Klein, C. A.

    1987-01-01

A well-developed technology for the coordination of multi-limbed locomotion systems is now available. Results from a NASA-sponsored study of several years ago are presented. This was a simulation study of a three-limbed locomotion/manipulation system. Each limb had six degrees of freedom and could be used either as a locomotory limb grasping hand-holds, or as a manipulator. The focus of the study was kinematic coordination algorithms. The presentation also includes very recent results from the Adaptive Suspension Vehicle project. The Adaptive Suspension Vehicle (ASV) is a legged locomotion system designed for terrestrial use which is capable of operating in completely unstructured terrain in either a teleoperated or operator-on-board mode. Future development may include autonomous operation. The ASV features a very advanced coordination and control system which could readily be adapted to operation in space. An inertial package with a vertical gyro, and rate gyros and accelerometers on three orthogonal axes, provides body position information at high bandwidth. This is compared with the operator's commands, injected via a joystick, to produce a commanded force system on the vehicle's body. This commanded force system is, in turn, decomposed by a coordination algorithm into force commands to those legs which are in contact with the ground.

  10. New coordination features; a bridging pyridine and the forced shortest non-covalent distance between two CO3 2– species

    PubMed Central

    Velasco, V.; Aguilà, D.; Barrios, L. A.; Borilovic, I.; Roubeau, O.; Ribas-Ariño, J.; Fumanal, M.; Teat, S. J.

    2015-01-01

The aerobic reaction of the multidentate ligand 2,6-bis-(3-oxo-3-(2-hydroxyphenyl)-propionyl)-pyridine, H4L, with Co(ii) salts under strongly basic conditions produces the clusters [Co4(L)2(OH)(py)7]NO3 (1) and [Co8Na4(L)4(OH)2(CO3)2(py)10](BF4)2 (2). Analysis of their structures unveils unusual coordination features, including a very rare bridging pyridine ligand and two carbonate anions trapped within one coordination cage, forced to stay at an extremely close distance (d O···O = 1.946 Å). This unprecedented non-bonding proximity represents a meeting point between long covalent interactions and “intermolecular” contacts. These original motifs have been analysed here through DFT calculations, which have yielded interaction energies and the reduced repulsion energy experienced by both CO3 2– anions when located in close proximity inside the coordination cage. PMID:28616127

  11. Can the Army Provide Bulk Petroleum Support to Joint Force 2020?

    DTIC Science & Technology

    2013-03-01

    Petroleum Officer (JPO) and one or more Sub Area Petroleum Officers ( SAPO ). The JPO coordinates petroleum support to all forces in a theater on behalf...position is the SAPO , established by the Combatant Commander or a Joint Force Commander (JFC) to fulfill bulk petroleum planning and execution in a...section of the theater for which the JPO is responsible.7 A key duty of the SAPO is to advise the JFC and his/her staff on petroleum logistics

  12. Isothermal magnetostatic atmospheres. III - Similarity solutions with current proportional to the magnetic potential squared. [in solar corona

    NASA Technical Reports Server (NTRS)

    Webb, G. M.

    1988-01-01

    A family of isothermal magnetostatic atmospheres with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry is considered. It is assumed that the current (J) is proportional to the square of the magnetostatic potential and falls off exponentially with distance. Results are presented for the contributions of the anisotropic J x B force (where B is the magnetic field induction), the gravitational force, and the gas pressure gradient to the force balance.

  13. Effects of adenosine triphosphate concentration on motor force regulation during skeletal muscle contraction

    NASA Astrophysics Data System (ADS)

    Wei, J.; Dong, C.; Chen, B.

    2017-04-01

    We employ a mechanical model of sarcomere to quantitatively investigate how adenosine triphosphate (ATP) concentration affects motor force regulation during skeletal muscle contraction. Our simulation indicates that there can be negative cross-bridges resisting contraction within the sarcomere and higher ATP concentration would decrease the resistance force from negative cross-bridges by promoting their timely detachment. It is revealed that the motor force is well regulated only when ATP concentration is above a certain level. These predictions may provide insights into the role of ATP in regulating coordination among multiple motors.

  14. Unification of Forces: The Road to Jointness?

    DTIC Science & Technology

    1991-05-15

tend to resist large change--or innovation. Because organizations value "predictability, stability, and certainty," incremental change is the...preferred mode of behavior for organizations.29 Unification of the forces would be a large, rather than an incremental, change; thus, the services would...coordinating planning and budgeting, providing unified direction, accounting and controlling weapons and equipment acquisition, eliminating duplication of

  15. Development and Experimental Evaluation of a Retrieval System for Air Force Control Display Information. Final Report.

    ERIC Educational Resources Information Center

    Debons, Anthony; and Others

    A proposed classification system was studied to determine its efficacy to the Air Force Control-Display Area. Based on negative outcomes from a logical assessment of the proposed system, an alternate system was proposed to include the coordinate index concept. Upon development of a thesaurus and an index system for 106 documents on VSTOL/VTOL…

  16. Improving anterior deltoid activity in a musculoskeletal shoulder model - an analysis of the torque-feasible space at the sternoclavicular joint.

    PubMed

    Ingram, David; Engelhardt, Christoph; Farron, Alain; Terrier, Alexandre; Müllhaupt, Philippe

    2016-01-01

Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to the necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.

  17. Effect of cation-anion interactions on the structural and vibrational properties of 1-butyl-3-methylimidazolium nitrate ionic liquid

    NASA Astrophysics Data System (ADS)

    Kausteklis, Jonas; Aleksa, Valdemaras; Iramain, Maximiliano A.; Brandán, Silvia Antonia

    2018-07-01

The cation-anion interactions present in the 1-butyl-3-methylimidazolium nitrate ionic liquid [BMIm][NO3] were studied by using density functional theory (DFT) calculations, the experimental FT-Raman spectrum in the liquid phase, and its available FT-IR spectrum. For the three most stable conformers found on the potential energy surface and their 1-butyl-3-methylimidazolium [BMIm] cation, the atomic charges, molecular electrostatic potentials, stabilization energies, bond orders and topological properties were computed by using NBO and AIM calculations and the hybrid B3LYP level of theory with the 6-31G* and 6-311++G** basis sets. The force fields, force constants and complete vibrational assignments were also reported for those species by using their internal coordinates and the scaled quantum mechanical force field (SQMFF) approach. The dimeric species of [BMIm][NO3] were also considered because their presence could probably explain the most intense bands observed at 1344 and 1042 cm-1 in the experimental FT-IR and FT-Raman spectra, respectively. The geometrical parameters suggest monodentate cation-anion coordination, while the charge, NBO and AIM analyses support bidentate coordination between those two species. Additionally, several quantum chemical descriptors were also calculated in order to interpret various molecular properties, such as the electronic structure and reactivity of those species, and to predict their gas phase behaviours.

  18. An Overview of the NOAA Drought Task Force

    NASA Technical Reports Server (NTRS)

    Schubert, S.; Mo, K.; Peters-Lidard, C.; Wood, A.

    2012-01-01

The charge of the NOAA Drought Task Force is to coordinate and facilitate the various MAPP-funded research efforts with the overall goal of achieving significant advances in understanding and in the ability to monitor and predict drought over North America. In order to achieve this, the task force has developed a Drought Test-bed that individual research groups can use to test/evaluate methods and ideas. Central to this is a focus on three high profile North American droughts (1998-2004 western US drought, 2006-2007 SE US drought, 2011-current Tex-Mex drought) to facilitate collaboration among projects, including the development of metrics to assess the quality of monitoring and prediction products, and the development of an experimental drought monitoring and prediction system that incorporates and assesses recent advances. This talk will review the progress and plans of the task force, including efforts to help advance official national drought products, and the development of early warning systems by the National Integrated Drought Information System (NIDIS). Coordination with other relevant national and international efforts such as the emerging NMME capabilities and the international effort to develop a Global Drought Information System (GDIS) will be discussed.

  19. Fine structure in the transition region: reaction force analyses of water-assisted proton transfers.

    PubMed

    Yepes, Diana; Murray, Jane S; Santos, Juan C; Toro-Labbé, Alejandro; Politzer, Peter; Jaque, Pablo

    2013-07-01

We have analyzed the variation of the reaction force F(ξ) and the reaction force constant κ(ξ) along the intrinsic reaction coordinates ξ of the water-assisted proton transfer reactions of HX-N=Y (X, Y = O, S). The profile of the force constant of the vibration associated with the reactive mode, kξ(ξ), was also determined. We compare our results with the corresponding intramolecular proton transfers in the absence of a water molecule. The presence of water promotes the proton transfers, decreasing the energy barriers by about 12-15 kcal mol(-1). This is due in part to the much smaller bond angle changes needed than when water is absent. The κ(ξ) profiles along the intrinsic reaction coordinates for the water-assisted processes show striking and intriguing differences in the transition regions. For the HS-N=S and HO-N=S systems, two κ(ξ) minima are obtained, whereas for HO-N=O only one minimum is found. The kξ(ξ) profiles show similar behavior in the transition regions. We propose that this fine structure reflects the degree of synchronicity of the two proton migrations in each case.
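The quantities involved have standard definitions: the reaction force is F(ξ) = -dE/dξ and the reaction force constant is κ(ξ) = -dF/dξ = d²E/dξ², both taken along the intrinsic reaction coordinate. A minimal finite-difference sketch on a model energy profile (illustrative only, not the paper's data or systems):

```python
import math

def energy(xi):
    # Model barrier profile: a smooth bump centered at xi = 0 (arbitrary units),
    # standing in for E(xi) along the intrinsic reaction coordinate.
    return math.exp(-xi ** 2)

def reaction_force(xi, h=1e-4):
    # F(xi) = -dE/dxi via a central difference.
    return -(energy(xi + h) - energy(xi - h)) / (2 * h)

def force_constant(xi, h=1e-4):
    # kappa(xi) = d2E/dxi2 via a central second difference.
    return (energy(xi + h) - 2 * energy(xi) + energy(xi - h)) / h ** 2

# F vanishes at the energy maximum (the transition state), and kappa is
# negative there, which is what marks the transition region.
print(reaction_force(0.0))   # ≈ 0
print(force_constant(0.0))   # ≈ -2
```

Minima of κ(ξ) in the transition region, as discussed in the abstract, are located the same way: by scanning κ(ξ) over a grid of ξ values.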

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics

Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force, in the sense that it allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization, which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.

  1. Measuring Rock-Fluid Adhesion Directly

    NASA Astrophysics Data System (ADS)

    Tadmor, R.

    2017-12-01

We show how to measure solid-liquid adhesion directly. We consider the normal adhesion, the work of adhesion, and the lateral adhesion. The technique at the center of the method is the Centrifugal Adhesion Balance (CAB), which allows coordinated manipulation of normal and lateral forces. For example: 1. It can induce an increase in the normal force pulling on a liquid drop while keeping the lateral force at zero. This mimics a drop that is subjected to a gradually increasing gravitational force. 2. It can increase the lateral force at zero normal force, mimicking zero gravity. From this one can obtain additional solid-liquid interaction parameters. When performing work of adhesion measurements, the values obtained are independent of drop size and are in agreement with theoretical predictions.

  2. King has no clothes: The role of the military in responding to a terrorist chemical/biological attack. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osterman, J.L.

    1996-06-14

The United States has begun a program of counterproliferation in order to preempt the use of WMD by such elements; however, the ability to respond to the terrorist employment of biological/chemical weapons is absent. Given the structure, capability and technical expertise of the Federal Emergency Management Agency (FEMA) and the Federal Bureau of Investigation (FBI), the Department of Defense (DoD) will be tasked to conduct the response to such an incident. The geographical Commander in Chief (CINC) and the appointed Joint Task Force (JTF) commander will ultimately be assigned the response mission. Planning, training and coordination are required to develop a force capable of responding in a timely and coordinated manner.

  3. Analysis of the Effect of UT1-UTC on High Precision Orbit

    NASA Astrophysics Data System (ADS)

    Shin, Dongseok; Kwak, Sunghee; Kim, Tag-Gon

    1999-12-01

As the spatial resolution of remote sensing satellites becomes higher, very accurate determination of the position of a LEO (Low Earth Orbit) satellite is in greater demand than ever. Non-symmetric Earth gravity is the major perturbation force on LEO satellites. Since the orbit propagation is performed in the celestial frame while Earth gravity is defined in the terrestrial frame, it is required to convert the coordinates of the satellite from one frame to the other accurately. Unless the coordinate conversion between the two frames is performed accurately, the orbit propagation evaluates an incorrect Earth gravitational force at each time instant and hence causes errors in orbit prediction. The coordinate conversion between the two frames involves precession, nutation, Earth rotation and polar motion. Among these factors, the unpredictability and uncertainty of Earth rotation, expressed as UT1-UTC, is the largest error source. In this paper, the effect of UT1-UTC on the accuracy of LEO propagation is introduced, tested and analyzed. Considering the maximum unpredictability of UT1-UTC, 0.9 seconds, the meaningful order of the non-spherical Earth harmonic functions is derived.
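The scale of the error involved can be checked with back-of-the-envelope arithmetic (an illustrative sketch, not from the paper): an Earth-rotation-angle error equivalent to the maximum 0.9 s UT1-UTC unpredictability misplaces the terrestrial frame, and hence the gravity field relative to the satellite, by roughly ω⊕·Δt·R at the equator.

```python
import math

# Standard constants (not taken from the paper).
OMEGA_EARTH = 7.2921150e-5   # Earth rotation rate, rad/s
R_EQUATOR = 6378.137e3       # equatorial radius, m

def rotation_error(dt_seconds: float, radius_m: float = R_EQUATOR) -> float:
    """Arc length by which an equatorial point is misplaced when the
    Earth rotation angle is off by dt_seconds of UT1-UTC."""
    return OMEGA_EARTH * dt_seconds * radius_m

angle = OMEGA_EARTH * 0.9                                    # rad
print(f"angle error: {math.degrees(angle) * 3600:.2f} arcsec")  # ≈ 13.5 arcsec
print(f"equatorial shift: {rotation_error(0.9):.0f} m")         # ≈ 420 m
```

A frame misalignment of this size feeds directly into the gravity evaluation, which is why UT1-UTC dominates the conversion error budget described above.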

  4. The KALI multi-arm robot programming and control environment

    NASA Technical Reports Server (NTRS)

    Backes, Paul; Hayati, Samad; Hayward, Vincent; Tso, Kam

    1989-01-01

    The KALI distributed robot programming and control environment is described within the context of its use in the Jet Propulsion Laboratory (JPL) telerobot project. The purpose of KALI is to provide a flexible robot programming and control environment for coordinated multi-arm robots. Flexibility, both in hardware configuration and software, is desired so that it can be easily modified to test various concepts in robot programming and control, e.g., multi-arm control, force control, sensor integration, teleoperation, and shared control. In the programming environment, user programs written in the C programming language describe trajectories for multiple coordinated manipulators with the aid of KALI function libraries. A system of multiple coordinated manipulators is considered within the programming environment as one motion system. The user plans the trajectory of one controlled Cartesian frame associated with a motion system and describes the positions of the manipulators with respect to that frame. Smooth Cartesian trajectories are achieved through a blending of successive path segments. The manipulator and load dynamics are considered during trajectory generation so that given interface force limits are not exceeded.

  5. Mishap Investigation Team (MIT) - Barksdale AFB, Louisiana

    NASA Technical Reports Server (NTRS)

    Stepaniak, Philip

    2005-01-01

The Shuttle Program is organized to support a Shuttle mishap using the resources of the MIT. On the afternoon of Feb. 1, 2003, the MIT deployed to Barksdale AFB. This location became the investigative center and interim storage location for crewmembers received from the Lufkin Disaster Field Office (DFO). Working under the leadership of the MIT Lead, the medical team executed a short-term plan that included search, recovery, and identification, including coordination with the Armed Forces Institute of Pathology (AFIP). Temporary operations were set up at Barksdale Air Force Base for two weeks. During this time, coordination with the DFO field recovery teams, AFIP personnel, and the crew surgeons was ongoing. In addition, the crewmember families and NASA management were updated daily. The medical team also dealt with public reports and questions concerning biological and chemical hazards, which were coordinated with SPACEHAB, Inc., Kennedy Space Center (KSC) Medical Operations and the Johnson Space Center (JSC) Space Medicine office. After operations at Barksdale were concluded, the medical team transitioned back to Houston, and a long-term search, recovery and identification plan was developed.

  6. Application of the LEPS technique for Quantitative Precipitation Forecasting (QPF) in Southern Italy: a preliminary study

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.

    2006-03-01

This paper reports preliminary results for a Limited area model Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor in triggering convection. To accomplish this task and to limit the computational time of an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36-km horizontal resolution and is generated by 51 members, each nested in an ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. The subjective analysis is based on precipitation and probability maps of the case studies, whereas the objective analysis uses deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all of the case studies selected. This strongly suggests the importance, for these cases over Calabria, of enhanced horizontal resolution relative to ensemble population. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast.
Due to local and mesoscale forcing, the high resolution forecast (Hi-Res) performs better than the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields more false alarms, which have a detrimental effect on objective scores at lower thresholds. To exploit the advantages of a probabilistic forecast compared with a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising, even if additional studies are required.
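The cluster-then-pick-a-representative step can be sketched in a few lines. This is a toy illustration of the general idea only (the actual ECMWF-EPS cluster analysis is more elaborate, and all names and data below are invented): members are clustered in some reduced space, and the member nearest each centroid stands in for its cluster as the driver of a limited-area run.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    return [sum(xs) / len(points) for xs in zip(*points)]

def kmeans(points, k, iters=25):
    """Plain k-means on lists of floats; deterministic toy initialization
    (first k members as seeds) to keep the sketch reproducible."""
    centroids = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        centroids = [mean(cl) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

def representatives(centroids, clusters):
    # One actual ensemble member per non-empty cluster, nearest its centroid.
    return [min(cl, key=lambda p: dist2(p, c))
            for c, cl in zip(centroids, clusters) if cl]

# Five "members" in a 1-D reduced space, forming two weather regimes.
members = [[0.0], [5.0], [0.1], [5.1], [0.2]]
cents, cls = kmeans(members, 2)
reps = representatives(cents, cls)   # one member per regime
```

Only the representatives are then downscaled at high resolution, which is how the 51-member computational cost is cut to five limited-area integrations.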

  7. A load-based mechanism for inter-leg coordination in insects

    PubMed Central

    2017-01-01

    Animals rely on an adaptive coordination of legs during walking. However, which specific mechanisms underlie coordination during natural locomotion remains largely unknown. One hypothesis is that legs can be coordinated mechanically based on a transfer of body load from one leg to another. To test this hypothesis, we simultaneously recorded leg kinematics, ground reaction forces and muscle activity in freely walking stick insects (Carausius morosus). Based on torque calculations, we show that load sensors (campaniform sensilla) at the proximal leg joints are well suited to encode the unloading of the leg in individual steps. The unloading coincides with a switch from stance to swing muscle activity, consistent with a load reflex promoting the stance-to-swing transition. Moreover, a mechanical simulation reveals that the unloading can be ascribed to the loading of a specific neighbouring leg, making it exploitable for inter-leg coordination. We propose that mechanically mediated load-based coordination is used across insects analogously to mammals. PMID:29187626

  8. Recognizing human actions by learning and matching shape-motion prototype trees.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2012-03-01

    A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
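The look-up-table speedup described above can be sketched as follows (a hypothetical minimal example, not the authors' implementation): once frames are quantized to prototype labels, a frame-to-frame distance reduces to indexing a precomputed prototype-to-prototype table instead of recomputing distances in feature space.

```python
from itertools import product

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy "prototypes" assumed learned offline (e.g., by hierarchical k-means).
prototypes = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}

# Precompute the prototype-to-prototype distance table once.
dist_table = {(i, j): euclidean(prototypes[i], prototypes[j])
              for i, j in product(prototypes, repeat=2)}

def sequence_distance(labels_a, labels_b):
    """Distance between two prototype-label sequences via table indexing
    rather than per-frame feature-space computation."""
    return sum(dist_table[(i, j)] for i, j in zip(labels_a, labels_b))

print(sequence_distance([0, 1, 2], [1, 1, 0]))  # 1.0 + 0.0 + 1.0 = 2.0
```

The table costs O(k²) space for k prototypes, paid once at training time; every subsequent sequence comparison is then a sum of lookups.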

  9. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since data collected from high frequency medical sensors includes a huge amount of data, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for greater time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
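The brute-force version can be sketched directly from this description (an illustrative O(n²) baseline, not the authors' DBMS implementation): for every subsequence, find the distance to its nearest non-self match, and report the subsequence for which that distance is largest.

```python
def brute_force_discord(series, m):
    """Return (index, distance) of the length-m subsequence whose nearest
    non-self match lies farthest away (the top discord). O(n^2) subsequence
    comparisons -- the baseline the heuristic version improves on."""
    def dist(i, j):
        return sum((series[i + k] - series[j + k]) ** 2 for k in range(m)) ** 0.5

    n = len(series) - m + 1
    best_idx, best_dist = -1, -1.0
    for i in range(n):
        # Exclude trivial self-matches: overlapping subsequences.
        nearest = min(dist(i, j) for j in range(n) if abs(i - j) >= m)
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

# A flat series with one anomalous spike: the discord covers the spike.
data = [0.0] * 20
data[10] = 5.0
idx, d = brute_force_discord(data, 4)
```

The heuristic version keeps this definition but orders the candidate subsequences (here via an array plus a trie) so that the inner minimum can be abandoned early for most candidates.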

  10. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since data collected from high frequency medical sensors includes a huge amount of data, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for greater time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  11. Efficient critical design load case identification for floating offshore wind turbines with a reduced nonlinear model

    NASA Astrophysics Data System (ADS)

    Matha, Denis; Sandner, Frank; Schlipf, David

    2014-12-01

    Design verification of wind turbines is performed by simulating the design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Due to the resulting large number of necessary load simulations, a method is presented here to significantly reduce the computational effort for DLC simulations by introducing a reduced nonlinear model with simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations and no iterations or internal loops, which makes it very computationally efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations to be then analysed with a higher fidelity tool, and so speed up the design process. Results from these reduced model DLC simulations are presented and compared to higher fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate that good agreement between the reduced and advanced models is achieved, allowing less critical DLC simulations to be efficiently excluded and the most critical subset of cases for a given design to be identified. Additionally, the model is applicable to brute force optimization of floater control system parameters.

  12. Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.

    PubMed

    Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David

    2008-07-01

    Some present day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see what aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.

  13. Emission Sectoral Contributions of Foreign Emissions to Particulate Matter Concentrations over South Korea

    NASA Astrophysics Data System (ADS)

    Kim, E.; Kim, S.; Kim, H. C.; Kim, B. U.; Cho, J. H.; Woo, J. H.

    2017-12-01

    In this study, we investigated the contributions of major emission source categories located upwind of South Korea to particulate matter (PM) in South Korea. In general, air quality in South Korea is affected by anthropogenic air pollutants emitted from foreign countries including China. Some studies reported that foreign emissions contributed 50 % of annual surface PM total mass concentrations in the Seoul Metropolitan Area, South Korea in 2014. Previous studies examined the PM contributions of foreign emissions from all sectors considering meteorological variations; however, few studies have assessed the contributions of specific foreign source categories. Therefore, we attempted to estimate sectoral contributions of foreign emissions from China to South Korean PM using our air quality forecasting system. We used the Model Inter-Comparison Study in Asia 2010 inventory for foreign emissions and the Clean Air Policy Support System 2010 emission inventory for domestic emissions. To quantify the contributions of major emission sectors to South Korean PM, we applied the Community Multi-scale Air Quality system with the brute force method, perturbing emissions from the industrial, residential, fossil-fuel power plant, transportation, and agriculture sectors in China. We noted that the industrial sector was predominant over the region, except for primary PM during the cold season, when residential emissions increase drastically due to heating demand. This study will benefit ensemble air quality forecasting and refined control strategy design by providing a quantitative assessment of the seasonal contributions of foreign emissions from major source categories.
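    The brute force method used above estimates a sector's contribution from the concentration change between a base simulation and one with that sector's emissions perturbed. A schematic sketch (the study ran full CMAQ simulations; the function name and the 50 % perturbation here are illustrative):

```python
def bfm_contribution(c_base, c_perturbed, perturbation=0.5):
    """Brute force method (BFM): estimate a source sector's contribution as
    the concentration change caused by perturbing that sector's emissions,
    rescaled by the perturbation fraction."""
    return (c_base - c_perturbed) / perturbation

# e.g. if base PM2.5 is 40 ug/m3 and cutting industrial emissions by 50 %
# yields 30 ug/m3, the estimated industrial contribution is 20 ug/m3
industrial = bfm_contribution(40.0, 30.0, perturbation=0.5)
```

    Note the rescaling assumes the concentration response is roughly linear in the perturbed emissions, which is why modest perturbations are commonly used.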

  14. Monte Carlo based investigation of berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating the timely propagation of sufficient (>10⁷) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth resolved characterization of highly scattering samples, e.g., biological tissues.

  15. Selectivity trend of gas separation through nanoporous graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hongjun; Chen, Zhongfang; Dai, Sheng

    2015-04-15

    By means of molecular dynamics (MD) simulations, we demonstrate that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulation is H{sub 2}>CO{sub 2}≫N{sub 2}>Ar>CH{sub 4}, which generally follows the trend in the kinetic diameters. This trend is also confirmed by the fluxes based on the computed free energy barriers for gas permeation, obtained using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO{sub 2}/N{sub 2} mixtures further demonstrate the separation capability of nanoporous graphene.

  16. A Survey of Image Encryption Algorithms

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender; Sardana, Pranshul

    2017-12-01

    Security of data/images is one of the crucial aspects of the gigantic and still expanding domain of digital transfer. Encryption of images is one of the well known mechanisms to preserve the confidentiality of images sent over an unreliable, unrestricted public medium. This medium is vulnerable to attacks, and hence efficient encryption algorithms are a necessity for secure data transfer. Various techniques have been proposed in the literature to date, each having an edge over the others, to catch up with the ever growing need for security. This paper is an effort to compare the most popular techniques available on the basis of various performance metrics, such as differential, statistical and quantitative attack analyses. To measure their efficacy, all the modern and mature techniques were implemented in MATLAB-2015. The results show that the chaotic schemes used in the study produce highly scrambled encrypted images with uniform histogram distributions. In addition, the encrypted images exhibited very low correlation coefficient values in the horizontal, vertical and diagonal directions, proving their resistance against statistical attacks. These schemes are also able to resist differential attacks, as they showed a high sensitivity to the initial conditions, i.e. pixel and key values. Finally, the schemes provide a large key space, and hence can resist brute force attacks, and require very little computational time for image encryption/decryption in comparison to other schemes available in the literature.
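    The adjacent-pixel correlation metric mentioned above can be computed as follows (a generic Python illustration, not the MATLAB code from the survey; names are ours):

```python
import numpy as np

def adjacent_corr(img, direction="horizontal"):
    """Correlation coefficient of adjacent pixel pairs.

    Values near 1 are typical of natural images; a well-scrambled
    cipher image should give values near 0 in every direction."""
    img = np.asarray(img, dtype=float)
    if direction == "horizontal":
        x, y = img[:, :-1].ravel(), img[:, 1:].ravel()
    elif direction == "vertical":
        x, y = img[:-1, :].ravel(), img[1:, :].ravel()
    else:  # diagonal
        x, y = img[:-1, :-1].ravel(), img[1:, 1:].ravel()
    return np.corrcoef(x, y)[0, 1]
```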

  17. Saturn Apollo Program

    NASA Image and Video Library

    1965-04-16

    This photograph depicts a dramatic view of the first test firing of all five F-1 engines for the Saturn V S-IC stage at the Marshall Space Flight Center. The testing lasted a full duration of 6.5 seconds. It also marked the first test performed in the new S-IC static test stand and the first test using the new control blockhouse. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. Required to hold down the brute force of a 7,500,000-pound thrust, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900 ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at the rate of 320,000 gallons per minute.

  18. Heterozygote PCR product melting curve prediction.

    PubMed

    Dwight, Zachary L; Palais, Robert; Kent, Jana; Wittwer, Carl T

    2014-03-01

    Melting curve prediction of PCR products is limited to perfectly complementary strands. Multiple domains are calculated by recursive nearest neighbor thermodynamics. However, the melting curve of an amplicon containing a heterozygous single-nucleotide variant (SNV) after PCR is the composite of four duplexes: two matched homoduplexes and two mismatched heteroduplexes. To better predict the shape of composite heterozygote melting curves, 52 experimental curves were compared with brute force in silico predictions varying two parameters simultaneously: the relative contribution of heteroduplex products and an ionic scaling factor for mismatched tetrads. Heteroduplex products contributed 25.7 ± 6.7% to the composite melting curve, varying from 23%-28% for different SNV classes. The effect of ions on mismatch tetrads scaled to 76%-96% of normal (depending on SNV class) and averaged 88 ± 16.4%. Based on uMelt (www.dna.utah.edu/umelt/umelt.html) with an expanded nearest neighbor thermodynamic set that includes mismatched base pairs, uMelt HETS calculates helicity as a function of temperature for homoduplex and heteroduplex products, as well as the composite curve expected from heterozygotes. It is an interactive Web tool for efficient genotyping design, heterozygote melting curve prediction, and quality control of melting curve experiments. The application was developed in Actionscript and can be found online at http://www.dna.utah.edu/hets/. © 2013 WILEY PERIODICALS, INC.
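    The composite heterozygote curve described above is a weighted mixture of the four duplex curves. A minimal sketch using the ~25.7 % total heteroduplex contribution reported in the study (function and argument names are ours; all curves are assumed to share one temperature grid):

```python
def composite_helicity(homo1, homo2, het1, het2, het_frac=0.257):
    """Composite melting curve of a heterozygous amplicon: two matched
    homoduplex and two mismatched heteroduplex helicity curves, with the
    heteroduplexes together contributing het_frac of the signal."""
    w_het = het_frac / 2.0
    w_hom = (1.0 - het_frac) / 2.0
    return [w_hom * (a + b) + w_het * (c + d)
            for a, b, c, d in zip(homo1, homo2, het1, het2)]
```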

  19. Comparison of two laryngeal tissue fiber constitutive models

    NASA Astrophysics Data System (ADS)

    Hunter, Eric J.; Palaparthi, Anil Kumar Reddy; Siegmund, Thomas; Chan, Roger W.

    2014-02-01

    Biological tissues are complex time-dependent materials, and the best choice of the appropriate time-dependent constitutive description is not evident. This report reviews two constitutive models (a modified Kelvin model and a two-network Ogden-Boyce model) in the characterization of the passive stress-strain properties of laryngeal tissue under tensile deformation. The two models are compared, as are the automated methods for parameterization of tissue stress-strain data (a brute force method vs. a common optimization method). Sensitivities (error curves) of the parameters from both models and the optimized parameter sets are calculated and contrasted by optimizing to the same tissue stress-strain data. Both models adequately characterized empirical stress-strain datasets and could be used to recreate a good likeness of the data. Nevertheless, parameters in both models were sensitive to measurement errors or uncertainties in the stress-strain data, which would greatly hinder the confidence in those parameters. The modified Kelvin model emerges as a potentially better choice for phonation models which use a tissue model as one component, or for general comparisons of the mechanical properties of one type of tissue to another (e.g., axial stress nonlinearity). In contrast, the Ogden-Boyce model would be more appropriate to provide a basic understanding of the tissue's mechanical response with better insights into the tissue's physical characteristics in terms of standard engineering metrics such as shear modulus and viscosity.
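    The brute-force parameterization compared in the report amounts to an exhaustive grid search minimizing the fitting error. A generic sketch (the model used below is a hypothetical exponential stress-strain law, not the modified Kelvin or Ogden-Boyce formulation):

```python
import itertools
import numpy as np

def grid_search_fit(model, strains, stresses, grids):
    """Brute-force parameterization: evaluate the model at every point of a
    parameter grid and keep the set with the lowest sum of squared errors."""
    best_params, best_sse = None, np.inf
    for params in itertools.product(*grids):
        sse = float(np.sum((model(strains, *params) - stresses) ** 2))
        if sse < best_sse:
            best_params, best_sse = params, sse
    return best_params, best_sse
```

    An optimization method reaches a minimum far faster, but the grid search maps the full error surface, which is what makes the sensitivity (error curve) comparison in the report possible.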

  20. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator has a limited line of sight and is prone to errors due to carelessness and fatigue. A good alternative solution is to equip the UAV with semi-autonomous capabilities able to navigate via a pre-planned route in real-time fashion. In this paper, we propose an easy-and-practical path optimizer that is based on the classical Travelling Salesman Problem and adopts a brute force search method to re-optimize the route in the event of collisions using a range finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to tests on the flying site. The results showed that the range finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
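    The Nearest Neighbour re-optimization step can be sketched as a greedy tour that skips blocked waypoints (a simplified illustration; the names and the blocking interface are our assumptions, and the Simple Genetic Algorithm stage for the global route is omitted):

```python
import math

def nearest_neighbour_route(waypoints, start=0, blocked=frozenset()):
    """Greedy Nearest Neighbour tour over 2-D waypoints [(x, y), ...],
    skipping indices in `blocked` (e.g. flagged by the range finder)."""
    unvisited = {i for i in range(len(waypoints))
                 if i != start and i not in blocked}
    route, current = [start], start
    while unvisited:
        # always fly to the closest remaining waypoint
        nxt = min(unvisited,
                  key=lambda j: math.dist(waypoints[current], waypoints[j]))
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return route
```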

  1. The evolution of parental care in insects: A test of current hypotheses.

    PubMed

    Gilbert, James D J; Manica, Andrea

    2015-05-01

    Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced-for example, by caring for multiple broods-but rejecting the "enhanced fecundity" hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in general trajectory. Integrating these findings with the wealth of life history and ecological data in insects will allow testing of a rich vein of existing hypotheses. © 2015 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  2. Evaluation of CMAQ and CAMx Ensemble Air Quality Forecasts during the 2015 MAPS-Seoul Field Campaign

    NASA Astrophysics Data System (ADS)

    Kim, E.; Kim, S.; Bae, C.; Kim, H. C.; Kim, B. U.

    2015-12-01

    The performance of air quality forecasts during the 2015 MAPS-Seoul field campaign was evaluated. A forecast system was operated to support the campaign's daily aircraft route decisions for airborne measurements observing long-range transported plumes. We utilized two real-time ensemble systems, based on the Weather Research and Forecasting (WRF)-Sparse Matrix Operator Kernel Emissions (SMOKE)-Comprehensive Air quality Model with extensions (CAMx) modeling framework and the WRF-SMOKE-Community Multiscale Air Quality (CMAQ) framework, over northeastern Asia to simulate PM10 concentrations. The Global Forecast System (GFS) from the National Centers for Environmental Prediction (NCEP) was used to provide meteorological inputs for the forecasts. For an additional set of retrospective simulations, the ERA-Interim reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) was also utilized to assess forecast uncertainties arising from the meteorological data used. The Model Inter-Comparison Study for Asia (MICS-Asia) and National Institute of Environment Research (NIER) Clean Air Policy Support System (CAPSS) emission inventories were used for foreign and domestic emissions, respectively. In this study, we evaluate the CMAQ and CAMx model performance during the campaign by comparing the results to airborne and surface measurements. Contributions of foreign and domestic emissions are estimated using a brute force method. Analyses of model performance and emissions will be utilized to improve air quality forecasts for the upcoming KORUS-AQ field campaign planned for 2016.

  3. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating the timely propagation of sufficient (>10⁷) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth resolved characterization of highly scattering samples, e.g., biological tissues.

  4. Strategies for resolving conflict: their functional and dysfunctional sides.

    PubMed

    Stimac, M

    1982-01-01

    Conflict in the workplace can have a beneficial effect. That is, if appropriately resolved, it plays an important part in effective problem solving, according to author Michele Stimac, associate dean, curriculum and instruction, and professor at Pepperdine University Graduate School of Education and Psychology. She advocates confrontation--by way of negotiation rather than brute force--as the best way to resolve conflict, heal wounds, reconcile the parties involved, and give the resolution long life. But she adds that if a person who has thought through when, where, and how to confront someone foresees only disaster, avoidance is the best path to take. The emphasis here is on strategy. Avoiding confrontation, for example, is not a strategic move unless it is backed by considered judgment. Stimac lays out these basic tenets for engaging in sound negotiation: (1) The confrontation should take place in neutral territory. (2) The parties should actively listen to each other. (3) Each should assert his or her right to fair treatment. (4) Each must allow the other to retain his or her dignity. (5) The parties should seek a consensus on the issues in conflict, their resolution, and the means of reducing any tension that results from the resolution. (6) The parties should exhibit a spirit of give and take--that is, of compromise. (7) They should seek satisfaction for all involved.

  5. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
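    For Gaussian distributions the entropies involved are available in closed form, which is what makes the semi-analytical tests described above possible. A minimal sketch of the second-order MIE (our function names; the study also examined higher truncation orders):

```python
import itertools
import numpy as np

def gauss_entropy(cov):
    """Differential entropy (in nats) of a multivariate Gaussian
    with covariance matrix cov: 0.5 * ln det(2*pi*e*cov)."""
    cov = np.atleast_2d(cov)
    return 0.5 * np.log(np.linalg.det(2.0 * np.pi * np.e * cov))

def mie_second_order(cov):
    """Second-order mutual information expansion: the sum of marginal
    entropies minus all pairwise mutual informations
    I_ij = S_i + S_j - S_ij."""
    m = cov.shape[0]
    s1 = sum(gauss_entropy(cov[i, i]) for i in range(m))
    mi = 0.0
    for i, j in itertools.combinations(range(m), 2):
        pair = cov[np.ix_([i, j], [i, j])]
        mi += gauss_entropy(cov[i, i]) + gauss_entropy(cov[j, j]) - gauss_entropy(pair)
    return s1 - mi
```

    For m = 2 the second-order expansion recovers the full entropy exactly; the divergence discussed above emerges only at higher orders and in larger systems with long-range correlations.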

  6. Efficiently mapping structure-property relationships of gas adsorption in porous materials: application to Xe adsorption.

    PubMed

    Kaija, A R; Wilmer, C E

    2017-09-08

    Designing better porous materials for gas storage or separations applications frequently leverages known structure-property relationships. Reliable structure-property relationships, however, only reveal themselves when adsorption data on many porous materials are aggregated and compared. Gathering enough data experimentally is prohibitively time consuming, and even approaches based on large-scale computer simulations face challenges. Brute force computational screening approaches that do not efficiently sample the space of porous materials may be ineffective when the number of possible materials is too large. Here we describe a general and efficient computational method for mapping structure-property spaces of porous materials that can be useful for adsorption related applications. We describe an algorithm that generates random porous "pseudomaterials", for which we calculate structural characteristics (e.g., surface area, pore size and void fraction) and also gas adsorption properties via molecular simulations. Here we chose to focus on void fraction and Xe adsorption at 1 bar, 5 bar, and 10 bar. The algorithm then identifies pseudomaterials with rare combinations of void fraction and Xe adsorption and mutates them to generate new pseudomaterials, thereby selectively adding data only to those parts of the structure-property map that are the least explored. Use of this method can help guide the design of new porous materials for gas storage and separations applications in the future.
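    The generate-evaluate-mutate loop described above might be sketched as follows (a toy illustration with hypothetical generate/evaluate/mutate callables, not the authors' pseudomaterial code; both properties are assumed normalized to [0, 1)):

```python
def explore_structure_property(generate, evaluate, mutate,
                               n_init=100, n_rounds=50, bins=10):
    """Rare-region sampling sketch: bin (void fraction, uptake) pairs on a
    grid, then repeatedly mutate a material drawn from the least-populated
    occupied bin so new data lands in the least-explored map regions."""
    mats = [generate() for _ in range(n_init)]
    props = [evaluate(m) for m in mats]
    for _ in range(n_rounds):
        counts = {}
        for vf, up in props:
            key = (int(vf * bins), int(up * bins))
            counts[key] = counts.get(key, 0) + 1
        # pick a material from the rarest occupied bin and mutate it
        rare_key = min(counts, key=counts.get)
        idx = next(i for i, (vf, up) in enumerate(props)
                   if (int(vf * bins), int(up * bins)) == rare_key)
        child = mutate(mats[idx])
        mats.append(child)
        props.append(evaluate(child))
    return mats, props
```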

  7. Detecting rare, abnormally large grains by x-ray diffraction

    DOE PAGES

    Boyce, Brad L.; Furnish, Timothy Allen; Padilla, H. A.; ...

    2015-07-16

    Bimodal grain structures are common in many alloys, arising from a number of different causes including incomplete recrystallization and abnormal grain growth. These bimodal grain structures have important technological implications, such as the well-known Goss texture which is now a cornerstone for electrical steels. Yet our ability to detect bimodal grain distributions is largely confined to brute force cross-sectional metallography. This study presents a new method for rapid detection of unusually large grains embedded in a sea of much finer grains. Traditional X-ray diffraction-based grain size measurement techniques such as Scherrer, Williamson–Hall, or Warren–Averbach rely on peak breadth and shape to extract information regarding the average crystallite size. However, these line broadening techniques are not well suited to identifying a very small fraction of abnormally large grains. The present method utilizes statistically anomalous intensity spikes in the Bragg peak to identify regions where abnormally large grains are contributing to diffraction. This needle-in-a-haystack technique is demonstrated on a nanocrystalline Ni–Fe alloy which has undergone fatigue-induced abnormal grain growth. In this demonstration, the technique readily identifies a few large grains that occupy <0.00001 % of the interrogation volume. Finally, while the technique is demonstrated in the current study on nanocrystalline metal, it would likely apply to any bimodal polycrystal, including ultrafine grained and fine microcrystalline materials with sufficiently distinct bimodal grain statistics.
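    The intensity-spike criterion might be sketched as a robust outlier test over sampled Bragg-peak intensities (a generic illustration; the study's exact statistical threshold may differ):

```python
import numpy as np

def find_intensity_spikes(intensities, z_thresh=5.0):
    """Flag statistically anomalous intensity spikes using a robust
    z-score (median / MAD), as expected when a rare large grain
    contributes a sharp diffraction spot."""
    x = np.asarray(intensities, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12  # guard against zero MAD
    z = 0.6745 * (x - med) / mad               # 0.6745 scales MAD to sigma
    return np.flatnonzero(z > z_thresh)
```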

  8. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487

  9. A new feature detection mechanism and its application in secured ECG transmission with noise masking.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim

    2009-04-01

    With cardiovascular disease as the number one killer of the modern era, the electrocardiogram (ECG) is collected, stored and transmitted with greater frequency than ever before. However, in reality, ECG is rarely transmitted and stored in a secured manner. Recent research shows that an eavesdropper can reveal the identity and cardiovascular condition from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and also stored as such in medical repositories. To achieve this, first of all, this paper presents a new ECG feature detection mechanism, which was compared against existing cross correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Compared to the CC based approaches, which had 40% and 53% misclassification rates, the proposed detection algorithm did not produce a single misclassification. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects using added noises corresponding to each of the ECG features. This obfuscated ECG can be freely distributed over the internet without the necessity of encryption, since the original features needed to identify personal information of the patient remain concealed. Only authorized personnel possessing a secret key will be able to reconstruct the original ECG from the obfuscated ECG. The distributed signal would appear as a regular ECG without encryption. Therefore, traditional decryption techniques, including powerful brute force attacks, are useless against this obfuscation.
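    A much-simplified sketch of noise masking with a shared secret (illustrative only: the paper adds feature-specific noises rather than whole-signal noise, and a seeded general-purpose PRNG like the one below is not a cryptographically secure keystream):

```python
import numpy as np

def obfuscate(ecg, key, scale=0.5):
    """Mask the signal by adding key-seeded pseudorandom noise; only a
    holder of `key` can regenerate and subtract the exact same noise."""
    rng = np.random.default_rng(key)
    return np.asarray(ecg, dtype=float) + scale * rng.standard_normal(len(ecg))

def reconstruct(masked, key, scale=0.5):
    """Regenerate the keyed noise and subtract it to recover the signal."""
    rng = np.random.default_rng(key)
    return np.asarray(masked, dtype=float) - scale * rng.standard_normal(len(masked))
```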

  10. RCNP Project on Polarized {sup 3}He Ion Sources - From Optical Pumping to Cryogenic Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, M.; Inomata, T.; Takahashi, Y.

    2009-08-04

    A polarized {sup 3}He ion source has been developed at RCNP for intermediate- and high-energy spin physics. Although we started with an OPPIS (Optical Pumping Polarized Ion Source), it could not provide a highly polarized {sup 3}He beam because of fundamental difficulties. Following this result, we examined novel types of polarized {sup 3}He ion sources, the EPPIS (Electron Pumping Polarized Ion Source) and the ECRPIS (ECR Polarized Ion Source), experimentally and theoretically, respectively. However, the attainable {sup 3}He polarization degrees and beam intensities were still insufficient for practical use. A few years later, we proposed a new idea for the polarized {sup 3}He ion source, the SEPIS (Spin Exchange Polarized Ion Source), which is based on enhanced spin-exchange cross sections at low incident energies for {sup 3}He{sup +}+Rb, and its feasibility was experimentally examined. Recently, we started a project on polarized {sup 3}He gas generated by the brute force method at low temperature (approx. 4 mK) and strong magnetic field (approx. 17 T), followed by rapid melting of the highly polarized solid {sup 3}He and gasification. If this project is successful, the highly polarized {sup 3}He gas will hopefully be used for a new type of polarized {sup 3}He ion source.

  11. Top-down constraints of regional emissions for KORUS-AQ 2016 field campaign

    NASA Astrophysics Data System (ADS)

    Bae, M.; Yoo, C.; Kim, H. C.; Kim, B. U.; Kim, S.

    2017-12-01

    Accurate estimates of emission rates from local and international sources are essential in regional air quality simulations, especially when assessing the relative contributions of international emission sources. While bottom-up emission inventories provide detailed information on specific emission types, they are limited in covering regions with rapidly changing anthropogenic emissions (e.g., China) or regions without sufficient socioeconomic information (e.g., North Korea). We utilized space-borne monitoring of major pollutant precursors to construct realistic emission inputs for chemistry transport models during the KORUS-AQ 2016 field campaign. The base simulation was conducted with the WRF, SMOKE, and CMAQ modeling framework using the CREATE 2015 (Asian countries) and CAPSS 2013 (South Korea) emission inventories. NOx, SO2, and VOC model emissions were adjusted using the ratios between modeled and observed NO2, SO2, and HCHO column densities together with the model-derived emission-to-column-density conversion ratio. A brute-force perturbation method was used to separate the contributions of North Korea, China, and South Korea along the flight pathways during the campaign. The Backward-Tracking Model Analyzer (BMA), based on the NOAA HYSPLIT trajectory and dispersion model, was also utilized to track the history of chemical processes and the emission source apportionment. CMAQ simulations were conducted over East Asia (27 km) and over South and North Korea (9 km) during the KORUS-AQ campaign (1 May to 10 June 2016).
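    Both adjustment steps described above reduce to simple arithmetic; the numbers below are hypothetical, chosen only to illustrate the column-ratio scaling and the brute-force (zero-out) contribution estimate:

```python
# Top-down emission scaling: multiply the bottom-up emission rate by the
# ratio of observed to modeled column density (all values hypothetical).
modeled_no2_column = 8.0e15    # molecules/cm^2, base model run
observed_no2_column = 1.0e16   # molecules/cm^2, satellite retrieval
base_nox_emission = 120.0      # t/day in the bottom-up inventory
adjusted_nox_emission = base_nox_emission * observed_no2_column / modeled_no2_column

# Brute-force perturbation: a region's contribution is the difference between
# the base run and a run with that region's emissions removed.
base_pm25 = [42.0, 55.0, 61.0]    # hourly PM2.5 along a flight path, base run
zeroed_pm25 = [30.0, 41.0, 44.0]  # same hours, one region's emissions zeroed
contribution = [b - z for b, z in zip(base_pm25, zeroed_pm25)]
print(adjusted_nox_emission, contribution)  # 150.0 [12.0, 14.0, 17.0]
```

    The brute-force estimate requires one additional full model run per region, which is why it is typically reserved for a small number of source regions.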

  12. Contribution of regional-scale fire events to ozone and PM2.5 ...

    EPA Pesticide Factsheets

    Two specific fires from 2011 are tracked for their local- to regional-scale contributions to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, the Sparse Matrix Operator Kernel Emissions (SMOKE) model, the Weather Research and Forecasting (WRF) meteorological model, and the Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contributions from a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable between the source sensitivity (brute-force zero-out) and source apportionment (Integrated Source Apportionment Method) approaches. The model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating that the modeling system captures the range of O3 inhibition possible near fires and the O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde, which are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas
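    The O3 enhancement relative to CO mentioned above is the normalized excess ratio ΔO3/ΔCO, computed from plume and background mixing ratios. A worked example with hypothetical concentrations:

```python
# Normalized excess O3 in a fire plume: DeltaO3 / DeltaCO, the quantity
# compared against literature values (all concentrations hypothetical).
o3_plume, o3_background = 80.0, 45.0    # ppb
co_plume, co_background = 900.0, 150.0  # ppb

delta_o3 = o3_plume - o3_background
delta_co = co_plume - co_background
enhancement_ratio = delta_o3 / delta_co
print(round(enhancement_ratio, 4))  # 0.0467
```

    Using CO as the reference species normalizes out plume dilution, since CO is long-lived on plume-transport timescales.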

  13. Artillery Engagement Simulation

    DTIC Science & Technology

    1980-05-01

    coordinates of the burst point to 10 meter accuracy (4-digit number). 7. Press R/S. Calculator will run for approximately one second and display the ... northing coordinate of the burst point to 10 meter accuracy (4-digit number). 8. If it is not desired to send azimuth and distance instructions to the ...

  14. Mechanics of deformations in terms of scalar variables

    NASA Astrophysics Data System (ADS)

    Ryabov, Valeriy A.

    2017-05-01

    A theory of particle and continuum mechanics is developed that allows treating pure deformation in terms of the set of variables "coordinate-momentum-force" instead of the standard treatment in terms of the tensor-valued variables "strain-stress." This approach is quite natural for a microscopic description of an atomic system, in which only pointwise forces caused by the stress act on the atoms, making the body deform. The new concept starts from an affine transformation of spatial to material coordinates in terms of the stretch tensor or its analogs. Thus, three principal stretches and three angles related to their orientation form a set of six scalar variables describing the deformation. Instead of the volume-dependent potential used in the standard theory, which requires conditions of equilibrium for the surface and body forces acting on a volume element, a potential dependent on the scalar variables is introduced. A consistent introduction of the generalized force associated with this potential becomes possible if the deformed body is considered to be confined to the surface of a torus having six genuine dimensions. Strain, constitutive equations, and other fundamental laws of continuum and particle mechanics may be neatly rewritten in terms of the scalar variables. In giving a new presentation of finite deformation, the new approach provides a full treatment of hyperelasticity, including the anisotropic case. The derived equations of motion generate a new kind of thermodynamic ensemble in terms of constant tension forces. In this ensemble, six internal deformation forces proportional to the components of the Irving-Kirkwood stress are controlled by the applied external forces. In the thermodynamic limit, instead of pressure and volume as state variables, this ensemble employs the deformation force, measured in kelvin units, and the stretch ratio.
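    The kinematic decomposition underlying the six scalar variables can be written in conventional continuum-mechanics notation (the symbols below are the standard ones, not necessarily the paper's):

```latex
x = F X, \qquad F = R\,U, \qquad
U = \sum_{i=1}^{3} \lambda_i \,\mathbf{n}_i \otimes \mathbf{n}_i ,
```

    where $F$ is the deformation gradient relating material coordinates $X$ to spatial coordinates $x$, $R$ is a rotation, the $\lambda_i$ are the three principal stretches, and the principal directions $\mathbf{n}_i$ are fixed by three orientation angles — together the six scalar variables of the abstract.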

  15. Deficits in Lower Limb Muscle Reflex Contraction Latency and Peak Force Are Associated With Impairments in Postural Control and Gross Motor Skills of Children With Developmental Coordination Disorder: A Cross-Sectional Study.

    PubMed

    Fong, Shirley S M; Ng, Shamay S M; Guo, X; Wang, Yuling; Chung, Raymond C K; Stat, Grad; Ki, W Y; Macfarlane, Duncan J

    2015-10-01

    This cross-sectional, exploratory study aimed to compare the neuromuscular performance, balance, and motor skills proficiency of typically developing children and those with developmental coordination disorder (DCD), and to determine the associations of these neuromuscular factors with balance and motor skills performance in children with DCD. One hundred thirty children with DCD and 117 typically developing children participated in the study. Medial hamstring and gastrocnemius muscle activation onset latencies in response to an unexpected posterior-to-anterior trunk perturbation were assessed by electromyography and accelerometry. Hamstring and gastrocnemius muscle peak force and time to peak force were quantified by dynamometer, and balance and motor skills performances were evaluated with the Movement Assessment Battery for Children (MABC). Independent t tests revealed that children with DCD had longer hamstring and gastrocnemius muscle activation onset latencies (P < 0.001) and lower isometric peak forces (P < 0.001), but not times to peak force (P > 0.025), than the controls. Multiple regression analysis accounting for basic demographics showed that gastrocnemius peak force was independently associated with the MABC balance subscore and ball skills subscore, accounting for 5.7% (P = 0.003) and 8.5% (P = 0.001) of the variance, respectively. Gastrocnemius muscle activation onset latency also explained 11.4% (P < 0.001) of the variance in the MABC ball skills subscore. Children with DCD had delayed leg muscle activation onset times and lower isometric peak forces. Gastrocnemius peak force was associated with balance and ball skills performance, whereas the timing of gastrocnemius muscle activation was a determinant of ball skills performance in the DCD population.

  16. 8 CFR 313.1 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... communism, in all countries of the world through the medium of an internationally coordinated communist... advocates or teaches: (1) Opposition to all organized government; (2) The overthrow, by force or violence or...

  17. 8 CFR 313.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... communism, in all countries of the world through the medium of an internationally coordinated communist... advocates or teaches: (1) Opposition to all organized government; (2) The overthrow, by force or violence or...

  18. 8 CFR 313.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... communism, in all countries of the world through the medium of an internationally coordinated communist... advocates or teaches: (1) Opposition to all organized government; (2) The overthrow, by force or violence or...

  19. 8 CFR 313.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... communism, in all countries of the world through the medium of an internationally coordinated communist... advocates or teaches: (1) Opposition to all organized government; (2) The overthrow, by force or violence or...

  20. 8 CFR 313.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... communism, in all countries of the world through the medium of an internationally coordinated communist... advocates or teaches: (1) Opposition to all organized government; (2) The overthrow, by force or violence or...
