Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment
NASA Astrophysics Data System (ADS)
Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung
2016-07-01
Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the oldest attack methods, it is still in active use. Because stealth attacks that feign regular access may occur, such attacks are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. Then, we analyze the durations of the SSH brute-force attacks detected by these methods. The results of an analysis of about 10 thousand attack source IP addresses show that the behavior of abnormal users mounting SSH brute-force attacks follows the heavy-tailed distribution typical of human dynamics.
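As a minimal illustration of the log-driven detection idea in this abstract (not the authors' implementation), the Python sketch below groups failed SSH logins by source IP from a syslog-style auth log and reports each source's attack duration, the quantity whose distribution the paper finds to be heavy-tailed. The log path, the year, and the 10-failure threshold are assumptions.

    import re
    from collections import defaultdict
    from datetime import datetime

    FAILED = re.compile(r'^(\w{3} +\d+ \d\d:\d\d:\d\d) .* sshd\[\d+\]: '
                        r'Failed password .* from (\d+\.\d+\.\d+\.\d+)')

    first_seen, last_seen, attempts = {}, {}, defaultdict(int)
    with open('/var/log/auth.log') as log:               # hypothetical path
        for line in log:
            m = FAILED.match(line)
            if not m:
                continue
            # syslog omits the year; a single year is assumed for simplicity
            t = datetime.strptime('2016 ' + m.group(1), '%Y %b %d %H:%M:%S')
            ip = m.group(2)
            first_seen.setdefault(ip, t)
            last_seen[ip] = t
            attempts[ip] += 1

    # sources with many failures are brute-force candidates; their durations
    # (last failure minus first) form the heavy-tailed sample studied above
    for ip, n in sorted(attempts.items(), key=lambda kv: -kv[1]):
        if n >= 10:                                      # assumed threshold
            print(ip, n, (last_seen[ip] - first_seen[ip]).total_seconds())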
1976-07-30
[Table-of-contents fragment] Interface Requirements; Interface Block Diagram; Detailed Interface Definition; Subsystems; Controls & Displays; ... Navigation Brute Force; Cargo Brute Force; Sensor Brute Force; Controls/Displays Brute Force ... The document concerns a MIL-STD-1553 multiplex data bus interfacing with the avionic subsystems, flight control system, controls/displays, engine sensors, and airframe sensors.
Analysis of brute-force break-ins of a palmprint authentication system.
Kong, Adams W K; Zhang, David; Kamel, Mohamed
2006-10-01
Biometric authentication systems are widely applied because they offer inherent advantages over classical knowledge-based and token-based personal-identification approaches. This has led to the development of products using palmprints as biometric traits and their use in several real applications. However, as biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before biometric systems are massively deployed in security systems. This correspondence proposes a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system. To validate the proposed model, we have conducted a simulation. Its results demonstrate that the proposed model can accurately estimate the probability. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks.
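For intuition about what a brute-force break-in analysis computes, here is a toy stand-in for the paper's projected multinomial model: it treats the stored template as an n-bit code with independent uniform bits and asks how likely a single random guess lands within a Hamming-distance threshold. The code length and threshold are illustrative.

    from math import comb

    def brute_force_hit_prob(n_bits: int, threshold: int) -> float:
        """Probability that one random n-bit guess matches a stored binary
        template within the given Hamming distance (independent-bit toy
        model; the paper's model accounts for feature dependence)."""
        return sum(comb(n_bits, k) for k in range(threshold + 1)) / (2 ** n_bits)

    p = brute_force_hit_prob(1024, 300)   # assumed code length and threshold
    print(p, 'expected guesses ~', 1 / p)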
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques of breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we have proposed four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR-code) or color images are used as a source.
Near-Neighbor Algorithms for Processing Bearing Data
1989-05-10
[Excerpt] ... near-neighbor algorithms need not be universally more cost-effective than brute-force methods. While the data access time of near-neighbor techniques scales with the number of objects N better than brute force, the cost of setting up the data structure could scale worse than ... for the near neighbors NN_{2,1}(i). Depending on the particular NN algorithm, the cost of accessing near neighbors for each a_i in S_1 scales as either N ...
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome the unresolved issues presented in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results.
Single realization stochastic FDTD for weak scattering waves in biological random media
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2015-01-01
This paper introduces an iterative scheme to overcome the unresolved issues presented in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces very accurate results. PMID:27158153
Brute force absorption contrast microtomography
NASA Astrophysics Data System (ADS)
Davis, Graham R.; Mills, David
2014-09-01
In laboratory X-ray microtomography (XMT) systems, the signal-to-noise ratio (SNR) is typically determined by the X-ray exposure due to the low flux associated with microfocus X-ray tubes. As the exposure time is increased, the SNR improves up to a point where other sources of variability dominate, such as differences in the sensitivities of adjacent X-ray detector elements. Linear time-delay integration (TDI) readout averages out detector sensitivities in the critical horizontal direction, and equiangular TDI also averages out the X-ray field. This allows the SNR to be increased further with increasing exposure. This has been used in dentistry to great effect, allowing subtle variations in dentine mineralisation to be visualised in three dimensions. It has also been used to detect ink in ancient parchments that are too damaged to physically unroll. If sufficient contrast between the ink and parchment exists, it is possible to virtually unroll the tomographic image of the scroll so that the text can be read. Following on from this work, a feasibility test was carried out to determine whether it might be possible to recover images from decaying film reels. A successful attempt was made to re-create a short film sequence from a rolled length of 16 mm film using XMT. However, the "brute force" method of scaling this up to allow an entire film reel to be imaged presents a significant challenge.
Vulnerability Analysis of the MAVLink Protocol for Command and Control of Unmanned Aircraft
2013-03-27
[Excerpt] ... the cheapest computers currently on the market (the $35 Raspberry Pi) to distribute the workload, a determined attacker would incur a ... cost of brute force ... for 6,318 Raspberry Pi systems at $82 per 3DR-enabled Raspberry Pi to brute-force all 3,790,800 ... [Reference fragments: NIST, 2004; Newark, "Order the Raspberry Pi", November 2013, last accessed 19 February 2014, http://www.newark.com/jsp/search]
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement for obtaining uniform luminance from a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. Such a scatter pattern is usually generated with an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically done with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Because of the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, no clear computational advantage over ray tracers was realised. However, the treatment of light propagation in a light guide used by the Adding-Doubling method also makes it possible to enhance the efficiency of brute-force ray-tracing algorithms. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.
The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images
NASA Astrophysics Data System (ADS)
Fedotova, S.; Seredin, O.; Kushnir, O.
2017-05-01
In this paper, we investigate an exact method of searching for the axis of symmetry of a binary image, based on a brute-force search among all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of pixels into which a candidate axis divides the image. The brute-force search reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires quite a lot of time to process each image. As the first step of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications demand computational efficiency that allows the symmetry axis to be found in real or quasi-real time. So, for fast shape-symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested in (Fedotova, 2016). That method takes as an initial axis the axis obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, considerably faster than any of the optimized brute-force methods, including ones implemented on a supercomputer. In our experiments, for 70 percent of cases the found axis coincides exactly with the ground-truth one, and for the rest it is very close to the ground truth.
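A compact version of the symmetry measure described above can be written directly: reflect the binary image about a candidate axis and take the Jaccard similarity of the original and mirrored pixel sets. The sketch restricts candidate axes to lines through the image centre, a simplification of the paper's search space.

    import numpy as np
    from scipy.ndimage import rotate

    def jaccard_symmetry(img: np.ndarray, angle_deg: float) -> float:
        """Jaccard similarity between a binary shape and its mirror image
        about a vertical axis through the centre, after rotating the image
        so that the candidate axis becomes vertical."""
        r = rotate(img.astype(float), angle_deg, reshape=False, order=0) > 0.5
        m = np.fliplr(r)
        union = np.logical_or(r, m).sum()
        return np.logical_and(r, m).sum() / union if union else 0.0

    def best_axis(img: np.ndarray, n_angles: int = 180):
        # brute-force scan over candidate axis orientations
        angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        scores = [jaccard_symmetry(img, a) for a in angles]
        return angles[int(np.argmax(scores))], max(scores)

Parallelising the scan over `angles` (e.g., with multiprocessing) gives the parallel brute-force variant the paper describes.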
Permeation profiles of Antibiotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez Bautista, Cesar Augusto; Gnanakaran, Sandrasegaram
The presentation describes the motivation: combating bacteria's inherent resistance; drug development that mainly uses brute force rather than rational design; and current experimental approaches that lack molecular detail.
Strategy for reflector pattern calculation - Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.
1986-01-01
Using high-frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
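A minimal numpy sketch of the brute-force FFT evaluation of I(u,v) follows; the uniform circular aperture, grid size, and padding are illustrative assumptions, and the point is that the reflector geometry enters only through the sampled integrand.

    import numpy as np

    # sampled aperture field f(x, y); a uniform circular aperture is assumed
    N, D = 512, 1.0
    x = np.linspace(-D / 2, D / 2, N)
    X, Y = np.meshgrid(x, x)
    f = np.where(X**2 + Y**2 <= (D / 2)**2, 1.0, 0.0)   # zero outside the rim

    # brute-force evaluation of I(u,v) = integral of f(x,y) exp(jk(ux+vy)):
    # zero-pad and apply a 2-D FFT; no geometry-specific reduction is needed
    pad = 4 * N
    I_uv = np.fft.fftshift(np.fft.fft2(f, s=(pad, pad))) * (x[1] - x[0])**2
    pattern_dB = 20 * np.log10(np.abs(I_uv) / np.abs(I_uv).max() + 1e-12)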
Strategy for reflector pattern calculation: Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.
1985-01-01
Using high-frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by the brute-force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
Shipboard Fluid System Diagnostics Using Non-Intrusive Load Monitoring
2007-06-01
[MATLAB excerpt]
    DPP = brute.s(3).data; tDPP = brute.s(3).time;
    FL = brute.s(4).data; tFL = brute.s(4).time;
    RM = brute.s(5).data; tRM = brute.s(5).time;
    DPF = brute.s...
    ...s', max(tP1), files(n).name)); ylabel('Power'); axis tight; grid on;
    subplot(4,1,2); plot(tDPP, DPP, tDPF, DPF); ylabel('DP Gauges'); axis ...
Fast optimization algorithms and the cosmological constant
NASA Astrophysics Data System (ADS)
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
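The flavor of the underlying search problem can be captured with a toy Bousso-Polchinski-style landscape, Lambda = sum_i n_i^2 q_i^2 - Lambda_0, minimised over integer fluxes by brute-force enumeration. The dimension and flux range below are tiny illustrative stand-ins for the paper's 10^9-dimensional instance, which is exactly why smarter algorithms are needed.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    D = 8                                       # toy dimension
    q2 = np.square(rng.uniform(0.1, 1.0, D))    # squared flux charges
    lam0 = 20.0                                 # bare negative term, toy units

    best = (np.inf, None)
    for n in itertools.product(range(-2, 3), repeat=D):   # 5**8 candidates
        lam = float(np.dot(np.square(n), q2)) - lam0
        if abs(lam) < abs(best[0]):
            best = (lam, n)
    print(best)   # brute force is O(k**D): hopeless as D grows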
Virtual ellipsometry on layered micro-facet surfaces.
Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas
2017-09-18
Microfacet-based BRDF models are a common tool for describing light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology and present a polarisation-aware brute-force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to make these approximations more accurate and to better understand the behaviour of existing analytical models. However, these brute-force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified some other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.
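The single-interface effect the authors describe can be checked with the Fresnel equations: near Brewster's angle the s- and p-reflectances diverge strongly, so averaging them (the unpolarised shortcut) misestimates the energy surviving multiple interreflections in a smooth layered dielectric. A sketch with an assumed glass-like index of 1.5:

    import numpy as np

    def fresnel_reflectance(theta_i, n1=1.0, n2=1.5):
        """Fresnel power reflectance for s- and p-polarisation at a smooth
        dielectric interface (n2 = 1.5 is an assumed glass-like index)."""
        ti = np.asarray(theta_i)
        tt = np.arcsin(np.clip(n1 * np.sin(ti) / n2, -1.0, 1.0))  # Snell's law
        rs = (n1*np.cos(ti) - n2*np.cos(tt)) / (n1*np.cos(ti) + n2*np.cos(tt))
        rp = (n2*np.cos(ti) - n1*np.cos(tt)) / (n2*np.cos(ti) + n1*np.cos(tt))
        return rs**2, rp**2

    theta = np.radians(np.linspace(0.0, 89.0, 90))
    Rs, Rp = fresnel_reflectance(theta)
    # an unpolarised tracer uses (Rs + Rp) / 2 at every bounce; tracking the
    # two states separately changes what survives repeated interreflections
    print(np.max(Rs - Rp))   # largest single-bounce s/p split, near Brewster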
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
Grover Search and the No-Signaling Principle
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Jordan, Stephen P.
2016-09-01
Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
Reconstructing the evolution of first-row transition metal minerals by GeoDeepDive
NASA Astrophysics Data System (ADS)
Liu, C.; Peters, S. E.; Ross, I.; Golden, J. J.; Downs, R. T.; Hazen, R. M.
2016-12-01
Terrestrial mineralogy evolves as a consequence of a range of physical, chemical, and biological processes [1]. The evolution of the first-row transition metal minerals could mirror the evolution of Earth's oxidation state and life, since these elements mostly are redox-sensitive and/or play critical roles in biology. The fundamental building blocks for reconstructing mineral evolution are the mineral species, locality, and age data, which are typically dispersed in sentences in scientific and technical publications. These data can be tracked down in a brute-force way, i.e., by human retrieval, reading, and recording of all relevant literature. Alternatively, they can be extracted automatically by GeoDeepDive. In GeoDeepDive, scientific and technical articles from publishers including Elsevier, Wiley, USGS, SEPM, GSA and Canada Science Publishing have been parsed into a JavaScript database with NLP tags. Sentences containing mineral names, locations, and ages can be recognized and extracted by user-developed applications. In a preliminary search for cobalt mineral ages, we successfully extracted 678 citations with >1000 mentions of cobalt minerals, their locations, and ages. The extracted results are in agreement with brute-force search results. What is more, GeoDeepDive provides 40 additional data points that were not recovered by the brute-force approach. The extracted mineral locality-age data suggest that the evolution of Co minerals is controlled by global supercontinent cycles, i.e., more Co minerals form during episodes of supercontinent assembly. The mineral evolution of other first-row transition elements is being investigated through GeoDeepDive. References: [1] Hazen et al. (2008) Mineral evolution. American Mineralogist, 93, 1693-1720.
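The sentence-level extraction step can be imagined as pattern matching over parsed text; the sketch below is a deliberately naive stand-in (a short hard-coded mineral list, a simple age pattern, and an invented example sentence), not GeoDeepDive's NLP pipeline.

    import re

    MINERALS = r'(erythrite|skutterudite|cobaltite|spherocobaltite)'
    AGE = r'(\d+(?:\.\d+)?)\s*(Ga|Ma)'
    PATTERN = re.compile(MINERALS + r'.{0,120}?' + AGE, re.I | re.S)

    sentence = "Cobaltite from the Bou Azzer district has been dated at 310 Ma."
    for m in PATTERN.finditer(sentence):          # illustrative sentence only
        mineral, value, unit = m.groups()
        print(mineral, value, unit)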
Finding All Solutions to the Magic Hexagram
ERIC Educational Resources Information Center
Holland, Jason; Karabegov, Alexander
2008-01-01
In this article, a systematic approach is given for solving a magic star puzzle that usually is accomplished by trial and error or "brute force." A connection is made to the symmetries of a cube, thus the name Magic Hexahedron.
Probabilistic sampling of protein conformations: new hope for brute force?
Feldman, Howard J; Hogue, Christopher W V
2002-01-01
Protein structure prediction from sequence alone by "brute force" random methods is a computationally expensive problem. Estimates have suggested that it could take all the computers in the world longer than the age of the universe to compute the structure of a single 200-residue protein. Here we investigate the use of a faster version of our FOLDTRAJ probabilistic all-atom protein-structure-sampling algorithm. We have improved the method so that it is now over twenty times faster than originally reported, and capable of rapidly sampling conformational space without lattices. It uses geometrical constraints and a Lennard-Jones-type potential for self-avoidance. We have also implemented a novel method to add secondary structure-prediction information so that sampled structures have protein-like amounts of secondary structure. In a set of 100,000 probabilistic conformers of 1VII, 1ENH, and 1PMC generated, the structures with smallest Cα RMSD from native are 3.95, 5.12, and 5.95 Å, respectively. Expanding this test to a set of 17 distinct protein folds, we find that all-helical structures are "hit" by brute force more frequently than beta or mixed structures. For small helical proteins, or very small non-helical ones, this approach should produce a "hit" close enough to detect with a good scoring function in a pool of several million conformers. By fitting the distribution of RMSDs from the native state of each of the 17 sets of conformers to the extreme value distribution, we are able to estimate the size of conformational space for each. With a 0.5 Å RMSD cutoff, the number of conformers is roughly 2^N, where N is the number of residues in the protein. This is smaller than previous estimates, indicating an average of only two possible conformations per residue when sterics are accounted for. Our method reduces the effective number of conformations available at each residue by probabilistic bias, without requiring any particular discretization of residue conformational space, and is the fastest method of its kind. With computer speeds doubling every 18 months and parallel and distributed computing becoming more practical, the brute force approach to protein structure prediction may yet have some hope in the near future. Copyright 2001 Wiley-Liss, Inc.
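The extrapolation step, fitting the RMSD distribution of a conformer pool to an extreme value distribution and reading off how rare a near-native hit would be, can be sketched as follows; the synthetic RMSD sample below stands in for real FOLDTRAJ output.

    import numpy as np
    from scipy import stats

    # stand-in for per-conformer RMSDs from the native state (Angstroms)
    rmsd = np.random.default_rng(1).normal(12.0, 1.5, 20_000).clip(min=3.0)

    # fit a generalized extreme value distribution and extrapolate the left
    # tail to estimate how rare a near-native conformer (<= 0.5 A) would be
    c, loc, scale = stats.genextreme.fit(rmsd)
    p_hit = stats.genextreme.cdf(0.5, c, loc, scale)
    size = 1.0 / p_hit if p_hit > 0 else np.inf
    print('P(RMSD <= 0.5 A) ~', p_hit, '-> conformational space ~', size)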
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since the equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
An N-body Integrator for Planetary Rings
NASA Astrophysics Data System (ADS)
Hahn, Joseph M.
2011-04-01
A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close, in a radial sense, to these streamlines, which allows the streamlines to be treated as straight wires of constant linear density. Consequently, the gravity due to these streamlines is a simple function of a particle's radial distance to all streamlines. And because particles respond to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges execute in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
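The core approximation, that each streamline attracts a particle like an infinitely long straight wire of linear density lambda, with acceleration magnitude 2*G*lambda/d at distance d, reduces the N-body sum to a sum over streamlines. A toy-unit sketch (the arrays and values are hypothetical, not the author's code):

    import numpy as np

    G = 1.0  # toy units

    def radial_accel(r_particles, r_streamlines, lam):
        """Radial acceleration on each particle from all streamlines, each
        treated as a straight wire of linear density lam: |a| = 2*G*lam/d,
        directed toward the wire (the signed distance sets the direction)."""
        d = r_particles[:, None] - r_streamlines[None, :]
        d = np.where(np.abs(d) < 1e-12, np.inf, d)    # skip self-terms
        return np.sum(-2.0 * G * lam[None, :] / d, axis=1)

    r_s = np.linspace(1.00, 1.01, 50)      # streamline radii
    lam = np.full(50, 1e-8)                # linear densities
    print(radial_accel(np.array([1.0042]), r_s, lam))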
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such a variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. We also propose a de novo-based scoring scheme for assessing the identified point mutations. In this scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
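The exclusion step rests on a simple observation: many common modification masses coincide with single-residue substitution masses, so an "open" search mass shift can be explained either way. A sketch of the check, with a truncated table of standard monoisotopic residue masses (the tolerance is an assumption):

    # monoisotopic residue masses (Da) for a few amino acids
    MASS = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'V': 99.06841,
            'T': 101.04768, 'L': 113.08406, 'N': 114.04293, 'D': 115.02694,
            'E': 129.04259, 'F': 147.06841}

    def substitutions_matching(shift: float, tol: float = 0.01):
        """Residue pairs (old -> new) whose mass difference explains an
        observed precursor mass shift within the tolerance (Da)."""
        return [(a, b) for a in MASS for b in MASS
                if a != b and abs((MASS[b] - MASS[a]) - shift) <= tol]

    # 15.99491 Da is also the mass of oxidation (+O), so an Ala->Ser variant
    # and an oxidised peptide are indistinguishable by precursor mass alone
    print(substitutions_matching(15.99491))   # -> [('A', 'S')]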
Studies on a Spatialized Audio Interface for Sonar
2011-10-03
[Excerpt] ... the addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be ... than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of a ...
Develop a solution for protecting and securing enterprise networks from malicious attacks
NASA Astrophysics Data System (ADS)
Kamuru, Harshitha; Nijim, Mais
2014-05-01
In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has huge malicious intent." A firewall acts as one measure to secure a device from incoming unauthorized data. There are countless computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user. From the network's perspective, there are numerous types of attack. All attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. Juniper firewalls have the capability to protect against both types of attack. Denial of Service (DoS) attacks are among the most well-known network security threats in the brute-force category, largely due to the high-profile way in which they can affect networks. Over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus, namely, to cause the services running on a particular host or network to become unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack. Others flood a network or device with traffic so that no resources remain to handle legitimate traffic. Precision attacks typically involve multiple phases and often involve a bit more thought than brute-force attacks, all the way from reconnaissance to machine ownership. Before a precision attack is launched, information about the victim needs to be gathered. This information gathering typically takes the form of various types of scans to determine available hosts, networks, and ports. The hosts available on a network can be determined by ping sweeps, and the available ports on a machine can be located by port scans. Screens cover a wide variety of attack traffic and are configured on a per-zone basis. Depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. A Juniper firewall handles traffic on a per-flow basis. We can use flows, or sessions, to determine whether traffic attempting to traverse the firewall is legitimate. We control the state-checking components resident in a Juniper firewall by configuring "flow" settings. These settings allow state checking to be configured for various conditions on the device. Flow settings can be used to protect against TCP hijacking, and generally to ensure that the firewall performs full state processing when desired. We take a case study of an attack on a network and study the detection of the malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.
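The brute-force/precision distinction drawn above maps onto two easily computed per-source statistics: repeated attempts against one service suggest brute force, while many distinct ports touched suggest the scanning phase of a precision attack. A toy classifier over connection records (the thresholds and record format are assumptions, not Juniper's screen logic):

    from collections import defaultdict

    def classify(events, brute_min=100, scan_min=50):
        """events: iterable of (src_ip, dst_ip, dst_port) connection attempts."""
        per_port = defaultdict(int)    # (src, dst, port) -> attempt count
        ports = defaultdict(set)       # (src, dst) -> distinct ports touched
        for src, dst, port in events:
            per_port[(src, dst, port)] += 1
            ports[(src, dst)].add(port)
        verdicts = {}
        for (src, dst, port), n in per_port.items():
            if n >= brute_min:
                verdicts[(src, dst)] = 'brute force'
        for pair, ps in ports.items():
            if len(ps) >= scan_min:
                verdicts.setdefault(pair, 'port scan / precision recon')
        return verdicts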
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
Artificial consciousness and the consciousness-attention dissociation.
Haladjian, Harry Haroutioun; Montemayor, Carlos
2016-10-01
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems-these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness. Copyright © 2016 Elsevier Inc. All rights reserved.
In regulatory assessments, there is a need for reliable estimates of the impacts of precursor emissions from individual sources on secondary PM2.5 (particulate matter with aerodynamic diameter less than 2.5 microns) and ozone. Three potential methods for estimating th...
The End of Flat Earth Economics & the Transition to Renewable Resource Societies.
ERIC Educational Resources Information Center
Henderson, Hazel
1978-01-01
A post-industrial revolution is predicted for the future, with an accompanying shift of focus from simple, brute-force technologies, based on cheap, accessible resources and energy, to a second generation of more subtle, refined technologies grounded in a much deeper understanding of biological and ecological realities. (Author/BB)
Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology
Palaparthi, Anil; Riede, Tobias
2017-01-01
Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
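The heart of the comparison above is nondominated sorting: a design survives if no other design is at least as good in every objective and strictly better in one. A minimal sketch (the two quadratic objectives are dummies standing in for the paper's fundamental-frequency and sound-pressure-level targets):

    import numpy as np

    def pareto_front(F):
        """Indices of nondominated rows of objective matrix F (minimisation);
        this dominance test is the core of NSGA-II's nondominated sorting."""
        F = np.asarray(F)
        keep = []
        for i, fi in enumerate(F):
            dominated = np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    # brute-force analogue: score a full grid of morphology parameters and
    # keep the trade-off set between the two objectives
    x = np.linspace(0.0, 1.0, 201)
    F = np.column_stack([(x - 0.3)**2, (x - 0.7)**2])
    print(len(pareto_front(F)), 'nondominated designs')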
Nuclear spin imaging with hyperpolarized nuclei created by brute force method
NASA Astrophysics Data System (ADS)
Tanaka, Masayoshi; Kunimatsu, Takayuki; Fujiwara, Mamoru; Kohri, Hideki; Ohta, Takeshi; Utsuro, Masahiko; Yosoi, Masaru; Ono, Satoshi; Fukuda, Kohji; Takamatsu, Kunihiko; Ueda, Kunihiro; Didelez, Jean-P.; Prossati, Giorgio; de Waard, Arlette
2011-05-01
We have been developing a polarized HD target for particle physics at SPring-8 under the leadership of RCNP, Osaka University for the past 5 years. Nuclear polarization is created by means of the brute force method, which uses a high magnetic field (~17 T) and a low temperature (~10 mK). As one of the promising applications of the brute force method to the life sciences, we started a new project, "NSI" (Nuclear Spin Imaging), where hyperpolarized nuclei are used for MRI (Magnetic Resonance Imaging). The candidate spin-1/2 nuclei are 3He, 13C, 15N, 19F, 29Si, and 31P, which are important elements in the composition of biomolecules. Since the NMR signals from these isotopes are enhanced by orders of magnitude, the spatial resolution of the imaging would be much improved compared to the MRI in practical use so far. Another advantage of hyperpolarized MRI is that it is basically free from radiation, while the problems of radiation exposure caused by X-ray CT or PET (Positron Emission Tomography) cannot be neglected. In fact, the risk of cancer for Japanese people due to radiation exposure through these diagnoses is exceptionally high among the advanced countries. As the first step of the NSI project, we are developing a system to produce hyperpolarized 3He gas for the diagnosis of serious lung diseases, for example, COPD (Chronic Obstructive Pulmonary Disease). The system employs the same 3He/4He dilution refrigerator and superconducting solenoidal coil as those used for the polarized HD target, with some modification allowing 3He Pomeranchuk cooling followed by rapid melting of the polarized solid 3He to avoid depolarization. In this report, the present and future steps of our project are outlined with some of the latest experimental results.
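The "brute force" polarization quoted above follows from the spin-1/2 thermal equilibrium formula P = tanh(mu*B / (k_B*T)). A worked check at the stated field and temperature (the magnetic moments are approximate literature values):

    import numpy as np

    MU_N = 5.0508e-27    # nuclear magneton, J/T
    K_B = 1.3807e-23     # Boltzmann constant, J/K
    moments = {'1H': 2.793, '3He': 2.128, '13C': 0.702}   # |mu| / mu_N

    B, T = 17.0, 0.010   # ~17 T and ~10 mK, as in the abstract
    for nucleus, mu in moments.items():
        P = np.tanh(mu * MU_N * B / (K_B * T))
        print(f'{nucleus}: P = {P:.2f}')

Under these conditions protons reach roughly 94% polarization and 3He roughly 87%, orders of magnitude above the ~10^-5 thermal polarization at room temperature and clinical field strengths.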
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
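The grid-placement effect is easy to reproduce: the box count at a fixed scale varies with the grid offset, and the paper's remedy is to minimise the count over placements before fitting the slope. The sketch below uses random-offset minimisation as a crude stand-in for the pattern search, on a stand-in point cloud:

    import numpy as np

    def box_count(points, size, offset):
        """Occupied-box count for one placement of a cubic grid."""
        idx = np.floor((points + offset) / size).astype(int)
        return len(set(map(tuple, idx)))

    def min_box_count(points, size, n_offsets=64, seed=0):
        """Minimise quantization error over grid placements (random restarts
        here; the paper's pattern search converges with fewer evaluations)."""
        rng = np.random.default_rng(seed)
        return min(box_count(points, size, rng.uniform(0, size, points.shape[1]))
                   for _ in range(n_offsets))

    pts = np.random.default_rng(1).random((20_000, 3))   # stand-in point cloud
    sizes = np.array([0.5, 0.25, 0.125, 0.0625])
    counts = [min_box_count(pts, s) for s in sizes]
    # FD estimate = slope of log(count) vs log(1/size); per the abstract, the
    # residuals and local slopes should be inspected before trusting it
    print(np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0])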
Social Epistemology, the Reason of "Reason" and the Curriculum Studies
ERIC Educational Resources Information Center
Popkewitz, Thomas S.
2014-01-01
Notwithstanding the current topoi of the Knowledge Society, a particular "fact" of modernity is that power is exercised less through brute force and more through systems of reason that order and classify what is known and acted on. This article explores the system of reason that orders and classifies what is talked about, thought and…
Managing conflicts in systems development.
Barnett, E
1997-05-01
Conflict in systems development is nothing new. It can vary in intensity, but there will always be two possible outcomes--one constructive and the other destructive. The common approach to conflict management is to draw the battle lines and apply brute force. However, there are other ways to deal with conflict that are more effective and more people oriented.
Code White: A Signed Code Protection Mechanism for Smartphones
2010-09-01
[Excerpt] ... analogous to computer security is the use of antivirus (AV) software. AV software is a brute-force approach to security. The software ... these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not ... [Table-of-contents fragment: 2.3.1 Antivirus and Mobile Phones; 2.3.2 ...]
Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D
2018-02-22
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
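Two of the metrics named above have compact definitions that can be sketched directly; the threshold convention and the single-objective window search below are assumptions for illustration, not the SAS definitions.

    import numpy as np

    def spectrum_metrics(psd, noise_floor):
        """Percent occupancy and power spectral entropy of one sweep; `psd`
        holds linear power per frequency bin."""
        po = 100.0 * np.mean(psd > noise_floor)
        p = psd / psd.sum()                  # normalise to a distribution
        nz = p[p > 0]
        pse = -np.sum(nz * np.log2(nz))      # Shannon entropy in bits
        return po, pse

    def best_contiguous_subband(psd, width):
        """Brute-force OSB search over one objective: slide a window and keep
        the sub-band with the least interference power (SS-MO combines
        several such metrics)."""
        power = np.convolve(psd, np.ones(width), mode='valid')
        start = int(np.argmin(power))
        return start, start + width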
Pooler, Richard K.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.
2018-01-01
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE). PMID:29470448
1993-04-23
[Excerpt] ... mechanisms that take into account this new reality. TERRORISM: Lastly is the question of terrorism. There can be no two opinions on this most heinous crime ... on the notion of an empire "essentially based on force" that had to be maintained, if necessary, "by brute force", see Suhash Chakravarty, The Raj Syndrome ... over power to the National League for Democracy (NLD) led by Aung San Suu Kyi, the daughter of Burma's independence leader, Aung San. Since then, the ...
Constraint Optimization Literature Review
2015-11-01
[Excerpt] ... COPs. [Report metadata and table-of-contents fragments] Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction. Contents: Constraint Optimization Problems; Constraint Satisfaction Problems; Constraint Optimization Algorithms; Constraint Satisfaction Algorithms (Brute-Force Search; Constraint Propagation; Depth-First Search; Local Search).
Strategic Studies Quarterly. Volume 9, Number 2. Summer 2015
2015-01-01
[Excerpt] ... disrupting financial markets. Among other indicators, China's already deployed and future Type 094 Jin-class nuclear ballistic missile submarines (SSBN ... on agility instead of brute force reinforces traditional Chinese military thinking. Since Sun Tzu, the acme of skill has been winning without ... mechanical (both political and technical) nature of digital developments. Given this, the nature of system constraints under a different future ...
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
[Excerpt] ... including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality ... achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow ... in recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation ...
Exhaustively sampling peptide adsorption with metadynamics.
Deighan, Michael; Pfaendtner, Jim
2013-06-25
Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption including: binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.
ERIC Educational Resources Information Center
Van Name, Barry
2012-01-01
There is a battlefield where no quarter is given, no mercy shown, but not a single drop of blood is spilled. It is an arena that witnesses the bringing together of high-tech design and manufacture with the outpouring of brute force, under the remotely accessed command of some of today's brightest students. This is the world of battling robots, or…
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
[Excerpt] ... with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / √( (Σ_m Σ_n (A_mn − Ā)²)(Σ_m Σ_n (B_mn − B̄)²) ), where Ā and B̄ are the matrix means ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale ...
1994-06-27
[Excerpt] ... success. The key ideas behind the algorithm are: 1. stopping when one alternative is clearly better than all the others, and 2. focusing the search on ... search algorithm has been implemented on the chess machine Hitech. En route we have developed effective techniques for: dealing with independence of ... report describes the implementation, and the results of tests including games played against brute-force programs. The data indicate that B* Hitech is a ...
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
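The selection rule itself, scoring each candidate experiment by the Shannon entropy of its predicted outcome distribution under a mixture of probable models, fits in a few lines; the logistic toy models and grid below are illustrative, and nested entropy sampling exists precisely to avoid scoring the whole grid.

    import numpy as np

    def expected_entropy(models, weights, experiments):
        """Entropy of the mixture-predicted outcome distribution for each
        candidate experiment; each model maps an experiment to outcome probs."""
        H = np.empty(len(experiments))
        for j, e in enumerate(experiments):
            p = sum(w * m(e) for m, w in zip(models, weights))
            p = p / p.sum()
            nz = p[p > 0]
            H[j] = -np.sum(nz * np.log(nz))
        return H

    # toy binary-outcome models, each a logistic curve with its own threshold
    models = [lambda e, t=t: np.array([1/(1+np.exp(-10*(e-t))),
                                       1-1/(1+np.exp(-10*(e-t)))])
              for t in (0.2, 0.5, 0.8)]
    weights = np.array([0.2, 0.5, 0.3])
    grid = np.linspace(0.0, 1.0, 101)
    print(grid[np.argmax(expected_entropy(models, weights, grid))])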
2011-10-14
[Excerpt] ... landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free-energy wells and estimating the free energy ... experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on ...
Security and matching of partial fingerprint recognition systems
NASA Astrophysics Data System (ADS)
Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.
2004-08-01
Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutiae-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute-force attacks. The described matching approach has been tested on FVC2002's DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
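The reported error rates come from sweeping a decision threshold over genuine and impostor score distributions; below is a generic computation of the equal error rate, with synthetic scores standing in for real matcher output.

    import numpy as np

    def equal_error_rate(genuine, impostor):
        """EER from match-score samples (higher score = better match): find
        the threshold where the false reject and false accept rates cross."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        frr = np.array([(genuine < t).mean() for t in thresholds])
        far = np.array([(impostor >= t).mean() for t in thresholds])
        i = int(np.argmin(np.abs(frr - far)))
        return (frr[i] + far[i]) / 2.0, thresholds[i]

    rng = np.random.default_rng(0)
    gen = rng.normal(0.7, 0.1, 2_000)     # toy genuine scores
    imp = rng.normal(0.4, 0.1, 20_000)    # toy impostor (brute-force) scores
    print(equal_error_rate(gen, imp))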
Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders
2014-10-01
The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
2016-05-26
[Report documentation page fragment] Title fragment: "... to Protect". Author: MAJ Ashley E. Welte. Accepted 26 May 2016 by Robert F. Baumann, PhD, Director, Graduate Degree Programs.
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.
1991-01-01
This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter to spacecraft attitude determination based on vector measurements. Three new normalization schemes are introduced. They are compared with one another and with the known brute-force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all four schemes.
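The brute-force scheme referenced above amounts to renormalizing the filter's quaternion estimate after each update; the other three schemes differ in how the normalization enters the filter, which this sketch does not capture.

    import numpy as np

    def normalize_quaternion(q: np.ndarray) -> np.ndarray:
        """Brute-force normalization: divide the estimated quaternion by its
        Euclidean norm so it remains a valid (unit) attitude quaternion."""
        n = np.linalg.norm(q)
        if n == 0.0:
            raise ValueError('zero quaternion cannot represent an attitude')
        return q / n

    q_est = np.array([0.7072, 0.0012, -0.0008, 0.7068])  # drifted estimate
    q_hat = normalize_quaternion(q_est)
    print(q_hat, np.linalg.norm(q_hat))   # norm is exactly 1 after the step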
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks
NASA Astrophysics Data System (ADS)
Jia, F.; Lichti, D. D.
2018-05-01
Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity of reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the object to be scanned is simplified as discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under given scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search of the entire workspace. The experiment environments in this paper were simulated from two buildings on the University of Calgary campus. Compared with the "brute force" strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70% time saving.
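To make the coverage subproblem concrete, here is a minimal greedy set-cover sketch in Python. The visibility table, viewpoint names, and the greedy rule are illustrative assumptions only; they are not the paper's hierarchical densification strategy.

```python
# Greedy set-cover sketch for viewpoint selection (illustrative only).
# `visibility` maps each candidate viewpoint to the set of wall-segment
# indices it can see under the scanning-geometry constraints (building
# that table is assumed to happen elsewhere).

def greedy_viewpoints(visibility, all_segments):
    """Pick viewpoints until every wall segment is covered."""
    uncovered = set(all_segments)
    chosen = []
    while uncovered:
        # Take the viewpoint covering the most still-uncovered segments.
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:                 # remaining segments are unreachable
            raise ValueError("some segments cannot be covered")
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy example: 3 candidate viewpoints, 4 wall segments.
visibility = {"A": {0, 1}, "B": {1, 2, 3}, "C": {0, 3}}
print(greedy_viewpoints(visibility, range(4)))   # e.g. ['B', 'A']
```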
Nonconservative dynamics in long atomic wires
NASA Astrophysics Data System (ADS)
Cunningham, Brian; Todorov, Tchavdar N.; Dundas, Daniel
2014-09-01
The effect of nonconservative current-induced forces on the ions in a defect-free metallic nanowire is investigated using both steady-state calculations and dynamical simulations. Nonconservative forces were found to have a major influence on the ion dynamics in these systems, but their role in increasing the kinetic energy of the ions decreases with increasing system length. The results illustrate the importance of nonconservative effects in short nanowires and the scaling of these effects with system size. The dependence on bias and ion mass can be understood with the help of a simple pen-and-paper model. This material highlights the benefit of simple preliminary steady-state calculations in anticipating aspects of brute-force dynamical simulations, and provides rule-of-thumb criteria for the design of stable quantum wires.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a ``brute force'' approach.
Making Classical Ground State Spin Computing Fault-Tolerant
2010-06-24
[Abstract not recovered; the extracted text contains only report-documentation boilerplate and reference-list fragments, including citations to "approaches to perebor (brute-force searches) algorithms," IEEE Annals of the History of Computing 6, 384-400 (1984), and to D. Bacon and S. T. Flammia, "Adiabatic gate teleportation," Phys. Rev. Lett. 103, 120504 (2009), and "Adiabatic cluster state quantum computing."]
The role of the optimization process in illumination design
NASA Astrophysics Data System (ADS)
Gauvin, Michael A.; Jacobsen, David; Byrne, David J.
2015-07-01
This paper examines the role of the optimization process in illumination design. We discuss why the starting point of the optimization process is crucial to a better design, and why it is also important that the user understand the basic design problem and implement the correct merit function. Both a brute-force method and the downhill simplex method are used to demonstrate optimization methods, with a focus on using interactive design tools to create better starting points and streamline the optimization process.
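As a concrete illustration of the two optimizers named above, the sketch below runs SciPy's brute-force grid scan and Nelder-Mead (downhill simplex) on a stand-in merit function. Real illumination merit functions come from ray tracing, so the quadratic-plus-ripple surface here is purely an assumption.

```python
# Minimal sketch: coarse brute-force scan to find a good starting point,
# then downhill simplex refinement from that point.
import numpy as np
from scipy.optimize import brute, minimize

def merit(x):
    # Hypothetical merit: distance from an "ideal" design plus local
    # ripple, so a poor starting point can strand the simplex.
    return (x[0] - 1.0)**2 + (x[1] + 0.5)**2 + 0.1 * np.sin(5 * x[0])**2

# Brute force: coarse global scan of the design space.
x_bf = brute(merit, ranges=((-2, 2), (-2, 2)), Ns=41, finish=None)

# Downhill simplex (Nelder-Mead), started from the brute-force point.
res = minimize(merit, x0=x_bf, method="Nelder-Mead")
print(x_bf, res.x, res.fun)
```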
TEAM: efficient two-locus epistasis tests in human genome-wide association study.
Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei
2010-06-15
As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable to single-locus study, since many diseases are known to be complex traits. A brute-force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotype may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate control. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order of magnitude speedup over the brute-force approach.
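For scale, the naive brute-force baseline that TEAM speeds up looks like the sketch below: every SNP pair is tested with a chi-square test on its genotype-by-phenotype contingency table. The genotype coding and data are hypothetical stand-ins.

```python
# Brute-force two-locus scan (the quadratic baseline, not TEAM itself).
import itertools
import numpy as np
from scipy.stats import chi2_contingency

def brute_force_epistasis(genotypes, phenotype):
    """genotypes: (n_individuals, n_snps) array of 0/1/2; phenotype: 0/1."""
    n_snps = genotypes.shape[1]
    results = {}
    for i, j in itertools.combinations(range(n_snps), 2):
        # 9 joint genotype classes x 2 phenotype classes.
        joint = genotypes[:, i] * 3 + genotypes[:, j]
        table = np.zeros((9, 2))
        for g, p in zip(joint, phenotype):
            table[g, p] += 1
        # Drop empty rows so expected counts are well defined.
        table = table[table.sum(axis=1) > 0]
        _, pval, _, _ = chi2_contingency(table)
        results[(i, j)] = pval
    return results

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(200, 5))
pheno = rng.integers(0, 2, size=200)
print(min(brute_force_epistasis(geno, pheno).items(), key=lambda kv: kv[1]))
```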
Human problem solving performance in a fault diagnosis task
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1978-01-01
It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.
The Trailwatcher: A Collection of Colonel Mike Malone’s Writings
1982-06-21
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
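The brute-force baseline mentioned above amounts to repeated full analyses per design variable. A minimal central-difference sketch, with a hypothetical `analysis` function standing in for the coupled aero-structural solve, makes the cost explicit.

```python
# Central finite differences: one pair of full analyses per design variable.
import numpy as np

def fd_gradient(analysis, x, h=1e-4):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (analysis(x + e) - analysis(x - e)) / (2 * h)
    return g   # cost: 2 * len(x) expensive analyses

# Toy stand-in for a lift-coefficient response.
cl = lambda x: x[0]**2 + 0.1 * x[0] * x[1]
print(fd_gradient(cl, np.array([1.0, 2.0])))
```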
Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, Abrar; Farrell, TomTom; Diamond, Kevin-Ro
2016-08-15
Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of the atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established by brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges on database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, in which all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with the number of tree levels visited. This leads to the proof of a time complexity lower than the quadratic time needed by a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
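The quadratic baseline is easy to state. A brute-force SDH sketch in Python (bin width and counts are illustrative):

```python
# Brute-force SDH: histogram all n(n-1)/2 pairwise distances directly.
import numpy as np
from scipy.spatial.distance import pdist

def sdh_brute_force(points, bin_width, n_bins):
    d = pdist(points)   # all pairwise distances, O(n^2) time and work
    hist, _ = np.histogram(d, bins=n_bins, range=(0, bin_width * n_bins))
    return hist

pts = np.random.default_rng(1).random((1000, 3))
print(sdh_brute_force(pts, bin_width=0.1, n_bins=18))
```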
Security enhanced BioEncoding for protecting iris codes
NASA Astrophysics Data System (ADS)
Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya
2011-06-01
Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings, such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attack. The effectiveness of the suggested modification is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.
Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F
2001-06-01
On the market, several systems exist for collecting spent ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to those on another cartridge case. The research in this paper is focused on the different methods of feature selection and pattern recognition that can be used to optimize the results of image matching. The images are acquired with side light for the breech face marks and with ring light for the firing pin impression. A standard way of digitizing these images was used: the user positions the cartridge case in the same orientation according to a protocol. Positioning is especially important for the side light images, since the image obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, users position the cartridge case with approximately +/-10 degrees accuracy. We tested our algorithms using 49 cartridge cases from 19 different firearms, for which the examiner had determined which cases were shot with the same firearm. For testing, these images were mixed with a database of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among the matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we implemented a "brute force" registration: the images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position, because shadows and highlights are compared in intensity, and a different angle of incidence of the light gives a different intensity profile; the method is therefore not optimal. For this reason, preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force, since matching the contents of the images is less sensitive to variations in lighting. The problem with the brute-force method, however, is computation time: comparing the 49 cartridge cases against each other took over one month on a 333 MHz Pentium II computer. For this reason, a faster approach was implemented: correlation in log-polar coordinates. This gave results similar to the brute-force calculation, but was computed in 24 h for the complete database of 4900 images. A fast pre-selection method based on signatures was also carried out, based on the Kanade-Lucas-Tomasi (KLT) equation; the positions of the points computed with this method are compared. In this way, 11 of the 49 images were in the top position, in combination with the third scale of the à trous transform. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized to compare in seconds if the locations of the points are stored in files.
For further improvement, it is useful to add a refinement in which the user selects the areas of the cartridge case that contain the relevant marks. This is necessary if the cartridge case is damaged and bears other marks that are not from the firearm.
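A minimal sketch of the brute-force registration described above, with illustrative search ranges and step sizes (not those of the study):

```python
# Exhaustively rotate and shift one image over the other, keeping the pose
# that minimizes the standard deviation of the grey-level difference.
import numpy as np
from scipy.ndimage import rotate, shift

def brute_force_register(ref, probe, angles, offsets):
    best = (np.inf, None)
    for a in angles:
        rot = rotate(probe, a, reshape=False, order=1)
        for dx in offsets:
            for dy in offsets:
                moved = shift(rot, (dy, dx), order=1)
                score = np.std(ref - moved)
                if score < best[0]:
                    best = (score, (a, dx, dy))
    return best   # (residual std, (angle, dx, dy))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
# Synthetic probe: shifted by (2, -1), then rotated by 4 degrees;
# the search should recover approximately (-4, 1, -2).
probe = rotate(shift(img, (2, -1), order=1), 4, reshape=False, order=1)
print(brute_force_register(img, probe, angles=range(-10, 11, 2),
                           offsets=range(-3, 4)))
```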
Password Cracking Using Sony Playstations
NASA Astrophysics Data System (ADS)
Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet
Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
DynaGuard: Armoring Canary-Based Protections against Brute-Force Attacks
2015-12-11
[Abstract not recovered; the extracted text contains only distribution-statement boilerplate and figure residue: slowdown measurements across SPEC CPU2006 benchmarks (456.hmmer, 458.sjeng, 462.libquantum, 464.h264ref, 471.omnetpp, 473.astar, 483.xalancbmk) and server workloads (Apache, Nginx, PostgreSQL, SQLite, MySQL).]
The new Mobile Command Center at KSC is important addition to emergency preparedness
NASA Technical Reports Server (NTRS)
2000-01-01
Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed "The Brute," it also features computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.
Shortest path problem on a grid network with unordered intermediate points
NASA Astrophysics Data System (ADS)
Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen
2017-10-01
We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments were performed on grid maps of varying sizes and numbers of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force enumeration show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
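A brute-force baseline for the unordered-intermediate-points problem can be sketched as follows. The obstacle-free Manhattan-distance legs are an assumption made for brevity; with obstacles, each leg would come from a grid shortest-path search instead.

```python
# Try every visiting order of the intermediate points; keep the shortest.
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def brute_force_route(start, goal, intermediates):
    best_len, best_order = float("inf"), None
    for order in permutations(intermediates):
        stops = [start, *order, goal]
        total = sum(manhattan(u, v) for u, v in zip(stops, stops[1:]))
        if total < best_len:
            best_len, best_order = total, order
    return best_len, best_order   # O(k!) in the number of intermediates

print(brute_force_route((0, 0), (9, 9), [(3, 7), (6, 2), (8, 8)]))
```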
NASA Astrophysics Data System (ADS)
Kim, E.; Kim, S.; Kim, H. C.; Kim, B. U.; Cho, J. H.; Woo, J. H.
2017-12-01
In this study, we investigated the contributions of major emission source categories located upwind of South Korea to particulate matter (PM) in South Korea. In general, air quality in South Korea is affected by anthropogenic air pollutants emitted from foreign countries, including China. Some studies reported that foreign emissions contributed 50% of annual surface PM total mass concentrations in the Seoul Metropolitan Area, South Korea, in 2014. Previous studies examined the PM contributions of foreign emissions from all sectors, considering meteorological variations; however, few studies have assessed the contributions of specific foreign source categories. Therefore, we attempted to estimate the sectoral contributions of foreign emissions from China to South Korean PM using our air quality forecasting system. We used the Model Inter-Comparison Study in Asia 2010 inventory for foreign emissions and the Clean Air Policy Support System 2010 inventory for domestic emissions. To quantify the contributions of major emission sectors to South Korean PM, we applied the Community Multiscale Air Quality system with the brute-force method, perturbing emissions from the industrial, residential, fossil-fuel power plant, transportation, and agriculture sectors in China. We noted that the industrial sector was predominant over the region, except during the cold season for primary PM, when residential emissions increase drastically due to heating demand. This study will benefit ensemble air quality forecasting and refined control strategy design by providing a quantitative assessment of the seasonal contributions of foreign emissions from major source categories.
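The brute-force method boils down to differencing a base run and a perturbed run. A schematic sketch, with a hypothetical `run_cmaq` function standing in for a full chemical transport simulation:

```python
# Brute-force sensitivity: contribution = base run minus zero-out run.
def sector_contribution(run_cmaq, emissions, sector):
    base = run_cmaq(emissions)                    # base-case PM level
    perturbed = dict(emissions, **{sector: 0.0})  # zero out one sector
    return base - run_cmaq(perturbed)             # PM attributed to it

# Toy linear "model" so the sketch is runnable end to end.
toy = lambda e: 0.4 * e["industry"] + 0.3 * e["residential"] + 0.2 * e["power"]
emis = {"industry": 100.0, "residential": 50.0, "power": 80.0}
print(sector_contribution(toy, emis, "industry"))   # 40.0
```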
"The Et Tu Brute Complex" Compulsive Self Betrayal
ERIC Educational Resources Information Center
Antus, Robert Lawrence
2006-01-01
In this article, the author discusses "The Et Tu Brute Complex." More specifically, this phenomenon occurs when a person, instead of supporting and befriending himself, orally condemns himself in front of other people and becomes his own worst enemy. This is a form of compulsive self-hatred. Most often, the victim of this complex is unaware of the…
The new Mobile Command Center at KSC is important addition to emergency preparedness
NASA Technical Reports Server (NTRS)
2000-01-01
Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed "The Brute," serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer work stations, mobile telephones and a fax machine. It can also generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.
A nonperturbative approximation for the moderate Reynolds number Navier–Stokes equations
Roper, Marcus; Brenner, Michael P.
2009-01-01
The nonlinearity of the Navier–Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier–Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes. PMID:19211800
A nonperturbative approximation for the moderate Reynolds number Navier-Stokes equations.
Roper, Marcus; Brenner, Michael P
2009-03-03
The nonlinearity of the Navier-Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier-Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes.
A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design
NASA Technical Reports Server (NTRS)
Wallace, Mark S.
2015-01-01
The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach would require a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
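The embarrassingly parallel part can be sketched as follows. `propagate_50yr` is a hypothetical stand-in for the real trajectory propagation, and the impact probability used is arbitrary.

```python
# Trials are independent, so they can be farmed out across cores and the
# impact probability estimated from the hit count.
import random
from multiprocessing import Pool

def propagate_50yr(seed):
    # Placeholder: True if this sampled trajectory impacts Mars.
    return random.Random(seed).random() < 5e-5

if __name__ == "__main__":
    n = 100_000
    with Pool() as pool:
        hits = sum(pool.map(propagate_50yr, range(n), chunksize=1000))
    print(f"estimated impact probability: {hits / n:.2e}")
```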
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the output variable of interest. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining these indices were applied and compared: the brute force method and a best-practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present research show that the brute force method is best suited for wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of (a) choosing an accurate distribution to model the wind regime at a site and (b) estimating the probability distribution parameters accurately. This is the most important conclusion of this research, because it opens a field for further research that the authors believe could change the wind energy field tremendously.
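For concreteness, a brute-force estimate of a first-order sensitivity index on a toy model is sketched below; the nested Monte Carlo loop and all numbers are illustrative (this is the generic variance-based construction, not the study's exact procedure).

```python
# Brute-force first-order index: S_i = Var(E[Y|X_i]) / Var(Y), with the
# conditional expectation taken by a nested loop (simple but sample-hungry).
import numpy as np

rng = np.random.default_rng(7)
model = lambda x: x[..., 0] + 0.5 * x[..., 1] ** 2   # toy stand-in

def first_order_index(i, n_outer=200, n_inner=200, dim=2):
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        x = rng.random((n_inner, dim))   # resample all inputs...
        x[:, i] = rng.random()           # ...but fix X_i at one value
        cond_means[k] = model(x).mean()
    y = model(rng.random((n_outer * n_inner, dim)))
    return cond_means.var() / y.var()

print([first_order_index(i) for i in range(2)])   # roughly [0.79, 0.21]
```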
Predictive ecology: systems approaches
Evans, Matthew R.; Norris, Ken J.; Benton, Tim G.
2012-01-01
The world is experiencing significant, largely anthropogenically induced, environmental change. This will impact the biological world, and we need to be able to forecast its effects. In order to produce such forecasts, ecology needs to become more predictive—to develop the ability to understand how ecological systems will behave in future, changed, conditions. Further development of process-based models is required to allow such predictions to be made. Critical to the development of such models will be achieving a balance between the brute-force approach that naively attempts to include everything, and oversimplification that throws out important heterogeneities at various levels. Central to this will be the recognition that individuals are the elementary particles of all ecological systems. As such, it will be necessary to understand the effect of evolution on ecological systems, particularly when exposed to environmental change. However, insights from evolutionary biology will help the development of models even when data may be sparse. Process-based models are more common, and are used for forecasting, in other disciplines, e.g. climatology and molecular systems biology. Tools and techniques developed in these endeavours can be appropriated into ecological modelling, but it will also be necessary to develop the science of ecoinformatics along with approaches specific to ecological problems. The impetus for this effort should come from society's demand to understand the effects of environmental change on the world and what might be done to mitigate or adapt to them. PMID:22144379
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2014-01-29
We demonstrate, using molecular dynamics (MD) simulations, that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulations is H2 > CO2 >> N2 > Ar > CH4, which generally follows the trend in kinetic diameters. Moreover, this trend is confirmed by fluxes based on the computed free energy barriers for gas permeation, using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
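A minimal sketch of the underlying idea: a spun-up tracer state is a fixed point of the one-cycle propagator Phi (one simulated year of the online model), so one solves F(x) = Phi(x) - x = 0 with a Jacobian-free Newton-Krylov method instead of time-stepping Phi thousands of times. The cheap linear stand-in for Phi below is an assumption; it is not an ocean model.

```python
# Solve for the fixed point of a one-cycle propagator with Newton-Krylov.
import numpy as np
from scipy.optimize import newton_krylov

n = 50
rng = np.random.default_rng(3)
A = 0.9 * np.eye(n) + 0.002 * rng.random((n, n))   # contractive stand-in
b = np.ones(n)
phi = lambda x: A @ x + b                          # "one model year"

x_star = newton_krylov(lambda x: phi(x) - x, np.zeros(n), f_tol=1e-10)
print(np.max(np.abs(phi(x_star) - x_star)))        # drift of converged state
```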
Use of EPANET solver to manage water distribution in Smart City
NASA Astrophysics Data System (ADS)
Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.
2018-02-01
This paper presents a method of using the EPANET solver to support management of the water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application can perform both single and cyclic simulations with a specified step for changing the values of selected process variables. The architecture of the application is shown in the paper. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least Squares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
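Since brute force, SLSQP and a modified Powell method are all available in SciPy, a schematic of the selection step might look like the following. `simulate_network` is a hypothetical stand-in for a call into the EPANET solver, with a toy cost combining pump energy and a pressure penalty.

```python
# Three optimizers over a single control variable (e.g. a pump speed).
import numpy as np
from scipy.optimize import brute, minimize

def simulate_network(speed):
    s = np.atleast_1d(speed)[0]
    pressure = 20.0 * s                       # toy hydraulic response
    energy = 5.0 * s**2                       # toy pump energy cost
    penalty = max(0.0, 15.0 - pressure)**2    # pressure requirement
    return energy + penalty

x_bf = brute(simulate_network, ranges=((0.0, 2.0),), Ns=101, finish=None)
x_slsqp = minimize(simulate_network, x0=[1.0], method="SLSQP",
                   bounds=[(0.0, 2.0)]).x
x_powell = minimize(simulate_network, x0=[1.0], method="Powell",
                    bounds=[(0.0, 2.0)]).x
print(x_bf, x_slsqp, x_powell)
```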
Tinghög, Gustav; Andersson, David; Västfjäll, Daniel
2017-01-01
According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals’ deliberate and fully informed choices (i.e., option luck) while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals’ fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preference not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign individual responsibility for life outcomes, i.e., health and wealth. PMID:28424641
Tinghög, Gustav; Andersson, David; Västfjäll, Daniel
2017-01-01
According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals' deliberate and fully informed choices (i.e., option luck) while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals' fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to a larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preference not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign individual responsibility for life outcomes, i.e., health and wealth.
Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn
2016-01-01
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech; the database is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features and by higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to a 56.2% determination coefficient. PMID:27176486
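The evaluation protocol can be sketched with scikit-learn; the features, labels and speaker IDs below are random stand-ins for the iHEARu-EAT data, and macro-averaged recall plays the role of the paper's unweighted average recall.

```python
# SVM scored with leave-one-speaker-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 20))              # acoustic feature vectors
y = rng.integers(0, 2, size=600)            # eating vs. not eating
speakers = rng.integers(0, 20, size=600)    # 20 speakers -> 20 folds

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=speakers,
                         cv=LeaveOneGroupOut(), scoring="recall_macro")
print(scores.mean())   # ~0.5 here, since the labels are random
```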
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
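For illustration, here is a toy integer program in the same spirit, with hypothetical per-galaxy bin costs. The paper's actual cost couples assignments through the correlation functions and was solved with Gurobi; this sketch uses the open-source PuLP front end instead.

```python
# Toy ILP: binary assignment variables under linear constraints.
import pulp

galaxies = ["g1", "g2", "g3"]
z_bins = [0.1, 0.2, 0.3]
cost = {("g1", 0.1): 1.0, ("g1", 0.2): 0.2, ("g1", 0.3): 0.9,
        ("g2", 0.1): 0.3, ("g2", 0.2): 0.8, ("g2", 0.3): 0.7,
        ("g3", 0.1): 0.6, ("g3", 0.2): 0.5, ("g3", 0.3): 0.1}

prob = pulp.LpProblem("redshift_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(cost), cat="Binary")
prob += pulp.lpSum(cost[k] * x[k] for k in cost)       # objective
for g in galaxies:                                     # one bin per galaxy
    prob += pulp.lpSum(x[(g, z)] for z in z_bins) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({g: z for (g, z) in cost if x[(g, z)].value() == 1})
```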
Quaternion normalization in additive EKF for spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.
1991-01-01
This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination, which is based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for attitude determination of any three dimensional body when based on vector measurements, and use an additive EKF for estimation, and the quaternion for specifying the attitude.
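For reference, the brute-force normalization scheme, as the term is commonly used in this literature, is simply division of the estimated quaternion by its Euclidean norm after the filter update (a minimal sketch; the paper's alternative schemes are not reproduced here).

```python
# Brute-force quaternion normalization: divide by the Euclidean norm.
import numpy as np

def normalize_quaternion(q):
    n = np.linalg.norm(q)
    if n == 0:
        raise ValueError("zero quaternion cannot be normalized")
    return q / n

q_est = np.array([0.98, 0.02, -0.01, 0.15])   # slightly denormalized estimate
q_n = normalize_quaternion(q_est)
print(q_n, np.linalg.norm(q_n))               # unit norm restored
```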
Morphodynamic data assimilation used to understand changing coasts
Plant, Nathaniel G.; Long, Joseph W.
2015-01-01
Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
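As a minimal illustration of one of the listed forms, here is a scalar Kalman filter update blending a model forecast with an observation in proportion to their error variances; the shoreline numbers are invented for the example.

```python
# Scalar Kalman update: blend forecast and observation by their variances.
def kalman_update(x_model, var_model, y_obs, var_obs):
    gain = var_model / (var_model + var_obs)
    x_post = x_model + gain * (y_obs - x_model)   # blended estimate
    var_post = (1.0 - gain) * var_model           # reduced uncertainty
    return x_post, var_post

# E.g. forecast shoreline position 12.0 m (variance 2 m^2), survey says
# 10.5 m (variance 1 m^2): the update leans toward the more certain value.
print(kalman_update(12.0, 2.0, 10.5, 1.0))   # (11.0, 0.666...)
```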
NASA Astrophysics Data System (ADS)
Basri, M.; Mawengkang, H.; Zamzami, E. M.
2018-03-01
Limited local storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm, which embeds the DES cipher results so that they can later be extracted and merged back into one ciphertext.
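A rough sketch of the two building blocks under stated assumptions: DES via the pycryptodome package, and LSB embedding in a single grayscale array (the paper's split across eight images is omitted, and ECB mode is used only for brevity).

```python
# DES encryption followed by LSB embedding of the ciphertext bits.
import numpy as np
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad

key = b"8bytekey"                                  # DES keys are 8 bytes
cipher = DES.new(key, DES.MODE_ECB)                # toy mode; not recommended
ct = cipher.encrypt(pad(b"secret message", DES.block_size))

bits = np.unpackbits(np.frombuffer(ct, dtype=np.uint8))
img = np.random.default_rng(5).integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = img.flatten()
flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
stego = flat.reshape(img.shape)

# Recovery: read the LSBs back and repack into ciphertext bytes.
recovered = np.packbits(stego.flatten()[:bits.size] & 1).tobytes()
assert recovered == ct
```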
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients
NASA Astrophysics Data System (ADS)
Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.
2017-09-01
We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty-one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.
A Formal Algorithm for Routing Traces on a Printed Circuit Board
NASA Technical Reports Server (NTRS)
Hedgley, David R., Jr.
1996-01-01
This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by a computer other than by brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which they are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and, for the first time, speaks eloquently to the question by way of symbolic statements.
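For background, a classic non-brute-force grid router is Lee's algorithm, a breadth-first wave expansion that finds a shortest rectilinear path around obstacles in time linear in the grid size. The sketch below is this textbook method, shown only as a well-known point of reference; it is not the algorithm introduced in the paper.

```python
# Lee's algorithm: BFS wave expansion over a routing grid.
from collections import deque

def lee_route(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = blocked; returns a shortest path."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:                     # retrace the wavefront
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None   # no route exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 0)))
```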
Zimmerman, M I; Bowman, G R
2016-01-01
Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space.
Penn, Alexandra S
2016-01-01
Understanding and manipulating bacterial biofilms is crucial in medicine, ecology and agriculture and has potential applications in bioproduction, bioremediation and bioenergy. Biofilms often resist standard therapies and the need to develop new means of intervention provides an opportunity to fundamentally rethink our strategies. Conventional approaches to working with biological systems are, for the most part, "brute force", attempting to effect control in an input and effort intensive manner and are often insufficient when dealing with the inherent non-linearity and complexity of living systems. Biological systems, by their very nature, are dynamic, adaptive and resilient and require management tools that interact with dynamic processes rather than inert artefacts. I present an overview of a novel engineering philosophy which aims to exploit rather than fight those properties, and hence provide a more efficient and robust alternative. Based on a combination of evolutionary theory and whole-systems design, its essence is what I will call systems aikido; the basic principle of aikido being to interact with the momentum of an attacker and redirect it with minimal energy expenditure, using the opponent's energy rather than one's own. In more conventional terms, this translates to a philosophy of equilibrium engineering, manipulating systems' own self-organisation and evolution so that the evolutionarily or dynamically stable state corresponds to a function which we require. I illustrate these ideas with a description of a proposed manipulation of environmental conditions to alter the stability of co-operation in the context of Pseudomonas aeruginosa biofilm infection of the cystic fibrosis lung.
Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jung Heon; Wu, Kesheng; Simon, Horst D.
2014-03-01
VPIN (Volume synchronized Probability of Informed trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal more than an hour before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, which culminated in a market value of $1 trillion disappearing, only to recover those losses twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of parameter combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTK (Uncertainty Quantification Toolkit). Results have shown the dominance of two parameters in the computation of the FPR. Using the outputs from the NOMAD optimization and the sensitivity analysis, we recommend a range of values for each of the free parameters that perform well on a large set of futures trading records.
Child Brides, Forced Marriage, and Partner Violence in America: Tip of an Iceberg Revealed.
McFarlane, Judith; Nava, Angeles; Gilroy, Heidi; Maddoux, John
2016-04-01
Forced marriage is a violation of human rights and thwarts personal safety and well-being. Child brides are at higher risk of intimate partner violence (IPV) and often are unable to effectively negotiate safe sex, leaving them vulnerable to sexually transmitted infections, including human immunodeficiency virus, and early pregnancy. The prevalence of forced marriage and child marriage in the United States is unknown. The intersection of forced marriage and child marriage and IPV is equally unknown. When 277 mothers who reported IPV to shelter or justice services were asked about a forced marriage attempt, frequency and severity of IPV, mental health status, and behavioral functioning of their child, 47 (17%) reported a forced marriage attempt with 45% of the women younger than 18 years of age at the time of the attempt. Among the 47 women, 11 (23%) reported death threats, 20 (43%) reported marriage to the person, and 28 (60%) reported a pregnancy. Women younger than 18 years reported more threats of isolation and economic deprivation associated with the attempt as well as pressure from parents to marry. Regardless of age, women experiencing a forced marriage attempt reported more intimate partner sexual abuse, somatization, and behavior problems for their children. Forced marriage attempts occurred to one in six women (17%) reporting IPV and are associated with worse functioning for mother and child. The frequent occurrence and associated effect of forced marriage attempts to maternal child functioning indicates routine assessment for a forced marriage attempt as part of comprehensive care for women reporting IPV.
Smiley, CalvinJohn; Fakunle, David
The synonymy of Blackness with criminality is not a new phenomenon in America. Documented historical accounts have shown how myths, stereotypes, and racist ideologies led to discriminatory policies and court rulings that fueled racial violence in the post-Reconstruction era and culminated in the exponential increase of Black male incarceration today. Misconceptions and prejudices manufactured and disseminated through various channels such as the media included references to a "brute" image of Black males. In the 21st century, this negative imagery of Black males has frequently drawn on the negative connotation of the terminology "thug." In recent years, law enforcement agencies have unreasonably used deadly force on Black males allegedly considered to be "suspects" or "persons of interest." The exploitation of these often-targeted victims' criminal records, physical appearances, or misperceived attributes has been used to justify their unlawful deaths. Despite the connection between disproportionate criminality and Black masculinity, little research has been done on how unarmed Black male victims, particularly but not exclusively at the hands of law enforcement, have been posthumously criminalized. This paper investigates the historical criminalization of Black males and its connection to contemporary unarmed victims of law enforcement. Action research methodology is utilized in the data collection process to interpret how Black male victims are portrayed by traditional mass media, particularly through the use of language, in ways that marginalize and de-victimize these individuals. This study also aims to elucidate a contemporary understanding of race relations, racism, and the plight of the Black male in a 21st-century "post-racial" America.
Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans
2008-04-28
Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10^6 particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10^30 m^-3 s^-1 accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK), which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.
Step to improve neural cryptography against flipping attacks.
Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold
2004-12-01
Synchronization of neural networks by mutual learning has been demonstrated to be a possible basis for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not sufficiently secure under regular flipping attack (RFA) and are completely insecure under majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute-force attack (BFA), and that the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
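As an illustration of the underlying problem, the sketch below constructs a vector potential satisfying B = curl(A) on a simple collocated grid using a line-integral (Poincare-gauge) construction with A_z = 0, and checks the curl numerically. This is only the basic idea, not the authors' staggered-grid algorithm, and the divergence-free test field is an assumed example.

    # Minimal illustration of building a vector potential A with curl(A) = B
    # in the gauge A_z = 0, via line integrals:
    #   A_x(x,y,z) =  int_0^z B_y dz'
    #   A_y(x,y,z) = -int_0^z B_x dz' + int_0^x B_z(x',y,0) dx'
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    n = 32
    x = y = z = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

    Bx, By, Bz = -Y, X, np.zeros_like(X)    # rigid-rotation field, div B = 0

    Ax = cumulative_trapezoid(By, z, axis=2, initial=0.0)
    Ay = (-cumulative_trapezoid(Bx, z, axis=2, initial=0.0)
          + cumulative_trapezoid(Bz[:, :, 0], x, axis=0, initial=0.0)[:, :, None])

    # Check curl(A) against B with centered differences.
    Bx_rec = -np.gradient(Ay, z, axis=2)                  # dAz/dy - dAy/dz
    By_rec = np.gradient(Ax, z, axis=2)                   # dAx/dz - dAz/dx
    Bz_rec = np.gradient(Ay, x, axis=0) - np.gradient(Ax, y, axis=1)
    err = max(np.abs(Bx_rec - Bx).max(), np.abs(By_rec - By).max(),
              np.abs(Bz_rec - Bz).max())
    print(f"max curl error: {err:.2e}")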
Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M
2013-01-01
This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
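For reference, here is a single-node sketch of the brute-force computation that the paper distributes across cluster nodes and GPUs: chunked pairwise squared Euclidean distances followed by a partial sort per query row. Sizes are illustrative.

    # Brute-force exact k-NN graph on one node, in NumPy.
    import numpy as np

    def knn_graph(X, k, chunk=1024):
        n = X.shape[0]
        sq = np.einsum("ij,ij->i", X, X)              # squared norms
        neighbors = np.empty((n, k), dtype=np.int64)
        for start in range(0, n, chunk):
            Q = X[start:start + chunk]
            m = Q.shape[0]
            # d2[i, j] = ||q_i||^2 - 2 q_i . x_j + ||x_j||^2
            d2 = sq[start:start + m, None] - 2.0 * Q @ X.T + sq[None, :]
            d2[np.arange(m), start + np.arange(m)] = np.inf   # exclude self
            idx = np.argpartition(d2, k, axis=1)[:, :k]       # k smallest, unsorted
            rows = np.arange(m)[:, None]
            order = np.argsort(d2[rows, idx], axis=1)         # sort the survivors
            neighbors[start:start + m] = idx[rows, order]
        return neighbors

    X = np.random.rand(5000, 64).astype(np.float32)
    print(knn_graph(X, k=10).shape)   # (5000, 10)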
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions, and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and the brute-force approach are presented. [Work sponsored by ONR and the NSWCCD ILIR Board.]
Automated design of genomic Southern blot probes
2010-01-01
Background Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration and successful homologous recombination in gene targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction digest and gel electrophoresis. Critically, for the technique to succeed, the probe must be unique to the target locus so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design. A genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process often requiring several attempts to find a suitable probe for laboratory testing. Results We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well as, or better than, our previous manual designs, while considerably reducing design time. We went on to experimentally validate a number of these automated designs by Southern blotting. The majority of probes we tested performed well, confirming our in silico prediction methodology and the general usefulness of the software for automated genomic Southern probe design. Conclusions Software and supplementary information are freely available at: http://www.genes2cognition.org/software/southern_blot PMID:20113467
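A toy sketch of the brute-force generate-score-rank loop described above. The `count_genome_hits` function is a hypothetical placeholder for the BLAST-like genome search, the lowercase-repeat convention is borrowed from soft masking, and the window and step sizes are made up.

    # Toy version of the brute-force probe-design loop: enumerate candidate
    # windows, score, rank. `count_genome_hits` stands in for the genome search.
    def count_genome_hits(seq):
        return 1   # assume unique; a real search would query the genome

    def design_probes(region, probe_len=500, step=50):
        candidates = []
        for start in range(0, len(region) - probe_len + 1, step):
            probe = region[start:start + probe_len]
            # Soft-masking convention: repetitive bases are lowercase.
            repeat_frac = sum(c.islower() for c in probe) / probe_len
            if count_genome_hits(probe) == 1:      # unique in the genome
                candidates.append((repeat_frac, start, probe))
        candidates.sort()                          # least repetitive first
        return candidates

    region = "ACGT" * 200 + "acgtacgt" * 25 + "GGCC" * 200
    best = design_probes(region)[0]
    print("best probe at offset", best[1], "repeat fraction", best[0])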
SIMBAD : a sequence-independent molecular-replacement pipeline
Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...
2018-06-08
The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.
Low-field thermal mixing in [1-13C] pyruvic acid for brute-force hyperpolarization.
Peat, David T; Hirsch, Matthew L; Gadian, David G; Horsewill, Anthony J; Owers-Bradley, John R; Kempf, James G
2016-07-28
We detail the process of low-field thermal mixing (LFTM) between 1H and 13C nuclei in neat [1-13C] pyruvic acid at cryogenic temperatures (4-15 K). Using fast-field-cycling NMR, 1H nuclei in the molecule were polarized at modest high field (2 T) and then equilibrated with 13C nuclei by fast cycling (∼300-400 ms) to a low field (0-300 G) that activates thermal mixing. The 13C NMR spectrum was recorded after fast cycling back to 2 T. The 13C signal derives from 1H polarization via LFTM, in which the polarized ('cold') proton bath contacts the unpolarized ('hot') 13C bath at a field so low that Zeeman and dipolar interactions are similar-sized and fluctuations in the latter drive 1H-13C equilibration. By varying mixing time (tmix) and field (Bmix), we determined field-dependent rates of polarization transfer (1/τ) and decay (1/T1m) during mixing. This defines conditions for effective mixing, as utilized in 'brute-force' hyperpolarization of low-γ nuclei like 13C using Boltzmann polarization from nearby protons. For neat pyruvic acid, near-optimum mixing occurs for tmix ∼ 100-300 ms and Bmix ∼ 30-60 G. Three forms of frozen neat pyruvic acid were tested: two glassy samples (one well-deoxygenated, the other O2-exposed) and one sample pre-treated by annealing (also well-deoxygenated). Both annealing and the presence of O2 are known to dramatically alter high-field longitudinal relaxation (T1) of 1H and 13C (up to 10^2-10^3-fold effects). Here, we found smaller, but still critical, factors of ∼(2-5)× on both τ and T1m. Annealed, well-deoxygenated samples exhibit the longest time constants, e.g., τ ∼ 30-70 ms and T1m ∼ 1-20 s, each growing vs. Bmix. Mixing 'turns off' for Bmix > ∼100 G. That T1m ≫ τ is consistent with earlier success with polarization transfer from 1H to 13C by LFTM.
Faint Debris Detection by Particle Based Track-Before-Detect Method
NASA Astrophysics Data System (ADS)
Uetsuhara, M.; Ikoma, N.
2014-09-01
This study proposes a particle method to detect faint debris, hardly visible in a single frame, from an image sequence based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects signals of targets in each single frame by distinguishing differences in intensity between foreground and background, and then associates the signals for each target between frames. DBT is capable of tracking bright targets but is limited: it must account for the presence of false signals and has difficulty recovering from false associations. On the other hand, TBD methods track targets without explicitly detecting the signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals around the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To reduce the significant drawbacks of brute-force search and a not-fully-automated process, this study proposes a faint debris detection algorithm based on a particle TBD method consisting of sequential update of the target state and heuristic search for the initial state. The state consists of position, velocity direction and magnitude, and size of the debris over the image at a single frame. The sequential update process is implemented by a particle filter (PF). PF is an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for the initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles over the same image sequences, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint debris detection method. The algorithm's performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which should contain a sufficient number of faint debris images. The results indicate the proposed method is capable of tracking faint debris with moderate computational costs at an operational level.
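A minimal sketch of the particle-filter half of such a TBD scheme: propagate each particle, weight it by the image evidence at its predicted location, and resample. The constant-velocity dynamics, the intensity-based likelihood, and all constants are assumptions for illustration; the paper's state additionally includes target size, and its initial distribution comes from the evolutionary search.

    # Minimal particle-filter step for track-before-detect.
    # State per particle: (x, y, vx, vy).
    import numpy as np

    rng = np.random.default_rng(0)

    def tbd_step(particles, frame, dt=1.0, sigma_pos=0.5, noise=1.0):
        n = len(particles)
        # Propagate: constant-velocity motion plus process noise.
        particles[:, 0] += particles[:, 2] * dt + rng.normal(0, sigma_pos, n)
        particles[:, 1] += particles[:, 3] * dt + rng.normal(0, sigma_pos, n)
        # Likelihood: brighter pixels near the predicted position are more
        # plausible for a faint point target sitting on background noise.
        xi = np.clip(particles[:, 0].round().astype(int), 0, frame.shape[1] - 1)
        yi = np.clip(particles[:, 1].round().astype(int), 0, frame.shape[0] - 1)
        w = np.exp(frame[yi, xi] / noise)
        w /= w.sum()
        # Systematic resampling keeps particles in high-likelihood regions.
        idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(n)) / n)
        return particles[np.minimum(idx, n - 1)]

    frames = rng.normal(0, 1, (20, 64, 64))
    for t, f in enumerate(frames):            # inject a faint moving target
        f[10 + t // 2, 5 + t] += 2.0
    particles = np.column_stack([rng.uniform(0, 64, (1000, 2)),
                                 rng.normal(0, 1, (1000, 2))])
    for f in frames:
        particles = tbd_step(particles, f)
    print(particles[:, :2].mean(axis=0))      # should drift toward the track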
Prospective Optimization with Limited Resources
Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei
2015-01-01
The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309
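For intuition, a sketch of the brute-force ideal actor the authors compare against: exhaustively enumerate every upward path to a fixed depth and pick the best first step. The two-successor encoding of the triangular grid and the reward values are assumptions.

    # Brute-force "ideal actor": enumerate every upward path to a fixed
    # depth and return (best cumulative reward, best immediate step).
    # Encoding assumption: from column c in one row you may move to column
    # c or c + 1 in the next row of a triangular reward grid.
    import numpy as np

    def best_first_step(rewards, row, col, depth):
        if depth == 0 or row + 1 >= len(rewards):
            return 0.0, col
        best = (-np.inf, col)
        for nxt in (col, col + 1):
            if nxt < len(rewards[row + 1]):
                val, _ = best_first_step(rewards, row + 1, nxt, depth - 1)
                val += rewards[row + 1][nxt]
                if val > best[0]:
                    best = (val, nxt)
        return best

    rng = np.random.default_rng(1)
    grid = [list(rng.integers(1, 6, size=r + 1)) for r in range(12)]
    print(best_first_step(grid, row=0, col=0, depth=5))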
A comparison of approaches for finding minimum identifying codes on graphs
NASA Astrophysics Data System (ADS)
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult with a standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored, consisting of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
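A sketch of the standard brute-force baseline for this problem, assuming the usual definition: a code C is identifying when every vertex's closed neighborhood intersects C in a nonempty, distinct set. The 6-cycle example has a known minimum identifying code of size 3.

    # Brute-force search for a minimum identifying code: try all vertex
    # subsets in increasing size; a subset C works when every vertex v has
    # a nonempty, unique identifier N[v] & C.
    from itertools import combinations

    def min_identifying_code(adj):
        N = {v: frozenset(adj[v]) | {v} for v in adj}   # closed neighborhoods
        vertices = sorted(adj)
        for size in range(1, len(vertices) + 1):
            for C in combinations(vertices, size):
                ids = [N[v] & frozenset(C) for v in vertices]
                if all(ids) and len(set(ids)) == len(ids):
                    return set(C)
        return None   # e.g., twin vertices make a graph non-identifiable

    # 6-cycle: minimum identifying code has size 3 (e.g., {0, 2, 4}).
    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(min_identifying_code(cycle6))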
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors to detect dim military targets. These systems' performance is limited by the inhomogeneous background in the sensor field of view, which strongly impacts target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density, or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background that would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
Crystal nucleation and metastable bcc phase in charged colloids: A molecular dynamics study
NASA Astrophysics Data System (ADS)
Ji, Xinqiang; Sun, Zhiwei; Ouyang, Wenze; Xu, Shenghua
2018-05-01
The dynamic process of homogeneous nucleation in charged colloids is investigated by brute-force molecular dynamics simulation. To check whether the liquid-solid transition passes through a metastable bcc phase, simulations are performed at state points that definitely lie in the phase region of thermodynamically stable fcc. The simulation results confirm that, in all of these cases, the preordered precursors, acting as the seeds of nucleation, always have predominant bcc symmetry, consistent with Ostwald's step rule and the Alexander-McTague mechanism. However, the polymorph selection is not straightforward, because the crystal structures formed are often not determined by the symmetry of the intermediate precursors but have different characters at different state points. The region of state points where bcc crystal structures of sufficient size are formed during crystallization is narrow, which gives a reasonable explanation as to why the metastable bcc phase in charged colloidal suspensions is rarely detected in macroscopic experiments.
NASA Astrophysics Data System (ADS)
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
Ab Initio Effective Rovibrational Hamiltonians for Non-Rigid Molecules via Curvilinear VMP2
NASA Astrophysics Data System (ADS)
Changala, Bryan; Baraban, Joshua H.
2017-06-01
Accurate predictions of spectroscopic constants for non-rigid molecules are particularly challenging for ab initio theory. For all but the smallest systems, "brute force" diagonalization of the full rovibrational Hamiltonian is computationally prohibitive, leaving us at the mercy of perturbative approaches. However, standard perturbative techniques, such as second-order vibrational perturbation theory (VPT2), are based on the approximation that a molecule makes small-amplitude vibrations about a well-defined equilibrium structure. Such assumptions are physically inappropriate for non-rigid systems. In this talk, we will describe extensions to curvilinear vibrational Møller-Plesset perturbation theory (VMP2) that account for rotational and rovibrational effects in the molecular Hamiltonian. Through several examples, we will show that this approach predicts molecular constants, including rotational and centrifugal distortion parameters, Coriolis coupling constants, and anharmonic vibrational and tunneling frequencies, to nearly microwave accuracy.
NASA Astrophysics Data System (ADS)
Zhou, Xingyu; Zhuge, Qunbi; Qiu, Meng; Xiang, Meng; Zhang, Fangyuan; Wu, Baojian; Qiu, Kun; Plant, David V.
2018-02-01
We investigate the capacity improvement achieved by bandwidth variable transceivers (BVT) in meshed optical networks with cascaded ROADM filtering at fixed channel spacing, and then propose an artificial neural network (ANN)-aided provisioning scheme to select the optimal symbol rate and modulation format for the BVTs in this scenario. Compared with a fixed symbol rate transceiver with standard QAMs, it is shown by both experiments and simulations that BVTs can increase the average capacity by more than 17%. The ANN-aided BVT provisioning method uses parameters monitored from a coherent receiver and employs a trained ANN to transform these parameters into the desired configuration. It is verified by simulation that the BVT with the proposed provisioning method can approach the upper limit of the system capacity obtained by brute-force search under various degrees of flexibility.
Chemical reaction mechanisms in solution from brute force computational Arrhenius plots.
Kazemi, Masoud; Åqvist, Johan
2015-06-01
Decomposition of the activation free energies of chemical reactions into enthalpic and entropic components can provide invaluable signatures of mechanistic pathways, both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies.
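The decomposition itself is simple once the activation free energy ΔG‡(T) is in hand: with ΔG‡ = ΔH‡ - TΔS‡, a linear fit of the simulated activation free energies against temperature yields ΔH‡ as the intercept and -ΔS‡ as the slope. A sketch with made-up numbers:

    # Thermodynamic decomposition from a computational Arrhenius-style plot:
    # fit dG(T) = dH - T*dS; intercept gives dH, negative slope gives dS.
    # The values below are hypothetical, for illustration only.
    import numpy as np

    T = np.array([283.0, 293.0, 303.0, 313.0, 323.0])     # K
    dG = np.array([22.1, 22.4, 22.7, 23.0, 23.3])         # kcal/mol (made up)

    slope, intercept = np.polyfit(T, dG, 1)
    dH = intercept                # kcal/mol
    dS = -slope                   # kcal/mol/K
    print(f"dH = {dH:.1f} kcal/mol, dS = {dS * 1000:.1f} cal/mol/K")
    # Here dS < 0: the activation free energy grows with T, indicating an
    # entropically disfavored transition state.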
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
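One concrete example of the kind of structure exploitation described: for a modal model with diagonal dynamics, the frequency response H(w) = C (jwI - A)^{-1} B reduces to a sum of rank-one mode contributions, avoiding a dense solve at every frequency point. The sketch below (in Python rather than Pro-Matlab) is illustrative, not the paper's routines.

    # Frequency response of a modal (diagonal-A) model, mode by mode.
    import numpy as np

    def freq_response_modal(poles, B, C, w):
        # poles: (n,) complex; B: (n, ni); C: (no, n); w: (nw,) rad/s
        # H[k] = sum_i C[:, i] B[i, :] / (1j*w[k] - poles[i])
        denom = 1.0 / (1j * w[:, None] - poles[None, :])       # (nw, n)
        return np.einsum("oi,wi,ij->woj", C, denom, B)         # (nw, no, ni)

    n = 2000
    poles = -0.01 * np.random.rand(n) + 1j * np.linspace(1, 100, n)
    B = np.random.randn(n, 1)
    C = np.random.randn(1, n)
    w = np.linspace(0.1, 120, 500)
    H = freq_response_modal(poles, B, C, w)
    print(H.shape)   # (500, 1, 1)

    # Brute-force equivalent for comparison (dense solve per frequency):
    # H[k] = C @ np.linalg.solve(1j*w[k]*np.eye(n) - np.diag(poles), B)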
Unsteady flow sensing and optimal sensor placement using machine learning
NASA Astrophysics Data System (ADS)
Semaan, Richard
2016-11-01
Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be utilized beyond its typical usage in fluid mechanics of estimating the flow state, namely to determine the optimal sensor placement. The results are compared against the current de facto standard of placing sensors at the locations of maximum modal amplitude, and against a brute-force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
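A hedged sketch of the two uses of random forests described, using scikit-learn on synthetic stand-in data: fit a classifier that estimates the separation state from candidate sensor signals, then rank candidate locations by impurity-based feature importance.

    # Random forests for state estimation plus sensor ranking; the data are
    # synthetic stand-ins for uRANS surface-pressure snapshots.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_snapshots, n_candidates = 400, 60

    # Each row is one snapshot, each column one candidate sensor location;
    # the label encodes the (hypothetical) separation state.
    X = rng.normal(size=(n_snapshots, n_candidates))
    state = (X[:, 7] + 0.8 * X[:, 23] > 0).astype(int)   # two informative taps

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, state)
    ranking = np.argsort(rf.feature_importances_)[::-1]
    print("top sensor candidates:", ranking[:5])         # 7 and 23 should lead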
Tag SNP selection via a genetic algorithm.
Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh
2010-10-01
Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify the genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms for constructing full haplotype patterns from small available data through computational methods, the tag SNP selection problem, are convenient and attractive. This problem is proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find a reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with an exact algorithm based on a brute-force approach, the results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
Decision and function problems based on boson sampling
NASA Astrophysics Data System (ADS)
Nikolopoulos, Georgios M.; Brougham, Thomas
2016-07-01
Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource for decision and function problems that are computationally hard and may thus have cryptographic applications. After defining a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of non-boson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and show that these sizes are independent of the size of the Hilbert space.
An investigation of school violence through Turkish children's drawings.
Yurtal, Filiz; Artut, Kazim
2010-01-01
This study investigates Turkish children's perception of violence in school as represented through drawings and narratives. In all, 66 students (12 to 13 years old) from the middle socioeconomic class participated. To elicit the children's perception of violence, they were asked to draw a picture of a violent incident they had heard about, experienced, or witnessed. The children mostly drew pictures of violent events among children (33 pictures). There were also pictures of violent incidents perpetrated by teachers and directors against children. It was observed that violence influenced the children. Violence was mostly depicted in school gardens (38 pictures), but violent incidents occurred everywhere, including classrooms, corridors, and school stores. Moreover, brute force was the most frequently depicted form of violence in the children's drawings (38 pictures). In conclusion, the children clearly indicated that there was violence in their schools and that they were affected by it.
Advances in atmospheric light scattering theory and remote-sensing techniques
NASA Astrophysics Data System (ADS)
Videen, Gorden; Sun, Wenbo; Gong, Wei
2017-02-01
This issue focuses especially on characterizing particles in the Earth-atmosphere system. The significant role of aerosol particles in this system was recognized in the mid-1970s [1]. Since that time, our appreciation for the role they play has only increased. It has been and continues to be one of the greatest unknown factors in the Earth-atmosphere system as evidenced by the most recent Intergovernmental Panel on Climate Change (IPCC) assessments [2]. With increased computational capabilities, in terms of both advanced algorithms and in brute-force computational power, more researchers have the tools available to address different aspects of the role of aerosols in the atmosphere. In this issue, we focus on recent advances in this topical area, especially the role of light scattering and remote sensing. This issue follows on the heels of four previous topical issues on this subject matter that have graced the pages of this journal [3-6].
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor among all the templates in a database. When applied in a large-scale identification system, it is often necessary to speed up the nearest-neighbor search. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate fast and accurate nearest-neighbor search. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute-force search.
Aspects of warped AdS3/CFT2 correspondence
NASA Astrophysics Data System (ADS)
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures of the BTZ black hole and the spacelike and null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotical symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with central charges c_R = c_L = 24/(Gmβ²√(2(21 − 4β²))).
Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz
NASA Astrophysics Data System (ADS)
Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.
2017-03-01
We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations of two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted with a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical, and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy practical requirements.
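To illustrate the spatial-domain diffusion idea, here is a deliberately simplified sketch that XORs the pixel stream with a keystream from a 1D logistic map. The paper's algorithm uses a 2D Logistic map for diffusion and a PWLCM to permute DWT coefficients; the parameters below are arbitrary.

    # Simplified chaotic diffusion: XOR pixels with a logistic-map keystream.
    import numpy as np

    def logistic_keystream(x0, r, n, burn=1000):
        x, out = x0, np.empty(n, dtype=np.uint8)
        for _ in range(burn):              # discard the transient
            x = r * x * (1 - x)
        for i in range(n):
            x = r * x * (1 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def xor_cipher(img_bytes, key=(0.3456789, 3.99)):
        ks = logistic_keystream(key[0], key[1], img_bytes.size)
        return img_bytes ^ ks              # the same call encrypts and decrypts

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    enc = xor_cipher(img.ravel()).reshape(img.shape)
    dec = xor_cipher(enc.ravel()).reshape(img.shape)
    assert np.array_equal(img, dec)
    print("round trip OK")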
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on the Gaussian mass, which aims to overcome the current brute-force parameter optimization method in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, captures few features of the target; this observation motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant to rotation and translation and is capable of describing edge, topology, and shape information. Simulation results show that the Gaussian mass provides a promising heuristic boost for kernel-method parameter optimization. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which the Gaussian mass might help are proposed at the end of the paper.
A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao
2018-04-01
A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, a 384-bit keystream hash value is obtained by applying SHA-3 to the plaintext. The sensitivity of the SHA-3 algorithm and the chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and the corresponding chaotic value to encrypt each pixel. Finally, the structure of alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
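The first step, and the source of the one-time-pad effect, can be sketched directly with the standard library: hash the plaintext with SHA3-384 and derive chaotic initial conditions from the 48-byte digest. The seed-derivation rule below is a simplified assumption, not the paper's exact mapping.

    # SHA-3 of the plaintext itself seeds the chaotic systems, so any change
    # to the plaintext changes every key (one-time-pad effect).
    import hashlib
    import numpy as np

    plaintext = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)

    digest = hashlib.sha3_384(plaintext.tobytes()).digest()   # 48 bytes

    # Split the digest into six 8-byte words and map each into (0, 1) to
    # seed the hyper-chaotic state variables (hypothetical derivation).
    words = [int.from_bytes(digest[i:i + 8], "big") for i in range(0, 48, 8)]
    seeds = [w / 2**64 for w in words]
    print([f"{s:.6f}" for s in seeds])

    # Flipping one plaintext bit changes every seed, which is the hash
    # sensitivity the scheme relies on.
    tampered = plaintext.copy()
    tampered[0, 0, 0] ^= 1
    print(hashlib.sha3_384(tampered.tobytes()).digest() != digest)   # True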
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors for reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set is studied by predicting fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted well.
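A sketch of the exhaustive brute-force variant: score every sensor subset by how well the corresponding rows of the sensitivity matrix condition the health-parameter estimate, here using the smallest singular value as an assumed criterion and a random stand-in matrix.

    # Exhaustive sensor-subset search over a sensitivity matrix S, where
    # S[i, j] = d(measurement i) / d(health parameter j). The matrix and
    # the smallest-singular-value criterion are illustrative assumptions.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n_sensors, n_params, subset_size = 10, 4, 5
    S = rng.normal(size=(n_sensors, n_params))

    def score(rows):
        # Smallest singular value of the selected rows: larger means the
        # parameters are better observable from those sensors.
        return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

    best = max(combinations(range(n_sensors), subset_size), key=score)
    print("optimal sensor set:", best, "criterion:", round(score(best), 3))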
Neural-network quantum state tomography
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe
2018-05-01
The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool for obtaining reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities, such as the entanglement entropy, from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, and the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
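The brute-force enumeration used as the benchmark can be sketched with statsmodels, whose ARMA fitting is likewise based on Kalman-filter recursions: fit every candidate (p, q) order by maximum likelihood and keep the lowest AIC. The series and order bounds below are illustrative.

    # Brute-force ARMA order selection by AIC over all (p, q) candidates.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    e = rng.normal(size=600)              # synthetic ARMA(2,1) series
    y = np.zeros(600)
    for t in range(2, 600):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

    best = None
    for p in range(4):
        for q in range(4):
            try:
                res = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue                  # skip candidates that fail to fit
            if best is None or res.aic < best[0]:
                best = (res.aic, p, q)
    print("best (p, q) by AIC:", best[1:], "AIC:", round(best[0], 1))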
Optical image encryption system using nonlinear approach based on biometric authentication
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Sinha, Aloka
2017-07-01
A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside a face-biometric-based phase mask key, using the base-changing rule of logarithms, for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against brute-force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance, with desired advantages in comparison to the linear cryptosystem.
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results over SVMs parameterized by a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than that of several other SVM classifiers.
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
Zhou, Y.; Ojeda-May, P.; Nagaraju, M.; Pu, J.
2016-01-01
Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as the functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on the QM/MM potential. A case study shows that brute-force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path-force matching (RP-FM) has been developed. In RP-FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP-FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639
Dynamics of neural cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
Transport and imaging of brute-force 13C hyperpolarization
NASA Astrophysics Data System (ADS)
Hirsch, Matthew L.; Smith, Bryce A.; Mattingly, Mark; Goloshevsky, Artem G.; Rosay, Melanie; Kempf, James G.
2015-12-01
We demonstrate transport of hyperpolarized frozen 1-13C pyruvic acid from its site of production to a nearby facility, where a time series of 13C images was acquired from the aqueous dissolution product. Transportability is tied to the hyperpolarization (HP) method we employ, which omits radical electron species used in other approaches that would otherwise relax away the HP before reaching the imaging center. In particular, we attained 13C HP by 'brute-force', i.e., using only low temperature and high field (e.g., T < ∼2 K and B ∼ 14 T) to pre-polarize protons to a large Boltzmann value (∼0.4% 1H polarization). After polarizing the neat, frozen sample, ejection quickly (<1 s) passed it through a low field (B < 100 G) to establish the 1H pre-polarization spin temperature on 13C via the process known as low-field thermal mixing (yielding ∼0.1% 13C polarization). By avoiding polarization agents (a.k.a. relaxation agents) that are needed to hyperpolarize by the competing method of dissolution dynamic nuclear polarization (d-DNP), the 13C relaxation time was sufficient to transport the sample for ∼10 min before finally dissolving in warm water and obtaining a 13C image of the hyperpolarized, dilute, aqueous product (∼0.01% 13C polarization, a >100-fold gain over thermal signals in the 1 T scanner). An annealing step, prior to polarizing the sample, was also key for increasing T1 ∼ 30-fold during transport. In that time, HP was maintained using only modest cryogenics and field (T ∼ 60 K and B = 1.3 T), for T1(13C) near 5 min. Much greater time and distance (with much smaller losses) may be covered using more-complete annealing and only slight improvements on transport conditions (e.g., yielding T1 ∼ 5 h at 30 K, 2 T), whereas even intercity transfer is possible (T1 > 20 h) at reasonable conditions of 6 K and 2 T. Finally, it is possible to increase the overall enhancement near d-DNP levels (i.e., 10²-fold more) by polarizing below 100 mK, where nanoparticle agents are known to hasten T1 buildup by 100-fold, and to yield very little impact on T1 losses at temperatures relevant to transport.
Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS
NASA Astrophysics Data System (ADS)
Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.
2011-12-01
Organizing online data presents a number of challenges, among which is keeping their inventories current. It is preferable to have these descriptions built and maintained by automated systems because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While building inventories for online data can be done using a brute-force algorithm that reads information from each granule in the data set, that approach ignores some important aspects of these data sets and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive them. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct reads from selected granules and analysis of the URLs results in improvements of several to many orders of magnitude, depending on the data set organization. We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.
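The URL-pattern idea can be illustrated in a few lines. The regex and example URLs below are hypothetical stand-ins, not the crawler's actual rules (and the sketch is in Python, whereas the crawler itself is Java):

```python
# Sketch of the URL-pattern idea: collapse date/number fields in granule
# URLs so each distinct pattern identifies one logical data set.
import re
from collections import defaultdict

def url_pattern(url):
    # Replace runs of digits (dates, times, version numbers) with a wildcard.
    return re.sub(r"\d+", "#", url)

urls = [
    "http://server/data/sst/2011/sst.2011001.nc",
    "http://server/data/sst/2011/sst.2011002.nc",
    "http://server/data/chl/2011/chl.2011001.nc",
]
datasets = defaultdict(list)
for u in urls:
    datasets[url_pattern(u)].append(u)

for pattern, granules in datasets.items():
    print(pattern, "->", len(granules), "granules")
# Metadata is then read from one representative granule per pattern
# instead of from every file (the brute-force approach).
```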
Excessive force during removal of immigration detainees.
Granville-Chapman, Charlotte; Smith, Ellie; Moloney, Neil
2005-08-01
Use of force against immigration detainees during attempts to expel them from the UK must be limited to that which is strictly necessary and proportionate under the circumstances, using accepted methods of restraint designed to minimise injury risk to all concerned. Fourteen cases are reported after failed removal attempts, where there were allegations that excessive force had been employed. Collective analysis of the 14 cases reveals a misuse of handcuffs in 11 cases with resulting nerve injury in 4 cases, the use of inappropriate and unsafe methods of force, such as blows to the head and compression of the trunk and/or neck, and continued use of force even after termination of the deportation attempt, occurring inside security company vehicles out of sight of witnesses. An analysis of the legal implications for the government and recommendations aimed at eradication of abusive practices are given.
On the Privacy Protection of Biometric Traits: Palmprint, Face, and Signature
NASA Astrophysics Data System (ADS)
Panigrahy, Saroj Kumar; Jena, Debasish; Korra, Sathya Babu; Jena, Sanjay Kumar
Biometrics are expected to add a new level of security to applications, as a person attempting access must prove who he or she really is by presenting a biometric to the system. Recent developments in the biometrics area have led to smaller, faster and cheaper systems, which in turn has increased the number of possible application areas for biometric identity verification. Biometric data, being derived from human bodies (and especially when used to identify or verify those bodies), are considered personally identifiable information (PII). The collection, use and disclosure of biometric data, whether image or template, invoke rights on the part of an individual and obligations on the part of an organization. As biometric uses and databases grow, so do concerns that the personal data collected will not be used in reasonable and accountable ways. Privacy concerns arise when biometric data are used for secondary purposes, invoking function creep, data matching, aggregation, surveillance and profiling. Biometric data transmitted across networks and stored in various databases by others can also be stolen, copied, or otherwise misused in ways that can materially affect the individual involved. As biometric systems are vulnerable to replay, database and brute-force attacks, such potential attacks must be analysed before these systems are massively deployed in security systems. Along with security, the privacy of users is an important factor, as the line structures of palmprints contain personal characteristics, a person can be recognised from a face image, and fake signatures can be practised by carefully studying the signature images available in the database. We propose a cryptographic approach that encrypts the images of palmprints, faces, and signatures with an advanced Hill cipher technique, hiding the information in the images. It also protects these images from the attacks mentioned above. During feature extraction, the encrypted images are first decrypted, then the features are extracted and used for identification or verification.
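As a rough illustration of the underlying idea (not the paper's "advanced" variant, which adds further steps), a basic Hill cipher treats pixel pairs as vectors and multiplies them by a key matrix that is invertible mod 256:

```python
# Toy Hill-cipher block operation on image pixels; the 2x2 key here is an
# illustrative choice (det = 9 is odd, hence invertible modulo 256).
import numpy as np

K = np.array([[3, 3], [2, 5]])
det_inv = pow(int(round(np.linalg.det(K))) % 256, -1, 256)  # 9^-1 mod 256
adj = np.array([[K[1, 1], -K[0, 1]], [-K[1, 0], K[0, 0]]])  # adjugate of K
K_inv = (det_inv * adj) % 256

def hill(blocks, key):
    """Apply C = P * key^T (mod 256) to an array of 2-pixel blocks."""
    return (blocks @ key.T) % 256

img = np.random.randint(0, 256, size=(8, 8))   # stand-in grayscale image
pairs = img.reshape(-1, 2)                     # pixel pairs as 2-vectors
cipher = hill(pairs, K)
plain = hill(cipher, K_inv)
assert np.array_equal(plain, pairs)            # round trip recovers the image
```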
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
A chaotic cryptosystem for images based on Henon and Arnold cat map.
Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan
2014-01-01
The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running times for both the encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
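The Arnold cat map permutation stage is easy to sketch. The code below shows only the basic map on a square image, with the iteration count playing the role of a key parameter; the paper combines it with Henon-map-generated secrets.

```python
# Arnold cat map pixel permutation: a chaotic, area-preserving shuffle of
# an n x n image that is exactly invertible (the inverse map is
# (x, y) -> (2x - y, -x + y) mod n) and periodic in the iteration count.
import numpy as np

def arnold_cat(img, iterations):
    n = img.shape[0]                  # assumes a square image
    out = img.copy()
    for _ in range(iterations):
        x, y = np.indices((n, n))
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # (x,y)->(x+y, x+2y)
        out = nxt
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold_cat(img, 3)        # iteration count acts as a key parameter
```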
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Alexakis, Alexandros
2018-02-01
The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We here propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk part of an accumulative probability function) and (ii) the measurement of rare fluctuations (the tail part of the probability function) by employing dynamics in which a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for high Reynolds numbers up to Re = 2500, where getting enough statistics by brute-force calculations is difficult.
Diagnosing the decline in pharmaceutical R&D efficiency.
Scannell, Jack W; Blanckley, Alex; Boldon, Helen; Warrington, Brian
2012-03-01
The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
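The brute-force MC step can be mocked up in a few lines; the stand-in model and input uncertainties below are purely illustrative, not the PAGOSA setup:

```python
# Brute-force Monte Carlo UQ sketch: push random input samples through a
# model and report the 95% data range about the median of the output.
# The "simulate" function and input distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def simulate(density, detonation_velocity):
    """Stand-in for an expensive hydrocode run: jet tip velocity (km/s)."""
    return 7.5 * detonation_velocity / 8.0 + 0.3 * (density - 1.7)

n = 10_000
rho = rng.normal(1.715, 0.005, n)          # assumed input uncertainties
dvel = rng.normal(7.98, 0.05, n)
tip = simulate(rho, dvel)

median = np.median(tip)
lo, hi = np.percentile(tip, [2.5, 97.5])   # central 95% data range
print(f"median {median:.3f}, 95% range [{lo:.3f}, {hi:.3f}]")
```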
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N²) while full hierarchy searches can be as expensive as O(N!), making analysis highly inefficient for multiple data sets with N ≳ 10⁴. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
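For contrast with companion's O(N log N) approach, the brute-force baseline amounts to testing the two-body orbital energy of every particle pair. A minimal sketch, with illustrative units and data:

```python
# The O(N^2) brute-force baseline that companion improves on: flag a pair
# as bound when its two-body orbital energy is negative.
import numpy as np

G = 6.674e-11  # SI units, illustrative

def bound_pairs(m, r, v):
    """m: (N,) masses; r, v: (N,3) positions/velocities. Bound index pairs."""
    pairs = []
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):          # every pair: O(N^2)
            mu = m[i] * m[j] / (m[i] + m[j])          # reduced mass
            dr = np.linalg.norm(r[i] - r[j])
            dv = np.linalg.norm(v[i] - v[j])
            energy = 0.5 * mu * dv**2 - G * m[i] * m[j] / dr
            if energy < 0.0:
                pairs.append((i, j))
    return pairs

m = np.array([1e20, 1e12])
r = np.array([[0.0, 0.0, 0.0], [1e5, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
print(bound_pairs(m, r, v))                # [(0, 1)]: a bound binary
```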
Verification Test of Automated Robotic Assembly of Space Truss Structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1995-01-01
A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level and continued development at an enhanced level is warranted.
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.
NASA Astrophysics Data System (ADS)
Millour, Florentin A.; Vannier, Martin; Meilland, Anthony
2012-07-01
We present here three recipes for getting better images with optical interferometers. Two of them, Low-Frequencies Filling and Brute-Force Monte Carlo, were used in our participation in the Interferometry Beauty Contest this year and can be applied to classical imaging using V2 and closure phases. These two additions to image reconstruction provide a way of producing more reliable images. The last recipe is similar in principle to the self-calibration technique used in radio interferometry. We also call it self-calibration, but it uses the wavelength-differential phase as a proxy of the object phase to build up a full-featured complex visibility set of the observed object. This technique needs a first image-reconstruction run with available software, using closure phases and squared visibilities only. We used it for two scientific papers with great success. We discuss here the pros and cons of this imaging technique.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise from the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation in the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
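A toy version of the one-dimensional test problem illustrates the goal: contiguous (structured) chunks with roughly equal work even though interface cells cost more. The greedy split below is a generic sketch, not the NGA implementation:

```python
# Toy 1D load balancing on a structured grid: split a row of cells into
# contiguous chunks so per-rank work (higher near the gas-liquid
# interface) is as even as possible.
import numpy as np

def partition(work, nranks):
    """Greedy prefix split of per-cell work into nranks contiguous chunks."""
    target = work.sum() / nranks
    bounds, acc = [0], 0.0
    for i, w in enumerate(work):
        acc += w
        # Cut when the running total reaches the next multiple of the target.
        if acc >= target * len(bounds) and len(bounds) < nranks:
            bounds.append(i + 1)
    bounds.append(len(work))
    return [slice(bounds[k], bounds[k + 1]) for k in range(nranks)]

cells = np.ones(100)
cells[45:55] = 10.0          # extra cost in the interface region
for k, sl in enumerate(partition(cells, 4)):
    print(f"rank {k}: cells {sl.start}-{sl.stop - 1}, load {cells[sl].sum():.0f}")
```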
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
Computational exploration of neuron and neural network models in neurobiology.
Prinz, Astrid A
2007-01-01
The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters (such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse) influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
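A minimal sketch of such a brute-force model database, using a crude integrate-and-fire stand-in for a conductance-based neuron (all parameters and the "functional" criterion are illustrative):

```python
# Brute-force model database sketch: scan a grid of two membrane
# parameters and record a feature of the output, so one can later query
# which combinations give functional behavior.
import itertools
import numpy as np

def spike_count(g_na, g_k, t_max=200.0, dt=0.05):
    """Leaky integrate-and-fire stand-in for a conductance-based model."""
    v, spikes = -65.0, 0
    for _ in range(int(t_max / dt)):
        dv = (-(v + 65.0) + g_na * 15.0 - g_k * 5.0) / 10.0
        v += dt * dv
        if v > -50.0:                 # threshold crossing: spike and reset
            v, spikes = -65.0, spikes + 1
    return spikes

database = {}
for g_na, g_k in itertools.product(np.linspace(0, 2, 21), repeat=2):
    database[(round(g_na, 2), round(g_k, 2))] = spike_count(g_na, g_k)

functional = [p for p, s in database.items() if 5 <= s <= 20]
print(len(functional), "of", len(database), "parameter sets are 'functional'")
```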
Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters
NASA Astrophysics Data System (ADS)
Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.
Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain
2015-06-09
Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
Impact-Actuated Digging Tool for Lunar Excavation
NASA Technical Reports Server (NTRS)
Wilson, Jak; Chu, Philip; Craft, Jack; Zacny, Kris; Santoro, Chris
2013-01-01
NASA's plans for a lunar outpost require extensive excavation. The Lunar Surface Systems Project Office projects that thousands of tons of lunar soil will need to be moved. Conventional excavators dig through soil by brute force, and depend upon their substantial weight to react to the forces generated. This approach will not be feasible on the Moon for two reasons: (1) gravity is 1/6th that on Earth, which means that a kg on the Moon will supply 1/6 the down force that it does on Earth, and (2) transportation costs (at the time of this reporting) of $50K to $100K per kg make massive excavators economically unattractive. A percussive excavation system was developed for use in vacuum or near-vacuum environments. It reduces the down force needed for excavation by an order of magnitude by using percussion to assist in soil penetration and digging. The novelty of this excavator is that it incorporates a percussive mechanism suited to sustained operation in a vacuum environment. A percussive digger breadboard was designed, built, and successfully tested under both ambient and vacuum conditions. The breadboard was run in vacuum to more than 2× the lifetime of the Apollo Lunar Surface Drill, throughout which the mechanism performed and held up well. The percussive digger was demonstrated to reduce the force necessary for digging in lunar soil simulant by an order of magnitude, providing reductions as high as 45:1. This is an enabling technology for lunar site preparation and ISRU (In Situ Resource Utilization) mining activities. At transportation costs of $50K to $100K per kg, reducing digging forces by an order of magnitude translates into billions of dollars saved by not launching heavier systems to accomplish excavation tasks necessary to the establishment of a lunar outpost. Applications on the lunar surface include excavation for habitats, construction of roads, landing pads, berms, foundations, habitat shielding, and ISRU.
Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty
A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...
Linear and Branched PEIs (Polyethylenimines) and Their Property Space.
Lungu, Claudiu N; Diudea, Mircea V; Putz, Mihai V; Grudziński, Ireneusz P
2016-04-13
A chemical property space defines the adaptability of a molecule to changing conditions and its interaction with other molecular systems determining a pharmacological response. Within a congeneric molecular series (compounds with the same derivatization algorithm and thus the same brute formula) the chemical properties vary in a monotonic manner, i.e., congeneric compounds share the same chemical property space. The chemical property space is a key component in molecular design, where some building blocks are functionalized, i.e., derivatized, and eventually self-assembled in more complex systems, such as enzyme-ligand systems, of which (physico-chemical) properties/bioactivity may be predicted by QSPR/QSAR (quantitative structure-property/activity relationship) studies. The system structure is determined by the binding type (temporal/permanent; electrostatic/covalent) and is reflected in its local electronic (and/or magnetic) properties. Such nano-systems play the role of molecular devices, important in nano-medicine. In the present article, the behavior of polyethylenimine (PEI) macromolecules (linear LPEI and branched BPEI, respectively) with respect to the glucose oxidase enzyme GOx is described in terms of their (interacting) energy, geometry and topology, in an attempt to find the best shape and size of PEIs to be useful for a chosen (nanochemistry) purpose.
General purpose force doctrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weltman, J.J.
In contemporary American strategic parlance, the general purpose forces have come to mean those forces intended for conflict situations other than nuclear war with the Soviet Union. As with all military forces, the general purpose forces are powerfully determined by prevailing conceptions of the problems they must meet and by institutional biases as to the proper way to deal with those problems. This paper deals with the strategic problems these forces are intended to meet, the various and often conflicting doctrines and organizational structures which have been generated in order to meet those problems, and the factors which will influence general purpose doctrine and structure in the future. This paper does not attempt to prescribe technological solutions to the needs of the general purpose forces. Rather, it attempts to display the doctrinal and institutional context within which new technologies must operate, and which will largely determine whether these technologies are accepted into the force structure or not.
Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z
2017-01-01
Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing rapidly. In parallel, security-related threats and attacks on these devices are increasing at an even greater rate. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full-sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT), known as Force Touch in Apple's MacBook, Apple Watch, and ZTE's Axon 7 phone, and as 3D Touch in the iPhone 6 and 7, is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme.
Analysis of data on Air Force personnel collected at Lackland Air Force Base
DOT National Transportation Integrated Search
1969-10-01
In July, 1967, a report was published by the Personnel Research Laboratory, Lackland Air Force Base, entitled "An Attempt to Predict Automobile Accidents Among Air Force Personnel". Approximately twelve thousand basic airmen and eleven hundred offic...
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-01
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
Automatic Design of Digital Synthetic Gene Circuits
Marchisio, Mario A.; Stelling, Jörg
2011-01-01
De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
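The prioritization step can be sketched generically: train a cheap surrogate on already-computed candidates and send only the top-ranked fraction of the untested library to quantum chemistry. Everything below (features, model, cutoffs) is a hypothetical stand-in for the study's actual pipeline:

```python
# Surrogate-model screening sketch: a cheap learned model ranks candidates
# so that only the most promising fraction goes to expensive simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_known = rng.random((500, 16))           # fingerprints of computed molecules
y_known = X_known[:, 0] * 2 + rng.normal(0, 0.1, 500)  # stand-in target property

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

X_pool = rng.random((100_000, 16))        # uncomputed candidate library
scores = model.predict(X_pool)
top = np.argsort(scores)[::-1][:1000]     # send only the top 1% to simulation
print("queue", len(top), "of", len(X_pool), "candidates for quantum chemistry")
```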
On grey levels in random CAPTCHA generation
NASA Astrophysics Data System (ADS)
Newton, Fraser; Kouritzin, Michael A.
2011-06-01
A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
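The random-field simulation step can be illustrated with a Potts-type Gibbs sampler over three discrete levels (black, grey, white); the model and parameters below are a simplified stand-in for the generator described above:

```python
# Gibbs sampling of a Potts-type random field with a grey level: evolve an
# image of discrete levels by repeatedly resampling each pixel from its
# conditional distribution; beta controls how clumpy the texture becomes.
import numpy as np

rng = np.random.default_rng(1)

def gibbs_sweep(field, beta, levels=3):
    n, m = field.shape
    for i in range(n):
        for j in range(m):
            neigh = [field[(i - 1) % n, j], field[(i + 1) % n, j],
                     field[i, (j - 1) % m], field[i, (j + 1) % m]]
            # Energy favors agreeing with neighbors (periodic boundaries).
            energy = np.array([-sum(k == q for q in neigh) for k in range(levels)])
            p = np.exp(-beta * energy)
            field[i, j] = rng.choice(levels, p=p / p.sum())
    return field

field = rng.integers(0, 3, size=(32, 32))   # initial state: pure noise
for _ in range(20):
    field = gibbs_sweep(field, beta=0.8)    # correlated three-level texture
```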
NASA Astrophysics Data System (ADS)
Müller, Rolf
2011-10-01
Bats have evolved one of the most capable and at the same time parsimonious sensory systems found in nature. Using active and passive biosonar as a major - and often sufficient - far sense, different bat species are able to master a wide variety of sensory tasks under very dissimilar sets of constraints. Given the limited computational resources of the bat's brain, this performance is unlikely to be explained as the result of brute-force, black-box-style computations. Instead, the animals must rely heavily on in-built physics knowledge in order to ensure that all required information is encoded reliably into the acoustic signals received at the ear drum. To this end, bats can manipulate the emitted and received signals in the physical domain: By diffracting the outgoing and incoming ultrasonic waves with intricate baffle shapes (i.e., noseleaves and outer ears), the animals can generate selectivity filters that are joint functions of space and frequency. To achieve this, bats employ structural features such as resonance cavities and diffracting ridges. In addition, some bat species can dynamically adjust the shape of their selectivity filters through muscular actuation.
A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher
NASA Astrophysics Data System (ADS)
Kumari, Manju; Gupta, Shailender
2018-03-01
As modern systems enable us to transmit large chunks of data, both as text and as images, there is a need to explore algorithms which can provide higher security without increasing the time complexity significantly. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is very practical, since it has a lower time complexity than its counterparts.
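A sketch of how the two ingredients combine: chaotic-map output seeds the RC4 key, and the RC4 keystream drives diffusion by XOR. The logistic map below is a simple stand-in for the paper's intertwining maps:

```python
# Chaotic key generation + RC4 keystream diffusion, sketched. The logistic
# map is a placeholder for the paper's intertwining chaotic maps.
def logistic_key(seed, n, r=3.99):
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)     # chaotic orbit -> key bytes
    return out

def rc4_keystream(key, length):
    s = list(range(256))                   # key-scheduling algorithm (KSA)
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    stream = []                            # pseudo-random generation (PRGA)
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        stream.append(s[(s[i] + s[j]) % 256])
    return stream

key = logistic_key(seed=0.4123, n=16)
ks = rc4_keystream(key, 8)
pixels = [200, 13, 255, 0, 87, 41, 9, 128]
cipher = [p ^ k for p, k in zip(pixels, ks)]          # XOR diffusion
assert [c ^ k for c, k in zip(cipher, ks)] == pixels  # decryption round trip
```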
Proteinortho: detection of (co-)orthologs in large-scale analysis.
Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J
2011-04-28
Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
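The core reciprocal-best-hit test that Proteinortho extends can be written in a few lines. This is a sketch over toy score tables; the real tool adds score tolerances and graph clustering to recover co-orthologs:

```python
# Reciprocal best hit (RBH) sketch: a pair of proteins is called
# orthologous when each is the other's best-scoring hit in the opposite
# genome. Scores stand in for alignment bitscores.
def best_hits(scores):
    """scores: {(query, target): bitscore} -> {query: best target}."""
    best = {}
    for (q, t), s in scores.items():
        if q not in best or s > scores[(q, best[q])]:
            best[q] = t
    return best

def reciprocal_best_pairs(a_vs_b, b_vs_a):
    ab, ba = best_hits(a_vs_b), best_hits(b_vs_a)
    return sorted((a, b) for a, b in ab.items() if ba.get(b) == a)

a_vs_b = {("a1", "b1"): 310.0, ("a1", "b2"): 55.0, ("a2", "b2"): 210.0}
b_vs_a = {("b1", "a1"): 305.0, ("b2", "a2"): 200.0, ("b2", "a1"): 60.0}
print(reciprocal_best_pairs(a_vs_b, b_vs_a))  # [('a1', 'b1'), ('a2', 'b2')]
```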
Uncovering molecular processes in crystal nucleation and growth by using molecular simulation.
Anwar, Jamshed; Zahn, Dirk
2011-02-25
Exploring nucleation processes by molecular simulation provides a mechanistic understanding at the atomic level and also enables kinetic and thermodynamic quantities to be estimated. However, whilst the potential for modeling crystal nucleation and growth processes is immense, there are specific technical challenges to modeling. In general, rare events, such as nucleation cannot be simulated using a direct "brute force" molecular dynamics approach. The limited time and length scales that are accessible by conventional molecular dynamics simulations have inspired a number of advances to tackle problems that were considered outside the scope of molecular simulation. While general insights and features could be explored from efficient generic models, new methods paved the way to realistic crystal nucleation scenarios. The association of single ions in solvent environments, the mechanisms of motif formation, ripening reactions, and the self-organization of nanocrystals can now be investigated at the molecular level. The analysis of interactions with growth-controlling additives gives a new understanding of functionalized nanocrystals and the precipitation of composite materials. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Clustering biomolecular complexes by residue contacts similarity.
Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2012-07-01
Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts, the fraction of common contacts, and compare it with the most used similarity measure of the protein docking community: interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
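The fraction-of-common-contacts measure is straightforward to sketch: each model becomes a set of residue contacts, and similarity is set overlap, with no superposition or fitting-region choice needed. The cutoff and contact definition below are illustrative:

```python
# Fraction of common contacts (FCC) sketch: represent each model by its
# set of residue contacts and score a pair of models by contact overlap.
import numpy as np

def contact_set(coords, cutoff=5.0):
    """coords: (n_residues, 3) representative atoms -> set of (i, j) pairs."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Keep upper-triangle pairs, skipping adjacent residues (|i - j| >= 2).
    i, j = np.where((d < cutoff) & (np.triu(np.ones_like(d), k=2) > 0))
    return set(zip(i.tolist(), j.tolist()))

def fcc(model_a, model_b):
    a, b = contact_set(model_a), contact_set(model_b)
    return len(a & b) / max(len(a), 1)   # fraction of A's contacts also in B

rng = np.random.default_rng(0)
m1 = rng.random((50, 3)) * 20
m2 = m1 + rng.normal(0, 0.3, m1.shape)   # a slightly perturbed model
print(f"fcc = {fcc(m1, m2):.2f}")        # near 1 for similar conformations
```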
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Lilienfeld-Toal, Otto Anatole
2010-11-01
The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.
Can genetic algorithms help virus writers reshape their creations and avoid detection?
NASA Astrophysics Data System (ADS)
Abu Doush, Iyad; Al-Saleh, Mohammed I.
2017-11-01
Different attack and defence techniques have evolved over time as actions and reactions between the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The Antivirus (AV) is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures. Basically, it claims a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to the virus so that it will appear normal when scanned by the AV. Brute-force search through all possible changes can be a computationally expensive task. Alternatively, this paper applies a Genetic Algorithm to solve the problem. The proposed algorithm is tested on seven different malware instances. The results show that, in all the tested malware instances, only a small change was enough to bypass the AV.
Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
NASA Astrophysics Data System (ADS)
Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
Clements-Nolle, Kristen; Marx, Rani; Katz, Mitchell
2006-01-01
To determine the independent predictors of attempted suicide among transgender persons we interviewed 392 male-to-female (MTF) and 123 female-to-male (FTM) individuals. Participants were recruited through targeted sampling, respondent-driven sampling, and agency referrals in San Francisco. The prevalence of attempted suicide was 32% (95% CI = 28% to 36%). In multivariate logistic regression analysis younger age (<25 years), depression, a history of substance abuse treatment, a history of forced sex, gender-based discrimination, and gender-based victimization were independently associated with attempted suicide. Suicide prevention interventions for transgender persons are urgently needed, particularly for young people. Medical, mental health, and social service providers should address depression, substance abuse, and forced sex in an attempt to reduce suicidal behaviors among transgender persons. In addition, increasing societal acceptance of the transgender community and decreasing gender-based prejudice may help prevent suicide in this highly stigmatized population.
ERIC Educational Resources Information Center
Snarr, Jeffery D.; Heyman, Richard E.; Slep, Amy M. Smith
2010-01-01
One-year prevalences of self-reported noteworthy suicidal ideation and nonfatal suicide attempts were assessed in a large sample of U.S. Air Force active duty members (N = 52,780). Participants completed the 2006 Community Assessment, which was conducted online. Over 3% of male and 5.5% of female participants reported having experienced noteworthy…
Efficiency and Safety: The Best Time to Valve a Plaster Cast.
Steiner, Samuel R H; Gendi, Kirollos; Halanski, Matthew A; Noonan, Kenneth J
2018-04-18
The act of applying, univalving, and spreading a plaster cast to accommodate swelling is commonly performed; however, cast saws can cause thermal and/or abrasive injury to the patient. This study aims to identify the optimal time to valve a plaster cast so as to reduce the risk of cast-saw injury and increase spreading efficiency. Plaster casts were applied to life-sized pediatric models and were univalved at set-times of 5, 8, 12, or 25 minutes. Outcome measures included average and maximum force applied during univalving, blade-to-skin touches, cut time, force needed to spread, number of spread attempts, spread completeness, spread distance, saw blade temperature, and skin surface temperature. Casts allowed to set for ≥12 minutes had significantly fewer blade-to-skin touches compared with casts that set for <12 minutes (p < 0.001). For average and maximum saw blade force, no significant difference was observed between individual set-times. However, in a comparison of the shorter group (<12 minutes) and the longer group (≥12 minutes), the longer group had a higher average force (p = 0.009) but a lower maximum force (p = 0.036). The average temperature of the saw blade did not vary between groups. The maximum force needed to "pop," or spread, the cast was greater for the 5-minute and 8-minute set-times. Despite requiring more force to spread the cast, 0% of attempts at 5 minutes and 54% of attempts at 8 minutes were successful in completely spreading the cast, whereas 100% of attempts at 12 and 25 minutes were successful. The spread distance was greatest for the 12-minute set-time at 5.7 mm. Allowing casts to set for 12 minutes is associated with decreased blade-to-skin contact, less maximum force used with the saw blade, and a more effective spread. Adherence to the 12-minute interval could allow for fewer cast-saw injuries and more effective spreading.
Learning to push and learning to move: the adaptive control of contact forces
Casadio, Maura; Pressman, Assaf; Mussa-Ivaldi, Ferdinando A.
2015-01-01
To be successful at manipulating objects one needs to apply simultaneously well controlled movements and contact forces. We present a computational theory of how the brain may successfully generate a vast spectrum of interactive behaviors by combining two independent processes. One process is competent to control movements in free space and the other is competent to control contact forces against rigid constraints. Free space and rigid constraints are singularities at the boundaries of a continuum of mechanical impedance. Within this continuum, forces and motions occur in “compatible pairs” connected by the equations of Newtonian dynamics. The force applied to an object determines its motion. Conversely, inverse dynamics determine a unique force trajectory from a movement trajectory. In this perspective, we describe motor learning as a process leading to the discovery of compatible force/motion pairs. The learned compatible pairs constitute a local representation of the environment's mechanics. Experiments on force field adaptation have already provided us with evidence that the brain is able to predict and compensate for the forces encountered when one is attempting to generate a motion. Here, we tested the theory in the dual case, i.e., when one attempts to apply a desired contact force against a simulated rigid surface. If the surface becomes unexpectedly compliant, the contact point moves as a function of the applied force and this causes the applied force to deviate from its desired value. We found that, through repeated attempts at generating the desired contact force, subjects discovered the unique compatible hand motion. When, after learning, the rigid contact was unexpectedly restored, subjects displayed aftereffects of learning, consistent with the concurrent operation of a motion control system and a force control system. Together, theory and experiment support a new and broader view of modularity in the coordinated control of forces and motions. PMID:26594163
Predicting climate change: Uncertainties and prospects for surmounting them
NASA Astrophysics Data System (ADS)
Ghil, Michael
2008-03-01
General circulation models (GCMs) are among the most detailed and sophisticated models of natural phenomena in existence. Still, the lack of robust and efficient subgrid-scale parametrizations for GCMs, along with the inherent sensitivity to initial data and the complex nonlinearities involved, present a major and persistent obstacle to narrowing the range of estimates for end-of-century warming. Estimating future changes in the distribution of climatic extrema is even more difficult. Brute-force tuning of the large number of GCM parameters does not appear to help reduce the uncertainties. Andronov and Pontryagin (1937) proposed structural stability as a way to evaluate model robustness. Unfortunately, many real-world systems have proved to be structurally unstable. We illustrate these concepts with a very simple model for the El Niño-Southern Oscillation (ENSO). Our model is governed by a differential delay equation with a single delay and periodic (seasonal) forcing. Like many of its more or less detailed and realistic precursors, this model exhibits a Devil's staircase. We study the model's structural stability, describe the mechanisms of the observed instabilities, and connect our findings to ENSO phenomenology. In the model's phase-parameter space, regions of smooth dependence on parameters alternate with rough, fractal ones. We then apply the tools of random dynamical systems and stochastic structural stability to the circle map and a torus map. The effect of noise with compact support on these maps is fairly intuitive: it is the most robust structures in phase-parameter space that survive the smoothing introduced by the noise. The nature of the stochastic forcing matters, thus suggesting that certain types of stochastic parametrizations might be better than others in achieving GCM robustness. This talk represents joint work with M. Chekroun, E. Simonnet and I. Zaliapin.
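A toy version of such a seasonally forced delay model can be integrated with a simple Euler scheme and a delay buffer. The sketch below uses the form dh/dt = -tanh(kappa*h(t-tau)) + b*cos(2*pi*t), with placeholder parameter values rather than those of the talk.

import numpy as np

kappa, tau, b = 11.0, 0.6, 1.0
dt, t_end = 0.001, 50.0
n_delay, n = int(tau / dt), int(t_end / dt)

h = np.full(n_delay + n, 1.0)              # constant history on [-tau, 0]
for i in range(n_delay, n_delay + n - 1):
    t = (i - n_delay) * dt
    dh = -np.tanh(kappa * h[i - n_delay]) + b * np.cos(2 * np.pi * t)
    h[i + 1] = h[i] + dt * dh              # forward Euler step with delayed term

# Estimate the dominant period from zero crossings of the late-time solution.
tail = h[-n // 2:]
sign_change = np.where(np.diff(np.sign(tail)) != 0)[0]
if len(sign_change) > 1:
    period = 2 * dt * np.mean(np.diff(sign_change))
    print(f"approximate period: {period:.2f} forcing cycles")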
Chemical reactions induced by oscillating external fields in weak thermal environments
NASA Astrophysics Data System (ADS)
Craven, Galen T.; Bartsch, Thomas; Hernandez, Rigoberto
2015-02-01
Chemical reaction rates must increasingly be determined in systems that evolve under the control of external stimuli. In these systems, when a reactant population is induced to cross an energy barrier through forcing from a temporally varying external field, the transition state that the reaction must pass through during the transformation from reactant to product is no longer a fixed geometric structure, but is instead time-dependent. For a periodically forced model reaction, we develop a recrossing-free dividing surface that is attached to a transition state trajectory [T. Bartsch, R. Hernandez, and T. Uzer, Phys. Rev. Lett. 95, 058301 (2005)]. We have previously shown that for single-mode sinusoidal driving, the stability of the time-varying transition state directly determines the reaction rate [G. T. Craven, T. Bartsch, and R. Hernandez, J. Chem. Phys. 141, 041106 (2014)]. Here, we extend our previous work to the case of multi-mode driving waveforms. Excellent agreement is observed between the rates predicted by stability analysis and rates obtained through numerical calculation of the reactive flux. We also show that the optimal dividing surface and the resulting reaction rate for a reactive system driven by weak thermal noise can be approximated well using the transition state geometry of the underlying deterministic system. This agreement persists as long as the thermal driving strength is less than the order of that of the periodic driving. The power of this result is its simplicity. The surprising accuracy of the time-dependent noise-free geometry for obtaining transition state theory rates in chemical reactions driven by periodic fields reveals the dynamics without requiring the cost of brute-force calculations.
ERIC Educational Resources Information Center
Meier, Deborah
2009-01-01
In this article, the author talks about Ted Sizer and describes him as a "schoolman," a Mr. Chips figure with all the romance that surrounded that image. Accustomed to models of brute power, parents, teachers, bureaucrats, and even politicians were attracted to his message of common decency. There's a way of talking about, and to, school people…
Individual Choice and Unequal Participation in Higher Education
ERIC Educational Resources Information Center
Voigt, Kristin
2007-01-01
Does the unequal participation of non-traditional students in higher education indicate social injustice, even if it can be traced back to individuals' choices? Drawing on luck egalitarian approaches, this article suggests that an answer to this question must take into account the effects of unequal brute luck on educational choices. I use a…
Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.
Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai
2014-07-02
We develop here the methodology for dramatically accelerating ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost concept (BB), which we validate here for describing hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is, the detailed sequences of reactive intermediates and their subsequent transformation to others) for H2 oxidation obtained from aARRDyn agree well with brute-force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ∼538 s, whereas the computational cost was just 1289 ps, a speed increase of ∼0.42 trillion (10^12) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit to a quantum mechanics (QM) level, leading to the ReaxFF-OH2014 force field that was used in the simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villarreal, Oscar D.; Yu, Lili; Department of Laboratory Medicine, Yancheng Vocational Institute of Health Sciences, Yancheng, Jiangsu 224006
Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined, with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing back the cavity, we can avoid the problem of error amplification by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute-force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.
What is the force on a magnetic dipole?
NASA Astrophysics Data System (ADS)
Franklin, Jerrold
2018-05-01
This paper will be of interest to physics graduate students and faculty. We show that attempts to modify the force on a magnetic dipole by introducing either hidden momentum or internal forces are not correct. The standard textbook result F = ∇(m·B) is confirmed.
Debunking Coriolis Force Myths
ERIC Educational Resources Information Center
Shakur, Asif
2014-01-01
Much has been written and debated about the Coriolis force. Unfortunately, this has done little to demystify the paradoxes surrounding this fictitious force invoked by an observer in a rotating frame of reference. It is the purpose of this article to make another valiant attempt to slay the dragon of the Coriolis force! This will be done without…
Interaction of Rate of Force Development and Duration of Rate in Isometric Force.
ERIC Educational Resources Information Center
Siegel, Donald
A study attempted to determine whether force and duration parameters are programmed in an interactive or independent fashion prior to executing ballistic type isometric contractions of graded intensities. Four adult females each performed 360 trials of producing ballistic type forces representing 25, 40, 55, and 75 percent of their maximal…
Flight Force Measurements on a Spacecraft to Launch Vehicle Interface
NASA Astrophysics Data System (ADS)
Kaufman, Daniel S.; Gordon, Scott A.
2012-07-01
For several years we had wanted to measure the interface forces between a launch vehicle and its payload. Finally, in July 2006, a proposal was made and funded to evaluate the use of flight force measurements (FFM) to improve the loads process for a spacecraft in its design and test cycle. A NASA/industry team was formed; the core team consisted of 20 people. The proposal identified two questions that this assessment would attempt to address by obtaining the flight forces: 1) Is flight correlation and reconstruction with acceleration methods sufficient? 2) How much can the loads, and therefore the design and qualification, be reduced by having force measurements? The objective was to predict the six interface driving forces between the spacecraft and the launch vehicle throughout the boost phase. These forces would then be compared with reconstructed loads analyses in an attempt to answer the two questions. The paper presents the development of a strain-based force measurement system as well as an acceleration method, actual flight results, post-flight evaluations and lessons learned.
Suen, Stephen Sik Hung; Khaw, Kim S; Law, Lai Wa; Sahota, Daljit Singh; Lee, Shara Wee Yee; Lau, Tze Kin; Leung, Tak Yeung
2012-06-01
To compare the forces exerted during external cephalic version (ECV) on the maternal abdomen between (1) the primary attempts performed without spinal analgesia (SA), which failed, and (2) the subsequent reattempts performed under SA. Patients with an uncomplicated singleton breech-presenting pregnancy suitable for ECV were recruited. During ECV, the operator wore a pair of gloves fitted with thin piezo-resistive pressure sensors measuring the contact pressure between the operator's hands and the maternal abdomen. For patients who had a failed ECV, reattempts by the same operator were made with the patients under SA, and the applied force was measured in the same manner. The profile of the exerted forces over time during each attempt was analyzed and denoted by the pressure-time integral (PTI; mmHg·s). Pain was also graded by patients using a visual analogue scale. Both PTI and pain score before and after the use of SA were then compared. Overall, eight patients who had a failed ECV without SA underwent a reattempt with SA. All of them had successful version, and the median PTI of the successful attempts under SA was lower than that of the previous failed attempts performed without SA (127,386 mmHg·s vs. 298,424 mmHg·s; p = 0.017). All of them also reported a pain score of 0, significantly lower than before the use of SA (median 7.5; p = 0.016). SA improves the success rate of ECV and reduces the force required for successful version.
Experimental Attempts for Deep Insertion in Ultrasonically Forced Insertion Process
NASA Astrophysics Data System (ADS)
Ono, Satoshi; Aoyagi, Manabu; Tamura, Hideki; Takano, Takehiro
2011-07-01
In this paper, we describe two attempts at obtaining deep insertion in an ultrasonically forced insertion (USFI) process. One was to correct the inclination of an inserted rod by passively generated bending vibrations; the inclination causes partial plastic deformation, which decreases the holding power of the processed materials. Two types of horn with grooves for the excitation of bending vibrations were examined. The other was to create differences in the vibration velocity and phase between the rod and a metal plate by damping the vibration of the metal plate with a rubber sheet. The results confirm that both approaches proposed in this study are effective for obtaining deep insertion.
Confronting the Neo-Liberal Brute: Reflections of a Higher Education Middle-Level Manager
ERIC Educational Resources Information Center
Maistry, S. M.
2012-01-01
The higher education scenario in South Africa is fraught with tensions and contradictions. Publicly funded Higher Education Institutions (HEIs) face a particular dilemma. They are expected to fulfill a social mandate which requires a considered response to the needs of the communities in which they are located while simultaneously aspiring for…
28 CFR 552.22 - Principles governing the use of force and application of restraints.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 28 (Judicial Administration), Vol. 2, 2010-07-01. Justice, Institutional Management, Custody: Use of Force and Application of Restraints on Inmates, § 552.22, Principles governing the use of force and application of restraints. (a) Staff ordinarily shall first attempt...
Nakanishi, Taizo; Shiga, Takashi; Homma, Yosuke; Koyama, Yasuaki; Goto, Tadahiro
2016-05-23
We examined whether the use of the Airway Scope (AWS) and C-MAC PM (C-MAC) decreased the force applied on oral structures during intubation attempts as compared with the force applied with the use of the Macintosh direct laryngoscope (DL). Prospective cross-over study. A total of 35 novice physicians participated. We used 6 simulation scenarios based on the difficulty of intubation and the intubation devices. Our primary outcome measures were the maximum force applied on the maxillary incisors and tongue during intubation attempts, measured by a high-fidelity simulator. The maximum force applied on the maxillary incisors was higher with the use of the C-MAC than with the DL and AWS in the normal airway scenario (DL, 26 newtons (N); AWS, 18 N; C-MAC, 52 N; p<0.01) and the difficult airway scenario (DL, 42 N; AWS, 24 N; C-MAC, 68 N; p<0.01). In contrast, the maximum force applied on the tongue was higher with the use of the DL than with the AWS and C-MAC in both airway scenarios (DL, 16 N; AWS, 1 N; C-MAC, 7 N; p<0.01 in the normal airway scenario; DL, 12 N; AWS, 4 N; C-MAC, 7 N; p<0.01 in the difficult airway scenario). The use of the C-MAC, compared with the DL and AWS, was associated with a higher maximum force applied on the maxillary incisors during intubation attempts. In contrast, the use of video laryngoscopes was associated with a lower force applied on the tongue in both airway scenarios, compared with the DL. Our study was a simulation-based study, and further research on living patients would be warranted.
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.
NASA Astrophysics Data System (ADS)
Shaw, A. D.; Champneys, A. R.; Friswell, M. I.
2016-08-01
Sudden onset of violent chattering or whirling rotor-stator contact motion in rotational machines can cause significant damage in many industrial applications. It is shown that internal resonance can lead to the onset of bouncing-type partial contact motion away from primary resonances. These partial contact limit cycles can involve any two modes of an arbitrarily high degree-of-freedom system, and can be seen as an extension of a synchronization condition previously reported for a single disc system. The synchronization formula predicts multiple drive speeds, corresponding to different forms of mode-locked bouncing orbits. These results are backed up by a brute-force bifurcation analysis which reveals the numerical existence of the corresponding family of bouncing orbits at supercritical drive speeds, provided the damping is sufficiently low. The numerics reveal many overlapping families of solutions, which leads to significant multi-stability of the response at given drive speeds. Further, secondary bifurcations can also occur within each family, altering the nature of the response and ultimately leading to chaos. It is illustrated how stiffness and damping of the stator have a large effect on the number and nature of the partial contact solutions, illustrating the extreme sensitivity that would be observed in practice.
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEA on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
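For intuition, here is a minimal sketch of the kind of output such an analysis returns: intervals of willingness to pay (WTP), each with its optimal intervention. The interventions and numbers are invented, and an exhaustive WTP scan stands in for the paper's direct ID evaluation; the optimum at each WTP maximizes net monetary benefit, NMB = WTP*effectiveness - cost.

interventions = {
    "no treatment": (0.0, 1.2),      # (cost, effectiveness in QALYs) - invented
    "drug A":       (5_000.0, 1.8),
    "drug B":       (14_000.0, 2.1),
}

def optimal(wtp):
    # The optimal intervention maximizes net monetary benefit at this WTP.
    return max(interventions,
               key=lambda k: wtp * interventions[k][1] - interventions[k][0])

# Scan WTP to recover the threshold structure (one optimum per interval).
prev, start = None, 0
for wtp in range(0, 50_001, 100):
    best = optimal(wtp)
    if best != prev:
        if prev is not None:
            print(f"WTP {start}-{wtp - 100}: {prev}")
        prev, start = best, wtp
print(f"WTP {start}+: {prev}")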
Automatic Generation of Data Types for Classification of Deep Web Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngu, A H; Buttler, D J; Critchlow, T J
2005-02-14
A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for the automatic generation of the data types of an SCD. We propose two different approaches for learning the data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for the automatic generation of data types for citation Web sources and present a quantitative evaluation of the two solutions.
NASA Astrophysics Data System (ADS)
Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming
2015-12-01
Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated by using PC-based human health benefits software. The economic loss is assessed by using the willingness to pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than that of primary PM2.5 in summer.
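The brute-force (zero-out) attribution named above amounts to rerunning the model with one source removed and taking the difference from the baseline. A toy sketch, with an invented linear stand-in for the chemistry/transport model and invented emission numbers:

emissions = {"industry": 40.0, "residential": 30.0, "power": 20.0, "transport": 10.0}

def model(em):
    # Stand-in "chemistry/transport model": concentration responds (here,
    # linearly) to total emissions; a real CTM run would replace this.
    return 0.5 * sum(em.values())

base = model(emissions)
for source, value in emissions.items():
    perturbed = dict(emissions, **{source: 0.0})   # zero out one source
    contribution = base - model(perturbed)
    print(f"{source}: {contribution:.1f} ug/m3 ({100 * contribution / base:.0f}%)")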
Large-scale detection of repetitions
Smyth, W. F.
2014-01-01
Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel-Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
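For contrast with the linear-time runs-based computation, a deliberately brute-force detector of squares (tandem repeats such as CGACGA = CGA·CGA) takes only a few lines; it is cubic in the worst case and usable only on short strings.

def squares(x: str):
    # Report every (start, square) pair by trying all start positions and
    # candidate periods: O(n^3) worst case, the brute-force baseline.
    n = len(x)
    found = []
    for i in range(n):                            # start position
        for p in range(1, (n - i) // 2 + 1):      # candidate period length
            if x[i:i + p] == x[i + p:i + 2 * p]:
                found.append((i, x[i:i + 2 * p]))
    return found

print(squares("ACGACGATT"))   # finds 'ACGACG' at 0, 'CGACGA' at 1, 'TT' at 7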
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Lary, D. J.; Gencaga, D.; Albayrak, A.; Wei, J.
2013-08-01
Measurements made by the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the two datasets' aerosol optical depth measurements shows that there are biases between the two data products. In this paper, we present a general framework for identifying the set of variables responsible for the observed bias, which might be associated with measurement conditions such as the solar and sensor zenith angles, the solar and sensor azimuths, scattering angles, and surface reflectivity at the various measured wavelengths. Specifically, we performed the analysis for the remote sensing Aqua-Land dataset and used a machine learning technique, a neural network in this case, to perform multivariate regression between the ground truth and the training datasets. Finally, we used the mutual information between the observed and predicted values as the measure of similarity to identify the most relevant set of variables. The search is brute force, as we have to consider all possible combinations. The computation involves a huge number-crunching exercise, which we implemented as a job-parallel program.
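A minimal sketch of the approach with synthetic data: brute-force enumeration of variable subsets, an ordinary least-squares fit standing in for the neural network, and a histogram estimate of the mutual information between predictions and ground truth. All data and settings are invented for illustration.

import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(size=(n, 4))                    # four candidate variables
y = 2 * X[:, 0] + X[:, 2] ** 2 + 0.1 * rng.normal(size=n)   # only 0 and 2 matter

def mutual_information(a, b, bins=16):
    # Plug-in MI estimate from a 2-D histogram of the joint distribution.
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

best = None
for k in range(1, 5):
    for subset in itertools.combinations(range(4), k):   # brute-force search
        # Least-squares prediction (with intercept) from this subset.
        A = np.column_stack([X[:, list(subset)], np.ones(n)])
        pred = A @ np.linalg.lstsq(A, y, rcond=None)[0]
        mi = mutual_information(pred, y)
        if best is None or mi > best[0]:
            best = (mi, subset)
print("best subset by MI:", best[1], "MI:", round(best[0], 3))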
1967-07-28
This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot-long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand had an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward onto a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.
Defect-free atomic array formation using the Hungarian matching algorithm
NASA Astrophysics Data System (ADS)
Lee, Woojun; Kim, Hyosub; Ahn, Jaewook
2017-05-01
Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time-efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
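The assignment step can be sketched with SciPy's Hungarian-type solver: pair each loaded trap with a target lattice site so that the total (squared) move distance is minimal. Site coordinates below are invented, and the real cost function in the paper additionally penalizes collision-prone paths.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
occupied = rng.uniform(0, 7, size=(20, 2))       # loaded sites in a 7x7 region
targets = np.array([(x, y) for x in (2, 3, 4) for y in (2, 3, 4)], float)  # 3x3 goal

# Cost matrix: squared distance from every occupied site to every target.
cost = ((occupied[:, None, :] - targets[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)          # optimal assignment, O(n^3)
for r, c in zip(rows, cols):
    print(f"atom at {occupied[r].round(2)} -> target {targets[c]}")
print("total cost:", cost[rows, cols].sum().round(2))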
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary-value conditions. Then, different techniques for implementing energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used; this method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional-integral-derivative (PID) controller tuning technique. Finally, a brute-force, look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
Artificial immune system algorithm in VLSI circuit configuration
NASA Astrophysics Data System (ADS)
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-satisfiability (3-SAT) clauses in order to fit the transistor configuration in a VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compare the artificial immune system algorithm (HNN-3SATAIS) with the brute-force algorithm incorporated into a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.
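For reference, the brute-force baseline (the role played by HNN-3SATBF) can be expressed in a few lines. The clause notation (signed integers for literals, negative meaning negation) is assumed here for illustration.

import itertools

def brute_force_3sat(clauses, n_vars):
    # Try every truth assignment: O(2^n) but exact, the brute-force baseline.
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assign
    return None   # unsatisfiable

clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
print(brute_force_3sat(clauses, 3))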
Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding
NASA Astrophysics Data System (ADS)
Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool
2017-12-01
In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure than intensity images because of non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. The scheme has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key space is large enough to resist brute-force attack, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results indicate that the proposed encryption scheme possesses a high level of security.
Mapping PDB chains to UniProtKB entries.
Martin, Andrew C R
2005-12-01
UniProtKB/SwissProt is the main resource for detailed annotations of protein sequences. This database provides a jumping-off point to many other resources through the links it provides. Among others, these include other primary databases, secondary databases, the Gene Ontology and OMIM. While a large number of links are provided to Protein Data Bank (PDB) files, obtaining a regularly updated mapping between UniProtKB entries and PDB entries at the chain or residue level is not straightforward. In particular, there is no regularly updated resource which allows a UniProtKB/SwissProt entry to be identified for a given residue of a PDB file. We have created a completely automatically maintained database which maps PDB residues to residues in UniProtKB/SwissProt and UniProtKB/trEMBL entries. The protocol uses links from PDB to UniProtKB, from UniProtKB to PDB and a brute-force sequence scan to resolve PDB chains for which no annotated link is available. Finally the sequences from PDB and UniProtKB are aligned to obtain a residue-level mapping. The resource may be queried interactively or downloaded from http://www.bioinf.org.uk/pdbsws/.
Proteinortho: Detection of (Co-)orthologs in large-scale analysis
2011-01-01
Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results: The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rest their applicability solely on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution would make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
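The core multilevel idea can be shown in a two-level toy: estimate an expectation with many cheap coarse-model samples plus a few expensive paired corrections, exploiting E[f] = E[c] + E[f - c]. The "fine" and "coarse" models below are invented stand-ins, chosen so their difference has small variance.

import numpy as np

rng = np.random.default_rng(3)

def fine(u):    # expensive, accurate model (stand-in)
    return np.sin(u) + 0.05 * u ** 2

def coarse(u):  # cheap, slightly biased approximation of the same quantity
    return u - u ** 3 / 6 + 0.05 * u ** 2

# Level 0: a large number of cheap coarse samples.
u0 = rng.normal(size=100_000)
level0 = coarse(u0).mean()

# Level 1: a small number of paired samples estimating the bias E[fine - coarse].
u1 = rng.normal(size=500)
level1 = (fine(u1) - coarse(u1)).mean()

print(f"MLMC estimate: {level0 + level1:.4f}")
# Brute-force reference using the fine model alone.
print(f"plain MC (fine only): {fine(rng.normal(size=100_000)).mean():.4f}")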
Thirty years since diffuse sound reflection by maximum length sequences
NASA Astrophysics Data System (ADS)
Cox, Trevor J.; D'Antonio, Peter
2005-09-01
This year marks the 30th anniversary of Schroeder's seminal paper on sound scattering from maximum length sequences. This paper, along with Schroeder's subsequent publication on quadratic residue diffusers, broke new ground because they contained simple recipes for designing diffusers with known acoustic performance. So, what has happened in the intervening years? As with most areas of engineering, the room acoustic diffuser has been greatly influenced by the rise of digital computing technologies. Numerical methods have become much more powerful, and this has enabled predictions of surface scattering to greater accuracy and for larger-scale surfaces than previously possible. Architecture has also gone through a revolution in which the forms of buildings have become more extreme and sculptural. Acoustic diffuser designs have had to keep pace with this to produce shapes and forms that are desirable to architects. To achieve this, design methodologies have moved away from Schroeder's simple equations to brute-force optimization algorithms. This paper will look back at the past development of the modern diffuser, explaining how the principles of diffuser design have been devised and revised over the decades. The paper will also look at the present state of the art, and dreams for the future.
Expert system for on-board satellite scheduling and control
NASA Technical Reports Server (NTRS)
Barry, John M.; Sary, Charisse
1988-01-01
An expert system is described that Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities: the Satellite Controller. The resources to be scheduled include power, propellant and recording tape. The activities controlled include satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor-intensive and time-consuming ground-operations task. Developing a schedule requires extensive knowledge of system and subsystem operations, operational constraints, and satellite design and configuration. The scheduling process takes highly trained experts anywhere from several hours to several weeks to complete. The process is done through brute force, that is, by examining cryptic mnemonic data offline to interpret the health and status of the satellite. Schedules are then formulated either from practical operator experience or from heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life-cycle costs and decrease dependence on ground control. This reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from the ground to on-board systems.
NASA Astrophysics Data System (ADS)
Dittmar, Harro R.; Kusalik, Peter G.
2016-10-01
As shown previously, it is possible to apply configurational and kinetic thermostats simultaneously in order to induce a steady thermal flux in molecular dynamics simulations of many-particle systems. This flux appears to promote motion along potential gradients and can be utilized to enhance the sampling of ordered arrangements, i.e., it can facilitate the formation of a critical nucleus. Here we demonstrate that the same approach can be applied to molecular systems, and report a significant enhancement of homogeneous crystal nucleation in a carbon dioxide (EPM2 model) system. Quantitative ordering effects and reductions in particle mobility were observed in water (TIP4P-2005 model) and carbon dioxide systems. The enhancement of carbon dioxide crystal nucleation was achieved with relatively small conjugate thermal fields. The effect is many orders of magnitude larger at milder supercooling, where the forward flux sampling method was employed, than at a lower temperature that enabled brute-force simulations of nucleation events. The behaviour exhibited implies that the effective free-energy barrier of nucleation must have been reduced by the conjugate thermal field, in line with our interpretation of previous results for atomic systems.
Pancoska, Petr; Moravek, Zdenek; Moll, Ute M
2004-01-01
Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete, novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. The melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.
Full counting statistics of conductance for disordered systems
NASA Astrophysics Data System (ADS)
Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian
2017-09-01
Quantum transport is a stochastic process in nature. As a result, the conductance is fully characterized by its average value together with its fluctuations, i.e., by full counting statistics (FCS). Since disorder is inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach to fluctuations or cumulants of conductance uses a diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated, especially for high-order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average of the generating function of the FCS of conductance within CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is no longer needed. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application of our theory, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorders. Our numerical results for the cumulants of conductance show remarkable agreement with those obtained by brute-force calculation.
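As a brute-force reference for the kind of quantities such a formalism computes analytically, the first five cumulants of a fluctuating conductance can be estimated directly from samples via central moments. The sample distribution below is an invented stand-in for disorder-averaged transport data.

import numpy as np

rng = np.random.default_rng(5)
g = rng.gamma(shape=4.0, scale=0.25, size=1_000_000)   # toy conductance samples

mu = g.mean()
m = [((g - mu) ** k).mean() for k in range(2, 6)]      # central moments mu_2..mu_5
kappa = {
    1: mu,                      # standard moment-to-cumulant relations
    2: m[0],
    3: m[1],
    4: m[2] - 3 * m[0] ** 2,
    5: m[3] - 10 * m[1] * m[0],
}
for order, value in kappa.items():
    print(f"kappa_{order} = {value:.4f}")
# Check: for a Gamma(k, theta) distribution kappa_n = k * theta^n * (n-1)!,
# giving 1.0, 0.25, 0.125, 0.09375, 0.09375 here.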
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-10
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity-analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite-difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
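The distinction being drawn is between differentiating the objective analytically (or quasi-analytically) and the brute-force finite-difference route, where each gradient component costs extra flow solves. A one-variable toy, with an invented objective standing in for the CFD-based thrust functional:

def objective(x):
    return x ** 3 - 2.0 * x        # toy objective standing in for axial thrust

def analytic_gradient(x):
    return 3.0 * x ** 2 - 2.0      # exact derivative of the toy objective

def finite_difference(f, x, h=1e-6):
    # Central difference: two extra "flow solves" per design variable.
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.3
print("analytic:         ", analytic_gradient(x))
print("finite difference:", finite_difference(objective, x))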
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure by up to a factor of ∼100 when compared to a single-threaded CPU, and by up to a factor of ∼10 when compared to a multithreaded dual-CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position-averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables, including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼0.1 s per pattern, the network is shown to be orders of magnitude faster than a brute-force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least-squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.
Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay
2014-01-01
We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.
NASA Astrophysics Data System (ADS)
Tong, Xiaojun; Cui, Minggen; Wang, Zhu
2009-07-01
The design of a new compound two-dimensional chaotic function is presented, built from two one-dimensional chaotic functions that switch randomly; the compound function is used as a chaotic sequence generator and is proven chaotic by Devaney's definition. The properties of the compound chaotic functions are also proved rigorously. In order to improve robustness against differential cryptanalysis and to produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos, selecting one of the two one-dimensional chaotic functions randomly, and a new pixel permutation and substitution method is designed in detail, with random control of array rows and columns based on the compound chaos. The results from entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; moreover, it accelerates encryption and achieves a higher level of security. By means of dynamical compound chaos and perturbation technology, the paper addresses the low computational precision of one-dimensional chaotic functions.
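The core stream-cipher idea, iterating a chaotic map, quantizing its orbit into key bytes, and combining them with the pixels, can be sketched in a few lines. The logistic map below is a simple stand-in for the paper's compound two-dimensional generator; the seed and parameter values are arbitrary, and the paper's permutation and feedback stages are omitted.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn=1000):
    """Iterate the logistic map x -> r*x*(1-x) and quantize the orbit into
    key bytes. A simple stand-in for the paper's compound 2-D generator."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    key = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        key[i] = int(x * 256) & 0xFF
    return key

def xor_cipher(pixels, x0=0.654321, r=3.999):
    """XOR a flat uint8 pixel array with the chaotic keystream; applying
    the same function twice restores the plaintext."""
    return pixels ^ logistic_keystream(x0, r, pixels.size)

img = np.random.randint(0, 256, 64 * 64, dtype=np.uint8)   # stand-in image
enc = xor_cipher(img)
assert np.array_equal(xor_cipher(enc), img)
```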
Securing Digital Audio using Complex Quadratic Map
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi
2018-03-01
In this digital era, exchanging data is common and easy, which makes data vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. We therefore need a data-securing method that is both robust and fast. One method that matches all of these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). Certain parameter values cause the key stream generated by the CQM function to pass all 15 NIST tests, which means the key stream generated by this CQM is demonstrably random. In addition, samples of encrypted digital audio tested with a goodness-of-fit test are shown to be uniformly distributed, so securing digital audio with this method is not vulnerable to frequency-analysis attacks. The key space is huge, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, on average, both encryption and decryption run about 450 times faster than the duration of the audio itself.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
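The speed-up rests on the fact that convolution with a sum of decaying exponentials admits a one-state-per-pole recursion. The sketch below is generic, not the report's code; the pole and residue values are made up, and it simply compares the O(N) recursion against a brute-force discrete convolution.

```python
import numpy as np

def recursive_convolution(x, poles, residues, dt):
    """O(N) convolution of x with h(t) = sum_k residues[k]*exp(-poles[k]*t):
    each exponential term obeys s[n] = exp(-p*dt)*s[n-1] + x[n]*dt, so only
    the most recent state is needed, never the full history."""
    y = np.zeros(len(x))
    for p, r in zip(poles, residues):
        decay, s = np.exp(-p * dt), 0.0
        for n, xn in enumerate(x):
            s = decay * s + xn * dt        # one-state recursion update
            y[n] += r * s
    return y

# check against a brute-force discrete convolution for one exponential term
dt = 1e-9
t = np.arange(0.0, 1e-6, dt)
x = np.sin(2e7 * t)
kernel = 2.0 * np.exp(-3e7 * t)
brute = np.convolve(x, kernel)[:t.size] * dt    # O(N^2), full history
fast = recursive_convolution(x, [3e7], [2.0], dt)
print(np.max(np.abs(brute - fast)))             # agreement to rounding error
```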
A Site Density Functional Theory for Water: Application to Solvation of Amino Acid Side Chains.
Liu, Yu; Zhao, Shuangliang; Wu, Jianzhong
2013-04-09
We report a site density functional theory (SDFT) based on the conventional atomistic models of water and the universality ansatz of the bridge functional. The excess Helmholtz energy functional is formulated in terms of a quadratic expansion with respect to the local density deviation from that of a uniform system and a universal functional for all higher-order terms approximated by that of a reference hard-sphere system. With the atomistic pair direct correlation functions of the uniform system calculated from MD simulation and an analytical expression for the bridge functional from the modified fundamental measure theory, the SDFT can be used to predict the structure and thermodynamic properties of water under inhomogeneous conditions with a computational cost negligible in comparison to that of brute-force simulations. The numerical performance of the SDFT has been demonstrated with the predictions of the solvation free energies of 15 molecular analogs of amino acid side chains in water represented by SPC/E, SPC, and TIP3P models. For the TIP3P model, a comparison of the theoretical predictions with MD simulation and experimental data shows agreement within 0.64 and 1.09 kcal/mol on average, respectively.
Development testing of large volume water sprays for warm fog dispersal
NASA Technical Reports Server (NTRS)
Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.
1986-01-01
A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.
Multibeam Gpu Transient Pipeline for the Medicina BEST-2 Array
NASA Astrophysics Data System (ADS)
Magro, A.; Hickish, J.; Adami, K. Z.
2013-09-01
Radio transient discovery using next generation radio telescopes will pose several digital signal processing and data transfer challenges, requiring specialized high-performance backends. Several accelerator technologies are being considered as prototyping platforms, including Graphics Processing Units (GPUs). In this paper we present a real-time pipeline prototype capable of processing multiple beams concurrently, performing Radio Frequency Interference (RFI) rejection through thresholding, correcting for the delay in signal arrival times across the frequency band using brute-force dedispersion, event detection and clustering, and finally candidate filtering, with the capability of persisting data buffers containing interesting signals to disk. This setup was deployed at the BEST-2 SKA pathfinder in Medicina, Italy, where several benchmarks and test observations of astrophysical transients were conducted. These tests show that on the deployed hardware eight 20 MHz beams can be processed simultaneously for 640 Dispersion Measure (DM) values. Furthermore, the clustering and candidate filtering algorithms employed prove to be good candidates for online event detection techniques. The number of beams which can be processed increases proportionally to the number of servers deployed and number of GPUs, making it a viable architecture for current and future radio telescopes.
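Brute-force incoherent dedispersion is a shift-and-sum over trial dispersion measures, which is why it maps so well onto GPUs. A minimal NumPy version is sketched below; the frequency band, DM grid, and sampling time are illustrative values, not the BEST-2 deployment's actual configuration.

```python
import numpy as np

def brute_force_dedisperse(dyn, freqs_mhz, dms, dt, f_ref=None):
    """Shift-and-sum incoherent dedispersion over a grid of trial DMs.
    dyn: (nchan, nsamp) dynamic spectrum; dt: sampling time in seconds.
    Every trial DM touches the same data, so the loop is trivially
    parallel across DMs -- the property a GPU pipeline exploits."""
    f_ref = f_ref if f_ref is not None else freqs_mhz.max()
    out = np.zeros((len(dms), dyn.shape[1]))
    for i, dm in enumerate(dms):
        # cold-plasma delay (s) relative to the reference frequency
        delays = 4.148808e3 * dm * (freqs_mhz**-2 - f_ref**-2)
        for ch, s in enumerate(np.round(delays / dt).astype(int)):
            out[i] += np.roll(dyn[ch], -s)      # wrap-around ignored here
    return out

nchan, nsamp = 64, 4096
freqs = np.linspace(400.0, 420.0, nchan)        # MHz, made-up band
dyn = np.random.randn(nchan, nsamp)             # noise-only test spectrum
series = brute_force_dedisperse(dyn, freqs, np.arange(0.0, 640.0, 10.0), 1e-3)
```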
NASA Astrophysics Data System (ADS)
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a linear fractional transform (LFT). It encrypts only the requisite parts of the sensitive information in the lifting-wavelet transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. We then use dynamic keys, instead of the fixed keys used in other approaches, to control the encryption process and hinder attacks. The new S-box is constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid combination of the S-box and chaotic systems strengthens the overall encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for use in cryptographic applications.
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO.
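The essence of Jacobian-free Newton-Krylov is that the Krylov linear solves need only Jacobian-vector products, which can be approximated from residual evaluations alone. The toy below uses SciPy's newton_krylov on an implicit step of a 1-D reaction-diffusion system; the equation, grid, and time step are illustrative stand-ins for the kinetic system described above.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy implicit step for a 1-D reaction-diffusion system: solve
#   F(u) = u - u_old - dt * (laplacian(u) + u - u**3) = 0.
# The Krylov solver needs only residual evaluations (Jacobian-vector
# products are approximated by finite differences), so the Jacobian of
# the coupled nonlinear system is never formed or stored.
n, dt = 64, 0.1
u_old = np.sin(np.linspace(0.0, np.pi, n))

def residual(u):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0*u[1:-1] + u[2:]   # unit spacing, fixed ends
    return u - u_old - dt * (lap + u - u**3)

u_new = newton_krylov(residual, u_old, f_tol=1e-8)
print(np.abs(residual(u_new)).max())           # converged within the step
```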
Optimal heavy tail estimation - Part 1: Order selection
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel A.
2017-12-01
The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
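For intuition, the standard Hill estimator below computes an estimate of alpha for every admissible order k; the order-selection problem the paper addresses is exactly the choice of k from such a scan. This sketch does not reproduce the paper's simulation-based selector, only the scan it selects from.

```python
import numpy as np

def hill_alpha(x, k):
    """Hill estimator of the tail exponent from the k largest values."""
    xs = np.sort(x)[::-1]
    return k / np.sum(np.log(xs[:k] / xs[k]))

def order_scan(x, k_min=5):
    """Evaluate the estimator for every admissible order k; an order
    selector chooses one k from this scan."""
    ks = np.arange(k_min, len(x) // 2)
    return ks, np.array([hill_alpha(x, k) for k in ks])

rng = np.random.default_rng(1)
x = rng.pareto(1.5, 10_000) + 1.0      # Pareto tail with alpha = 1.5
ks, alphas = order_scan(x)
print(alphas[200])                     # near 1.5 for moderate orders
```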
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images to point cloud data, by mounting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of free movement benefits augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal distortion-free camera.
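A standard way to realize the brute-force matching stage is an exhaustive descriptor comparison, as in the OpenCV sketch below. ORB features and the file names are illustrative assumptions, not necessarily the paper's choices; any overlapping pair of real and synthetic views will do.

```python
import cv2

# Brute-force descriptor matching between the "real" smartphone image and
# the synthetic image rendered from the point cloud. File names are
# placeholders; any overlapping pair of views will do.
real = cv2.imread("real_image.png", cv2.IMREAD_GRAYSCALE)
synth = cv2.imread("synthetic_image.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(real, None)
kp2, des2 = orb.detectAndCompute(synth, None)

# BFMatcher compares every descriptor against every other one (brute
# force); cross-checking keeps only mutually-best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "putative image-to-point-cloud correspondences")
```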
Strapping in and Bailing out: Navy and Air Force Joint Acquisition of Aircraft
2002-06-01
Secretary of the Air Force and the Secretary of Defense were not pleased with the attempted shenanigans. With the rarely invoked contractor review...December 14, 1990. Tarascio, Colonel John R., Director of Budget & Appropriations Liaison, Assistant Secretary of the Air Force (Financial Management
Examining the Forces That Guide Teaching Decisions
ERIC Educational Resources Information Center
Griffith, Robin; Massey, Dixie; Atkinson, Terry S.
2013-01-01
This study of two successful first grade teachers examines the forces that guide their instructional decisions. Findings reveal the complexities of forces that influence the moment-to-moment decisions made by these teachers. Teachers repeatedly attempted to balance their desires to be student-centered while addressing state standards and…
Possible Potentials Responsible for Stable Circular Relativistic Orbits
ERIC Educational Resources Information Center
Kumar, Prashant; Bhattacharya, Kaushik
2011-01-01
Bertrand's theorem in the classical mechanics of central force fields attracts us because of its predictive power. It categorically proves that there can only be two types of forces which can produce stable, circular orbits. In this paper an attempt has been made to generalize Bertrand's theorem to the central force problem of relativistic…
Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example
NASA Technical Reports Server (NTRS)
Smart, D. F.; Shea, M. A.
2001-01-01
The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of the cosmic radiation. The early cosmic ray measurements found that cosmic ray intensity was ordered by the magnetic latitude and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. This requires that many of the trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out that the cutoff rigidity was not a simple quantity but had many unanticipated complexities, requiring many hundreds if not thousands of individual trajectory calculations to resolve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report documents a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
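The "brute force" integration amounts to stepping the Lorentz-force equation of motion through a field model, one trajectory at a time. The sketch below integrates a nonrelativistic test proton in a centered dipole with classical RK4; the dipole moment, starting point, and step size are illustrative, and real cutoff work uses far more accurate field models and relativistic rigidities.

```python
import numpy as np

def dipole_B(r):
    """Centered-dipole geomagnetic field (tesla); r in metres."""
    m = np.array([0.0, 0.0, -8.0e22])           # A*m^2, roughly Earth's moment
    k = 1e-7                                     # mu0 / 4*pi
    rn = np.linalg.norm(r)
    return k * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)

def trace(r0, v0, q_over_m, dt, steps):
    """Classical RK4 integration of dr/dt = v, dv/dt = (q/m) v x B(r)."""
    def deriv(y):
        r, v = y[:3], y[3:]
        return np.concatenate([v, q_over_m * np.cross(v, dipole_B(r))])
    y = np.concatenate([r0, v0])
    for _ in range(steps):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
    return y[:3], y[3:]

Re = 6.371e6
r, v = trace(np.array([2.0 * Re, 0.0, 0.0]),        # start at 2 Earth radii
             np.array([0.0, 1.0e7, 0.0]),           # 10^7 m/s test proton
             9.58e7, 1e-4, 20_000)                  # proton q/m (C/kg)
```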
Smith, Tiaki Brett; Hébert-Losier, Kim; McClymont, Doug
2018-05-01
The goal of an offensive Rugby Union lineout is to throw the ball in a manner that allows your team to maintain possession. Typically, the player catching the ball jumps and is lifted upwards by two teammates, reaching above the opposing player who is also competing for the ball. Despite various beliefs regarding the importance of the jumper's mass and attempted jump height, and of the magnitude and point of application of the lifters' force, there are negligible published data on the topic. The squeeze technique is one lifting method commonly employed by New Zealand teams during lineout plays, whereby the jumper initiates the jump quickly and the lifters provide assistance only once the jumper reaches 20-30 cm. While this strategy may reduce cues to the opposition, it might also constrain the jumper and lifters. We developed a model to explore how changes in the jumper's body mass and attempted jump height, and in the magnitude and point of application of the lifters' force, influence the time to reach peak catch height. The magnitude of the lift force affected the time to reach peak catch height the most, followed by the jumper's (attempted) jump height and body mass, and lastly the point of lift force application.
Radar Studies of Aviation Hazards
1994-05-31
PHILLIPS LABORATORY, Directorate of Geophysics, Air Force Materiel Command, Hanscom Air Force Base, MA 01731-3010...techniques that will be candidates for inclusion in the NEXRAD algorithm inventory. Phenomena of particular interest to the Air Force are being...vast majority of thunderstorms in central Colorado. Wilson and Mueller (1993) attempted 30-minute nowcasts of thunderstorms, based primarily on Doppler
Black Male Labor Force Participation.
ERIC Educational Resources Information Center
Baer, Roger K.
This study attempts to test (via multiple regression analysis) hypothesized relationships between designated independent variables and age specific incidences of labor force participation for black male subpopulations in 54 Standard Metropolitan Statistical Areas. Leading independent variables tested include net migration, earnings, unemployment,…
[Interaction of mental health and forced married migrants in Germany].
Kizilhan, Jan
2015-11-01
The study examines the relationship between forced marriage among migrants and the frequency of psychological illness. Forced-married and non-forced-married migrants in psychosomatic clinics in Germany are compared with respect to their psychological illnesses. Forced-married women reported significantly more psychological illness and had, on average, made at least four suicide attempts. Forced-married women suffer lifelong from this event and, taking cultural and migration-specific aspects into account, need special support in psychosocial counselling and medical-therapeutic treatment.
Variables Related to Pre-Service Cannabis Use in a Sample of Air Force Enlistees.
ERIC Educational Resources Information Center
Mullins, Cecil J.; And Others
This report is an attempt to add to the existing information about cannabis use, its correlates, and its effects. The sample population consisted of self-admitted abusers of various drugs, identified shortly after entering the Air Force. The subjects (N=4688) were located through the Drug Control Office at Lackland Air Force Base. Variables…
Infants' Use of Force to Defend Toys: The Origins of Instrumental Aggression
ERIC Educational Resources Information Center
Hay, Dale F.; Hurst, Sarah-Louise; Waters, Cerith S.; Chadwick, Andrea
2011-01-01
The two aims of the study were (a) to determine when infants begin to use force intentionally to defend objects to which they might have a claim and (b) to examine the relationship between toddlers' instrumental use of force and their tendencies to make possession claims. Infants' and toddlers' reactions to peers' attempts to take their toys were…
2011-12-01
sides attempted to deliver explosive-laden unmanned balloons to the enemy. The Japanese revived this technique during World War II, when Japanese forces...attempted to send similar balloons across the Atlantic to cause destruction in the United States. As aircraft technology developed, so did the...taken hostage following a failed hijacking attempt. The objective was to free the American captive and it was a success. 2005-2011, Pakistan
The United States Air Force in Korea: A Chronology, 1950-1953
2000-01-01
War, the U.S. Air Force (USAF) Historian commissioned the Research Division, Air Force Historical Research Agency (AFHRA), Maxwell Air Force Base...and aces. Finally, it attempts to summarize those USAF events in Korea that best illustrate the air war and the application of air power in the...sources, usually to confirm the most significant events of the air war in Korea. AFHRA historians or archivists who researched and wrote the monthly and
A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere
NASA Astrophysics Data System (ADS)
Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael
2017-05-01
We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder (MCS), which measures limb radiances in the thermal part of the spectrum (200-900 cm-1), where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) the Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) the Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances, thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
The evolution of parental care in insects: A test of current hypotheses
Gilbert, James D J; Manica, Andrea
2015-01-01
Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced—for example, by caring for multiple broods—but rejecting the “enhanced fecundity” hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in general trajectory. Integrating these findings with the wealth of life history and ecological data in insects will allow testing of a rich vein of existing hypotheses. PMID:25825047
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time-consuming, especially in the computation of asymptotic-type observables, including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
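A minimal PSO loop captures the method's two ingredients: memory of each particle's personal best and attraction to the swarm-wide best. The sketch below optimizes a smooth two-variable stand-in for the thesis's specific-impulse objective (the real objective requires the FAC equilibrium-flow performance model); the inertia and acceleration coefficients are common textbook defaults, not values from the thesis.

```python
import numpy as np

def pso(f, bounds, n=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm: each particle is pulled toward its personal
    best and the swarm-wide best position each iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)]
    return g

# smooth 2-D stand-in for -Isp(mixture ratio, expansion ratio)
best = pso(lambda p: (p[0] - 2.3)**2 + (p[1] - 15.0)**2 / 50.0,
           [(1.0, 4.0), (5.0, 60.0)])
print(best)          # approaches (2.3, 15.0)
```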
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
In search of robust flood risk management alternatives for the Netherlands
NASA Astrophysics Data System (ADS)
Klijn, F.; Knoop, J. M.; Ligtvoet, W.; Mens, M. J. P.
2012-05-01
The Netherlands' policy for flood risk management is being revised in view of sustainable development against a background of climate change, sea level rise and increasing socio-economic vulnerability to floods. This calls for a thorough policy analysis, which can only be adequate when there is agreement about the "framing" of the problem and about the strategic alternatives that should be taken into account. In support of this framing, we performed an exploratory policy analysis, applying future climate and socio-economic scenarios to account for the autonomous development of flood risks, and defined a number of different strategic alternatives for flood risk management at the national level. These alternatives, ranging from flood protection by brute force to reduction of vulnerability by spatial planning only, were compared with continuation of the current policy on a number of criteria, comprising costs, the reduction of fatality risk and economic risk, and their robustness in relation to uncertainties. We found that a change of policy away from conventional embankments towards gaining control over the flooding process by making the embankments unbreachable is attractive. By thus influencing exposure to flooding, the fatality risk can be effectively reduced at even lower net societal costs than by continuation of the present policy or by raising the protection standards where cost-effective.
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Our objective was to perform a security analysis on the PsychoPass method and to outline the limitations of and possible improvements to the method. We used brute-force analysis and dictionary-attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires the password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
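Claims such as "broken only in hundreds of years" rest on a simple brute-force cost model: search space divided by guess rate. The numbers in the sketch below (alphabet sizes and 10^11 guesses per second) are illustrative assumptions rather than figures from the paper, and real PsychoPass strength is lower than these naive bounds because of the keyboard-adjacency structure the authors analyze.

```python
def years_to_exhaust(alphabet_size, length, guesses_per_sec=1e11):
    """Worst-case brute-force time for a uniformly random string:
    search space / guess rate. The guess rate is an assumed figure."""
    return alphabet_size**length / guesses_per_sec / (3600 * 24 * 365)

# naive upper bounds only: real PsychoPass strength is lower, because
# adjacent-keystroke structure shrinks the effective search space
print(f"24 lowercase characters: {years_to_exhaust(26, 24):.1e} years")
print(f"10 keys over ~94 symbols: {years_to_exhaust(94, 10):.1e} years")
```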
Challenges in the development of very high resolution Earth System Models for climate science
NASA Astrophysics Data System (ADS)
Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun
2017-04-01
The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space, and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute force long climate simulations.
High Performance Analytics with the R3-Cache
NASA Astrophysics Data System (ADS)
Eavis, Todd; Sayeed, Ruhan
Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. The R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party of the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
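The difference between brute-force Monte Carlo and Latin hypercube sampling of unit outages can be shown in a few lines: LHS stratifies each unit's uniform draws so the outage-probability axis is covered evenly with fewer scenarios. The capacities and forced-outage rates below are made-up illustrative values, and the cost-stratified "Smart" procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_units, n_draws = 20, 500
cap = rng.uniform(50.0, 400.0, n_units)      # unit capacities, MW (made up)
p_out = rng.uniform(0.02, 0.10, n_units)     # forced-outage rates (made up)

# brute-force Monte Carlo: independent uniform draw per unit per scenario
u_mc = rng.random((n_draws, n_units))

# Latin hypercube: each unit's n_draws samples are stratified into equal
# probability bins, so the outage axis is covered evenly with fewer draws
ranks = np.argsort(rng.random((n_draws, n_units)), axis=0)
u_lhs = (ranks + rng.random((n_draws, n_units))) / n_draws

for name, u in [("MC ", u_mc), ("LHS", u_lhs)]:
    avail = np.where(u < p_out, 0.0, cap)    # unit is out if draw < rate
    print(name, avail.sum(axis=1).mean())    # mean available capacity, MW
```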
Edge Modeling by Two Blur Parameters in Varying Contrasts.
Seo, Suyoung
2018-06-01
This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, where the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods do not scale to larger datasets, on which they fail; the latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets. We also find the latter approach more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible performance of the classifier and shorten the run time compared with the brute-force method. PMID:28753613
NASA Astrophysics Data System (ADS)
Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars
2017-04-01
We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
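In spirit, the brute-force cross-bearings location is a grid search scoring each candidate cell against the stations' observed back-azimuths. The sketch below uses simplified flat-Earth azimuths and made-up station coordinates and bearings, and omits the clutter-prior correction that is central to the paper's method.

```python
import numpy as np

def cross_bearings_locate(stations, bearings_deg, lats, lons):
    """Brute-force grid search: score every candidate cell by the misfit
    between observed back-azimuths and the azimuth from each station to
    the cell (flat-Earth azimuths for brevity)."""
    best, best_score = None, np.inf
    for lat in lats:
        for lon in lons:
            score = 0.0
            for (slat, slon), obs in zip(stations, bearings_deg):
                pred = np.degrees(np.arctan2(
                    (lon - slon) * np.cos(np.radians(lat)),
                    lat - slat)) % 360.0
                d = abs(pred - obs)
                score += min(d, 360.0 - d) ** 2
            if score < best_score:
                best, best_score = (lat, lon), score
    return best

stations = [(64.0, -147.0), (53.9, 166.5), (50.0, -110.0)]  # made-up arrays
bearings = [200.0, 80.0, 300.0]                             # degrees
print(cross_bearings_locate(stations, bearings,
                            np.arange(30.0, 61.0, 1.0),
                            np.arange(-180.0, -119.0, 1.0)))
```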
Axicons, prisms and integrators: searching for simple laser beam shaping solutions
NASA Astrophysics Data System (ADS)
Lizotte, Todd
2010-08-01
Over the last thirty-five years there have been many papers presented at numerous conferences and published within a host of optical journals. What is presented in many cases is either too exotic or too technically challenging in practical application terms, and it could be said both are testaments to the imagination of engineers and researchers. For many brute force laser processing applications such as paint stripping, large area ablation or general skiving of flex circuits, an inexpensive beam shaper is a welcome tool. Shaping the laser beam for less demanding applications provides for a more uniform removal rate and increases the overall quality of the part being processed. It is a well-known fact that customers like their parts to look good. Many times, complex optical beam shaping techniques are considered because no one is aware of the historical solutions that have been lost to the ages. These complex solutions can range in price from $10,000 to $60,000 and require many months to design and fabricate. This paper will provide an overview of various beam shaping techniques that are both elegant and simple in concept and design. Optical techniques using axicons, prisms and reflective integrators will be discussed in an overview format.
Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F
2012-09-14
We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
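The storage-saving idea, replacing a full two-particle block with its truncated SVD factors, looks like the following in NumPy. The toy "pair function" and tolerance are assumptions; the paper applies this inside a multiresolution spectral-element representation, not to a single dense matrix.

```python
import numpy as np

def svd_compress(a, tol=1e-6):
    """Keep only singular values above tol*s_max, storing the two factor
    matrices instead of the full block."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))            # numerical rank
    return u[:, :r] * s[:r], vt[:r]

# toy "pair function" f(r1, r2) with smooth, hence compressible, structure
x = np.linspace(0.1, 5.0, 200)
f = np.exp(-np.add.outer(x, x)) * (1.0 + 0.1 * np.subtract.outer(x, x)**2)
a_fac, b_fac = svd_compress(f)
print(a_fac.shape[1], "terms; max error",
      np.abs(a_fac @ b_fac - f).max())
```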
NASA Astrophysics Data System (ADS)
Gad-El-Hak, M.
1996-11-01
Considering the extreme complexity of the turbulence problem in general and the unattainability of first-principles analytical solutions in particular, it is not surprising that controlling a turbulent flow remains a challenging task, mired in empiricism and unfulfilled promises and aspirations. Brute force suppression, or taming, of turbulence via active control strategies is always possible, but the penalty for doing so often exceeds any potential savings. The artifice is to achieve a desired effect with minimum energy expenditure. Spurred by the recent developments in chaos control, microfabrication and neural networks, efficient reactive control of turbulent flows, where the control input is optimally adjusted based on feedforward or feedback measurements, is now in the realm of the possible for future practical devices. But regardless of how the problem is approached, combating turbulence is always as arduous as the taming of the shrew. The former task will be emphasized during the oral presentation, but for this abstract we reflect on a short verse from the latter. From William Shakespeare's The Taming of the Shrew. Curtis (Petruchio's servant, in charge of his country house): Is she so hot a shrew as she's reported? Grumio (Petruchio's personal lackey): She was, good Curtis, before this frost. But thou know'st winter tames man, woman, and beast; for it hath tamed my old master, and my new mistress, and myself, fellow Curtis.
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphics processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
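The brute-force integral at issue is the marginal likelihood: an average of the likelihood over prior draws. The sketch below demonstrates the estimator on a toy Gaussian problem rather than the LBA (whose likelihood is far more expensive, hence the GPUs), with a log-sum-exp trick for numerical stability.

```python
import numpy as np

def log_marginal_likelihood(loglike, prior_sampler, n=50_000, seed=0):
    """Monte-Carlo estimate of log p(data | model) = log E_prior[likelihood]:
    draw parameters from the prior and average the likelihood, using a
    log-sum-exp trick for numerical stability."""
    rng = np.random.default_rng(seed)
    ll = np.array([loglike(t) for t in prior_sampler(rng, n)])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

# toy comparison for data y: M1 fixes mu = 0; M2 puts a N(0,1) prior on mu
y = np.random.default_rng(1).normal(0.3, 1.0, 50)
ll = lambda mu: -0.5 * np.sum((y - mu)**2) - 0.5 * len(y) * np.log(2*np.pi)
log_m1 = ll(0.0)
log_m2 = log_marginal_likelihood(ll, lambda r, n: r.normal(0.0, 1.0, n))
print("Bayes factor M2/M1:", np.exp(log_m2 - log_m1))
```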
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of these techniques suffers from accuracy and efficiency problems, previous algorithm designers typically took only one of them to assemble fast implementations, owing to the difficulty of combining them. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques (kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property), we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing the efficiency at running time.
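For reference, the brute-force baseline being accelerated looks like the sketch below: each output pixel is a product-of-Gaussians weighted average over a (2r+1)^2 window, so the cost scales with the kernel size. The parameter values are arbitrary.

```python
import numpy as np

def bilateral_brute_force(img, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Direct bilateral filter: each output pixel is a spatially- and
    range-weighted average of its (2*radius+1)^2 neighbours, so the cost
    grows with the kernel size -- the baseline being accelerated."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

smooth = bilateral_brute_force(np.random.rand(64, 64))
```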
Suicides and Suicide Attempts in the U.S. Military, 2008-2010
ERIC Educational Resources Information Center
Bush, Nigel E.; Reger, Mark A.; Luxton, David D.; Skopp, Nancy A.; Kinn, Julie; Smolenski, Derek; Gahm, Gregory A.
2013-01-01
The Department of Defense Suicide Event Report Program collects extensive information on suicides and suicide attempts from the U.S. Air Force, Army, Marine Corps, and Navy. Data are compiled on demographics, suicide event details, behavioral health treatment history, military history, and information about other potential risk factors such as…
[Attempted and completed homicide in Hamburg--a comparison of two six-year periods].
Herrmann, Julia; Gehl, Axel; Püschel, Klaus; Anders, Sven
2010-01-01
The present study compared cases of attempted and completed homicide in Hamburg from 1984 to 1989 and from 1995 to 2000 (n = 887). Data collection was performed using the police records. Attempted homicide showed a significant increase (34.8% vs. 57.9%, P < 0.0001). The majority of the victims and offenders were male with the share of male victims increasing from 59.7% to 74.2% (P < 0.0001). The age of the victims and offenders ranged between 22 and 40 years in both periods. The share of persons with a nationality other than German increased both in the victims (23.1% vs. 37.2%, P < 0.0001) and in the offenders (26.8% vs. 37.2%, P < 0.0001). The most common motives were interpersonal conflicts and robbery. The most frequently used forms of violence were sharp force, blunt force and strangulation.
Political Correctness and American Academe.
ERIC Educational Resources Information Center
Drucker, Peter F.
1994-01-01
Argues that today's political correctness atmosphere is a throwback to attempts made by the Nazis and Stalinists to force society into conformity. Academia, it is claimed, is being forced to conform to gain control of the institution of higher education. It is predicted that this effort will fail. (GR)
Charting a New Path: Modernizing the U.S. Air Force Fighter Pilots Career Development
2015-12-01
truly provides no new incentives for undecided fighter pilots and is proving to be an antiquated attempt to maintain the fighter pilot force...growing technological requirements. Weapons shops are responsible for a growing number of tasks to support combat operations. Crypto
Scalable and Accurate SMT-based Model Checking of Data Flow Systems
2013-10-30
guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have
Operational Risk Preparedness: General George H. Thomas and the Franklin-Nashville Campaign
2014-05-22
monograph analyzes and compares thoughts on risk from multiple disciplines and viewpoints to develop a suitable definition and corresponding principles...sounds similar to Sun Tzu: "from the enemy's character, from his institutions, the state of his affairs and his general situation, each side, using...changes through brute strength, but do not gain from change, they merely continue to exist. He therefore introduced the term antifragile—a system that
Fast equilibration protocol for million atom systems of highly entangled linear polyethylene chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sliozberg, Yelena R.; TKC Global, Inc., Aberdeen Proving Ground, Maryland 21005; Kröger, Martin
Equilibrated systems of entangled polymer melts cannot be produced using direct brute force equilibration due to the slow reptation dynamics exhibited by high molecular weight chains. Instead, these dense systems are produced using computational techniques such as Monte Carlo-Molecular Dynamics hybrid algorithms, though the use of soft potentials has also shown promise mainly for coarse-grained polymeric systems. Through the use of soft potentials, the melt can be equilibrated via molecular dynamics at intermediate and long length scales prior to switching to a Lennard-Jones potential. We will outline two different equilibration protocols, which use various degrees of information to produce the starting configurations. In one protocol, we use only the equilibrium bond angle, bond length, and target density during the construction of the simulation cell, where the information is obtained from available experimental data and extracted from the force field without performing any prior simulation. In the second protocol, we moreover utilize the equilibrium radial distribution function and dihedral angle distribution. This information can be obtained from experimental data or from a simulation of short unentangled chains. Both methods can be used to prepare equilibrated and highly entangled systems, but the second protocol is much more computationally efficient. These systems can be strictly monodisperse or optionally polydisperse depending on the starting chain distribution. Our protocols, which utilize a soft-core harmonic potential, will be applied for the first time to equilibrate a million particle system of polyethylene chains consisting of 1000 united atoms at various temperatures. Calculations of structural and entanglement properties demonstrate that this method can be used as an alternative towards the generation of entangled equilibrium structures.
Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules
NASA Astrophysics Data System (ADS)
Grazioli, Gianmarc
Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are to either develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or coarse-grained methodologies that recast the problem in reduced spaces that can be exhaustively searched. This thesis will begin with a description of work carried out in the vein of the second approach, where a Smoluchowski diffusion equation model was developed that accurately reproduces the rate-versus-force relationship observed in the mechano-catalytic disulphide bond cleavage that occurs during thioredoxin-catalyzed reduction of disulphide bonds. Next, three different novel enhanced sampling methods developed in the vein of the first approach will be described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.
The Big Bang, Superstring Theory and the origin of life on the Earth.
Trevors, J T
2006-03-01
This article examines the origin of life on Earth and its connection to superstring theory, which attempts to explain all phenomena in the universe (a 'Theory of Everything') by unifying the four known forces with relativity and quantum theory. The four forces of gravity, electromagnetism, and the strong and weak nuclear interactions were all present and necessary for the origin of life on Earth. It was the separation of the unified force into four singular forces that allowed the origin of life.
Displaced Pride: Attacking Cynicism at the United States Air Force Academy
2009-04-01
or civilian life. I for one am very cynical when it comes to our political leadership...their attempts to solve the current financial crisis...cadet shenanigans might doom your Air Force career with UCMJ action. Fear was the only motivator, unless you held on to your own intrinsic
SIMULATION OF INTRINSIC BIOREMEDIATION PROCESSES AT WURTSMITH AIR FORCE BASE, MICHIGAN
In October 1988, a KC-135 aircraft crashed at Wurtsmith Air Force Base (AFB), Oscoda, Michigan, during an attempted landing. Approximately 3000 gallons of jet fuel (JP-4) were spilled onto the ground, with a large portion of the fuel entering the subsurface. Previous investigat...
NASA Astrophysics Data System (ADS)
Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.
2013-12-01
Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Because information about reservoir parameters is lacking, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. Instead, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers; the dependence of model output on these multipliers is captured with the expansion-based reduced model. We then combine the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become affordable within the suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating depends strongly on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure; as a result, the aPC-based response surface used in Bootstrap filtering was initially fitted to a distant and poorly chosen region of the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback at small computational cost: the iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site, with an excellent match to the data, that can be used for further studies. References: [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4):671-687, 2013.
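As a toy illustration of the surrogate-plus-Bootstrap-filter idea (a sketch only, not the authors' aPC implementation), the snippet below fits a cheap polynomial to a handful of "expensive" model evaluations and then performs weighted resampling against a single observation; every name and number is invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_model(k):                # stand-in for a 12-day reservoir run
        return 3.0 * np.log(k) + 0.5 * k   # toy pressure response

    # 1) Run the expensive model at a handful of collocation points only.
    k_nodes = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    surrogate = np.polynomial.Polynomial.fit(k_nodes, expensive_model(k_nodes), deg=2)

    # 2) Bootstrap filtering against an observation, using the cheap surrogate.
    p_obs, sigma = 4.2, 0.3
    prior = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)    # prior samples
    w = np.exp(-0.5 * ((surrogate(prior) - p_obs) / sigma) ** 2)
    w /= w.sum()
    posterior = rng.choice(prior, size=10_000, p=w)             # resampling step
    print(f"posterior mean of the multiplier: {posterior.mean():.3f}")

    # An iterative variant would refit the surrogate around the posterior
    # region and repeat, mirroring the iteration of reference [2].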
Quit attempts in response to smoke-free legislation in England.
Hackshaw, Lucy; McEwen, Andy; West, Robert; Bauld, Linda
2010-04-01
To determine whether England's smoke-free legislation, introduced on 1 July 2007, influenced intentions and attempts to stop smoking. National household surveys were conducted in England between January 2007 and December 2008. The sample was weighted to match census data on demographics and included 10 560 adults aged 16 or over who reported having smoked within the past year. A greater percentage of smokers reported making a quit attempt in July and August 2007 (8.6%, n=82) compared with July and August 2008 (5.7%, n=48) (Fisher's exact test, p=0.022); there was no significant difference in the number of quit attempts made at other times in 2007 compared with 2008. In the 5 months following the introduction of the legislation, 19% (n=75) of smokers making a quit attempt reported that they had done so in response to the legislation. There were no significant differences in these quit attempts with regard to gender, social grade or cigarette consumption; there was, however, a significant linear trend with increasing age (χ²=7.755, df=1, p<0.005). The prevalence of respondents planning to quit before the ban came into force decreased over time, while the proportion who planned to quit when the ban came into force increased as the ban drew closer. England's smoke-free legislation was associated with a significant temporary increase in the percentage of smokers attempting to stop, equivalent to over 300 000 additional smokers trying to quit. As a prompt to quitting, the ban appears to have been equally effective across all social grades.
Forced guidance and distribution of practice in sequential information processing.
NASA Technical Reports Server (NTRS)
Decker, L. R.; Rogers, C. A., Jr.
1973-01-01
Distribution of practice and forced guidance were used in a sequential information-processing task in an attempt to increase the capacity of human information-processing mechanisms. A reaction time index of the psychological refractory period was used as the response measure. Massing of practice lengthened response times while forced guidance shortened them. Interpretation was in terms of load reduction upon the response-selection stage of the information-processing system.
The strategic function of attempted suicide.
Katschnig, H; Steinert, H
1975-01-01
Attempting suicide is regarded as a strategy for getting out of emotionally troublesome situations. This strategy is a bodily and risky 'cry for help', but also one with an almost certain chance of success, since the bodily self-damage forces significant others to show indulgent behaviour. As this indulgent behaviour actually functions to relieve significant others of feelings of guilt and of real social pressures, it very often diminishes with time, so that the effect of the 'attempted suicide strategy' proves to be very short-lived. The relation between this concept and some epidemiological findings is discussed, and the consequences of this approach for the management of attempted suicides are pointed out.
From brute luck to option luck? On genetics, justice, and moral responsibility in reproduction.
Denier, Yvonne
2010-04-01
The structure of our ethical experience depends, crucially, on a fundamental distinction between what we are responsible for doing or deciding and what is given to us. As such, the boundary between chance and choice is the spine of our conventional morality, and any serious shift in that boundary is thoroughly dislocating. Against this background, I analyze the way in which techniques of prenatal genetic diagnosis (PGD) pose such a fundamental challenge to our conventional ideas of justice and moral responsibility. After a short description of the situation, I first examine the influential luck egalitarian theory of justice, which is based on the distinction between choice and luck or, more specifically, between option luck and brute luck, and the way in which it would approach PGD (section II), followed by an analysis of the conceptual incoherencies (section III) and moral problems (section IV) that come with such an approach. In short, the case of PGD shows that the luck egalitarian approach fails to express equal respect for the individual choices of people. The paradox is that by overemphasizing the fact of choice as such, without regard for the social framework in which choices are made, or for the fundamental and existential nature of particular choices (such as choosing to have children, not to undergo PGD, or not to abort a handicapped fetus), such choices actually become impossible.
2014-05-02
Interagency Coordination Centers (JIACs), Interagency Task Forces (IATFs) are found within GCCs and subordinate military units in an attempt to bridge...Interagency Task Forces (IATFs) that exist at each Geographic Combatant Command (GCC). Rather, this chapter serves to highlight the Civil Military
27 CFR 478.99 - Certain prohibited sales or deliveries.
Code of Federal Regulations, 2010 CFR
2010-04-01
...; (6) Has been discharged from the Armed Forces under dishonorable conditions; (7) Who, having been a... explicitly prohibits the use, attempted use, or threatened use of physical force against such intimate... such ammunition. [T.D. ATF-270, 53 FR 10497, Mar. 31, 1988, as amended by T.D. ATF-363, 60 FR 17454...
The Relationship of Curriculum to School District Organization.
ERIC Educational Resources Information Center
Turner, Harold E.
This paper attempts to place in perspective the necessary relationships between school district organization and the curriculum in today's society. In the first part of the presentation, the author describes some of the forces currently affecting the curriculum and those forces likely to continue making an impact in the future. Future needs…
Over 600 Cottonwood trees were planted over a shallow groundwater plume in an attempt to detoxify the trichloroethylene (TCE) in a groundwater plume at a former Air Force facility. Two planting techniques were used: rooted stock about two years old, and 18 inch cuttings were inst...
Cellular Basis of Mechanotransduction
NASA Technical Reports Server (NTRS)
Ingber, Donald E.
1996-01-01
Physical forces, such as those due to gravity, are fundamental regulators of tissue development. To influence morphogenesis, mechanical forces must alter growth and function. Yet little is known about how cells convert mechanical signals into a chemical response. This presentation attempts to place the potential molecular mediators of mechanotransduction within the context of the structural complexity of living cells.
The Force-Frequency Relationship: Insights from Mathematical Modeling
ERIC Educational Resources Information Center
Puglisi, Jose L.; Negroni, Jorge A.; Chen-Izu, Ye; Bers, Donald M.
2013-01-01
The force-frequency relationship has intrigued researchers since its discovery by Bowditch in 1871. Many attempts have been made to construct mathematical descriptions of this phenomenon, beginning with the simple formulation of Koch-Weser and Blinks in 1963 and extending to the most sophisticated ones of today. This property of cardiac muscle is amplified by…
ERIC Educational Resources Information Center
Allingham, John D.; Spencer, Byron G.
To followup an earlier study of the relative importance of age, education, and marital status as variables influencing female participation in the labor force, this research attempts to measure the relative importance of similar factors in determining whether or not a woman works or wishes to work. Particular emphasis was given to such…
The United States Space Force: Not If, But When
2016-06-01
context behind the genesis of the United States Air Force in an attempt to understand what contextual factors must be present in order for the nation to...the contextual elements that surrounded the creation of the Air Force, the paper is able to extrapolate what the necessary and sufficient conditions...military. The rise of the USAF from the US Army Air Corps not only resulted from technological developments, but also from many contextual
Suicidal Behavior Outcomes of Childhood Sexual Abuse: Longitudinal Study of Adjudicated Girls
Rabinovitch, Sara M.; Kerr, David C. R.; Leve, Leslie D.; Chamberlain, Patricia
2014-01-01
Childhood sexual abuse (CSA) histories are prevalent among adolescent girls in the juvenile justice system (JJS) and may contribute to their high rates of suicidal behavior. Among 166 JJS girls who participated in an intervention trial, baseline CSA and covariates were examined as predictors of suicide attempt and non-suicidal self-injury (NSSI) reported at long-term follow-up (7–12 years later). Early forced CSA was related to lifetime suicide attempt and NSSI history, and (marginally) to post-baseline attempt; effects were not mediated by anxiety or depressive symptoms. Findings suggest that earlier victimization and younger entry into JJS are linked with girls’ suicide attempt and NSSI. PMID:25370436
The Air Force Role in Low-Intensity Conflict
1986-10-01
attempt, Kabbej was chief pilot of Royal Air Maroc, Morocco's national airline. He was more than a pilot who happened to gain the trust of the King...Royal Air Maroc. But he kept a reserve commission in the air force. Thus, because of his active duty and reserve service, Kabbej is not regarded in
Friction. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].
ERIC Educational Resources Information Center
2000
Most people think friction is what happens when two things are rubbed together. But there's so much more! Friction is a force that resists motion. Yet, without friction, motion would be impossible. Students will learn more about this natural force and the attempts made at controlling it in the world. Includes a hands-on activity and graphic…
Knee joint forces: prediction, measurement, and significance
D’Lima, Darryl D.; Fregly, Benjamin J.; Patil, Shantanu; Steklov, Nikolai; Colwell, Clifford W.
2011-01-01
Knee forces are highly significant in osteoarthritis and in the survival and function of knee arthroplasty. A large number of studies have attempted to estimate forces around the knee during various activities. Several approaches have been used to relate knee kinematics and external forces to internal joint contact forces, the most popular being inverse dynamics, forward dynamics, and static body analyses. Knee forces have also been measured in vivo after knee arthroplasty, which serves as valuable validation of computational predictions. This review summarizes the results of published studies that measured knee forces during various activities. The efficacy of various methods to alter knee force distribution, such as gait modification, orthotics, walking aids, and custom treadmills, is analyzed. Current gaps in our knowledge are identified and directions for future research in this area are outlined. PMID:22468461
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.
2006-03-01
This paper reports preliminary results for a Limited-area Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-to-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key trigger of convection. To accomplish this task while limiting computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs: starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is also based on RAMS, has 36-km horizontal resolution, and is generated by 51 members, each nested in an ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of the case studies, whereas objective analysis uses deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected, which strongly suggests that enhanced horizontal resolution matters more than ensemble population for Calabria in these cases. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. Due to local and mesoscale forcing, the high-resolution forecast (Hi-Res) performs better than the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields more false alarms, which degrade the objective scores at lower thresholds. To exploit the advantages of a probabilistic forecast over a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising, even if additional studies are required.
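The cluster-then-select step lends itself to a compact sketch; the version below uses scikit-learn's KMeans on flattened forecast fields and picks, for each cluster, the member nearest its centroid (all data and dimensions are invented, and the operational system may use a different clustering variant).

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    members = rng.normal(size=(51, 500))      # 51 EPS members x flattened field

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(members)

    # Representative member per cluster: the one closest to its centroid.
    reps = []
    for c in range(5):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(members[idx] - km.cluster_centers_[c], axis=1)
        reps.append(int(idx[np.argmin(dist)]))
    print("representative members:", sorted(reps))

    # Each representative then supplies initial and boundary conditions for
    # one high-resolution limited-area run: a 5-member LEPS instead of 51.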
Recognizing human actions by learning and matching shape-motion prototype trees.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2012-03-01
A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
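The speed-up from the distance look-up table can be sketched in a few lines; here prototypes are plain feature vectors and Euclidean distance stands in for the joint shape-motion distance, both simplifying assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    K = 64                                    # number of learned prototypes
    prototypes = rng.normal(size=(K, 128))

    # Precompute the K x K prototype-to-prototype distance table once.
    diff = prototypes[:, None, :] - prototypes[None, :, :]
    table = np.linalg.norm(diff, axis=-1)

    # Two videos represented as label sequences over the prototype alphabet.
    seq_a = rng.integers(0, K, size=300)
    seq_b = rng.integers(0, K, size=300)

    # Each frame-to-frame distance is now a single table lookup instead of a
    # 128-dimensional distance computation per frame pair.
    frame_dists = table[seq_a, seq_b]
    print(frame_dists.mean())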
Time Series Discord Detection in Medical Data using a Parallel Relational Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.
Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because data collected from high-frequency medical sensors is voluminous, storing and processing continuous medical data is an emerging big data area. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is a subsequence that has the maximum distance to the rest of the time series subsequences, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding, and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
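The brute-force variant has a compact definition, sketched below under simplifying assumptions (z-normalized Euclidean distance, an in-memory toy series rather than 240 Hz waveforms inside a parallel DBMS).

    import numpy as np

    def brute_force_discord(ts, m):
        """Return (index, distance) of the length-m subsequence whose nearest
        non-self match is farthest away, i.e. the top discord."""
        n = len(ts) - m + 1
        subs = np.array([ts[i:i + m] for i in range(n)])
        subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
        best_idx, best_dist = -1, -np.inf
        for i in range(n):
            d = np.linalg.norm(subs - subs[i], axis=1)
            d[max(0, i - m + 1):min(n, i + m)] = np.inf   # skip trivial self-matches
            if d.min() > best_dist:
                best_idx, best_dist = i, d.min()
        return best_idx, best_dist

    rng = np.random.default_rng(3)
    ts = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.05 * rng.normal(size=2000)
    ts[1200:1230] += 1.5                       # implant an anomaly
    print(brute_force_discord(ts, m=30))       # lands near the implanted segment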
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulating design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Because these standards require a large number of load simulations, a method is presented here that significantly reduces the computational effort for DLC simulations by introducing a reduced nonlinear model with simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system contains only basic mathematical operations, with no iterations or internal loops, which makes it computationally very efficient. Global turbine extreme and fatigue loads, such as rotor thrust, tower base bending moment, and mooring line tension, as well as platform motions, are outputs of the model. They can be used to identify critical and less critical load situations, which can then be analysed with a higher-fidelity tool, speeding up the design process. Results from these reduced-model DLC simulations are presented and compared to higher-fidelity models. Results in the frequency and time domains, as well as extreme and fatigue load predictions, demonstrate good agreement between the reduced and advanced models, making it possible to efficiently exclude less critical DLC simulations and to identify the most critical subset of cases for a given design. Additionally, the model is applicable to brute-force optimization of floater control system parameters.
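The closing remark about brute-force tuning is only practical because a reduced-model evaluation costs milliseconds; the sketch below runs an exhaustive sweep over two gains of a toy one-degree-of-freedom platform model with a crude fatigue proxy (all dynamics, gains, and the load proxy are invented for illustration).

    import numpy as np

    def reduced_model_fatigue(kp, kd, dt=0.05, steps=4000):
        """Integrate a toy ODE x'' = -kp*x - kd*v + forcing and return the
        summed load-range magnitude as a crude fatigue proxy."""
        x, v, t, loads = 0.0, 0.0, 0.0, []
        for _ in range(steps):
            f = np.sin(0.6 * t) + 0.3 * np.sin(2.1 * t)   # wave + wind proxy
            v += (-kp * x - kd * v + f) * dt
            x += v * dt
            t += dt
            loads.append(kp * x + kd * v)                  # actuator load proxy
        return np.abs(np.diff(loads)).sum()

    # One evaluation is cheap, so an exhaustive (brute-force) sweep is feasible.
    grid = [(kp, kd) for kp in np.linspace(0.5, 5.0, 10)
                     for kd in np.linspace(0.1, 2.0, 10)]
    best = min(grid, key=lambda g: reduced_model_fatigue(*g))
    print("best (kp, kd):", best)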
Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.
Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David
2008-07-01
Some present-day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum, and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see which aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
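The brute-force random-walk core is easy to sketch; the toy below propagates photons through a slab with isotropic scattering and exponential free paths, omitting the paper's polarization and Berry-phase tracking entirely (scattering coefficient and slab depth are arbitrary).

    import numpy as np

    rng = np.random.default_rng(4)

    def propagate_photons(n_photons, mu_s=10.0, slab=1.0):
        """Random-walk photons into a slab; return each photon's deepest z."""
        depths = np.zeros(n_photons)
        for i in range(n_photons):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])         # launch along +z
            deepest = 0.0
            while 0.0 <= pos[2] < slab:                   # exit: backscatter or transmit
                pos = pos + rng.exponential(1.0 / mu_s) * direction
                deepest = max(deepest, pos[2])
                cos_t = rng.uniform(-1.0, 1.0)            # isotropic scatter
                phi = rng.uniform(0.0, 2.0 * np.pi)
                sin_t = np.sqrt(1.0 - cos_t ** 2)
                direction = np.array([sin_t * np.cos(phi),
                                      sin_t * np.sin(phi), cos_t])
            depths[i] = deepest
        return depths

    print("mean max depth:", propagate_photons(2000).mean())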
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2015-04-15
By means of molecular dynamics (MD) simulations, we demonstrate that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulations is H₂ > CO₂ ≫ N₂ > Ar > CH₄, which generally follows the trend in kinetic diameters. This trend is also confirmed by fluxes based on the computed free-energy barriers for gas permeation, obtained using the umbrella sampling method together with the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO₂/N₂ mixtures further demonstrate the separation capability of nanoporous graphene. Graphical abstract: Classical molecular dynamics simulations show the flux trend H₂ > CO₂ ≫ N₂ > Ar > CH₄ for permeation through porous graphene, in excellent agreement with a recent experiment. Highlights: • Classical MD simulations show the flux trend H₂ > CO₂ ≫ N₂ > Ar > CH₄ for permeation through porous graphene. • Free-energy calculations yield permeation barriers for those gases. • Selectivities for several gas pairs are estimated from the free-energy barriers and the kinetic theory of gases. • The selectivity trend is in excellent agreement with a recent experiment.
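The final estimation step (barriers plus kinetic theory) amounts to an Arrhenius factor over a wall-collision rate; the sketch below shows the arithmetic with placeholder barrier heights that are not the paper's computed values.

    import numpy as np

    kB, T, amu, NA = 1.380649e-23, 300.0, 1.66053907e-27, 6.02214076e23

    gases = {                 # (molar mass [amu], permeation barrier [kJ/mol])
        "H2":  (2.016, 5.0),  # barrier values are placeholders, NOT from the paper
        "CO2": (44.01, 8.0),
        "N2":  (28.01, 20.0),
        "CH4": (16.04, 35.0),
    }

    def flux(mass_amu, barrier_kj_mol):
        """Collision rate per area ~ 1/sqrt(2*pi*m*kB*T), scaled by the
        Arrhenius factor exp(-Ea/(kB*T)) for crossing the pore barrier."""
        ea = barrier_kj_mol * 1e3 / NA                     # J per molecule
        return np.exp(-ea / (kB * T)) / np.sqrt(2 * np.pi * mass_amu * amu * kB * T)

    f = {g: flux(*p) for g, p in gases.items()}
    print("H2/CH4 selectivity:", f["H2"] / f["CH4"])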
A Survey of Image Encryption Algorithms
NASA Astrophysics Data System (ADS)
Kumari, Manju; Gupta, Shailender; Sardana, Pranshul
2017-12-01
Security of data/images is one of the crucial aspects of the gigantic and still expanding domain of digital transfer. Encryption of images is one of the well-known mechanisms for preserving the confidentiality of images sent over unrestricted public media; such media are vulnerable to attack, so efficient encryption algorithms are a necessity for secure data transfer. Various techniques have been proposed in the literature to date, each with an edge over the others, to keep up with the ever-growing need for security. This paper is an effort to compare the most popular techniques on the basis of various performance metrics, including differential, statistical, and quantitative attack analyses. To measure their efficacy, all of the mature modern techniques were implemented in MATLAB-2015. The results show that the chaotic schemes used in the study produce highly scrambled encrypted images with uniform histogram distributions. In addition, the encrypted images exhibit very low correlation coefficient values in the horizontal, vertical, and diagonal directions, proving their resistance against statistical attacks. These schemes are also able to resist differential attacks, as they show high sensitivity to initial conditions, i.e. pixel and key values. Finally, the schemes provide a large key space, and hence can resist brute-force attacks, while requiring very little computational time for image encryption/decryption in comparison to other schemes available in the literature.
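One of the statistical metrics mentioned, the adjacent-pixel correlation coefficient, is simple to compute; the sketch below contrasts a synthetic "natural" image (strong neighbour correlation) with uniform random noise standing in for an ideal ciphertext.

    import numpy as np

    def adjacent_correlations(img):
        """Correlation of horizontally, vertically and diagonally adjacent
        pixels: near 1 for natural images, near 0 after good encryption."""
        img = img.astype(float)
        pairs = {
            "horizontal": (img[:, :-1], img[:, 1:]),
            "vertical":   (img[:-1, :], img[1:, :]),
            "diagonal":   (img[:-1, :-1], img[1:, 1:]),
        }
        return {k: np.corrcoef(a.ravel(), b.ravel())[0, 1]
                for k, (a, b) in pairs.items()}

    rng = np.random.default_rng(5)
    natural = np.cumsum(rng.integers(-3, 4, size=(128, 128)), axis=1) % 256
    cipher = rng.integers(0, 256, size=(128, 128))   # ideal ciphertext proxy
    print("plain: ", adjacent_correlations(natural))
    print("cipher:", adjacent_correlations(cipher))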
1965-04-16
This photograph depicts a dramatic view of the first test firing of all five F-1 engines for the Saturn V S-IC stage at the Marshall Space Flight Center. The test lasted a full duration of 6.5 seconds. It also marked the first test performed in the new S-IC static test stand and the first test using the new control blockhouse. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. Required to hold down the brute force of a 7,500,000-pound thrust, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand had an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward onto a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.
Heterozygote PCR product melting curve prediction.
Dwight, Zachary L; Palais, Robert; Kent, Jana; Wittwer, Carl T
2014-03-01
Melting curve prediction of PCR products is limited to perfectly complementary strands. Multiple domains are calculated by recursive nearest neighbor thermodynamics. However, the melting curve of an amplicon containing a heterozygous single-nucleotide variant (SNV) after PCR is the composite of four duplexes: two matched homoduplexes and two mismatched heteroduplexes. To better predict the shape of composite heterozygote melting curves, 52 experimental curves were compared with brute force in silico predictions varying two parameters simultaneously: the relative contribution of heteroduplex products and an ionic scaling factor for mismatched tetrads. Heteroduplex products contributed 25.7 ± 6.7% to the composite melting curve, varying from 23%-28% for different SNV classes. The effect of ions on mismatch tetrads scaled to 76%-96% of normal (depending on SNV class) and averaged 88 ± 16.4%. Based on uMelt (www.dna.utah.edu/umelt/umelt.html) with an expanded nearest neighbor thermodynamic set that includes mismatched base pairs, uMelt HETS calculates helicity as a function of temperature for homoduplex and heteroduplex products, as well as the composite curve expected from heterozygotes. It is an interactive Web tool for efficient genotyping design, heterozygote melting curve prediction, and quality control of melting curve experiments. The application was developed in Actionscript and can be found online at http://www.dna.utah.edu/hets/. © 2013 WILEY PERIODICALS, INC.
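The composite-curve arithmetic is worth making explicit: the heterozygote curve is a weighted mixture of two homoduplex and two heteroduplex curves. The sketch below uses toy logistic melting curves with invented Tm values; only the roughly 26% heteroduplex weight comes from the fitted average reported above.

    import numpy as np

    def helicity(T, tm, width=1.5):
        """Toy two-state melting curve: helical fraction vs temperature."""
        return 1.0 / (1.0 + np.exp((T - tm) / width))

    T = np.linspace(70.0, 95.0, 500)
    homo = (helicity(T, 84.0) + helicity(T, 83.2)) / 2   # matched duplexes
    het = (helicity(T, 80.5) + helicity(T, 81.0)) / 2    # mismatched duplexes

    w_het = 0.257                                        # heteroduplex fraction
    composite = (1 - w_het) * homo + w_het * het         # heterozygote curve
    print(composite[::100].round(3))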
Comparison of two laryngeal tissue fiber constitutive models
NASA Astrophysics Data System (ADS)
Hunter, Eric J.; Palaparthi, Anil Kumar Reddy; Siegmund, Thomas; Chan, Roger W.
2014-02-01
Biological tissues are complex time-dependent materials, and the best choice of time-dependent constitutive description is not evident. This report reviews two constitutive models (a modified Kelvin model and a two-network Ogden-Boyce model) for characterizing the passive stress-strain properties of laryngeal tissue under tensile deformation. The two models are compared, as are automated methods for parameterizing them from tissue stress-strain data (a brute-force search versus a common optimization method). Sensitivity (error curves) of the parameters of both models and the optimized parameter sets are calculated and contrasted by fitting to the same tissue stress-strain data. Both models adequately characterized empirical stress-strain datasets and could be used to recreate a good likeness of the data. Nevertheless, the parameters of both models were sensitive to measurement errors or uncertainties in the stress-strain data, which would greatly reduce confidence in those parameters. The modified Kelvin model emerges as potentially the better choice for phonation models that use a tissue model as one component, or for general comparisons of the mechanical properties of one type of tissue to another (e.g., axial stress nonlinearity). In contrast, the Ogden-Boyce model would be more appropriate for building a basic understanding of the tissue's mechanical response, with better insight into the tissue's physical characteristics in terms of standard engineering metrics such as shear modulus and viscosity.
Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.
NASA Astrophysics Data System (ADS)
Hawary, A. F.; Razak, N. A.
2018-05-01
Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator is limited by line of sight and prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities so that it can navigate a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former task uses a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm; the two are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path-planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. We therefore explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers, manually emulating the motion of an actual UAV model prior to tests at the flying site. The results show that the range-finder sensor provides real-time data to the algorithm, which finds a collision-free path and ultimately re-optimizes the route successfully.
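The quick re-planning step can be sketched with the Nearest Neighbour rule alone (the GA that builds the initial tour is omitted); the waypoints and Euclidean cost are illustrative.

    import numpy as np

    def nearest_neighbour_tour(points, start=0):
        """Greedy tour: repeatedly fly to the closest unvisited waypoint."""
        unvisited = set(range(len(points)))
        tour = [start]
        unvisited.remove(start)
        while unvisited:
            cur = points[tour[-1]]
            nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - cur))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    rng = np.random.default_rng(6)
    waypoints = rng.uniform(0.0, 100.0, size=(12, 2))
    print("visit order:", nearest_neighbour_tour(waypoints))

    # On a detected obstacle, the remaining waypoints can be re-ordered from
    # the current position by re-running the same greedy step.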
The evolution of parental care in insects: A test of current hypotheses.
Gilbert, James D J; Manica, Andrea
2015-05-01
Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute-force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced (for example, by caring for multiple broods), but rejecting the "enhanced fecundity" hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in its general trajectory. Integrating these findings with the wealth of life history and ecological data on insects will allow testing of a rich vein of existing hypotheses. © 2015 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Evaluation of CMAQ and CAMx Ensemble Air Quality Forecasts during the 2015 MAPS-Seoul Field Campaign
NASA Astrophysics Data System (ADS)
Kim, E.; Kim, S.; Bae, C.; Kim, H. C.; Kim, B. U.
2015-12-01
The performance of air quality forecasts during the 2015 MAPS-Seoul field campaign was evaluated. A forecast system was operated to support the campaign's daily aircraft route decisions for airborne measurements of long-range transported plumes. We utilized two real-time ensemble systems based on the Weather Research and Forecasting (WRF)-Sparse Matrix Operator Kernel Emissions (SMOKE)-Comprehensive Air quality Model with extensions (CAMx) modeling framework and the WRF-SMOKE-Community Multiscale Air Quality (CMAQ) framework over northeastern Asia to simulate PM10 concentrations. The Global Forecast System (GFS) from the National Centers for Environmental Prediction (NCEP) provided meteorological inputs for the forecasts. For an additional set of retrospective simulations, the ERA-Interim Reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) was also utilized to assess forecast uncertainties arising from the meteorological data used. The Model Inter-Comparison Study for Asia (MICS-Asia) and National Institute of Environment Research (NIER) Clean Air Policy Support System (CAPSS) emission inventories are used for foreign and domestic emissions, respectively. In this study, we evaluate the CMAQ and CAMx model performance during the campaign by comparing the results to airborne and surface measurements. Contributions of foreign and domestic emissions are estimated using a brute force method. Analyses of model performance and emissions will be used to improve air quality forecasts for the upcoming KORUS-AQ field campaign planned for 2016.
Strategies for resolving conflict: their functional and dysfunctional sides.
Stimac, M
1982-01-01
Conflict in the workplace can have a beneficial effect. That is, if appropriately resolved, it plays an important part in effective problem solving, according to author Michele Stimac, associate dean, curriculum and instruction, and professor at Pepperdine University Graduate School of Education and Psychology. She advocates confrontation, by way of negotiation rather than brute force, as the best way to resolve conflict, heal wounds, reconcile the parties involved, and give the resolution long life. But she adds that if a person who has thought through when, where, and how to confront someone foresees only disaster, avoidance is the best path to take. The emphasis here is on strategy. Avoiding confrontation, for example, is not a strategic move unless it is backed by considered judgment. Stimac lays out these basic tenets for engaging in sound negotiation: (1) The confrontation should take place in neutral territory. (2) The parties should actively listen to each other. (3) Each should assert his or her right to fair treatment. (4) Each must allow the other to retain his or her dignity. (5) The parties should seek a consensus on the issues in conflict, their resolution, and the means of reducing any tension that results from the resolution. (6) The parties should exhibit a spirit of give and take, that is, of compromise. (7) They should seek satisfaction for all involved.
Testing the mutual information expansion of entropy with multivariate Gaussian distributions.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2017-12-14
The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
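Because everything is Gaussian, both the exact entropy and the second-order MIE can be evaluated in closed form; the sketch below does so for a small random covariance (the paper's covariances come from protein structures instead).

    import numpy as np

    def gaussian_entropy(cov):
        """Differential entropy of N(0, cov): 0.5*ln((2*pi*e)^m * det(cov))."""
        m = cov.shape[0]
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (m * np.log(2.0 * np.pi * np.e) + logdet)

    rng = np.random.default_rng(8)
    A = rng.normal(size=(6, 6))
    cov = A @ A.T + 6.0 * np.eye(6)          # symmetric positive definite

    m = cov.shape[0]
    s1 = sum(gaussian_entropy(cov[[i]][:, [i]]) for i in range(m))
    mi2 = 0.0                                 # sum of pairwise mutual informations
    for i in range(m):
        for j in range(i + 1, m):
            pair = cov[np.ix_([i, j], [i, j])]
            mi2 += (gaussian_entropy(cov[[i]][:, [i]])
                    + gaussian_entropy(cov[[j]][:, [j]])
                    - gaussian_entropy(pair))

    print("exact entropy:", gaussian_entropy(cov))
    print("MIE, order 2: ", s1 - mi2)         # gap = truncation error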
Kaija, A R; Wilmer, C E
2017-09-08
Designing better porous materials for gas storage or separations applications frequently leverages known structure-property relationships. Reliable structure-property relationships, however, only reveal themselves when adsorption data on many porous materials are aggregated and compared. Gathering enough data experimentally is prohibitively time consuming, and even approaches based on large-scale computer simulations face challenges. Brute force computational screening approaches that do not efficiently sample the space of porous materials may be ineffective when the number of possible materials is too large. Here we describe a general and efficient computational method for mapping structure-property spaces of porous materials that can be useful for adsorption related applications. We describe an algorithm that generates random porous "pseudomaterials", for which we calculate structural characteristics (e.g., surface area, pore size and void fraction) and also gas adsorption properties via molecular simulations. Here we chose to focus on void fraction and Xe adsorption at 1 bar, 5 bar, and 10 bar. The algorithm then identifies pseudomaterials with rare combinations of void fraction and Xe adsorption and mutates them to generate new pseudomaterials, thereby selectively adding data only to those parts of the structure-property map that are the least explored. Use of this method can help guide the design of new porous materials for gas storage and separations applications in the future.
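The generate-and-mutate loop can be caricatured in a few lines; here a pseudomaterial is collapsed to a single void-fraction parameter and "adsorption" is a noisy toy function, whereas the real workflow evaluates each material with molecular simulation.

    import numpy as np

    rng = np.random.default_rng(9)

    def toy_adsorption(void_fraction):
        """Placeholder structure-property map with noise (NOT a real model)."""
        return 8.0 * void_fraction * (1.0 - void_fraction) + 0.3 * rng.normal()

    # Seed generation: random void fractions and their adsorption values.
    pop = [(vf, toy_adsorption(vf)) for vf in rng.uniform(0.05, 0.95, 200)]

    for _ in range(5):
        pts = np.array(pop)
        # Bin the (void fraction, adsorption) plane; find sparsely populated bins.
        hist, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=10)
        counts = hist[np.digitize(pts[:, 0], xe[1:-1]),
                      np.digitize(pts[:, 1], ye[1:-1])]
        rare = pts[np.argsort(counts)[:20]]               # rare-combination parents
        # Mutate the rare parents to push the mapped frontier outward.
        for vf, _unused in rare:
            child = float(np.clip(vf + 0.05 * rng.normal(), 0.05, 0.95))
            pop.append((child, toy_adsorption(child)))

    print("population size after mutation rounds:", len(pop))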
Detecting rare, abnormally large grains by x-ray diffraction
Boyce, Brad L.; Furnish, Timothy Allen; Padilla, H. A.; ...
2015-07-16
Bimodal grain structures are common in many alloys, arising from a number of different causes including incomplete recrystallization and abnormal grain growth. These bimodal grain structures have important technological implications, such as the well-known Goss texture, which is now a cornerstone of electrical steels. Yet our ability to detect bimodal grain distributions is largely confined to brute-force cross-sectional metallography. The present study presents a new method for rapid detection of unusually large grains embedded in a sea of much finer grains. Traditional X-ray diffraction-based grain size measurement techniques such as Scherrer, Williamson–Hall, or Warren–Averbach rely on peak breadth and shape to extract information regarding the average crystallite size. However, these line-broadening techniques are not well suited to identifying a very small fraction of abnormally large grains. The present method utilizes statistically anomalous intensity spikes in the Bragg peak to identify regions where abnormally large grains are contributing to diffraction. This needle-in-a-haystack technique is demonstrated on a nanocrystalline Ni–Fe alloy which has undergone fatigue-induced abnormal grain growth. In this demonstration, the technique readily identifies a few large grains that occupy less than 0.00001% of the interrogation volume. While the technique is demonstrated here on a nanocrystalline metal, it should apply to any bimodal polycrystal, including ultrafine-grained and fine microcrystalline materials with sufficiently distinct bimodal grain statistics.
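A minimal version of the spike test is a robust outlier threshold along the azimuth of a Bragg ring; everything below (profile, counts, threshold) is synthetic and illustrative.

    import numpy as np

    rng = np.random.default_rng(10)
    azimuth = np.linspace(0.0, 360.0, 3600)
    ring = 1000 + rng.poisson(100, size=azimuth.size)   # fine-grain powder ring
    ring[1805:1812] += 5000                             # spot from one large grain

    median = np.median(ring)
    mad = np.median(np.abs(ring - median))              # robust scale estimate
    z = (ring - median) / (1.4826 * mad)

    print("spike azimuths (deg):", azimuth[z > 8].round(1))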
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
Sufi, Fahim; Khalil, Ibrahim
2009-04-01
With cardiovascular disease the number one killer of the modern era, the electrocardiogram (ECG) is collected, stored, and transmitted in greater frequency than ever before. In reality, however, ECG is rarely transmitted and stored in a secured manner. Recent research shows that an eavesdropper can reveal identity and cardiovascular condition from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and stored as such in medical repositories. To achieve this, this paper first presents a new ECG feature detection mechanism, which was compared against existing cross-correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Whereas the CC-based approaches had 40% and 53% misclassification rates, the proposed detection algorithm produced no misclassifications at all. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects, using added noises corresponding to each of the ECG features. The obfuscated ECG can be freely distributed over the internet without the need for encryption, since the original features required to identify personal information about the patient remain concealed. Only authorized personnel possessing a secret key are able to reconstruct the original ECG from the obfuscated one. The distributed obfuscated ECG would appear as a regular, unencrypted ECG. Therefore, traditional decryption techniques, including powerful brute-force attacks, are useless against this obfuscation.
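The key-based reversibility can be sketched with a seeded pseudorandom noise stream; this is a simplification (the paper adds feature-wise noise to detected ECG features, and real key handling matters), with all names invented.

    import numpy as np

    def keyed_noise(key, n):
        """Deterministic noise stream derived from a shared secret key."""
        return np.random.default_rng(key).normal(scale=0.5, size=n)

    key = 123456789                                      # shared secret
    ecg = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))   # stand-in signal

    obfuscated = ecg + keyed_noise(key, ecg.size)        # transmit/store this
    recovered = obfuscated - keyed_noise(key, ecg.size)  # authorized party only

    print("max reconstruction error:", np.abs(recovered - ecg).max())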
RCNP Project on Polarized {sup 3}He Ion Sources - From Optical Pumping to Cryogenic Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, M.; Inomata, T.; Takahashi, Y.
2009-08-04
A polarized ³He ion source has been developed at RCNP for intermediate- and high-energy spin physics. Though we started with an OPPIS (Optical Pumping Polarized Ion Source), it could not provide a highly polarized ³He beam because of fundamental difficulties. Following this unhappy result, we examined novel types of polarized ³He ion source, i.e., an EPPIS (Electron Pumping Polarized Ion Source) and an ECRPIS (ECR Polarized Ion Source), experimentally and theoretically, respectively. However, the attainable ³He polarization degrees and beam intensities were still insufficient for practical use. A few years later, we proposed a new idea for the polarized ³He ion source, SEPIS (Spin Exchange Polarized Ion Source), which is based on enhanced spin-exchange cross sections at low incident energies for ³He⁺ + Rb, and its feasibility was experimentally examined. Recently, we started a project on polarized ³He gas generated by the brute force method, with low temperature (~4 mK) and strong magnetic field (~17 T), followed by rapid melting of the highly polarized solid ³He and gasification. If this project is successful, highly polarized ³He gas will hopefully be used for a new type of polarized ³He ion source.
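For orientation, a back-of-envelope check of why the quoted conditions suffice: assuming the standard spin-1/2 Boltzmann (tanh) form and a textbook value for the ³He nuclear moment, the equilibrium polarization at 17 T and 4 mK is

    % Equilibrium spin-1/2 polarization in the brute-force method
    % (assumed values: |mu(3He)| ~ 1.07e-26 J/T, B = 17 T, T = 4 mK)
    P = \tanh\!\left(\frac{\mu B}{k_B T}\right)
      = \tanh\!\left(\frac{(1.07\times 10^{-26}\,\mathrm{J/T})(17\,\mathrm{T})}
                          {(1.38\times 10^{-23}\,\mathrm{J/K})(4\times 10^{-3}\,\mathrm{K})}\right)
      \approx \tanh(3.3) \approx 0.997

i.e. near-complete polarization, which is what makes the subsequent rapid-melting step attractive.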
Top-down constraints of regional emissions for KORUS-AQ 2016 field campaign
NASA Astrophysics Data System (ADS)
Bae, M.; Yoo, C.; Kim, H. C.; Kim, B. U.; Kim, S.
2017-12-01
Accurate estimates of emission rates from local and international sources are essential in regional air quality simulations, especially when assessing the relative contributions of international emission sources. While bottom-up constructions of emission inventories provide detailed information on specific emission types, they are limited in covering regions with rapidly changing anthropogenic emissions (e.g. China) or regions without sufficient socioeconomic information (e.g. North Korea). We utilized space-borne monitoring of major pollutant precursors to construct realistic emission inputs for chemistry transport models during the KORUS-AQ 2016 field campaign. The base simulation was conducted using a WRF, SMOKE, and CMAQ modeling framework with the CREATE 2015 (Asian countries) and CAPSS 2013 (South Korea) emission inventories. NOx, SO2, and VOC model emissions are adjusted using the ratios of modeled to observed NO2, SO2, and HCHO column densities, together with the model's emission-to-column-density conversion ratios. The brute force perturbation method was used to separate contributions from North Korea, China, and South Korea along flight pathways during the field campaign. The Backward-Tracking Model Analyzer (BMA), based on the NOAA HYSPLIT trajectory and dispersion model, was also utilized to track histories of chemical processes and emission source apportionment. CMAQ simulations were conducted over East Asia (27 km) and over South and North Korea (9 km) from 1 May to 10 June 2016, spanning the KORUS-AQ campaign.
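The core of the ratio-based adjustment is one line of arithmetic; the sketch below shows it for a single grid cell with invented numbers (real adjustments are applied per cell and sector, with the model supplying the emission-to-column conversion).

    base_emission = 120.0        # toy NOx emission in a cell (tons/day)
    model_column = 4.0e15        # simulated NO2 column (molecules/cm^2)
    observed_column = 5.2e15     # satellite NO2 column (molecules/cm^2)

    # Scale the prior emission by the observed-to-modeled column ratio.
    adjusted = base_emission * (observed_column / model_column)
    print(f"adjusted NOx emission: {adjusted:.1f} tons/day")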
Contribution of regional-scale fire events to ozone and PM2.5 ...
Two specific fires from 2011 are tracked for their local-to-regional-scale contributions to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, the Sparse Matrix Operator Kernel Emissions (SMOKE) model, the Weather Research and Forecasting (WRF) meteorological model, and the Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contributions of a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable between the source sensitivity (brute-force zero-out) and source apportionment (Integrated Source Apportionment Method) approaches. Model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating that the modeling system captures the range of O3 inhibition possible near fires and O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde, which are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas
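The two attribution approaches compared above reduce to different arithmetic; the sketch below mimics both on toy hourly O3 arrays (a real zero-out requires a full rerun with fire emissions removed, and real apportionment uses tagged species inside a single run).

    import numpy as np

    base = np.array([48.0, 55.0, 62.0, 70.0])     # O3 ppb, fire emissions on
    no_fire = np.array([45.0, 49.0, 52.0, 57.0])  # rerun, fire emissions zeroed

    # Source sensitivity (brute-force zero-out): difference of the two runs.
    zero_out = base - no_fire

    # Source apportionment (tagging): fire-attributed fraction within one run.
    tagged_fraction = np.array([0.05, 0.10, 0.15, 0.18])   # invented tags
    apportioned = base * tagged_fraction

    print("zero-out:   ", zero_out)
    print("apportioned:", apportioned)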
2011-12-01
Vass 2007; Australian Defence Force 2011). The mining industry is considered a direct competitor, as it has experienced rapid employment growth for...military must compete with the transportation, mining, engineering, construction and health sectors (Defence Force Recruiting 2010b). The Navy met many...national team, the Opals. The sponsorship was timed to promote the launching of the defencejobs website. This effort attempted to project the ADF brand
Anti-gravity and galaxy rotation curves
NASA Astrophysics Data System (ADS)
Sanders, R. H.
1984-07-01
A modification of Newtonian gravitational attraction which arises in the context of modern attempts to unify gravity with the other forces in nature can produce rotation curves for spiral galaxies which are nearly flat from 10 to 100 kpc, bind clusters of galaxies, and close the universe with the density of baryonic matter consistent with primordial nucleosynthesis. This is possible if a very low mass vector boson carries an effective anti-gravity force which on scales smaller than that of galaxies almost balances the normal attractive gravity force.
1980-10-01
Element, 64709N Prototype Manpower/Personnel Systems (U), Project Z1302-PN Officer Career Models (U), funded by the Office of the Deputy Assistant... Models for Navy Officer Billets portion of the proposed NPS research effort to develop an integrated officer system planning model; the purpose of this...attempting to model the Naval officer force structure as a system. This study considers the primary first-order factors which drive the requirements
General Roy S. Geiger, USMC: Marine Aviator, Joint Force Commander
2007-06-01
in a destroyed N-9 trainer.50 Geiger was attempting to land on Pensacola Bay, when a submarine surfaced at his intended point of landing... Corsairs, Hawks, Falcons, and SeaHawks, which replaced the WW I-vintage DH-4s, out of which the Marine Corps had wrung every ounce of utility. Brown...CACTUS Air Force possessed a force of over 200 aircraft, including the formidable F4U Corsair, and the Japanese military had adopted a defensive
Force balance on two-dimensional superconductors with a single moving vortex
NASA Astrophysics Data System (ADS)
Chung, Chun Kit; Arahata, Emiko; Kato, Yusuke
2014-03-01
We study forces on two-dimensional superconductors with a single moving vortex based on a recent fully self-consistent calculation of DC conductivity in an s-wave superconductor (E. Arahata and Y. Kato, arXiv:1310.0566). By considering momentum balance of the whole liquid, we attempt to identify various contributions to the total transverse force on the vortex. This provides an estimation of the effective Magnus force based on the quasiclassical theory generalized by Kita [T. Kita, Phys. Rev. B, 64, 054503 (2001)], which allows for the Hall effect in vortex states.
Personnel Retention Policy and Force Quality: Twice-Passed Staff Sergeants
2015-06-01
addressing a manager's impact on the success of an organization, which attempts to quantify the value of, and variation between, managers. Goodall and...in measuring the value of a manager (Goodall & Pogrebna, 2015). Branch, Hanushek, and Rivkin (2013) estimate the standard deviation of school...literature reveals leadership effects varying from 4 to 40 percent across a range of industries (Goodall & Pogrebna, 2015). Although there is no attempt to
Schaffer, Ayal; Isometsä, Erkki T; Tondo, Leonardo; Moreno, Doris H; Sinyor, Mark; Kessing, Lars Vedel; Turecki, Gustavo; Weizman, Abraham; Azorin, Jean-Michel; Ha, Kyooseob; Reis, Catherine; Cassidy, Frederick; Goldstein, Tina; Rihmer, Zoltán; Beautrais, Annette; Chou, Yuan-Hwa; Diazgranados, Nancy; Levitt, Anthony J; Zarate, Carlos A; Yatham, Lakshmi
2015-09-01
Bipolar disorder is associated with elevated risk of suicide attempts and deaths. Key aims of the International Society for Bipolar Disorders Task Force on Suicide included examining the extant literature on epidemiology, neurobiology and pharmacotherapy related to suicide attempts and deaths in bipolar disorder. Systematic review of studies from 1 January 1980 to 30 May 2014 examining suicide attempts or deaths in bipolar disorder, with a specific focus on the incidence and characterization of suicide attempts and deaths, genetic and non-genetic biological studies and pharmacotherapy studies specific to bipolar disorder. We conducted pooled, weighted analyses of suicide rates. The pooled suicide rate in bipolar disorder is 164 per 100,000 person-years (95% confidence interval = [5, 324]). Sex-specific data on suicide rates identified a 1.7:1 ratio in men compared to women. People with bipolar disorder account for 3.4-14% of all suicide deaths, with self-poisoning and hanging being the most common methods. Epidemiological studies report that 23-26% of people with bipolar disorder attempt suicide, with higher rates in clinical samples. There are numerous genetic associations with suicide attempts and deaths in bipolar disorder, but few replication studies. Data on treatment with lithium or anticonvulsants are strongly suggestive for prevention of suicide attempts and deaths, but additional data are required before relative anti-suicide effects can be confirmed. There were limited data on potential anti-suicide effects of treatment with antipsychotics or antidepressants. This analysis identified a lower estimated suicide rate in bipolar disorder than what was previously published. Understanding the overall risk of suicide deaths and attempts, and the most common methods, are important building blocks to greater awareness and improved interventions for suicide prevention in bipolar disorder. Replication of genetic findings and stronger prospective data on treatment options are required before more decisive conclusions can be made regarding the neurobiology and specific treatment of suicide risk in bipolar disorder. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Attempts by Descartes and Roberval to evaluate the centre of oscillation of compound pendulums.
Capecchi, Danilo
2014-01-01
This paper re-examines the first documented attempts to establish the quantitative law of motion for a body oscillating about a fixed axis (a compound pendulum). This is quite a complex problem as weight and motion are not concentrated in a point, but are spread over a volume. Original documents by René Descartes and Gilles Personne de Roberval, who made the first contributions to solving the problem, are discussed. The two scientists had important insights into the problem which, although they were incomplete, nevertheless somehow complemented each other, at least when seen from the viewpoint of modern mechanics. Descartes was right in considering only the absolute value of the inertia forces; Roberval was right in assuming that the force of gravity should also be taken into account.
Force Modelling in Orthogonal Cutting Considering Flank Wear Effect
NASA Astrophysics Data System (ADS)
Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.
2017-05-01
In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two different force models: a force model for a perfectly sharp tool, extended to account for the effect of edge radius, and a force model for a worn tool. The first force model is based on Oxley's predictive machining theory for orthogonal cutting; as Oxley's model assumes a perfectly sharp tool, the effect of cutting edge radius (hone radius) is added and an improved model is presented. The second force model, for a worn tool (flank wear), was proposed by Waldorf. Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the developed combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials and shows reasonably good agreement.
2015-06-01
Attempting different approaches to explore the best practice of optimizing mobile security and productivity is necessary to improve the...incredible kindness and unfathomable generosity. I am grateful to have watched Super Bowl XLIX in your living room, washed dirty clothes in your laundry room
Achieving equal opportunity in NASA: An assessment of needs and recommendations for action
NASA Technical Reports Server (NTRS)
1976-01-01
Measures designed by NASA to improve its equal opportunity program are reported. Attempts made to increase the ratios and level of placement of women and minority men in the work force were emphasized; upward mobility for those employees already in the work force was also studied. Ways of improving the track record of NASA's equal opportunity profile are recommended.
ERIC Educational Resources Information Center
Schnell, James A.
The U.S. Air Force attempts to minimize racial and sexual discrimination by incorporating federal laws into its infrastructure and by exercising consistent enforcement of these laws. This leads to a development of cross-cultural communication among various subcultures within a multi-ethnic environment. The Social Actions Office at each Air Force…
From the Red Ball Express to the Objective Force: A Quest for Logistics Transformation
2007-03-30
not support. In order to streamline materiel management to the force, Army Sustainment Command developed their Distribution Management Center...material management mission and the establishment and transfer of efforts to the Distribution Management Center, the Army Sustainment Command...attempt to bridge the capability gap. As the Distribution Management Center stands up at Rock Island Arsenal, they will assume responsibility for each
ERIC Educational Resources Information Center
National Inst. of Standards and Technology, Gaithersburg, MD.
Intended for public comment and discussion, this document is the second volume of papers in which the Information Infrastructure Task Force has attempted to articulate in clear terms, with sufficient detail, how improvements in the National Information Infrastructure (NII) can help meet other social goals. These are not plans to be enacted, but…
Validation and Improvement of Reliability Methods for Air Force Building Systems
focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force...probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a...Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
DefenseLink.mil - Special Report: Battle of the Bulge
World War II in a final desperate attempt to break and defeat Allied forces. The ensuing battle was the largest land battle involving American forces in World War II, with more than a million Allied troops on the lines of the Battle of the Bulge. The now 86-year-old returned to one of his former
PP and PS interferometric images of near-seafloor sediments
Haines, S.S.
2011-01-01
I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow. © 2011 Society of Exploration Geophysicists.
Schaffer, Ayal; Isometsä, Erkki T; Azorin, Jean-Michel; Cassidy, Frederick; Goldstein, Tina; Rihmer, Zoltán; Sinyor, Mark; Tondo, Leonardo; Moreno, Doris H; Turecki, Gustavo; Reis, Catherine; Kessing, Lars Vedel; Ha, Kyooseob; Weizman, Abraham; Beautrais, Annette; Chou, Yuan-Hwa; Diazgranados, Nancy; Levitt, Anthony J; Zarate, Carlos A; Yatham, Lakshmi
2015-11-01
Many factors influence the likelihood of suicide attempts or deaths in persons with bipolar disorder. One key aim of the International Society for Bipolar Disorders Task Force on Suicide was to summarize the available literature on the presence and magnitude of effect of these factors. A systematic review of studies published from 1 January 1980 to 30 May 2014 identified using keywords 'bipolar disorder' and 'suicide attempts or suicide'. This specific paper examined all reports on factors putatively associated with suicide attempts or suicide deaths in bipolar disorder samples. Factors were subcategorized into: (1) sociodemographics, (2) clinical characteristics of bipolar disorder, (3) comorbidities, and (4) other clinical variables. We identified 141 studies that examined how 20 specific factors influenced the likelihood of suicide attempts or deaths. While the level of evidence and degree of confluence varied across factors, there was at least one study that found an effect for each of the following factors: sex, age, race, marital status, religious affiliation, age of illness onset, duration of illness, bipolar disorder subtype, polarity of first episode, polarity of current/recent episode, predominant polarity, mood episode characteristics, psychosis, psychiatric comorbidity, personality characteristics, sexual dysfunction, first-degree family history of suicide or mood disorders, past suicide attempts, early life trauma, and psychosocial precipitants. There is a wealth of data on factors that influence the likelihood of suicide attempts and suicide deaths in people with bipolar disorder. Given the heterogeneity of study samples and designs, further research is needed to replicate and determine the magnitude of effect of most of these factors. This approach can ultimately lead to enhanced risk stratification for patients with bipolar disorder. © The Royal Australian and New Zealand College of Psychiatrists 2015.
Birth control sabotage and forced sex: experiences reported by women in domestic violence shelters.
Thiel de Bocanegra, Heike; Rostovtseva, Daria P; Khera, Satin; Godhwani, Nita
2010-05-01
Women who experience intimate partner violence often experience birth control sabotage, forced sex, and partner's unwillingness to use condoms. We interviewed 53 women at four domestic violence shelters. Participants reported that their abusive partners frequently refused to use condoms, impeded them from accessing health care, and subjected them to birth control sabotage, infidelity, and forced sex. However, women also reported strategies to counteract these actions, particularly against birth control sabotage and attempts to force them to abort or continue a pregnancy. Domestic violence counselors can focus on these successful strategies to validate coping skills and build self-esteem.
Carl Neumann versus Rudolf Clausius on the propagation of electrodynamic potentials
NASA Astrophysics Data System (ADS)
Archibald, Thomas
1986-09-01
In the late 1860s, German electromagnetic theorists employing W. Weber's velocity-dependent force law were forced to confront the issue of energy conservation. One attempt to formulate a conservation law for such forces was due to Carl Neumann, who introduced a model employing retarded potentials in 1868. Rudolf Clausius quickly pointed out certain problems with the physical interpretation of Neumann's mathematical formalism. The debate between the two men continued until the 1880s and illustrates the strictures facing mathematical approaches to physical problems during this prerelativistic, pre-Maxwellian period.
2015-05-21
from the front in every endeavor. The US Navy has formally adopted the mantra of "A global force for good, deployed for the betterment of humanity"...technocratic theme in a global activist status. The best example of an attempt to demilitarize a service for public consumption is that of "America's...Slogans by service: US Army, "Army Strong"; US Navy, "A global force for good"; US Air Force, "It's not science fiction, it's what we do everyday"; US Marine Corps, "America's forward
Southeast Asia Report, No. 1283.
1983-05-05
the national front of resistance, you will achieve new successes in the struggle against the Camp David plot in order to repel the aggression of the...fact stated earlier when its deputy foreign minister, Ha Van Lau, confirmed Vietnam's intentions to pursue the resistance forces across the border...into Kampuchea more than four years ago, is yet another attempt by Vietnam to undercut the growing strength of the resistance forces which had
Total Force Restructuring Under Sequestration and Austere Budget Reductions
2013-03-01
jointly released guidance on January 16, 2013 that addresses near-term expenditure reductions in an attempt to mitigate future risks.9 Guidance...implication for the Army is that the force structure decisions are still forthcoming and will bear much risk directly related to future levels of...their benefits and risks. The alternatives are also underpinned by assumptions that are designed to enhance their scope and not provide limitations
Chile, 1964-74: The Successes and Failures of Reformism
1975-09-22
upon mining for a major portion of government revenue, saddled with an agricultural system characterized by the inequities and inefficiencies of...the most votes, Allende and Alessandri. The Alessandri forces, in a gambit to upset traditional practice and invalidate the election, attempted a...and Allende's contradictory "peaceful transition to socialism" bring the country to the brink of civil war, the Armed Forces decided to dispense with
The Aerodynamic Forces on Airship Hulls
NASA Technical Reports Server (NTRS)
Munk, M. M.
1979-01-01
The new method for making computations in connection with the study of rigid airships, which was used in the investigation of the Navy's ZR-1 by the special subcommittee of the National Advisory Committee for Aeronautics appointed for this purpose, is presented. The general theory of the air forces on airship hulls of the type mentioned is described, and an attempt was made to develop the results from the very fundamentals of mechanics.
2015-07-01
pace of special operations deployments, but opportunities may exist to better balance the workload across the joint force because activities...executes funding in operation and maintenance; procurement; research, development, test, and evaluation; and military construction accounts.13 SOCOM...regional awareness. Moreover, officials noted that increases in civilian positions were driven partly by DOD's attempts to rebalance workload and become a
Technology, FID, and Afghanistan: A Model for Aviation Capacity
2017-04-05
Force. Through case study, it analyzes how FID definitions and goals eroded under political pressure. Following this, Afghanistan is used to show...national aviation technology capacity, where these nations are weak, and which societal strengths to leverage. Case studies demonstrate how it can be...the other way around. In the case of Afghanistan, the U.S. Air Force (USAF) attempted to cultivate advanced aviation capabilities within a low
NASA Astrophysics Data System (ADS)
Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît
2018-01-01
Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary steps: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes, and is also combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
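A minimal one-dimensional sketch of the neMD-MC cycle described above, under strong simplifications: the rugged potential, its smoothed "boosted" counterpart, the integrator, and the morphing schedule are all toy choices, and the full momentum refresh at each cycle stands in for the paper's symmetric two-end momentum reversal:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0

def U(x):           # toy rugged ("correct") potential energy surface
    return 0.25 * x**4 - 2.0 * x**2 + 0.5 * np.sin(8 * x)

def U_smooth(x):    # smoothed ("boosted") surface used during the neMD phase
    return 0.25 * x**4 - 2.0 * x**2

def grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def leapfrog(x, p, f, steps, dt=0.01):
    for _ in range(steps):
        p -= 0.5 * dt * grad(f, x)
        x += dt * p
        p -= 0.5 * dt * grad(f, x)
    return x, p

x, samples = 1.0, []
for _ in range(2000):
    # (i) equilibrium MD on the correct surface (momenta refreshed each cycle)
    p = rng.normal()
    x, p = leapfrog(x, p, U, steps=20)
    # (ii) boosting phase: morph the Hamiltonian to the smooth surface and back
    xt, pt = x, p
    for lam in np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)]):
        xt, pt = leapfrog(xt, pt,
                          lambda y, l=lam: (1 - l) * U(y) + l * U_smooth(y),
                          steps=1)
    # (iii) Metropolis test on the total-energy change over the neMD segment
    dE = (U(xt) + 0.5 * pt**2) - (U(x) + 0.5 * p**2)
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        x = xt
    samples.append(x)
print("mean position:", np.mean(samples))
```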
Molecular Dynamics Simulations of Protein-Ligand Complexes in Near Physiological Conditions
NASA Astrophysics Data System (ADS)
Wambo, Thierry Oscar
Proteins are important molecules because of the key functions they perform. However, under certain circumstances, the function of these proteins needs to be regulated to keep us healthy. Ligands are small molecules often used to modulate the function of proteins. The binding affinity is a quantitative measure of how strongly the ligand will modulate the function of the protein: a strong binding affinity will greatly impact the performance of the protein. It is therefore critical to have appropriate techniques to accurately compute the binding affinity. The most difficult task in computer simulations is how to efficiently sample the space spanned by the ligand during the binding process. In this work, we have developed schemes to compute the binding affinity of a ligand to a protein, and of a metal ion to a protein. Application of these techniques to several complexes yields results in agreement with experimental values. These methods are a brute-force approach and make no assumption other than that the complexes are governed by the force field used. Specifically, we computed the free energy of binding between (1) human carbonic anhydrase II and the drug acetazolamide (hcaII-AZM), (2) human carbonic anhydrase II and the zinc ion (hcaII-Zinc), and (3) beta-lactoglobulin and five fatty acids (BLG-FAs). We found the following free energies of binding, in kcal/mol: -12.96 +/- 2.44 (-15.74) for the hcaII-Zinc complex, -5.76 +/- 0.76 (-5.57) for BLG-OCA, -4.44 +/- 1.08 (-5.22) for BLG-DKA, -6.89 +/- 1.25 (-7.24) for BLG-DAO, -8.57 +/- 0.82 (-8.14) for BLG-MYR, -8.99 +/- 0.87 (-8.72) for BLG-PLM, and -11.87 +/- 1.8 (-10.8) for hcaII-AZM. The values inside the parentheses are experimental results. The simulations and quantitative analysis of each system provide interesting insights into the interactions between each entity and help us to better understand the dynamics of these systems.
The Space Shuttle: An Attempt at Low-Cost, Routine Access to Space
1990-09-01
thinking on new heavy-lift launch systems. The thesis objective is to show the Space Shuttle was an attempt at developing routine, low-cost access to... development costs were those used to create a launch facility at Vandenberg Air Force Base. DOD agreed in 1971 not to develop any new launch vehicles...booster. • To reduce the design weight of the Shuttle so as not to decrease the 65,000-pound payload capability. • To develop a new thermal protection
Toward an Integration of Deep Learning and Neuroscience
Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P.
2016-01-01
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. PMID:27683554
Wang, Yong; Tang, Chun; Wang, Erkang; Wang, Jin
2012-01-01
An increasing number of biological machines have been revealed to have more than two macroscopic states. Quantifying the underlying multiple-basin functional landscape is essential for understanding their functions. However, the present models seem to be insufficient to describe such multiple-state systems. To meet this challenge, we have developed a coarse-grained triple-basin structure-based model with implicit ligand. Based on our model, the constructed functional landscape is sufficiently sampled by brute-force molecular dynamics simulation. We explored maltose-binding protein (MBP), which undergoes large-scale domain motion between open, apo-closed (partially closed) and holo-closed (fully closed) states in response to ligand binding. Through quantitative flux analysis, we revealed an underlying mechanism whereby major induced-fit and minor population-shift pathways co-exist. We found that the hinge regions play an important role in the functional dynamics and that increases in their flexibility promote population shifts. This finding provides a theoretical explanation of the mechanistic discrepancies in the PBP protein family. We also found a functional "backtracking" behavior that favors conformational change. We further explored the underlying folding landscape in response to ligand binding. Consistent with earlier experimental findings, the presence of ligand increases the cooperativity and stability of MBP. This work provides the first study to explore the folding dynamics and functional dynamics under the same theoretical framework using our triple-basin functional model. PMID:22532792
Symmetric encryption algorithms using chaotic and non-chaotic generators: A review
Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.
2015-01-01
This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold's cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using substitution only (fractals), permutation only (chess-based), or both. Moreover, two different permutation scenarios are presented where the permutation phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, sensitivities of those different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research with respect to the used generators, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
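Two of the differential-attack measures cited above, NPCR (number of pixels change rate) and UACI (unified average changing intensity), are easy to state precisely. A short sketch of both metrics, using random arrays in place of real cipher images:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixels that differ between two cipher images.
    UACI: mean absolute intensity difference, normalized to 255.
    c1, c2: uint8 arrays of equal shape, ciphertexts of plaintexts that
    differ in a single bit."""
    npcr = 100.0 * (c1 != c2).mean()
    uaci = 100.0 * (np.abs(c1.astype(float) - c2.astype(float)) / 255.0).mean()
    return npcr, uaci

a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
b = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))   # ideal ciphers: NPCR ~ 99.6%, UACI ~ 33.46%
```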
GEMINI: a computationally-efficient search engine for large gene expression datasets.
DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick
2016-02-24
Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number, or meta-data tag. However, in this search paradigm the form of the query (a text-based string) is mismatched with the form of the target (a genomic profile). To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an O(log n) expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec compared to a brute-force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
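A vantage-point tree of the kind GEMINI builds partitions records by distance to a chosen vantage point and prunes whole subtrees during queries. A compact, self-contained sketch; the class layout and median split are illustrative, not GEMINI's actual implementation:

```python
import numpy as np

class VPTree:
    """Minimal vantage-point tree for exact Euclidean nearest-neighbor search."""
    def __init__(self, points):
        self.points = np.asarray(points)
        self.root = self._build(np.arange(len(points)))

    def _build(self, idx):
        if len(idx) == 0:
            return None
        vp, rest = idx[0], idx[1:]        # first index serves as vantage point
        if len(rest) == 0:
            return (vp, 0.0, None, None)
        d = np.linalg.norm(self.points[rest] - self.points[vp], axis=1)
        mu = np.median(d)                 # split radius
        return (vp, mu, self._build(rest[d < mu]), self._build(rest[d >= mu]))

    def query(self, q):
        self.best, self.best_d = None, np.inf
        self._search(self.root, np.asarray(q))
        return self.best, self.best_d

    def _search(self, node, q):
        if node is None:
            return
        vp, mu, inner, outer = node
        d = np.linalg.norm(self.points[vp] - q)
        if d < self.best_d:
            self.best, self.best_d = vp, d
        # descend into a side only if it could still hold a closer point
        if d < mu:
            self._search(inner, q)
            if d + self.best_d >= mu:
                self._search(outer, q)
        else:
            self._search(outer, q)
            if d - self.best_d <= mu:
                self._search(inner, q)

profiles = np.random.rand(10000, 50)      # toy "expression profiles"
tree = VPTree(profiles)
idx, dist = tree.query(profiles[42] + 0.01 * np.random.rand(50))
print(idx, dist)                          # should recover index 42
```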
Medical data sheet in safe havens - A tri-layer cryptic solution.
Praveenkumar, Padmapriya; Amirtharajan, Rengarajan; Thenmozhi, K; Balaguru Rayappan, John Bosco
2015-07-01
Secured sharing of the diagnostic reports and scan images of patients among doctors with complementary expertise for collaborative treatment will help to provide maximum care through faster and decisive decisions. In this context, a tri-layer cryptic solution has been proposed and implemented on Digital Imaging and Communications in Medicine (DICOM) images to establish secured communication for effective referrals among peers without compromising the privacy of patients. In this approach, a blend of three cryptic schemes, namely the Latin square image cipher (LSIC), the discrete Gould transform (DGT) and Rubik's encryption, has been adopted. Among them, LSIC provides better substitution, confusion and shuffling of the image blocks; DGT incorporates tamper proofing with authentication; and Rubik's renders a permutation of DICOM image pixels. The developed algorithm has been successfully implemented and tested in both software (MATLAB 7) and hardware (Universal Software Radio Peripheral, USRP) environments. Specifically, the encrypted data were tested by transmitting them through an additive white Gaussian noise (AWGN) channel model. Furthermore, the robustness of the implemented algorithm was validated by employing standard metrics such as the unified average changing intensity (UACI), number of pixels change rate (NPCR), correlation values and histograms. The estimated metrics have also been compared with those of existing methods and show superiority in terms of a large key space to defy brute-force attacks, resistance to cropping attacks, strong key sensitivity and uniform pixel value distribution after encryption. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are iteratively performed. After these steps, exceptional mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
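A sketch of the ORB + brute-force matching + RANSAC pipeline using OpenCV. Here RANSAC is run through fundamental-matrix estimation, and the Hamming-distance cutoff is an assumed stand-in for the paper's Euclidean Distance Constraint; thresholds and file names are illustrative:

```python
import cv2
import numpy as np

def match_stereo(img_left, img_right, dist_thresh=40.0):
    """ORB extraction, brute-force matching, and RANSAC outlier rejection."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # brute-force matcher with Hamming norm, suited to binary BRIEF descriptors
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [m for m in bf.match(des1, des2) if m.distance < dist_thresh]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC keeps only correspondences consistent with epipolar geometry
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    return [m for m, keep in zip(matches, mask.ravel()) if keep]

# usage (illustrative file names):
# good = match_stereo(cv2.imread("left.png", 0), cv2.imread("right.png", 0))
```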
Community-aware task allocation for social networked multiagent systems.
Wang, Wanyuan; Jiang, Yichuan
2014-09-01
In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that maximizing the overall system profit remains NP-hard. To solve this problem effectively, we present a heuristic algorithm composed of three phases: 1) task selection: select the most desirable task to allocate first; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap-agent-first and breadth-first resource negotiation mechanism. Through theoretical analyses and experiments, the advantages of the presented heuristic algorithm and community-aware task allocation model are validated: 1) the heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network-flow-based greedy algorithm in terms of overall system profit in small-scale applications, and in large-scale applications it achieves approximately the same overall system profit while significantly reducing the computational load compared with the greedy algorithm; 2) the community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and greatly improves the overall system profit compared with the previous local neighbor-aware task allocation model.
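A toy sketch of the three-phase heuristic under strongly simplified assumptions (scalar resource demands and capacities, highest-profit task first, largest-capacity agent first); all names and data structures are illustrative, not the paper's formulation:

```python
def allocate(tasks, communities):
    """Phase 1: pick the most significant (highest-profit) task first.
    Phase 2: choose a community with enough free resources to host it.
    Phase 3: assign resources from that community's agents only, since
    cooperation is confined within the community."""
    plan = []
    for name, profit, demand in sorted(tasks, key=lambda t: -t[1]):
        for com, agents in communities.items():
            if sum(agents.values()) >= demand:        # community can host task
                for agent in sorted(agents, key=agents.get, reverse=True):
                    take = min(agents[agent], demand)
                    agents[agent] -= take
                    demand -= take
                    plan.append((name, com, agent, take))
                    if demand == 0:
                        break
                break
    return plan

tasks = [("t1", 9, 5), ("t2", 4, 3)]                  # (name, profit, demand)
communities = {"c1": {"a1": 4, "a2": 3}, "c2": {"a3": 6}}
print(allocate(tasks, communities))
```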
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet analysis cannot provide useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed across different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by means of brute-force search algorithms costs too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms are selected to search the optimal parameters of the IMFs in this paper: the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms. The effectiveness of our proposed method is demonstrated on both simulated data and PA signals from real biomedical tissue, and it might bear potential for future clinical PA imaging de-noising.
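A sketch of the IMF-weighting idea with a simulated-annealing search. It assumes the PyEMD package (pip install EMD-signal) and uses correlation with a known clean signal as a surrogate quality measure, since the paper's image-quality objective is not reproduced here; the signal, step size, and cooling schedule are all toy choices:

```python
import numpy as np
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # toy PA-like transient
noisy = clean + 0.5 * rng.normal(size=t.size)

imfs = EMD().emd(noisy)                              # adaptive decomposition

def quality(w):
    """Surrogate quality: correlation of the weighted reconstruction with the
    clean signal (real PA data would need a no-reference quality metric)."""
    rec = (w[:, None] * imfs).sum(axis=0)
    return np.corrcoef(rec, clean)[0, 1]

# simulated annealing over per-IMF weights in [0, 1]
w = np.ones(len(imfs))
cur_q, T = quality(w), 1.0
for _ in range(2000):
    cand = np.clip(w + 0.1 * rng.normal(size=w.size), 0, 1)
    q = quality(cand)
    if q > cur_q or rng.random() < np.exp((q - cur_q) / T):
        w, cur_q = cand, q
    T *= 0.999                                       # geometric cooling
print("IMF weights:", np.round(w, 2), " quality:", round(cur_q, 3))
```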
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. The solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there are several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
Tenti, Lorenzo; Maynau, Daniel; Angeli, Celestino; Calzado, Carmen J
2016-07-21
A new strategy based on orthogonal valence-bond analysis of the wave function combined with intermediate Hamiltonian theory has been applied to the evaluation of the magnetic coupling constants in two antiferromagnetic (AF) systems. This approach provides both a quantitative estimate of the J value and a detailed analysis of the main physical mechanisms controlling the coupling, using a combined perturbative + variational scheme. The procedure requires a selection of the dominant excitations to be treated variationally. Two methods have been employed: a brute-force selection, using a logic similar to that of the CIPSI approach, or entanglement measures, which identify the most strongly interacting orbitals in the system. Once a reduced set of excitations (about 300 determinants) is established, the interaction matrix is dressed at second order of perturbation by the remaining excitations of the CI space. The diagonalization of the dressed matrix provides J values in good agreement with experimental ones, at very low cost. This approach demonstrates the key role of d → d* excitations in the quantitative description of the magnetic coupling, as well as the importance of using an extended active space, including the bridging ligand orbitals, for the binuclear model of the intermediates of multicopper oxidases. The method is a promising tool for dealing with complex systems containing several active centers, as an alternative to both pure variational and DFT approaches.
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute-force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
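The post-processing idea reduces to importance reweighting: each recorded history is rescaled by the ratio of the new source density to the density actually sampled, so new source spectra cost only a pass over the tally file. A toy sketch with a synthetic tally file (the spectra and tally model are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Recorded per-history data from ONE transport run: the source energy each
# history was born with (sampled uniformly on [0, 10] MeV) and its tally score.
E_src = rng.uniform(0.0, 10.0, 100_000)
score = np.exp(-0.3 * E_src) * rng.exponential(1.0, E_src.size)   # toy tally

def retally(pdf_new, pdf_sim):
    """Estimate the tally for a NEW source spectrum by reweighting the
    recorded histories instead of re-running the transport simulation."""
    w = pdf_new(E_src) / pdf_sim(E_src)
    return np.mean(w * score)

uniform = lambda E: np.full_like(E, 1.0 / 10.0)     # spectrum actually run

grid = np.linspace(0.0, 10.0, 2001)                 # normalize a peaked spectrum
norm = np.trapz(np.exp(-(grid - 2.0) ** 2 / 0.5), grid)
peaked = lambda E: np.exp(-(E - 2.0) ** 2 / 0.5) / norm

print("uniform-source tally:", retally(uniform, uniform))
print("2 MeV-peaked tally  :", retally(peaked, uniform))
```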
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
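The cell-by-cell mapping in decreasing-likelihood order can be realized with a priority queue over frontier cells, stopping once the best remaining candidate falls a fixed number of log-likelihood units below the peak. A simplified serial sketch (Snake itself is fully parallelized; the threshold and toy likelihood are illustrative):

```python
import heapq
import numpy as np

def snake_explore(loglike, start, threshold):
    """Visit grid cells in order of decreasing likelihood, expanding the
    neighbors of visited cells, and stop when the best unvisited candidate
    is `threshold` log-units below the peak; all other cells are ignored."""
    ndim = len(start)
    visited = {}
    heap = [(-loglike(start), start)]       # max-heap via negated log-likelihood
    seen = {start}
    peak = -np.inf
    while heap:
        negL, cell = heapq.heappop(heap)
        L = -negL
        peak = max(peak, L)
        if L < peak - threshold:
            break                            # everything left is negligible
        visited[cell] = L
        for d in range(ndim):                # push the 2*ndim grid neighbors
            for step in (-1, 1):
                nb = list(cell); nb[d] += step; nb = tuple(nb)
                if nb not in seen:
                    seen.add(nb)
                    heapq.heappush(heap, (-loglike(nb), nb))
    return visited

# toy 2-D Gaussian likelihood on a grid of 0.2-sigma cells
ll = lambda c: -0.5 * (0.2 * c[0]) ** 2 - 0.5 * (0.2 * c[1]) ** 2
cells = snake_explore(ll, (0, 0), threshold=6.0)
print(len(cells), "cells mapped")            # far fewer than a full grid
```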
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental to all types of hydro system modeling from its beginnings, as a way to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. Also, the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee an existing parameter solution. The input data for the objective function derive from a hydro-morphodynamic numerical model which represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by different candidate optimization methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further ones yield parameter solutions that range outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach present the optimal optimization results. Keywords: automated calibration of hydro-morphodynamic numerical models, Bayesian inference theory, deterministic optimization methods.
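A minimal sketch of this kind of method comparison using scipy.optimize on a toy forward model with synthetic measurements; the model, bounds, and method list are illustrative stand-ins for the hydro-morphodynamic setup:

```python
import numpy as np
from scipy.optimize import minimize

def objective(theta, k_true=(0.03, 1.2)):
    """Sum of squared residuals between 'synthetic measurements' (model
    output at known true parameters) and model output at trial theta.
    The exponential model is a toy stand-in for the numerical simulator."""
    x = np.linspace(0, 1, 50)
    model = lambda k: k[0] * np.exp(k[1] * x)
    return np.sum((model(k_true) - model(theta)) ** 2)

x0 = np.array([0.1, 0.5])
bounds = [(1e-4, 1.0), (0.1, 5.0)]          # enforce physical parameter limits
for method in ("Nelder-Mead", "L-BFGS-B", "SLSQP"):
    res = minimize(objective, x0, method=method,
                   bounds=bounds if method != "Nelder-Mead" else None)
    print(f"{method:12s} -> theta={np.round(res.x, 4)}, f={res.fun:.2e}")
```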
Reference ability neural networks and behavioral performance across the adult life span.
Habeck, Christian; Eich, Teal; Razlighi, Ray; Gazes, Yunglin; Stern, Yaakov
2018-05-15
To better understand the impact of aging, along with other demographic and brain health variables, on the neural networks that support different aspects of cognitive performance, we applied a brute-force search technique based on Principal Components Analysis to derive 4 corresponding spatial covariance patterns (termed Reference Ability Neural Networks, RANNs) from a large sample of participants across the age range. 255 clinically healthy, community-dwelling adults, aged 20-77, underwent fMRI while performing 12 tasks, 3 tasks for each of the following cognitive reference abilities: Episodic Memory, Reasoning, Perceptual Speed, and Vocabulary. The derived RANNs (1) showed selective activation for their specific cognitive domain and (2) correlated with behavioral performance. Quasi-out-of-sample replication with Monte Carlo 5-fold cross-validation was built into our approach, and all patterns indicated their corresponding reference ability and predicted performance in held-out data to a degree significantly greater than chance level. RANN-pattern expression for Episodic Memory, Reasoning and Vocabulary was associated selectively with age, while the pattern for Perceptual Speed showed no such age-related influences. For each participant we also looked at residual activity unaccounted for by the RANN pattern derived for the cognitive reference ability. Higher residual activity was associated with poorer brain-structural health and older age, but, apart from Vocabulary, not with cognitive performance, indicating that older participants with worse brain-structural health might recruit alternative neural resources to maintain performance levels. Copyright © 2018 Elsevier Inc. All rights reserved.
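A strongly simplified sketch of deriving a covariance pattern whose subject-wise expression tracks behavior, via PCA followed by regression in component space; the RANN derivation additionally uses brute-force component selection and cross-validation, which are omitted here, and all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_vox, k = 255, 5000, 20

# synthetic data: one spatial pattern u whose subject-wise expression g
# drives behavior, buried in voxel noise
u = np.zeros(n_vox); u[:50] = 1.0 / np.sqrt(50)
g = rng.normal(size=n_subj)
X = rng.normal(size=(n_subj, n_vox)) + np.outer(8.0 * g, u)
behavior = g + 0.5 * rng.normal(size=n_subj)

# PCA via SVD of the mean-centered subject-by-voxel matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :k] * s[:k]                    # subject expressions of k PCs

# least-squares combination of PCs whose expression predicts behavior
coef, *_ = np.linalg.lstsq(scores, behavior - behavior.mean(), rcond=None)
pattern = Vt[:k].T @ coef                    # voxel-space covariance pattern
expression = Xc @ pattern
print("pattern-behavior r:", round(np.corrcoef(expression, behavior)[0, 1], 3))
```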
A new class of enhanced kinetic sampling methods for building Markov state models
NASA Astrophysics Data System (ADS)
Bhoutekar, Arti; Ghosh, Susmita; Bhattacharya, Swati; Chatterjee, Abhijit
2017-10-01
Markov state models (MSMs) and other related kinetic network models are frequently used to study the long-timescale dynamical behavior of biomolecular and materials systems. MSMs are often constructed bottom-up using brute-force molecular dynamics (MD) simulations when the model contains a large number of states and kinetic pathways that are not known a priori. However, the resulting network generally encompasses only parts of the configurational space, and regardless of any additional MD performed, several states and pathways will still remain missing. This implies that the duration for which the MSM can faithfully capture the true dynamics, which we term the validity time of the MSM, is always finite and unfortunately much shorter than the MD time invested to construct the model. A general framework that relates the kinetic uncertainty in the model to the validity time, missing states and pathways, network topology, and statistical sampling is presented. Performing additional calculations for frequently sampled states/pathways may not alter the MSM validity time. A new class of enhanced kinetic sampling techniques is introduced that aims at targeting rare states/pathways that contribute most to the uncertainty, so that the validity time is boosted in an effective manner. Examples including straightforward 1D energy landscapes, lattice models, and biomolecular systems are provided to illustrate the application of the method. Developments presented here will be of interest to the kinetic Monte Carlo community as well.
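A bottom-up MSM is estimated by counting lagged transitions in a state-discretized trajectory and row-normalizing; states never visited remain empty rows, which is exactly the missing-state problem discussed above. A minimal sketch:

```python
import numpy as np

def build_msm(traj, n_states, lag=1):
    """Count transitions at a given lag time in a discretized trajectory and
    row-normalize to get the MSM transition matrix. Unvisited states keep
    zero rows: the 'missing states' that bound the model's validity time."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    T = np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)
    return C, T

# toy trajectory over 4 nominal states that never visits state 3
traj = np.array([0, 0, 1, 1, 2, 1, 0, 2, 2, 1, 0, 0])
C, T = build_msm(traj, n_states=4)
print(T)   # the row for state 3 is all zeros: a missing state/pathway
```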
NASA Astrophysics Data System (ADS)
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into one of finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute-force methods simply preset it to a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
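A short sketch of the ESS diagnostic used to score the mixture proposal, computed from log-weights for numerical stability (a standard construction, shown here under the assumption that unnormalized log importance weights are available from the IS draws):

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS = (sum w)^2 / sum(w^2), from unnormalized log importance weights."""
    w = np.exp(log_w - np.max(log_w))   # shift to avoid overflow
    return w.sum() ** 2 / np.sum(w ** 2)

# an ESS close to the number of draws indicates the mixture proposal
# resembles the posterior well; a small ESS would trigger another
# delete/merge/add adaptation of the mixture components
print(effective_sample_size(np.array([-1.2, -0.8, -3.5, -0.9])))
```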
Saleem, Kashif; Derhab, Abdelouahid; Orgun, Mehmet A; Al-Muhtadi, Jalal; Rodrigues, Joel J P C; Khalil, Mohammed Sayim; Ali Ahmed, Adel
2016-03-31
The deployment of intelligent remote surveillance systems depends on wireless sensor networks (WSNs) composed of various miniature resource-constrained wireless sensor nodes. The development of routing protocols for WSNs is a major challenge because of their severe resource constraints, ad hoc topology and dynamic nature. Among those proposed routing protocols, the biology-inspired self-organized secure autonomous routing protocol (BIOSARP) involves an artificial immune system (AIS) that requires a certain amount of time to build up knowledge of neighboring nodes. The AIS algorithm uses this knowledge to distinguish between self and non-self neighboring nodes. The knowledge-building phase is a critical period in the WSN lifespan and requires active security measures. This paper proposes an enhanced BIOSARP (E-BIOSARP) that incorporates a random key encryption mechanism in a cost-effective manner to provide active security measures in WSNs. A detailed description of E-BIOSARP is presented, followed by an extensive security and performance analysis to demonstrate its efficiency. A scenario with E-BIOSARP is implemented in network simulator 2 (ns-2) and is populated with malicious nodes for analysis. Furthermore, E-BIOSARP is compared with state-of-the-art secure routing protocols in terms of processing time, delivery ratio, energy consumption, and packet overhead. The findings show that the proposed mechanism can efficiently protect WSNs from selective forwarding, brute-force or exhaustive key search, spoofing, eavesdropping, replaying or altering of routing information, cloning, acknowledgment spoofing, HELLO flood attacks, and Sybil attacks.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
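A compact sketch of subset simulation for P[g(x) ≥ 0] with standard-normal inputs, using a simplified random-walk variant of the modified Metropolis step; the mapping of the stochastic simulation algorithm onto standard normals, as done in the paper, is assumed to be wrapped inside `g`:

```python
import numpy as np

def subset_simulation(g, dim, n=2000, p0=0.1, seed=0):
    """Estimate the rare-event probability P[g(x) >= 0] for x ~ N(0, I) as a
    product of conditional probabilities, each held near p0 by adaptive
    intermediate thresholds."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    v = np.apply_along_axis(g, 1, x)
    prob = 1.0
    for _ in range(20):                              # cap on the number of levels
        level = np.quantile(v, 1.0 - p0)
        if level >= 0.0:                             # reached the target event
            return prob * np.mean(v >= 0.0)
        prob *= p0                                   # one conditional level passed
        x, v = x[v > level], v[v > level]            # seeds inside the level set
        while len(x) < n:                            # repopulate by MCMC
            i = rng.integers(len(x))
            cand = x[i] + 0.5 * rng.standard_normal(dim)
            if rng.random() < np.exp(0.5 * (x[i] @ x[i] - cand @ cand)):
                vc = g(cand)
                if vc > level:                       # stay within the level set
                    x, v = np.vstack([x, cand]), np.append(v, vc)
                    continue
            x, v = np.vstack([x, x[i]]), np.append(v, v[i])
    return prob * np.mean(v >= 0.0)

# toy check: the true probability for this g is about 1.3e-3
print(subset_simulation(lambda x: x.sum() / np.sqrt(len(x)) - 3.0, dim=10))
```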
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
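A toy illustration of the surrogate-model idea, with a cheap polynomial fit standing in for the reduced order model and a closed-form function standing in for the expensive simulation code (both are hypothetical placeholders, not the RISMC tools):

```python
import numpy as np

def expensive_code(x):
    # placeholder for an hours-long thermal-hydraulic simulation run
    return np.sin(3.0 * x) + 0.5 * x ** 2

# train the surrogate on a small, affordable set of code runs
x_train = np.linspace(0.0, 2.0, 15)
surrogate = np.poly1d(np.polyfit(x_train, expensive_code(x_train), deg=8))

# the risk analysis then queries the surrogate instead of the code
x_query = np.linspace(0.0, 2.0, 100_000)     # 100,000 "runs" in milliseconds
print("max surrogate error:",
      np.max(np.abs(surrogate(x_query) - expensive_code(x_query))))
```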
Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E
2016-08-12
Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed much more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class-distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while consistently excluding all but one of the nineteen benchmarked false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
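For reference, the core two-class Fisher Ratio for one tile and a permutation-based null threshold can be sketched as follows; this is a generic construction consistent with the description above, and the tiling of the GC×GC-TOFMS signal itself is omitted:

```python
import numpy as np

def fisher_ratio(a, b):
    """Two-class F-ratio for one tile: between-class variance (1 d.o.f.)
    over pooled within-class variance."""
    na, nb = len(a), len(b)
    grand = np.concatenate([a, b]).mean()
    between = na * (a.mean() - grand) ** 2 + nb * (b.mean() - grand) ** 2
    within = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return between / within

def null_threshold(a, b, n_perm=1000, q=95.0, seed=0):
    """Threshold from a label-permutation null distribution; tiles with
    F-ratios below it are treated as not class-distinguishing."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    null = []
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        null.append(fisher_ratio(p[:len(a)], p[len(a):]))
    return np.percentile(null, q)
```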
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
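The brute-force matching step can be sketched as a grid search over small attitude corrections; `project_skyline` is a hypothetical callable that re-projects the LiDAR skyline points into the image under a corrected registration, and the cost function shown is only one plausible choice:

```python
import numpy as np

def brute_force_align(skyline_pixels, project_skyline, span=2.0, step=0.05):
    """Grid-search roll/pitch/yaw corrections (degrees) minimizing the mean
    distance between skyline pixels and projected LiDAR skyline points."""
    best, best_cost = (0.0, 0.0, 0.0), np.inf
    grid = np.arange(-span, span + step, step)
    for dr in grid:
        for dp in grid:
            for dy in grid:
                proj = project_skyline(dr, dp, dy)       # (N, 2) pixel coordinates
                cost = np.mean(np.abs(skyline_pixels - proj))
                if cost < best_cost:
                    best, best_cost = (dr, dp, dy), cost
    return best, best_cost
```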
Tanks Versus Infantry in a Smoke Environment (TISE)
1978-08-01
maneuvering toward stationary armor vehicles in an attempt to detect and recognize them. Finally, Part IV trials were limited free-play, two-sided...long. Each armor vehicle lane (average 40 meters in width) was marked on the ground by white tape. These markings were removed for part IV free-play trials...recognize them. Data were collected from both sides. (4) Part IV was free-play, force-on-force engagement trials. Defensive positions were tactically
Expectations and Realities in Academic Biology.
ERIC Educational Resources Information Center
Nanney, David L.
1988-01-01
Provides a historical look at the development of academic biology. Attempts to project some presently recognized forces and organizational structures influencing the development of the field. Differentiates how the discipline is now as compared with the past. (TW)
The descent of ant: field-measured performance of gliding ants.
Munk, Yonatan; Yanoviak, Stephen P; Koehl, M A R; Dudley, Robert
2015-05-01
Gliding ants avoid predatory attacks and potentially mortal consequences of dislodgement from rainforest canopy substrates by directing their aerial descent towards nearby tree trunks. The ecologically relevant measure of performance for gliding ants is the ratio of net horizontal to vertical distance traveled over the course of a gliding trajectory, or glide index. To study variation in glide index, we measured three-dimensional trajectories of Cephalotes atratus ants gliding in natural rainforest habitats. We determined that righting phase duration, glide angle, and path directness all significantly influence variation in glide index. Unsuccessful landing attempts result in the ant bouncing off its target and being forced to make a second landing attempt. Our results indicate that ants are not passive gliders and that they exert active control over the aerodynamic forces they experience during their descent, despite their apparent lack of specialized control surfaces. © 2015. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Edgerton, V. Reggie; Roy, Roland R.; Hodgson, John A.
1993-01-01
The 6 weeks preflight activities of the Cosmos project during 1993 included: modification of EMG connector to improve the reliability of EMG recording; 24 hour cage activity recording from all but two of the flight animals (monkeys); attempts to record from flight candidates during foot lever task; and force transducer calibrations on all flight candidate animals. The 4 week postflight recordings included: postflight recordings from flight animals; postflight recordings on 3 control (non-flight) animals; postflight recalibration of force transducers on 1 flight and 4 control (non-flight) animals; and attempts to record EMG and video data from the flight animals during postflight locomotion and postural activity. The flight EMG recordings suggest that significant changes in muscle control may occur in spaceflight. It is also clear from recordings that levels of EMG recorded during spaceflight can attain values similar to those measured on earth. Amplifier gain settings should therefore probably not be changed for spaceflight.
Bryan, Craig J; David Rudd, M; Wertenberger, Evelyn; Etienne, Neysa; Ray-Sannerud, Bobbie N; Morrow, Chad E; Peterson, Alan L; Young-McCaughon, Stacey
2014-04-01
Newer approaches for understanding suicidal behavior suggest the assessment of suicide-specific beliefs and cognitions may improve the detection and prediction of suicidal thoughts and behaviors. The Suicide Cognitions Scale (SCS) was developed to measure suicide-specific beliefs, but it has not been tested in a military setting. Data were analyzed from two separate studies conducted at three military mental health clinics (one U.S. Army, two U.S. Air Force). Participants included 175 active duty Army personnel with acute suicidal ideation and/or a recent suicide attempt referred for a treatment study (Sample 1) and 151 active duty Air Force personnel receiving routine outpatient mental health care (Sample 2). In both samples, participants completed self-report measures and clinician-administered interviews. Follow-up suicide attempts were assessed via clinician-administered interview for Sample 1. Statistical analyses included confirmatory factor analysis, between-group comparisons by history of suicidality, and generalized regression modeling. Two latent factors were confirmed for the SCS: Unloveability and Unbearability. Each demonstrated good internal consistency, convergent validity, and divergent validity. Both scales significantly predicted current suicidal ideation (βs > 0.316, ps < 0.002) and significantly differentiated suicide attempts from nonsuicidal self-injury and control groups (F(6, 286) = 9.801, p < 0.001). Both scales significantly predicted future suicide attempts (AORs > 1.07, ps < 0.050) better than other risk factors. Limitations include the self-report methodology, small sample sizes, and predominantly male samples. The SCS is a reliable and valid measure that predicts suicidal ideation and suicide attempts among military personnel better than other well-established risk factors. Copyright © 2014 Elsevier B.V. All rights reserved.
2003-06-10
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the Boeing Delta II rocket and Mars Exploration Rover 2 (MER-A) are ready for the third launch attempt after weather concerns postponed earlier attempts. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
Measurement of the Casimir Force between Two Spheres
NASA Astrophysics Data System (ADS)
Garrett, Joseph L.; Somers, David A. T.; Munday, Jeremy N.
2018-01-01
Complex interaction geometries offer a unique opportunity to modify the strength and sign of the Casimir force. However, measurements have traditionally been limited to sphere-plate or plate-plate configurations. Prior attempts to extend measurements to different geometries relied on either nanofabrication techniques that are limited to only a few materials or slight modifications of the sphere-plate geometry due to alignment difficulties of more intricate configurations. Here, we overcome this obstacle to present measurements of the Casimir force between two gold spheres using an atomic force microscope. Force measurements are alternated with topographical scans in the x -y plane to maintain alignment of the two spheres to within approximately 400 nm (˜1 % of the sphere radii). Our experimental results are consistent with Lifshitz's theory using the proximity force approximation (PFA), and corrections to the PFA are bounded using nine sphere-sphere and three sphere-plate measurements with spheres of varying radii.
Hindrances to precise recovery of cellular forces in fibrous biopolymer networks.
Zhang, Yunsong; Feng, Jingchen; Heizler, Shay I; Levine, Herbert
2018-01-11
How cells move through the three-dimensional extracellular matrix (ECM) is of increasing interest in attempts to understand important biological processes such as cancer metastasis. Just as in motion on flat surfaces, it is expected that experimental measurements of cell-generated forces will provide valuable information for uncovering the mechanisms of cell migration. However, the recovery of forces in fibrous biopolymer networks may suffer from large errors. Here, within the framework of lattice-based models, we explore possible issues in force recovery by solving the inverse problem: how can one determine the forces cells exert to their surroundings from the deformation of the ECM? Our results indicate that irregular cell traction patterns, the uncertainty of local fiber stiffness, the non-affine nature of ECM deformations and inadequate knowledge of network topology will all prevent the precise force determination. At the end, we discuss possible ways of overcoming these difficulties.
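The sensitivity of force recovery to stiffness uncertainty can be illustrated with a toy linear-elastic analogue; a dense random stiffness matrix stands in for the fiber-network model, and the ±20% perturbation is an arbitrary illustrative level:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
A = rng.standard_normal((n, n))
K_true = A @ A.T + n * np.eye(n)            # symmetric positive-definite stiffness
f_true = rng.standard_normal(n)             # cell-generated forces
u = np.linalg.solve(K_true, f_true)         # observed ECM deformation

# inverse problem with misestimated local fiber stiffness (roughly +/- 20%)
P = rng.normal(1.0, 0.2, (n, n))
K_guess = K_true * (P + P.T) / 2.0          # keep the perturbed matrix symmetric
f_recovered = K_guess @ u
err = np.linalg.norm(f_recovered - f_true) / np.linalg.norm(f_true)
print(f"relative force-recovery error: {err:.2f}")
```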
NASA Astrophysics Data System (ADS)
Rivera, Susana
Throughout the last century, from the last decades of the 19th century to the present day, there have been many attempts to achieve the unification of the Forces of Nature. The first unification was achieved by James Clerk Maxwell with his Electromagnetic Theory. Then Max Planck developed his Quantum Theory. In 1905, Albert Einstein gave birth to the Special Relativity Theory, and in 1916 he came out with his General Relativity Theory. He noticed that there was an evident parallelism between the Gravitational Force and the Electromagnetic Force, so he tried to unify these forces of Nature, but Quantum Theory interposed on his way. In the 1940s Quantum Electrodynamics (QED) was developed, and with it interest in a unified field theory arose again. In the 1960s and 1970s Quantum Chromodynamics (QCD) was developed. Along with these theories came the discovery of the strong interaction force and the weak interaction force. Although there have been many attempts to unify all these forces of nature, only the unification of the strong interaction, the weak interaction and the Electromagnetic Force has been achieved. In the late 1980s and throughout the last two decades, groups of scientists working on theories such as "superstring theory" or the "M-theory", among others, have made great efforts, finally arriving at a unification of the forces of nature, the only limitation being the use of more than 11 dimensions. Using an ingenious mathematical tool known as supersymmetry, based on the Kaluza-Klein work, they achieved this goal. The strings of these theories are on the order of 10^-33 m, which makes them undetectable. There are many other string theories. The GEUFT theory is based on the existence of concentrated energy lines, which vibrate, expand and contract, emitting and absorbing energy, matter and antimatter, and which yield a determined geometry that results in the formation of stars, galaxies, nebulae and clusters on the macrocosmic level, and allows the formation of fundamental particles on the microcosmic level. The strings are described by a function named Symbiosis (σ), which depends on four energetic contributions: (1) Radiation Energy, (2) Plasma Energy, (3) Conducted Flux Energy and (4) Mass Energy. There is an intimate relation between them, and depending on the values they have at a certain point and at a certain time, the string dynamics and its geometry are settled. That means that Symbiosis describes the string's state at any point of the geometro-energetic field: σ = F[E_r(σ), E_p(σ), E_f(σ), E_m(σ)] (1). This work is an attempt to achieve the unification of the forces of nature, based on the existence of a four-dimensional Universe.
Factors Affecting the Role and Employment of Peacekeeping Forces in Africa South of the Sahara.
1982-12-01
reinforcing. Although abstract, the concept is similar to that used by Emile Durkheim in his discussion of social solidarity. The concept of...Forces Involved: Support for Congolese air power: Ethiopia provided a jet fighter and possibly pilots; Ghana, seven pilots and two air traffic...problems Durkheim encountered when he attempted to measure social solidarity. His earlier efforts to measure social solidarity directly through
Sitting with the Enemy: How to Integrate a Former Violent Group into Government
It has been 17 years since the deployment of the United States armed forces to Afghanistan on 7 October 2001, and American military forces continue...end the violence. Therefore, this thesis analyzes how to integrate a former violent group into government, and how those processes can apply to...attempt to stabilize the government of Afghanistan. This is problematic but other states have integrated former violent groups into government as a means to
Air Warfare and Air Base Defense 1914-1973
1988-01-01
ground commanders diluted German efforts. Rommel described the problem in organizational terms: "One thing that worked very seriously against us was...exerted severe pressure on the Marines. Japanese attempts at reinforcing their garrison were constant and could be defeated only by air attacks on the...and in many cases pure chance that favors one side over the other. In response to a request by the Air Force Director of Plans, the Office of Air Force
2010-06-01
must be considered when forces are (notionally) allocated. The model in this thesis will attempt to show the amount of time each person in the 2...Command and Control organization will allocate to this mission. This thesis then intends to demonstrate that an organizational structure that...Indian Ocean. Focusing on this geographic area helps to frame the structure of the Department of Defense forces that monitor, assess, allocate
Acupuncture for the Trauma Spectrum Response: Scientific Foundations, Challenges to Implementation
2011-01-01
The current approach to management of these injuries follows the standard medical model that attempts to isolate the pathophysiological locations and...the private views of the author and are not to be construed as official or as reflecting the views of the United States Air Force Medical Corps, the...Air Force at large, or the Department of Defense. The author indicates that he does not have any conflicts of interest. MEDICAL ACUPUNCTURE Volume 23
Fighting the War above Iraq. Employing Space Forces to Defeat an Insurgency
2007-05-01
Before discussing the part space forces can play, we must first validate that current operations in Iraq actually require isolating the physical...borders."24 In order to overcome the previous challenges and achieve this objective, we can look for past attempts to isolate the physical... physical battlespace through surveillance will likely be countered. Space Forces' Role We have seen that the need to isolate the physical battle
The Forced Evacuation of the Japanese Minority During World War II
ERIC Educational Resources Information Center
Miyamoto, S. Frank
1973-01-01
Attempts to explain in extremely abbreviated form what caused the evacuation and how the Japanese minority reacted to their exclusion and rejection, focusing on three general causes: collective dispositions, situational factors, and collective interaction. (Author/JM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covey, Curt; Hoffman, Forrest
2008-10-02
This project will quantify selected components of climate forcing due to changes in the terrestrial biosphere over the period 1948-2004, as simulated by the climate/carbon-cycle models participating in C-LAMP (the Carbon-Land Model Intercomparison Project; see http://www.climatemodeling.org/c-lamp). Unlike other C-LAMP projects that attempt to close the carbon budget, this project will focus on the contributions of individual biomes in terms of the resulting climate forcing. Bala et al. (2007) used a similar (though more comprehensive) model-based technique to assess and compare different components of biospheric climate forcing, but their focus was on potential future deforestation rather than the historical period.
NASA Technical Reports Server (NTRS)
Ribner, Herbert S
1945-01-01
It was realized as early as 1909 that a propeller in yaw develops a side force like that of a fin. In 1917, R. G. Harris expressed this force in terms of the torque coefficient for the unyawed propeller. Of several attempts to express the side force directly in terms of the shape of the blades, however, none has been completely satisfactory. An analysis that incorporates induction effects not adequately covered in previous work and that gives good agreement with experiment over a wide range of operating conditions is presented. The present analysis shows that the fin analogy may be extended to the form of the side-force expression and that the effective fin area may be taken as the projected side area of the propeller.
Blunt force lesions related to the heights of a fall.
Gupta, S M; Chandra, J; Dogra, T D
1982-03-01
Patterns of traumatic injuries due to fall from height certainly have an association with the amount of impact involved. A study of 63 medicolegal autopsies with the history of falls has been carried out during the period January 1974-July 1980. The injuries found were caused either by the direct impact, i.e., at the site of impact, or in a region distant from the site of impact of force as a result of transmitted force or indirect force. An attempt has been made to evaluate, generalize and correlate the characteristic pattern of the injuries to the various parts of the body with respect to various heights of fall. Stress has also been laid on the mechanism of production of these injuries in order to create a composite picture to help determine the most likely traumatic force in falls from height.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Alexandre A.; Pinheiro, Mario J.
In this work, the propulsion force developed in an asymmetric capacitor will be calculated for three different diameters of the ground electrode. The ion source used is a small-diameter wire, which generates a positive corona discharge in nitrogen gas directed to the ground electrode. By applying fluid dynamic and electrostatic theories, all hydrodynamic and electrostatic forces that act on the considered geometries will be computed in an attempt to provide physical insight into the force mechanism that acts on asymmetrical capacitors, and also to understand how to increase the efficiency of propulsion.
Interaction of post-stroke voluntary effort and functional neuromuscular electrical stimulation
Makowski, Nathaniel; Knutson, Jayme; Chae, John; Crago, Patrick
2012-01-01
Functional Electrical Stimulation (FES) may be able to augment functional arm and hand movement after stroke. Post-stroke neuroprostheses that incorporate voluntary effort and FES to produce the desired movement need to consider how the forces generated by voluntary effort and FES combine together, even in the same muscle, in order to provide an appropriate level of stimulation to elicit the desired assistive force. The goal of this study was to determine if the force produced by voluntary effort and FES add together independently of effort, or if the increment in force is dependent on the level of voluntary effort. Isometric force matching tasks were performed under different combinations of voluntary effort and electrical stimulation. Participants reached a steady level of force and while attempting to maintain a constant effort level, FES was applied to augment the force. Results indicate that the increment in force produced by FES decreases as the level of initial voluntary effort increases. Potential mechanisms causing the change in force output are proposed, but the relative contribution of each mechanism is unknown. PMID:23516086
32 CFR 516.19 - Injunctive relief.
Code of Federal Regulations, 2011 CFR
2011-07-01
... may attempt to force government action or restraint in important operational matters or pending personnel actions through motions for temporary restraining orders (TRO) or preliminary injunctions (PI). Because these actions can quickly impede military functions, immediate and decisive action must be taken...
32 CFR 516.19 - Injunctive relief.
Code of Federal Regulations, 2010 CFR
2010-07-01
... may attempt to force government action or restraint in important operational matters or pending personnel actions through motions for temporary restraining orders (TRO) or preliminary injunctions (PI). Because these actions can quickly impede military functions, immediate and decisive action must be taken...
What Is a Current Equivalent to Unemployment Rates of the Past?
ERIC Educational Resources Information Center
Antos, Joseph; And Others
1979-01-01
The results of various attempts to quantify how much changes in the labor force, unemployment insurance, and minimum wages have affected unemployment rates are reasonably close; but no total effect on jobless rates can be determined. (BM)
Correlates of Suicidal Ideation and Attempt Among Female Sex Workers in China
HONG, YAN; LI, XIAOMING; FANG, XIAOYI; ZHAO, RAN
2007-01-01
The purpose of this study was to explore the factors associated with suicidal ideation and attempt among female sex workers (FSWs) in China. A cross-sectional survey was administered among 454 FSWs in a rural county of Guangxi, China. About 14% of FSWs had thought of suicide and 8% had attempted suicide in the past 6 months. Multiple logistic regression analyses indicated that those FSWs who were dissatisfied with life, abused alcohol, were deceived or forced into commercial sex, and had stable sexual partners were more likely to report suicidal ideation. Female sex workers who had multiple stable partners, experienced sexual coercion, and worried about an inability to make enough money were more likely to report a suicide attempt. These FSWs who entered commercial sex because of financial needs or who were influenced by the peers were less likely to report a suicide attempt. Our data suggested that the rates of suicidal ideation and attempts were high among FSWs in China, and there were multiple factors associated with their suicidality. Future health education and promotion efforts among FSWs need to take into consideration substance abuse, interpartner conflict, and psychological stress. PMID:17469002
Flying Qualities and Performance Evaluation of the Breguet 941 Turbo-Prop Troop Transport
1964-01-01
evaluation was conducted during September, 1963 at the French Air Force Test Center at Mont de Marsan, France. Following fixes made to the airplane as a...techniques used were those which were developed by the contractor and the French Air Force. At the request of the French project pilot, no attempt was made...the French project pilot using the automatic pitch trim change system to compensate for flap retraction from 98° to 70°. Airplane response and
Doctrines of Defeat, La Guerre Revolutionnaire and Counterinsurgency Warfare
1992-12-01
sending in colonial troops to "pacify" the region. The French simply felt a superior force of arms would regain control. As France began to see defeat at...doctrine. The French Army's isolation from its own society was the result of many reasons. At the end of World War II, France was war weary; it had...strategically defensive by threatening nuclear destruction of an enemy's force attempting to invade France or her allies. In theory, all French
1987-06-05
the Corps to subdue Sandino and build the Guardia Nacional de Nicaragua. Taft, William H., and Bacon, Robert. "Cuban Pacification: Report of William...in Nicaragua, the Marines set out to build a professional apolitical force. The new Guardia Nacional de Nicaragua was to subdue the recalcitrant...Guardians of the Dynasty: A History of the U.S. Created Guardia Nacional de Nicaragua and the Somoza Family. (Maryknoll, N.Y.: Orbis Books, Inc., 1977
Continuing Themes in Assimilation Through Education.
ERIC Educational Resources Information Center
Strouse, Joan
1987-01-01
Discusses assimilation and adaptation of immigrants in the United States. Summarizes major sociological theories on assimilation. Focuses on schools as instruments of assimilation that attempt to force Anglo-conformity upon students. The refugee student's perceptions of his or her problems and opportunities are discussed. (PS)
Standardization of availability, location and use of safety equipment on urban transit buses
DOT National Transportation Integrated Search
1996-03-01
This document represents the conclusion of a project undertaken to identify guidelines which will correct the problems encountered by rescue forces while attempting to gain entry to, shut down, and evacuate urban transit buses involved in an emergenc...
Compression force on the upper jaw during neonatal intubation: mannequin study.
Doreswamy, Srinivasa Murthy; Almannaei, Khaled; Fusch, Chris; Shivananda, Sandesh
2015-03-01
Neonatal intubation is a technically challenging procedure, and pressure-related injuries to surrounding structures have been reported. The primary objective of this study was to determine the pressure exerted on the upper jaw during tracheal intubation using a neonatal mannequin. Multidisciplinary care providers working at a neonatal intensive care unit were requested to intubate a neonatal mannequin using the standard laryngoscope and 3.0-mm (internal diameter) endotracheal tube. Compression force exerted was measured by using pressure-sensitive film taped on the upper jaw before every intubation attempt. Pressure, area under pressure and time taken to intubate were compared between the different types of health-care professionals. Thirty care providers intubated the mannequin three times each. Pressure impressions were observed on the developer film after every intubation attempt (n = 90). The mean pressure exerted during intubation across all health-care providers was 568 kPa (SD 78). The mean area placed under pressure was 142 mm^2 (SD 45), and the mean time taken for intubation was 14.7 s (SD 4.3). There was no difference in pressure exerted on the upper jaw between frequent and less frequent intubators. It was found that pressure greater than 400 kPa was inadvertently applied on the upper jaw during neonatal intubation, far exceeding the 250 kPa shown to cause tissue injury in animal models. The upper jaw is exposed to a significant compression force during intubation. Although such exposure is brief, it has the potential to cause tissue injury. Contact of the laryngoscope blade with the upper jaw occurred in all intubation attempts with the currently used design of laryngoscope. © 2014 The Authors. Journal of Paediatrics and Child Health © 2014 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
Force Rendering and its Evaluation of a Friction-Based Walking Sensation Display for a Seated User.
Kato, Ginga; Kuroda, Yoshihiro; Kiyokawa, Kiyoshi; Takemura, Haruo
2018-04-01
Most existing locomotion devices that represent the sensation of walking target a user who is actually performing a walking motion. Here, we attempted to represent the walking sensation, especially a kinesthetic sensation and advancing feeling (the sense of moving forward) while the user remains seated. To represent the walking sensation using a relatively simple device, we focused on the force rendering and its evaluation of the longitudinal friction force applied on the sole during walking. Based on the measurement of the friction force applied on the sole during actual walking, we developed a novel friction force display that can present the friction force without the influence of body weight. Using performance evaluation testing, we found that the proposed method can stably and rapidly display friction force. Also, we developed a virtual reality (VR) walk-through system that is able to present the friction force through the proposed device according to the avatar's walking motion in a virtual world. By evaluating the realism, we found that the proposed device can represent a more realistic advancing feeling than vibration feedback.
A statistical approach to nuclear fuel design and performance
NASA Astrophysics Data System (ADS)
Cunning, Travis Andrew
As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss of coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
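The Monte Carlo input-generation step can be sketched as below; the distribution families, parameter values and the scalar response are purely illustrative placeholders, since the study fits distributions to proprietary manufacturing data and evaluates ELESTRES/ELOCA:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10 ** 5

# fitted manufacturing distributions (families and parameters illustrative only)
pellet_density = rng.normal(10.6, 0.05, n_trials)     # g/cm^3
grain_size     = rng.lognormal(2.3, 0.10, n_trials)   # micrometres
dish_depth     = rng.normal(0.30, 0.01, n_trials)     # mm

def fuel_response(rho, g, d):
    # placeholder for one ELESTRES/ELOCA evaluation of a sampled input vector
    return 2.0 * rho + 0.1 * g + 5.0 * d

out = fuel_response(pellet_density, grain_size, dish_depth)
# fit a distribution to the output and compare its tail to the acceptance limit
print(np.percentile(out, [50.0, 99.9]))
```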
2003-02-13
KENNEDY SPACE CENTER, FLA. -- Members of the reconstruction team check out the Columbia debris inside the RLV Hangar. The debris was shipped from Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident, workers will attempt to reconstruct the orbiter inside the RLV.
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
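The brute-force baseline referred to above is simply the sample covariance across mock power spectra, e.g.:

```python
import numpy as np

def sample_covariance(pk_mocks):
    """Unbiased sample covariance across mocks; pk_mocks has shape
    (n_mocks, n_kbins), one spherically averaged P(k) per mock."""
    diff = pk_mocks - pk_mocks.mean(axis=0)
    return diff.T @ diff / (pk_mocks.shape[0] - 1)

# the brute-force route needs hundreds of full-survey mocks (500 above);
# the proposed method instead rescales a covariance measured from small
# cubic boxes and applies the survey window convolution analytically
```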
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
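For orientation, the classical single-level first-order optimum (Young's formula) that two-level models generalize can be written as follows; the numbers in the usage line are arbitrary:

```python
import math

def young_period(checkpoint_cost, mtbf):
    """First-order optimal checkpoint interval W = sqrt(2 * C * MTBF)
    for a single checkpoint level."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# e.g., a 60 s checkpoint on a platform with a 24 h MTBF
print(young_period(60.0, 24.0 * 3600.0) / 60.0, "minutes")  # ~53.7
```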
Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.
2016-12-01
Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, the RT models must be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to increases in computer speed and improved, optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, while being orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information. © 2016. All rights reserved.
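The PCA compression step can be sketched with a plain SVD over the spectrally varying optical properties (a generic construction; the PC-based correction factors of the method are not shown):

```python
import numpy as np

def pca_compress(optical_props, n_pc=4):
    """PCA of optical-property profiles; rows index spectral channels,
    columns index properties (e.g., layer optical depth, single scattering
    albedo). Expensive multiple-scattering RT is then run only for the
    mean profile perturbed along each retained PC."""
    mean = optical_props.mean(axis=0)
    u, s, vt = np.linalg.svd(optical_props - mean, full_matrices=False)
    scores = u[:, :n_pc] * s[:n_pc]       # coordinates of each channel
    return mean, vt[:n_pc], scores

# cost drops from one multiple-scattering call per channel to O(n_pc) calls,
# with fast calculations supplying the channel-by-channel corrections
```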
Allosteric effects of gold nanoparticles on human serum albumin.
Shao, Qing; Hall, Carol K
2017-01-07
The ability of nanoparticles to alter protein structure and dynamics plays an important role in their medical and biological applications. We investigate allosteric effects of gold nanoparticles on human serum albumin protein using molecular simulations. The extent to which bound nanoparticles influence the structure and dynamics of residues distant from the binding site is analyzed. The root mean square deviation, root mean square fluctuation and variation in the secondary structure of individual residues on a human serum albumin protein are calculated for four protein-gold nanoparticle binding complexes. The complexes are identified in a brute-force search process using an implicit-solvent coarse-grained model for proteins and nanoparticles. They are then converted to atomic resolution and their structural and dynamic properties are investigated using explicit-solvent atomistic molecular dynamics simulations. The results show that even though the albumin protein remains in a folded structure, the presence of a gold nanoparticle can cause more than 50% of the residues to decrease their flexibility significantly, and approximately 10% of the residues to change their secondary structure. These affected residues are distributed on the whole protein, even on regions that are distant from the nanoparticle. We analyze the changes in structure and flexibility of amino acid residues on a variety of binding sites on albumin and confirm that nanoparticles could allosterically affect the ability of albumin to bind fatty acids, thyroxin and metals. Our simulations suggest that allosteric effects must be considered when designing and deploying nanoparticles in medical and biological applications that depend on protein-nanoparticle interactions.
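The per-residue flexibility measure referred to above (root mean square fluctuation) reduces to a few lines given an aligned trajectory array:

```python
import numpy as np

def rmsf_per_residue(coords):
    """RMSF of each residue over an aligned trajectory;
    coords has shape (n_frames, n_residues, 3)."""
    mean_pos = coords.mean(axis=0)
    return np.sqrt(((coords - mean_pos) ** 2).sum(axis=-1).mean(axis=0))

# residues whose RMSF drops markedly in the nanoparticle-bound complex,
# relative to free albumin, are flagged as allosterically rigidified
traj = np.random.default_rng(0).standard_normal((100, 585, 3))  # dummy trajectory
print(rmsf_per_residue(traj).shape)   # one value per residue
```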
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
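The family-classification idea can be sketched as follows for a gridded database: instances with the appropriate activity are grouped so that members of a family differ only along one chosen parameter axis (a generic reconstruction, not the HCO-db code):

```python
import numpy as np

def one_parameter_families(params, appropriate, axis):
    """Group instances (rows of grid indices) into families whose members
    vary only along `axis`; returns family sizes as a robustness proxy."""
    families = {}
    for row, ok in zip(params, appropriate):
        if not ok:
            continue
        key = tuple(v for i, v in enumerate(row) if i != axis)  # fix the rest
        families.setdefault(key, []).append(row[axis])
    return np.array([len(v) for v in families.values()])

# larger families along a conductance axis mean the activity type is robust
# to that conductance; e.g. family sizes for variation along parameter 0:
params = np.array([[0, 1, 1], [1, 1, 1], [2, 1, 1], [0, 2, 1]])
print(one_parameter_families(params, [True, True, True, True], axis=0))
```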
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam
2014-01-01
In this paper, a unique system-level spacecraft design optimization will be presented. A Genetic Algorithm is used to design the global pattern of the reinforcing structure, while a gradient routine is used to adequately stiffen the sub-structure. The system-level structural design includes determining the optimal physical location (and number) of reinforcing beams of a lunar pallet lander deck structure. Design of the sub-structure includes determining the placement of secondary stiffeners and the number of rivets required for assembly. In this optimization, several considerations are taken into account. The primary objective was to raise the primary natural frequencies of the structure such that the Pallet Lander primary structure does not significantly couple with the launch vehicle. A secondary objective is to determine how to properly stiffen the reinforcing beams so that the beam web resists the shear buckling load imparted by the spacecraft components mounted to the pallet lander deck during launch and landing. A third objective is that the calculated stress does not exceed the allowable strength of the material. These design requirements must be met while minimizing the overall mass of the spacecraft. The final paper will discuss how the optimization was implemented as well as the results. While driven by optimization algorithms, the primary purpose of this effort was to demonstrate the capability of genetic algorithms to enable design automation in the preliminary design cycle. By developing a routine that can automatically generate designs through the use of Finite Element Analysis, considerable design efficiencies, both in time and overall product, can be obtained over more traditional brute-force design methods.
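A toy genetic algorithm over a bitstring of candidate beam positions shows the mechanics (selection, single-point crossover, bit-flip mutation); the fitness function is a stand-in penalty model, not the finite element analysis used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(layout):
    # stand-in for an FEA run: reward fewer beams (mass), but penalize
    # layouts whose crude frequency proxy falls below a notional target
    freq_proxy = 10.0 + 2.0 * layout.sum()
    return -float(layout.sum()) - (100.0 if freq_proxy < 25.0 else 0.0)

pop = rng.integers(0, 2, size=(40, 12))          # 12 candidate beam positions
for _ in range(100):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-20:]]           # keep the fittest half
    cuts = rng.integers(1, 12, size=20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cuts)])   # single-point crossover
    flips = rng.random(kids.shape) < 0.05
    kids[flips] ^= 1                             # bit-flip mutation
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, fitness(best))                       # expect ~8 beams switched on
```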
Known-plaintext attack on the double phase encoding and its implementation with parallel hardware
NASA Astrophysics Data System (ADS)
Wei, Hengzheng; Peng, Xiang; Liu, Haitao; Feng, Songlin; Gao, Bruce Z.
2008-03-01
A known-plaintext attack on the double random phase encoding (DRPE) scheme, implemented with parallel hardware, is presented. DRPE is one of the most representative optical cryptosystems; it was developed in the mid-1990s and has since spawned quite a few variants. Although DRPE strongly resists brute-force attack, its inherent linearity leaves a hidden weakness. Recently the real security strength of this opto-cryptosystem has been questioned and analyzed from a cryptanalytic point of view. In this presentation, we demonstrate that optical cryptosystems based on the DRPE architecture are vulnerable to known-plaintext attack: with the help of a phase-retrieval technique, the two encryption keys of the DRPE can be recovered. In our approach, we adopt the hybrid input-output (HIO) algorithm to recover the random phase key in the object domain and then infer the key in the frequency domain. A single plaintext-ciphertext pair is sufficient to create the vulnerability, and the attack does not require a specially chosen plaintext. Because phase retrieval based on HIO is an iterative process built around Fourier transforms, it maps well onto a digital signal processor (DSP). We make use of a high-performance DSP to accomplish the known-plaintext attack; compared with a software implementation, the hardware implementation is much faster. The performance of this DSP-based cryptanalysis system is also evaluated.
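The following is a minimal sketch of the HIO phase-retrieval loop that such an attack builds on: recovering a support-constrained 1-D signal from its Fourier magnitude. The signal, support, and parameters are invented for illustration, and the DRPE-specific key-extraction steps are omitted.

    # Hybrid input-output (HIO) phase retrieval, toy 1-D version.
    # Convergence is not guaranteed on every run (twin-image ambiguity).
    import numpy as np

    rng = np.random.default_rng(1)
    n, beta, iters = 128, 0.9, 500
    true = np.zeros(n); true[40:60] = rng.random(20)       # unknown object
    meas_mag = np.abs(np.fft.fft(true))                    # measured |F|
    support = np.zeros(n, bool); support[40:60] = True     # known support

    g = rng.random(n)                                      # initial guess
    for _ in range(iters):
        G = np.fft.fft(g)
        G = meas_mag * np.exp(1j * np.angle(G))            # impose Fourier magnitude
        gp = np.fft.ifft(G).real
        # HIO update: accept gp inside the support, damp energy outside it
        g = np.where(support, gp, g - beta * gp)

    print("recovery error:", np.linalg.norm(g[support] - true[support]))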
NASA Astrophysics Data System (ADS)
Zermeño, Víctor M. R.; Habelok, Krzysztof; Stępień, Mariusz; Grilli, Francesco
2017-03-01
The estimation of the critical current (I_c) and AC losses of high-temperature superconductor devices through modeling and simulation requires the knowledge of the critical current density (J_c) of the superconducting material. This J_c is in general not constant and depends both on the magnitude (B_loc) and the direction (θ, relative to the tape) of the local magnetic flux density. In principle, J_c(B_loc, θ) can be obtained from the experimentally measured critical current I_c(B_a, θ), where B_a is the magnitude of the applied magnetic field. However, for applications where the superconducting materials experience a local field that is close to the self-field of an isolated conductor, obtaining J_c(B_loc, θ) from I_c(B_a, θ) is not a trivial task. It is necessary to solve an inverse problem to correct for the contribution derived from the self-field. The methods presented in the literature comprise a series of approaches dealing with different degrees of mathematical regularization to fit the parameters of preconceived nonlinear formulas by means of brute-force or optimization methods. In this contribution, we present a parameter-free method that provides excellent reproduction of experimental data and requires no human interaction or preconception of the J_c dependence with respect to the magnetic field. In particular, it allows going from the experimental data to a ready-to-run J_c(B_loc, θ) model in a few minutes.
Detecting and Cataloging Global Explosive Volcanism Using the IMS Infrasound Network
NASA Astrophysics Data System (ADS)
Matoza, R. S.; Green, D. N.; LE Pichon, A.; Fee, D.; Shearer, P. M.; Mialle, P.; Ceranna, L.
2015-12-01
Explosive volcanic eruptions are among the most powerful sources of infrasound observed on Earth, with recordings routinely made at ranges of hundreds to thousands of kilometers. These eruptions can also inject large volumes of ash into heavily travelled aviation corridors, thus posing a significant societal and economic hazard. Detecting and counting the global occurrence of explosive volcanism helps with progress toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. This project aims to build a quantitative catalog of global explosive volcanic activity using the International Monitoring System (IMS) infrasound network. We are developing methodologies to search systematically through IMS infrasound array detection bulletins to identify signals of volcanic origin. We combine infrasound signal association and source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent infrasound signals in a global grid. When volcanic signals are identified, we extract metrics such as location, origin time, acoustic intensity, signal duration, and frequency content, compiling the results into a catalog. We are testing and validating our method on several well-known case studies, including the 2009 eruption of Sarychev Peak, Kuriles, the 2010 eruption of Eyjafjallajökull, Iceland, and the 2015 eruption of Calbuco, Chile. This work represents a step toward the goal of integrating IMS data products into global volcanic eruption early warning and notification systems. Additionally, a better characterization of volcanic signal detection helps improve understanding of operational event detection, discrimination, and association capabilities of the IMS network.
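A hedged sketch of the brute-force, grid-search, cross-bearings idea follows: score each node of a geographic grid by how many array back-azimuth detections point at it. The station coordinates, observed bearings, grid extent, and tolerance below are made up for illustration.

    # Grid-search cross-bearings association: count arrays whose observed
    # back-azimuth is consistent with each candidate source node.
    import numpy as np

    def bearing(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing (degrees) from point 1 to point 2."""
        p1, p2, dl = map(np.radians, (lat1, lat2, lon2 - lon1))
        y = np.sin(dl) * np.cos(p2)
        x = np.cos(p1) * np.sin(p2) - np.sin(p1) * np.cos(p2) * np.cos(dl)
        return np.degrees(np.arctan2(y, x)) % 360.0

    # (lat, lon, observed back-azimuth in degrees) -- hypothetical detections
    stations = [(53.0, 158.0, 221.0), (44.0, 142.0, 63.0), (36.3, 139.0, 38.0)]
    TOL = 5.0                                      # bearing tolerance, degrees

    lats, lons = np.arange(-60, 61, 0.5), np.arange(100, 181, 0.5)
    glat, glon = np.meshgrid(lats, lons, indexing="ij")

    score = np.zeros_like(glat)
    for slat, slon, baz in stations:
        diff = np.abs(bearing(slat, slon, glat, glon) - baz)
        diff = np.minimum(diff, 360.0 - diff)      # wrap-around angular distance
        score += (diff < TOL)                      # count consistent arrays per node
    # (a correction for the background prior rate of coherent signals would
    #  be subtracted here, as described in the abstract)
    i, j = np.unravel_index(np.argmax(score), score.shape)
    print(f"best node: {glat[i, j]:.1f}N {glon[i, j]:.1f}E, {int(score[i, j])} arrays")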
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, A; Farrell, T; Diamond, K
2014-08-15
Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. The five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC values obtained with the proposed selection method were slightly lower than the maximums established by brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and femoral diameter for the femoral heads provides reasonable segmentation accuracy.
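For reference, the Dice Similarity Coefficient used throughout is DSC = 2|A ∩ B| / (|A| + |B|). The snippet below shows it on toy binary masks; the masks are invented, not patient data.

    # Dice Similarity Coefficient for two binary segmentation masks.
    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((64, 64)); auto[20:40, 20:40] = 1      # atlas-propagated contour
    manual = np.zeros((64, 64)); manual[24:44, 22:42] = 1  # reference contour
    print(f"DSC = {dice(auto, manual):.2f}")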
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
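As a hedged sketch of the baseline the method builds on, the snippet below runs ℓ1-regularized (lasso) selection on synthetic data containing a highly correlated "decoy" column, which is exactly the failure mode the correlation-sifting filter is designed to address. Data, regularization strength, and sizes are invented; the paper's PCA-based filtering step is not reproduced here.

    # Lasso variable selection with a correlated decoy: the lasso may pick
    # either the true variable (0) or its decoy (1), or split between them.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)      # decoy for variable 0
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only variable 0 matters

    coef = Lasso(alpha=0.1).fit(X, y).coef_
    print("selected:", np.flatnonzero(np.abs(coef) > 1e-6))  # may include the decoy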
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
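A minimal sketch of the brute-force Monte Carlo reference used in such comparisons: average the likelihood over draws from the prior. The conjugate-normal toy model and all numbers below are stand-ins, not the hydrological models of the study.

    # Brute-force Monte Carlo estimate of Bayesian model evidence (BME):
    # BME = integral of likelihood over the prior ~ mean likelihood over
    # prior draws.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    data = rng.normal(loc=1.0, scale=1.0, size=20)   # observations, sigma known

    prior = stats.norm(0.0, 2.0)                     # prior on the unknown mean
    theta = prior.rvs(size=200_000, random_state=rng)
    loglik = stats.norm(theta[:, None], 1.0).logpdf(data).sum(axis=1)
    bme = np.exp(loglik).mean()                      # E_prior[likelihood]
    print(f"brute-force Monte Carlo BME ~ {bme:.3e}")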
Bayesian Model Selection in Geophysics: The evidence
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2016-12-01
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data, D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general-purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses, and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute-force Monte Carlo method and the Laplace-Metropolis method.
Violence against women during the Liberian civil conflict.
Swiss, S; Jennings, P J; Aryee, G V; Brown, G H; Jappah-Samukai, R M; Kamara, M S; Schaack, R D; Turay-Kanneh, R S
1998-02-25
Civilians were often the casualties of fighting during the recent Liberian civil conflict. Liberian health care workers played a crucial role in documenting violence against women by soldiers and fighters during the war. To document women's experiences of violence, including rape and sexual coercion, from a soldier or fighter during 5 years of the Liberian civil war from 1989 through 1994. Interview and survey. High schools, markets, displaced persons camps, and urban communities in Monrovia, Liberia, in 1994. A random sample of 205 women and girls between the ages of 15 and 70 years (88% participation rate). One hundred (49%) of 205 participants reported experiencing at least 1 act of physical or sexual violence by a soldier or fighter. Survey participants reported being beaten, tied up, or detained in a room under armed guard (17%); strip-searched 1 or more times (32%); and raped, subjected to attempted rape, or sexually coerced (15%). Women who were accused of belonging to a particular ethnic group or fighting faction or who were forced to cook for a soldier or fighter were at increased risk for physical and sexual violence. Of the 106 women and girls accused of belonging to an ethnic group or faction, 65 (61%) reported that they were beaten, locked up, strip-searched, or subjected to attempted rape, compared with 27 (27%) of the 99 women who were not accused (P ≤ .02, .07, .001, and .06, respectively). Women and girls who were forced to cook for a soldier or fighter were more likely to report experiencing rape, attempted rape, or sexual coercion than those who were not forced to cook (55% vs 10%; P ≤ .001, .06, and .001, respectively). Young women (those younger than 25 years) were more likely than women 25 years or older to report experiencing attempted rape and sexual coercion (18% vs 4%, P = .02 and .04, respectively). This collaborative research allowed Liberian women to document wartime violence against women in their own communities and to develop a unique program to address violence against women in Liberia.
Burk, Thad; Edmondson, Andrea Hamor; Whitehead, Tyler; Smith, Barbara
2014-06-01
The purpose of this study was to examine the association between exposure to bullying and other forms of violence and suicide risk among public high school students in Oklahoma. Data from the 2009 and 2011 Oklahoma Youth Risk Behavior Surveys were used for this analysis and were representative of public school students in grades 9-12 in Oklahoma. Students who were bullied, threatened or injured by someone with a weapon, physically hurt by their partner, or had ever been forced to have sex, were twice as likely as students who had not experienced victimization to have experienced persistent sadness, considered attempting suicide, made a plan to attempt suicide, and attempted suicide. The results of this study indicate that being a victim of bullying or other forms of violence significantly increases the likelihood for experiencing signs and symptoms of depression, suicidal thoughts, suicidal plans, or suicidal attempts.
Steinmann, Michael
2013-01-01
Johann Christian Reil's (1759-1813) importance lies in his theoretical approach to medicine. Following Kant in his early work, he attempts to combine medical experience with an underlying conceptual structure. This attempt is directed against both the chaotic empiricism of traditional medicine and speculative theories such as vitalism. The paper starts from his early reflections on the concept of a life force, which he interprets in the way of a non-reductive materialism. In the following, the basic outlines of his Theory of Fever will be shown. The Theory is a systematic attempt at finding a new foundation for diagnosis and therapy on the basis of the concept of fever, which is understood as modification of vital processes. The paper ends with a discussion of his later work, which has remained controversial so far. It shows that the combination of practical empiricism and scientific theory remained rather unstable in this early phase of the development of modern medicine.
Production of isometric forces during sustained acceleration.
Sand, D P; Girgenrath, M; Bock, O; Pongratz, H
2003-06-01
The operation of high-performance aircraft requires pilots to apply finely graded forces on controls. Since they are often exposed to high levels of acceleration in flight, we investigated to what extent this ability is degraded in such an environment. Twelve healthy non-pilot volunteers were seated in the gondola of a centrifuge and their performance was tested at normal gravity (1 G) and while exposed to sustained forces of 1.5 G and 3 G oriented from head to foot (+Gz). Using an isometric joystick, they attempted to produce force vectors with specific lengths and directions commanded in random order by a visual display. Acceleration had substantial effects on the magnitude of produced force. Compared with 1 G, maximum produced force was about 2 N higher at 1.5 G and about 10 N higher at 3 G. The size of this effect was constant across the different magnitudes, but varied with the direction of the prescribed force. Acceleration degrades control of force production. This finding may indicate that the motor system misinterprets the unusual gravitoinertial environment and/or that proprioceptive feedback is degraded due to increased muscle tone. The production of excessive isometric force could affect the safe operation of high-performance aircraft.
Eye Movements in Implicit Artificial Grammar Learning
ERIC Educational Resources Information Center
Silva, Susana; Inácio, Filomena; Folia, Vasiliki; Petersson, Karl Magnus
2017-01-01
Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased…
Student Involvement: A Bridge to Total Education.
ERIC Educational Resources Information Center
North Carolina State Dept. of Public Instruction, Raleigh.
This document, prepared by students involved in the Task Force on Student Involvement program, provides guidelines for administrators who are attempting to enhance constructive student participation in the total educational program. An outline of specific recommendations for dealing with high school unrest is followed by general recommendations…
ERIC Educational Resources Information Center
Ray, Nancy L.
This paper presents basic principles and theories of motivation, attempts to provide a better understanding of the concept, and explores the role motivation plays in learning. Basic theories of motivation are reviewed including: Freud's belief in motivation by the id, unconscious forces, and sexual stages; Jung and Adler's belief that people are…
ERIC Educational Resources Information Center
Bockrath, Joseph
1976-01-01
The University of Delaware Marine Studies has implemented courses in coastal zone law and policy and maritime law. The courses attempt to integrate the scientist's or engineer's work with public policy formation. The program emphasizes historical and current issues and the economic, cultural, and political forces operating in decision-making…
The History of Recent Farm Legislation: Implications for Farm Families.
ERIC Educational Resources Information Center
Little, Linda F.; And Others
1987-01-01
Presents history of modern farm legislation and looks at recent legislation and tax policies. Asserts that family scientists attempting to help farm families can benefit from understanding legislation and policies. Discusses family intervention strategies in the larger context of macroeconomic and political forces. (Author/NB)
2011-10-20
the Sinaloa Cartel, the Gulf Cartel, Los Zetas, Juarez Cartel, and La Familia Michoacana. Profits are huge, and the enterprise is sprawling, reaching...originate in Mexico, but migrated there in force after Colombia cracked down on its own drug lords. As Mexican authorities attempt to put on the squeeze
2003-02-13
KENNEDY SPACE CENTER, FLA. -- The reconstruction team records and bags some of the Columbia debris inside the RLV Hangar. The debris was shipped from Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident, workers will attempt to reconstruct the orbiter inside the RLV Hangar.
Deterrents to Participation in Adult Education. Overview. ERIC Digest No. 59.
ERIC Educational Resources Information Center
Kerka, Sandra
Changing socioeconomic, cultural, and demographic forces have caused educational nonparticipation among adults to be treated as a social issue. Recent research has attempted to combine dispositional, situational, and environmental factors into composite models of participation. These models have suggested the following categories of deterrence…
Work at older ages in Japan: variation by gender and employment status.
Raymo, James M; Liang, Jersey; Sugisawa, Hidehiro; Kobayashi, Erika; Sugihara, Yoko
2004-05-01
This study describes the correlates of labor force participation among Japanese men and women aged 60-85 and examines differences by gender and employment status. Using four waves of data collected from a national sample of older Japanese between 1990 and 1999, we estimate multinomial logistic regression models for three measures of labor force participation (current labor force status, labor force exit, and labor force re-entry) as a function of individual and family characteristics measured 3 years earlier. Labor force participation is significantly associated with socioeconomic status, longest occupation, and family structure. The strength and nature of these relationships differ markedly for men and women and for wage employment and self-employment. The emphasis on life course experiences and work-family interdependence characterizing recent research on retirement in the United States is clearly relevant in Japan as well. To better understand later-life labor force participation in Japan, subsequent research should incorporate more direct measures of life course experiences and family relationships and attempt to make explicit cross-national comparisons of these relationships.
2015-02-08
Gaseous oxygen vents away from the SpaceX Falcon 9 rocket standing at Space Launch Complex 40 at Florida’s Cape Canaveral Air Force Station during the first launch attempt for NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR. The mission is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky
2015-02-08
The SpaceX Falcon 9 rocket set to launch NOAA’s Deep Space Climate Observatory spacecraft, or DSCOVR, stands at Space Launch Complex 40 at Florida’s Cape Canaveral Air Force Station during the mission’s first launch attempt. The mission is a partnership between NOAA, NASA and the U.S. Air Force. DSCOVR will maintain the nation's real-time solar wind monitoring capabilities which are critical to the accuracy and lead time of NOAA's space weather alerts and forecasts. To learn more about DSCOVR, visit http://www.nesdis.noaa.gov/DSCOVR. Photo credit: NASA/Ben Smegelsky
Monitoring of Ritz modal generation
NASA Technical Reports Server (NTRS)
Chargin, Mladen; Butler, Thomas G.
1990-01-01
A scheme is proposed to monitor the adequacy of a set of Ritz modes to represent a solution by comparing the quantity generated with certain properties involving the forcing function. In so doing, an attempt was made to keep this algorithm lean and efficient, so that it will be economical to apply. Using this monitoring scheme during Ritz mode generation will automatically ensure that the k Ritz modes θ_k that are generated are adequate to represent both the spatial and temporal behavior of the structure when forced under the given transient condition defined by F(s,t).
Attitudes toward working mothers: accommodating the needs of mothers in the work force.
Albright, A
1992-10-01
More women, including mothers, are part of the work force than ever before. In the workplace, barriers often exist that restrict promotion and advancement of mothers. Mothers often are penalized in attempting to meet the demands of parent and worker roles. Parenting practices have been considered primarily the domain of mothers. However, nurturing may be done effectively by fathers or other motivated adults. Policies of employers must change to accommodate needs of families. Examples of supportive practices may include flexible working hours, parental leave, and on-site child care.
Pseudo-Duane's retraction syndrome.
Duane, T D; Schatz, N J; Caputo, A R
1976-01-01
Five patients presented with signs that were similar to but opposite from Duane's retraction syndrome. Most had a history of orbital trauma. On attempted abduction, a narrowing of the palpebral fissure and retraction of the globe was observed. Diplopia with lateral gaze was present. Roentgenograms (polytomograms) showed involvement of the medial orbital wall. Forced duction tests were positive. Surgical repair of the fracture and release of the entrapped muscle, as determined by forced duction tests and by postoperative motility, led to successful results. PMID:867622
1-D blood flow modelling in a running human body.
Szabó, Viktor; Halász, Gábor
2017-07-01
In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
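For orientation, a common form of the 1-D momentum equation with the motion effect added as a body-force term might read as follows; the symbols and the exact placement of the inertial term are assumptions, since the paper's formulation is not reproduced here:

    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\left( \alpha \frac{Q^2}{A} \right)
      + \frac{A}{\rho}\,\frac{\partial p}{\partial x}
      = -K_R\,\frac{Q}{A} \; - \; A\,a_x(t)

where Q is the volumetric flow rate, A the cross-sectional area, p the pressure, ρ the blood density, α the momentum-flux correction coefficient, K_R the viscous resistance per unit length, and a_x(t) the axial acceleration of the vessel segment obtained from gait analysis.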
Recent Advances in Our Understanding of Nuclear Forces
NASA Astrophysics Data System (ADS)
Machleidt, Ruprecht
2007-05-01
The attempts to find the right (underlying) theory for the nuclear force have a long and stimulating history. Already in 1953, Hans Bethe stated that "more man-hours have been given to this problem than to any other scientific question in the history of mankind." In the search for the nature of the nuclear force, the idea of sub-nuclear particles was created which, eventually, generated the field of particle physics. I will review this productive history of hope, error, and desperation. Finally, I will discuss recent ideas which apply the concept of an effective field theory to low-energy QCD. There are indications that this concept may provide the right framework to properly understand the nuclear force.
Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review
NASA Astrophysics Data System (ADS)
Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal
2017-08-01
Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models that should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride (CBN) tool are reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, force models based on oblique cutting theory, the extended Lee and Shaffer force model, and models of chip formation and progressive flank wear are depicted in this review paper. An effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that the appropriate model can be used according to user requirements in hard turning.
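For reference, one commonly quoted form of Usui's wear-rate equation is

    \frac{dW}{dt} = C_1\,\sigma_t\,V_s\,\exp\!\left(-\frac{C_2}{T}\right)

where σ_t is the normal stress on the tool face, V_s the sliding velocity at the tool-chip interface, T the absolute interface temperature, and C_1, C_2 empirical constants fitted for each tool-workpiece pair; the exact forms used by individual studies in the review may differ.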
The asymmetrical force of persuasive knowledge across the positive–negative divide
Nordmo, Mads; Selart, Marcus
2015-01-01
In two experimental studies we explore to what extent the general effects of positive and negative framing also apply to positive and negative persuasion. Our results reveal that negative persuasion induces substantially higher levels of skepticism and awareness of being subjected to a persuasion attempt. Furthermore, we demonstrate that in positive persuasion, more claims lead to stronger persuasion, while in negative persuasion, the numerosity of claims carries no significant effect. We interpret this finding along the lines of a satiety-model of persuasion. Finally, using diluted, or low strength claims in a persuasion attempt, we reveal a significant interaction between dispositional reactance and dilution of claims on persuasion knowledge. The interaction states that diluted claims increase the awareness of being subjected to a persuasion attempt, but only for those with a high dispositional level of reactance. PMID:26388821
Automobile Engine Development, Task Force Assessment, Preliminary Report.
ERIC Educational Resources Information Center
Caretto, L. S.; And Others
This report presents a comprehensive survey of current knowledge and ongoing research and development projects in the area of vehicular emissions and control. Information provided attempts to answer the questions: how can proposed standards be met with existing technology and what additional research would be required to obtain desired control…
The U.S. Army and Doctrine for Weapons of Mass Destruction: Consequence Management Operations.
1999-06-04
Reproductive cell separation: A concept
NASA Technical Reports Server (NTRS)
Cutaia, A. J.
1973-01-01
An attempt has been made to separate mammalian male (Y-bearing) sperm from female (X-bearing) sperm. Both types of sperm are very dependent on gravity for their direction of movement. The proposed concept suggests that an electrophoretic force of suitable magnitude and direction may be an effective means of separating X and Y sperm under zero gravity.
Relational Identities of Students, Families, and Educators: Shaping Educational Pathways
ERIC Educational Resources Information Center
March, Evangelia; Gaffney, Janet S.
2010-01-01
This retrospective study sketched the educational pathways of two seniors attending an alternative high school in an attempt to discern how relational identities of students, families, and educators are defining forces of such pathways. Cumulative school records and special education files were triangulated with interviews of the students, their…
ERIC Educational Resources Information Center
Sawchuk, Stephen; Sparks, Sarah D.; Cavanagh, Sean; Samuels, Christina A.
2011-01-01
A mantra in recent years has been to blame the teachers' unions for many of the problems that beset public education. Americans only need look at Wisconsin, where the governor and lawmakers pushed through legislation curtailing the collective bargaining rights of teachers and other public employees. This special report examines the attempts by a…
Observations on the Air War in Syria
2013-04-01
comfortable with a trainer aircraft. In January 2012, the Syrian air force attempted to buy 40 Yak-130 trainers from Russia, but in July 2012 Russia announced it would not deliver the aircraft ("Russia Will Not Deliver Yak-130 Fighter Jets to Syria," Airforce-technology.com, 9 July 2012).
ERIC Educational Resources Information Center
O'Rand, Angela M.
1996-01-01
Attempts to explain how institutional mechanisms such as labor markets and pensions stratify the availability of resources and rewards, and interact with life course processes related to labor force history and job mobility to produce income and wealth inequality among the elderly. (SNR)
Maine Leading Initiative for Multistate Tech. Buys
ERIC Educational Resources Information Center
Cavanagh, Sean
2013-01-01
A group of states has joined forces to arrange the purchase of an unusually comprehensive set of educational-technology devices and services, in a compact that could foreshadow other cooperative efforts by state and local governments attempting to turn the digital-procurement process to their advantage. The initial partners in the multistate…
International Military Cooperation: From Concepts to Constructs
ERIC Educational Resources Information Center
D'Orazio, Vito
2013-01-01
International cooperation on issues of security is a central concept in many theoretical debates in international relations. This dissertation is an attempt to lay the foundation for measuring military cooperation and understanding the forces brought forth through its expansion. The central notion is that the set of policies related to military…
The Original Americans: U.S. Indians.
ERIC Educational Resources Information Center
Wilson, James
Confusion, fear, maladjustment, apathy and loss of self-respect are only some of the effects of the historically contemptuous and disparaging treatment of Native Americans by white people. Beginning with the original European colonization and continuing through often forceful attempts at absorption into the U.S. society as a whole, such treatment…
ERIC Educational Resources Information Center
Follert, Vincent F.; Benoit, William L.
The recent innovation of adapting the debate to the judge's preferred philosophy appears to have been supplanted by a converse trend: advocates now attempt to force the judge to adopt the paradigm dictated by the strategies of the debate round. The existence of such a widespread dispute over the appropriate decision making system in debate…
Development of Air Force aerial spray night operations: High altitude swath characterization
USDA-ARS?s Scientific Manuscript database
Multiple trials were conducted from 2006 to 2014 in an attempt to validate aerial spray efficacy at altitudes conducive to night spray operations using night vision goggles (NVG). Higher altitude application of pesticide (>400 feet above ground level [AGL]) suggested that effective vector control mi...
2003-06-09
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the launch tower begins to roll back from the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload in preparation for a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-09
KENNEDY SPACE CENTER, FLA. - The launch tower on Launch Complex 17-A, Cape Canaveral Air Force Station, clears the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload in preparation for a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-09
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload are in the clear after tower rollback in preparation for a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-10
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the launch tower begins to roll back from the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload in preparation for another launch attempt. The first two attempts were postponed due to weather concerns. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-09
KENNEDY SPACE CENTER, FLA. - The Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload is viewed from under the launch tower as it moves away on Launch Complex 17-A, Cape Canaveral Air Force Station. This will be a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-09
KENNEDY SPACE CENTER, FLA. - The launch tower (right) on Launch Complex 17-A, Cape Canaveral Air Force Station, has been rolled back from the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload (left) in preparation for a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-09
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload waits for rollback of the launch tower in preparation for a second attempt at launch. The first attempt on June 8, 2003, was scrubbed due to bad weather in the vicinity. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
2003-06-10
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, the launch tower rolls back from the Boeing Delta II rocket and its Mars Exploration Rover (MER-A) payload in preparation for another launch attempt. The first two attempts, June 8 and June 9, were postponed due to weather concerns. MER-A is the first of two rovers being launched to Mars. When the two rovers arrive at Mars in 2004, they will bounce to airbag-cushioned landings at sites offering a balance of favorable conditions for safe landings and interesting science. The rovers see sharper images, can explore farther and examine rocks better than anything that has ever landed on Mars. The designated site for MER-A mission is Gusev Crater, which appears to have been a crater lake. The second rover, MER-B, is scheduled to launch June 25.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2003-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Design of dry-friction dampers for turbine blades
NASA Technical Reports Server (NTRS)
Ancona, W.; Dowell, E. H.
1983-01-01
A study is conducted of turbine blade forced response, where the blade is modeled as a cantilever beam with a dry friction damper attached, and where the minimization of blade root strain as the excitation frequency is varied over a given range is the criterion for evaluating the effectiveness of the dry friction damper. Attempts are made to determine the damper location that best satisfies the design criterion, together with the best damping force (assuming that the damper location has been fixed). Results suggest that there need not be an optimal value for the damping force, or an optimal location for the dry friction damper, although there is a range of values which should be avoided.
Lin, Yen-Ting; Kuo, Chia-Hua; Hwang, Ing-Shiou
2014-01-01
Continuous force output containing numerous intermittent force pulses is not completely smooth. By characterizing force fluctuation properties and force pulse metrics, this study investigated adaptive changes in trajectory control, both force-generating capacity and force fluctuations, as fatigue progresses. Sixteen healthy subjects (20–24 years old) completed rhythmic isometric gripping with the non-dominant hand to volitional failure. Before and immediately following the fatigue intervention, we measured the gripping force used to couple a 0.5 Hz sinusoidal target in the range of 50–100% maximal voluntary contraction. Dynamic force output was off-line decomposed into 1) an ideal force trajectory spectrally identical to the target rate; and 2) a force pulse trace pertaining to force fluctuations and error-correction attempts. With increasing fatigue, the amplitude of the ideal force trajectory (reflecting force-generating capacity) was more suppressed than that of the force pulse trace, which also shifted toward lower frequency bands. Multi-scale entropy analysis revealed that the complexity of the force pulse trace at high time scales increased with fatigue, contrary to the decrease in complexity of the force pulse trace at low time scales. Statistical properties of individual force pulses in the spatial and temporal domains varied with muscular fatigue, concurrent with marked suppression of gamma muscular oscillations (40–60 Hz) in the post-fatigue test. In conclusion, this study is the first to reveal that muscular fatigue impairs the amplitude modulation of force pattern generation more than it affects the responsiveness of fine-tuning a force trajectory. In addition, motor fatigue disadvantageously enhances motor noise, simplifies the short-term force-tuning strategy, and slows responsiveness to force errors, reflecting dimensional changes in force fluctuations, scaling properties of force pulses, and muscular oscillation. PMID:24465605
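A hedged sketch of the multi-scale entropy computation mentioned above: coarse-grain the signal at successive scales, then compute sample entropy at each scale. The embedding dimension m = 2 and tolerance r = 0.15 SD are common defaults, not necessarily the study's settings, and the input signal is a random stand-in for a force pulse trace.

    # Multi-scale entropy: coarse-grain, then sample entropy per scale.
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        x = np.asarray(x, float)
        r = 0.15 * x.std() if r is None else r
        def matches(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)
            d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
            return (d <= r).sum() - len(templ)   # exclude self-matches
        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def mse(x, scales=range(1, 6)):
        out = []
        for s in scales:
            n = len(x) // s
            coarse = np.asarray(x[: n * s]).reshape(n, s).mean(axis=1)
            out.append(sample_entropy(coarse))
        return out

    sig = np.random.default_rng(4).normal(size=1000)   # stand-in signal
    print([round(v, 2) for v in mse(sig)])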
1956-11-21
The X-2, initially an Air Force program, was scheduled to be transferred to the civilian National Advisory Committee for Aeronautics (NACA) for scientific research. The Air Force delayed turning the aircraft over to the NACA in the hope of attaining Mach 3 in the airplane. The service requested and received a two-month extension to qualify another Air Force test pilot, Capt. Milburn "Mel" Apt, in the X-2 and attempt to exceed Mach 3. After several ground briefings in the simulator, Apt (with no previous rocket plane experience) made his flight on 27 September 1956. Apt raced away from the B-50 under full power, quickly outdistancing the F-100 chase planes. At high altitude, he nosed over, accelerating rapidly. The X-2 reached Mach 3.2 (2,094 mph) at 65,000 feet. Apt became the first man to fly more than three times the speed of sound. Still above Mach 3, he began an abrupt turn back to Edwards. This maneuver proved fatal as the X-2 began a series of diverging rolls and tumbled out of control. Apt tried to regain control of the aircraft. Unable to do so, Apt separated the escape capsule. Too late, he attempted to bail out and was killed when the capsule impacted on the Edwards bombing range. The rest of the X-2 crashed five miles away. The wreckage of the X-2 rocket plane was later taken to NACA's High Speed Flight Station for analysis following the crash.
Shelef, Leah; Kaminsky, Dan; Carmon, Meytal; Kedem, Ron; Bonne, Omer; Mann, J John; Fruchter, Eyal
2015-11-01
A major risk factor for suicide is prior suicide attempts. The aim of the present study was to assess risk factors for nonfatal suicide attempts. Methods: The study's cohort consisted of 246,814 soldiers who were divided into two groups: soldiers who made a suicide attempt (n=2310; 0.9%) and a control group of soldiers who did not (n=244,504; 99.1%). Socio-demographic and personal characteristics as well as psychiatric diagnoses were compared. Results: The strongest risk factors for suicide attempt were serving less than 12 months (RR=7.09) and a history of unauthorized absence from service (RR=5.68). Moderate risk factors were low socioeconomic status (RR=2.17), psychiatric diagnoses at induction (RR=1.94), non-Jewish religion (RR=1.92), low intellectual rating score (RR=1.84), serving in a non-combat unit (RR=1.72) and being born in the former Soviet Union (RR=1.61). A weak association was found between male gender and suicide attempt (RR=1.36). Soldiers who met more frequently with a primary care physician (PCP) had a higher risk for suicide attempt, whereas frequent meetings with a mental health care professional (MHCP) were found to be a protective factor (P<0.0001). The psychiatric diagnoses associated with a suicide attempt were a cluster B personality disorder (RR=3.00), eating disorders (RR=2.78), mood disorders (RR=2.71) and adjustment disorders (RR=2.26). Mild suicidal behavior constitutes a much larger proportion of attempts than among civilians and may involve secondary gain, thus distorting the suicidal behavior data. Training primary care physicians as gatekeepers, along with improved monitoring, may reduce the rate of suicide attempts.
Fishing decisions under uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, C.G.
1982-02-01
The drilling manager often is forced by an extended fishing operation to choose between the known costs incurred with abandonment of retrieval attempts and the unknown costs of continuing fishing operations. The successful manager makes the decision that costs the company the least money. Continuing fishing operations beyond some economic limit is failure, even if the fish is retrieved and that portion of the hole saved, because more money has been spent in the fishing attempt than would have been spent by not fishing. The strategy is to minimize losses. This analysis closely follows the theory of utility developed by J. von Neumann and O. Morgenstern.
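A hedged illustration of the abstract's logic: keep fishing only while the expected cost of continuing stays below the known cost of abandoning. All numbers are invented, and the geometric-trial expectation is a simplifying assumption, not the paper's model.

    # Expected-cost break-even for continue-vs-abandon fishing decisions.
    day_rate = 30_000          # cost of one more day of fishing ($, assumed)
    p_success = 0.25           # assumed chance of recovering the fish per day
    abandon_cost = 250_000     # sidetrack/redrill cost if we give up now ($)

    expected_days = 1.0 / p_success          # geometric-trial expectation
    expected_continue = day_rate * expected_days
    print("continue" if expected_continue < abandon_cost else "abandon",
          f"(E[cost of continuing] = ${expected_continue:,.0f})")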
NASA Technical Reports Server (NTRS)
Badler, N. I.; Lee, P.; Wong, S.
1985-01-01
Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling, an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.
Policy issues inherent in advanced technology development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumann, P.D.
1994-12-31
In the development of advanced technologies, several forces are involved in the success of the development of those technologies. In the overall development of new technologies, a sufficient number of these forces must be present and working in order to have a successful opportunity at developing, introducing, and integrating a new technology into the marketplace. This paper discusses some of these forces and how they enter into the equation for success in advanced technology research, development, demonstration, commercialization, and deployment. The paper limits itself to programs that are generally government funded, which in essence represent most of the technology development efforts that provide defense, energy, and environmental technological products. Along with the identification of these forces are some suggestions as to how changes may be brought about to better ensure long-term success and to minimize time and financial losses.
Long Term Uncertainty Investigations of 1 MN Force Calibration Machine at NPL, India (NPLI)
NASA Astrophysics Data System (ADS)
Kumar, Rajesh; Kumar, Harish; Kumar, Anil; Vikram
2012-01-01
The present paper is an attempt to study the long-term uncertainty of the 1 MN hydraulic multiplication system (HMS) force calibration machine (FCM) at the National Physical Laboratory, India (NPLI), which is used for calibration of force measuring instruments in the range of 100 kN to 1 MN. The 1 MN HMS FCM was installed at NPLI in 1993 and was built on the principle of hydraulic amplification of dead weights. The best measurement capability (BMC) of the machine is ± 0.025% (
A Two-Step Integrated Theory of Everything (TOE)
NASA Astrophysics Data System (ADS)
Colella, Antonio
2017-01-01
Two opposing TOE visions are my Two-Step (physics/math) and Hawking's single math step. My Two-Step should replace the single step because of the latter's near zero results after a century of attempts. My physics step had 3 goals. First ``Everything'' was defined as 20 interrelated amplified theories (e.g. string, Higgs forces, spontaneous symmetry breaking, particle decays, dark matter, dark energy, stellar black holes) and their intimate physical interrelationships. Amplifications of Higgs forces theory (e.g. matter particles and their associated Higgs forces were one and inseparable, spontaneous symmetry breaking was bidirectional and caused by high temperatures not Higgs forces, and sum of 8 Higgs forces of 8 permanent matter particles was dark energy) were key to my Two-Step TOE. The second goal answered all outstanding physics questions: what were Higgs forces, dark energy, dark matter, stellar black holes, our universe's creation, etc.? The third goal provided correct inputs for the two part second math step, an E8 Lie algebra for particles and an N-body cosmology simulation (work in progress). Scientific advancement occurs only if the two opposing TOEs are openly discussed/debated.
Felicita, A Sumathi
2017-01-01
The aim of the present study was to clarify the biomechanics of en-masse retraction of the upper anterior teeth, to attempt to quantify the different forces and moments generated using mini-implants, and to calculate the amount of applied force optimal for en-masse intrusion and retraction using mini-implants. The optimum force required for en-masse intrusion and retraction can be calculated using simple mathematical formulae. Depending on the position of the mini-implant and the relationship of the attachment to the center of resistance of the anterior segment, different clinical outcomes are encountered. Using certain mathematical formulae, accurate measurements of the magnitude of force and moment generated on the teeth can be calculated for each clinical outcome. The optimum force for en-masse intrusion and retraction of the maxillary anterior teeth is 212 grams per side. Force applied at an angle of 5° to 16° from the occlusal plane produces intrusive and retraction force components that are within the physiologic limit. Different clinical outcomes are encountered depending on the position of the mini-implant and the length of the attachment. It is possible to calculate the forces and moments generated for any given magnitude of applied force. The orthodontist can apply the basic biomechanical principles mentioned in this study to calculate the forces and moments for different hypothetical clinical scenarios.
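A short sketch of the trigonometric decomposition implied above, using the abstract's 212 g per side and its 5° to 16° window. The decomposition itself is elementary statics; any clinical interpretation is the paper's, not the code's.

    # Decompose an applied force into occlusal-plane (retraction) and
    # vertical (intrusion) components for a given line-of-force angle.
    import math

    F = 212.0                                  # applied force per side, grams
    for angle_deg in (5, 10, 16):
        a = math.radians(angle_deg)
        retraction = F * math.cos(a)           # component along occlusal plane
        intrusion = F * math.sin(a)            # vertical (intrusive) component
        print(f"{angle_deg:2d} deg: retraction {retraction:6.1f} g, "
              f"intrusion {intrusion:5.1f} g")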
ERIC Educational Resources Information Center
National Occupational Competency Testing Institute, 2012
2012-01-01
This guide attempts to address an aspect of secondary CTE (Career and Technical Education) that has received little attention: the assessment literacy of educators. School leaders need to go beyond ensuring routine compliance with external and internal regulatory forces to identify ways in which CTE program teachers might better understand…
A Mathematics and Science Trail
ERIC Educational Resources Information Center
Smith, Kathy Horak; Fuentes, Sarah Quebec
2012-01-01
In an attempt to engage primary-school students in a hands-on, real-world problem-solving context, a large urban district, a mathematics and science institute housed in a college of education, and a corporate sponsor in the southwest United States, joined forces to create a mathematics and science trail for fourth- and fifth-grade students. A…
The Aborted Debate within Public Relations: An Approach through Kuhn's Paradigm.
ERIC Educational Resources Information Center
Olasky, Marvin N.
An explanation for the general disdain for the practice of public relations may lie in textbooks that attempt to communicate methodology, while insinuating philosophy. In 10 surveyed public relations textbooks, the authors have tried to explain the contempt for public relations, but their common failing has been to blame outside forces, contending…
ERIC Educational Resources Information Center
Huerta, Luis A.
2009-01-01
This article analyzes how macrolevel institutional forces persist and limit the expansion of decentralized schools that attempt to challenge normative definitions and practices of traditional school organizations. Using qualitative case study methodology, the analysis focuses on how one decentralized charter school navigated and reconciled its…
Linguistic Legitimation of Political Events in Newspaper Discourse
ERIC Educational Resources Information Center
Ali, Marwah Kareem; Christopher, Anne A.; Nordin, Munif Zarirruddin Fikri B.
2016-01-01
This paper examines the discursive structures employed in legitimizing the event of U.S. forces withdrawal from Iraq and identifies them in relation to linguistic features. It attempts to describe the relation between language use and legitimation discursive structures in depicting political events. The paper focuses on the political event of U.S.…
Gender Equity: Educational Problems and Possibilities for Female Students.
ERIC Educational Resources Information Center
Bartholomew, Cheryl G.; Schnorr, Donna L.
Although most women are now working outside the home, gender equity in the labor force has not been achieved. Women are still concentrated in low-paying, traditionally female-dominated occupations (such as clerical and retail sales), while most jobs in the higher paying, more prestigious professions are held by men. Despite attempts to reduce…
The Retirement Decision: A Question of Opportunity?
ERIC Educational Resources Information Center
Rones, Philip L.
1980-01-01
This report attempts to clarify several retirement issues, focusing on (1) the extent to which labor force participation rates can be used to assess retirement decisions; (2) the impact on the elderly of the 1978 Amendments to the Age Discrimination in Employment Act; and (3) the true causes of nonparticipation among current retirees. (SK)
Handbook for the Prevention and Control of Drug Problems.
ERIC Educational Resources Information Center
Parsippany - Troy Hills Board of Education, Parsippany, NJ.
This handbook develops guidelines for teachers on drug abuse, offering the steps necessary to prevent drug problems and/or to guide students who have them. Stress is placed on helping each student individually understand the forces affecting him, and on helping him form the positive attitudes necessary to cope with each…
The "Hollywoodization" of Education Reform in "Won't Back Down"
ERIC Educational Resources Information Center
Goering, Christian Z.; Witte, Shelbie; Jennings Davis, Jennifer; Ward, Peggy; Flammang, Brandon; Gerhardson, Ashley
2015-01-01
What happens when forces attempting to privatize education create and produce a Hollywood film with an education reform plot line? This essay explores "Won't Back Down" through cultural studies and progressive education lenses in an effort to unveil misrepresentations of education and education reform. Drawing on scholarship in these…
Acoustic Behavior of Vapor Bubbles
NASA Technical Reports Server (NTRS)
Prosperetti, Andrea; Oguz, Hasan N.
1996-01-01
In a microgravity environment vapor bubbles generated at a boiling surface tend to remain near it for a long time. This affects the boiling heat transfer and in particular promotes an early transition to the highly inefficient film boiling regime. This paper describes the physical basis underlying attempts to remove the bubbles by means of pressure radiation forces.
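The mechanism the abstract alludes to, detaching bubbles with pressure radiation forces, is commonly modeled by the primary Bjerknes force. As a hedged aside (a textbook expression, not necessarily the authors' exact formulation):

```latex
% Standard expression for the primary Bjerknes force on a bubble in a
% sound field (textbook form; not necessarily the authors' exact model):
\[
  \mathbf{F}_B \;=\; -\,\big\langle\, V(t)\,\nabla p(\mathbf{x},t) \,\big\rangle ,
\]
% where $V(t)$ is the instantaneous bubble volume, $p$ the acoustic
% pressure, and the angle brackets denote a time average over one
% acoustic period.
```

Because the sign of the average depends on whether the bubble is driven above or below its resonance frequency, such a force can be tuned to push vapor bubbles away from the heated surface.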
Raising Standards 1988 to the Present: A New Performance Policy Era?
ERIC Educational Resources Information Center
Hoskins, Kate
2012-01-01
This article explores the context of the period following the Education Reform Act 1988 in terms of the efforts by successive governments to raise academic standards. These attempts are illustrated by discussion of the impact of the introduction of market forces and parental choice, a centralised National Curriculum and associated assessment…
78 FR 29519 - Physical Protection of Irradiated Reactor Fuel in Transit
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-20
... to the appropriate response forces of any sabotage events, and (3) impede attempts at radiological... personnel so that they could properly respond to a safety or safeguards event. The State of Nevada concluded... destination, and must immediately notify the appropriate agencies in the event of a safeguards event under the...
Local Control and Self-Determination: The San Juan Case.
ERIC Educational Resources Information Center
Garman, Keats; Jack, Donald
Rapidly increasing Navajo enrollment in San Juan County, Utah, public schools in the 1960's forced the rural school district to improve educational services to a sizable Navajo population while attempting to preserve local control in the face of changing Indian self-determination policy. The district implemented a Curriculum Development Center, a…
ERIC Educational Resources Information Center
Howe, Christine; Ilie, Sonia; Guardia, Paula; Hofmann, Riikka; Mercer, Neil; Riga, Fran
2015-01-01
In response to continuing concerns about student attainment and participation in science and mathematics, the "epiSTEMe" project took a novel approach to pedagogy in these two disciplines. Using principles identified as effective in the research literature (and combining these in a fashion not previously attempted), the project developed…
I.Q. in the U.S. Class Structure
ERIC Educational Resources Information Center
Bowles, Samuel; Gintis, Herbert
1972-01-01
Attempts to show that the purportedly "scientific" empirical basis of credentialism and "I.Q.-ism" is false; and to facilitate linkages between the groups being discriminated against and the workers' movements within the white male labor force, by showing that the same mechanisms are used to divide strata against one another so as to…
12 CFR 269.6 - Unfair labor practices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... exercise of the rights guaranteed in § 269.2(a); (2) dominate or interfere with the formation or... the exercise of the rights guaranteed in § 269.2(a); (2) cause or attempt to cause a Bank to... threat of reprisal or force, or promise of benefit. (d) The Federal Reserve System Labor Relations Panel...
Planning for Tomorrow: Increased Productivity through Education and Training.
ERIC Educational Resources Information Center
Stein, David; Hull, Peggy K.
At The Ohio State University Hospitals Education and Training Department, a data-based strategic planning and coordinating model is being developed to ensure that the educational mission is responsive to the trends and forces affecting the hospital unit and individual productivity. This model is being implemented to attempt to meet the…
Prevention of Potential Falls of Elderly Healthy Women: Gait Asymmetry
ERIC Educational Resources Information Center
Seo, Jung-suk; Kim, Sukwon
2014-01-01
The study attempted to determine whether exercise training would alleviate gait asymmetry between the nondominant and dominant legs and thus reduce the likelihood of slips. The present study provided 18 older adults with eight weeks of exercise training and evaluated kinematics and ground reaction forces (GRFs) in both legs. Participants were randomly assigned to…
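The abstract does not state how asymmetry was quantified; a common convention in gait research is a symmetry index computed from peak vertical GRFs of the two legs. A hedged sketch under that assumption (the peak values below are hypothetical, not the study's data):

```python
# Hedged sketch (assumed measure, hypothetical data): a common gait
# symmetry index computed from peak vertical ground reaction forces.
# SI = 0 indicates perfect symmetry between the two legs.
def symmetry_index(dominant_peak: float, nondominant_peak: float) -> float:
    """Symmetry index in percent: 2*(D - ND) / (D + ND) * 100."""
    return (2.0 * (dominant_peak - nondominant_peak)
            / (dominant_peak + nondominant_peak) * 100.0)

# Hypothetical peak vertical GRFs, normalized to body weight (BW):
pre_training = symmetry_index(1.12, 1.04)    # before the 8-week program
post_training = symmetry_index(1.10, 1.08)   # after the 8-week program
print(f"symmetry index before: {pre_training:.1f} %, after: {post_training:.1f} %")
```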
ERIC Educational Resources Information Center
Corso, Gail S.; Weiss, Sandra; McGregor, Tiffany
2010-01-01
This narrative describes collaboration among librarians, writing program coordinator, and professors on an information literacy task force. Their attempts to infuse the University's curriculum with information literacy are described. Authors define the term, explain its history with three professional organizations, and describe processes for…