Heavy-tailed distribution of the SSH Brute-force attack duration in a multi-user environment
NASA Astrophysics Data System (ADS)
Lee, Jae-Kook; Kim, Sung-Jun; Park, Chan Yeol; Hong, Taeyoung; Chae, Huiseung
2016-07-01
Quite a number of cyber-attacks take place against supercomputers that provide high-performance computing (HPC) services to public researchers. In particular, although the secure shell protocol (SSH) brute-force attack is one of the traditional attack methods, it is still being used. Moreover, stealth attacks that feign regular access may occur, and these are even harder to detect. In this paper, we introduce methods to detect SSH brute-force attacks by analyzing the server's unsuccessful access logs and the firewall's drop events in a multi-user environment. We then analyze the durations of the SSH brute-force attacks detected by these methods. The results of an analysis of about 10 thousand attack source IP addresses show that the behavior of abnormal users mounting SSH brute-force attacks follows the human-dynamics characteristics of a typical heavy-tailed distribution.
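A minimal sketch of the log-analysis step, assuming failed logins have already been parsed into (timestamp, source IP) pairs; the attempt-count and inter-attempt gap thresholds are placeholders, not the authors' values:

    # Illustrative sketch (not the authors' code): flag brute-force sources from
    # failed SSH logins and measure attack durations for a heavy-tail analysis.
    from collections import defaultdict

    def detect_bruteforce(failed_logins, min_attempts=10, max_gap=60.0):
        """failed_logins: iterable of (unix_time, source_ip); thresholds are illustrative."""
        by_ip = defaultdict(list)
        for t, ip in failed_logins:
            by_ip[ip].append(t)
        durations = {}
        for ip, times in by_ip.items():
            times.sort()
            # group attempts separated by less than max_gap seconds into one attack
            start, prev, count = times[0], times[0], 1
            for t in times[1:]:
                if t - prev <= max_gap:
                    count += 1
                else:
                    if count >= min_attempts:
                        durations.setdefault(ip, []).append(prev - start)
                    start, count = t, 1
                prev = t
            if count >= min_attempts:
                durations.setdefault(ip, []).append(prev - start)
        return durations  # per-IP attack durations, e.g. for a heavy-tail fit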
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that, for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments on optical data encryption with DRPE and digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes) or color images are used as a source.
NASA Astrophysics Data System (ADS)
Desnijder, Karel; Hanselaer, Peter; Meuret, Youri
2016-04-01
A key requirement to obtain a uniform luminance for a side-lit LED backlight is an optimised spatial pattern of the structures on the light guide that extract the light. The generation of such a scatter pattern is usually performed by applying an iterative approach. In each iteration, the luminance distribution of the backlight with a particular scatter pattern is analysed. This is typically done with a brute-force ray-tracing algorithm, although this approach results in a time-consuming optimisation process. In this study, the Adding-Doubling method is explored as an alternative way of evaluating the luminance of a backlight. Due to the similarities between light propagating in a backlight with extraction structures and light scattering in a cloud of scatterers, the Adding-Doubling method, which is used to model the latter, can also be used to model the light distribution in a backlight. The backlight problem is translated into a form to which the Adding-Doubling method is directly applicable. The luminance calculated with the Adding-Doubling method for a simple uniform extraction pattern matches the luminance generated by a commercial ray tracer very well. Although successful, no clear computational advantage over ray tracers is realised. However, the description of light propagation in a light guide used in the Adding-Doubling method also allows the efficiency of brute-force ray-tracing algorithms to be enhanced. The performance of this enhanced ray-tracing approach for the simulation of backlights is also evaluated against a typical brute-force ray-tracing approach.
Near-Neighbor Algorithms for Processing Bearing Data
1989-05-10
Near-neighbor algorithms need not be universally more cost-effective than brute force methods. While the data access time of near-neighbor techniques scales with the number of objects N better than brute force, the cost of setting up the data structure could scale worse ... for the near neighbors NN_21(i). Depending on the particular NN algorithm, the cost of accessing near neighbors for each a_i ∈ S_1 scales as either N
1976-07-30
Interface Requirements ... 3.1.1.1 Interface Block Diagram ... 3.1.1.2 Detailed Interface Definition ... 3.1.1.2.1 Subsystems ... 3.1.1.2.2 Controls & Displays ... 3.2.3.2 Navigation Brute Force ... 3.2.3.3 Cargo Brute Force ... 3.2.3.4 Sensor Brute Force ... 3.2.3.5 Controls/Displays Brute Force ... 3.2.3.6 ... MIL-STD-1553 Multiplex Data Bus, with the avionic subsystems, flight control system, the controls/displays, engine sensors, and airframe sensors.
The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images
NASA Astrophysics Data System (ADS)
Fedotova, S.; Seredin, O.; Kushnir, O.
2017-05-01
In this paper, we investigate an exact method of searching for the axis of symmetry of a binary image, based on a brute-force search among all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of image pixels produced by dividing the image along a candidate axis. The brute-force search algorithm reliably finds the axis of approximate symmetry, which can be considered ground truth, but it requires quite a lot of time to process each image. As the first part of our contribution, we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications require a computational efficiency that allows the symmetry axis to be found in real or quasi-real time. So, for the task of fast shape symmetry calculation on a common multicore PC, we elaborated another parallel program, based on the procedure suggested earlier in (Fedotova, 2016). That method takes as an initial axis the one obtained by a superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, which is considerably faster than any of the optimized brute-force methods, including the ones implemented on a supercomputer. In our experiments, for 70 percent of the cases the found axis coincides exactly with the ground-truth one, and for the remaining cases it is very close to the ground truth.
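A simplified sketch of the brute-force axis search, assuming axes pass through the centroid and are parameterized by angle only, with the Jaccard measure taken between the pixel set and its reflection (the paper's exact axis parameterization may differ):

    # Simplified brute-force symmetry search: axes through the centroid only.
    import numpy as np

    def jaccard_symmetry(binary_img, n_angles=180):
        ys, xs = np.nonzero(binary_img)
        pts = np.column_stack([xs, ys]).astype(float)
        pts -= pts.mean(axis=0)                          # centre on the centroid
        original = {tuple(p) for p in np.round(pts).astype(int)}
        best_angle, best_score = 0.0, -1.0
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            d = np.array([np.cos(theta), np.sin(theta)])     # axis direction
            refl = 2.0 * np.outer(pts @ d, d) - pts          # reflect across the axis
            mirrored = {tuple(p) for p in np.round(refl).astype(int)}
            score = len(original & mirrored) / len(original | mirrored)
            if score > best_score:
                best_angle, best_score = theta, score
        return best_angle, best_score

    img = np.zeros((64, 64), dtype=int)
    img[16:48, 24:40] = 1                                # axis-aligned rectangle
    print(jaccard_symmetry(img))                         # expect a near-perfect score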
Analysis of brute-force break-ins of a palmprint authentication system.
Kong, Adams W K; Zhang, David; Kamel, Mohamed
2006-10-01
Biometric authentication systems are widely applied because they offer inherent advantages over classical knowledge-based and token-based personal-identification approaches. This has led to the development of products using palmprints as biometric traits and their use in several real applications. However, as biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before biometric systems are massively deployed in security systems. This correspondence proposes a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system. To validate the proposed model, we have conducted a simulation. Its results demonstrate that the proposed model can accurately estimate the probability. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by sheer numerical computation, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
Probabilistic sampling of protein conformations: new hope for brute force?
Feldman, Howard J; Hogue, Christopher W V
2002-01-01
Protein structure prediction from sequence alone by "brute force" random methods is a computationally expensive problem. Estimates have suggested that it could take all the computers in the world longer than the age of the universe to compute the structure of a single 200-residue protein. Here we investigate the use of a faster version of our FOLDTRAJ probabilistic all-atom protein-structure-sampling algorithm. We have improved the method so that it is now over twenty times faster than originally reported, and capable of rapidly sampling conformational space without lattices. It uses geometrical constraints and a Lennard-Jones-type potential for self-avoidance. We have also implemented a novel method to add secondary structure-prediction information so that sampled structures contain protein-like amounts of secondary structure. In a set of 100,000 probabilistic conformers of 1VII, 1ENH, and 1PMC generated, the structures with smallest C-alpha RMSD from native are 3.95, 5.12, and 5.95 Å, respectively. Expanding this test to a set of 17 distinct protein folds, we find that all-helical structures are "hit" by brute force more frequently than beta or mixed structures. For small helical proteins or very small non-helical ones, this approach should have a "hit" close enough to detect with a good scoring function in a pool of several million conformers. By fitting the distribution of RMSDs from the native state of each of the 17 sets of conformers to the extreme value distribution, we are able to estimate the size of conformational space for each. With a 0.5 Å RMSD cutoff, the number of conformers is roughly 2^N, where N is the number of residues in the protein. This is smaller than previous estimates, indicating an average of only two possible conformations per residue when sterics are accounted for. Our method reduces the effective number of conformations available at each residue by probabilistic bias, without requiring any particular discretization of residue conformational space, and is the fastest method of its kind. With computer speeds doubling every 18 months and parallel and distributed computing becoming more practical, the brute force approach to protein structure prediction may yet have some hope in the near future. Copyright 2001 Wiley-Liss, Inc.
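A hedged sketch of how the conformational-space estimate can be set up, not the authors' exact procedure: fit the RMSD sample to an extreme value distribution, take the fitted probability of falling under the cutoff, and invert it (the RMSD data below are synthetic):

    # Sketch of the extreme-value-based size estimate; data and fit are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rmsd = rng.gumbel(loc=9.0, scale=1.5, size=100_000)   # stand-in for conformer RMSDs

    loc, scale = stats.gumbel_r.fit(rmsd)                 # fit the extreme value distribution
    p_hit = stats.gumbel_r.cdf(0.5, loc, scale)           # chance a conformer falls within 0.5 A
    if p_hit > 0:
        n_conformers = 1.0 / p_hit                        # rough size of conformational space
        print(f"estimated conformational space size ~ {n_conformers:.3g}")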
In regulatory assessments, there is a need for reliable estimates of the impacts of precursor emissions from individual sources on secondary PM2.5 (particulate matter with aerodynamic diameter less than 2.5 microns) and ozone. Three potential methods for estimating th...
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
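A minimal 2-D box-counting sketch with a crude grid-offset search standing in for the pattern search (the study works with 3-D digitizations and a true pattern-search optimizer):

    # Simplified: 2-D data and a small grid-translation search instead of pattern search.
    import numpy as np

    def box_count(points, box_size, offset):
        idx = np.floor((points + offset) / box_size).astype(int)
        return len({tuple(i) for i in idx})

    def fractal_dimension(points, box_sizes, n_offsets=5):
        counts = []
        for s in box_sizes:
            offsets = np.linspace(0.0, s, n_offsets, endpoint=False)
            best = min(box_count(points, s, np.array([ox, oy]))
                       for ox in offsets for oy in offsets)   # reduce quantization error
            counts.append(best)
        # slope of log(count) vs log(1/box_size) estimates the fractal dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    pts = rng.random((2000, 2))                           # toy point cloud in the unit square
    print(fractal_dimension(pts, box_sizes=[0.2, 0.1, 0.05, 0.025]))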
Vulnerability Analysis of the MAVLink Protocol for Command and Control of Unmanned Aircraft
2013-03-27
the cheapest computers currently on the market (the $35 Raspberry Pi [New13, Upt13]) to distribute the workload, a determined attacker would incur a ... Cost of Brute-Force) for 6,318 Raspberry Pi systems (x) at $82 per 3DR-enabled Raspberry Pi (RPCost of RasPi) [3DR13, New13] to brute-force all 3,790,800 ... NIST, 2004. [New13] Newark. Order the Raspberry Pi, November 2013. Last accessed: 19 February 2014. URL: http://www.newark.com/jsp/search
Nuclear spin imaging with hyperpolarized nuclei created by brute force method
NASA Astrophysics Data System (ADS)
Tanaka, Masayoshi; Kunimatsu, Takayuki; Fujiwara, Mamoru; Kohri, Hideki; Ohta, Takeshi; Utsuro, Masahiko; Yosoi, Masaru; Ono, Satoshi; Fukuda, Kohji; Takamatsu, Kunihiko; Ueda, Kunihiro; Didelez, Jean-P.; Prossati, Giorgio; de Waard, Arlette
2011-05-01
We have been developing a polarized HD target for particle physics at SPring-8 under the leadership of the RCNP, Osaka University, for the past 5 years. Nuclear polarization is created by means of the brute force method, which uses a high magnetic field (~17 T) and a low temperature (~10 mK). As one of the promising applications of the brute force method to the life sciences, we have started a new project, "NSI" (Nuclear Spin Imaging), where hyperpolarized nuclei are used for MRI (Magnetic Resonance Imaging). The candidate nuclei with spin ½ħ are 3He, 13C, 15N, 19F, 29Si, and 31P, which are important elements in the composition of biomolecules. Since the NMR signals from these isotopes are enhanced by orders of magnitude, the spatial resolution of the imaging would be much improved compared with the MRI in practical use so far. Another advantage of hyperpolarized MRI is that it is basically free from radiation, while the problems of radiation exposure caused by X-ray CT or PET (Positron Emission Tomography) cannot be neglected. In fact, the risk of cancer for Japanese people due to radiation exposure from these diagnoses is exceptionally high among the advanced countries. As the first step of the NSI project, we are developing a system to produce hyperpolarized 3He gas for the diagnosis of serious lung diseases, for example COPD (Chronic Obstructive Pulmonary Disease). The system employs the same 3He/4He dilution refrigerator and superconducting solenoidal coil as those used for the polarized HD target, with some modifications allowing 3He Pomeranchuk cooling and the subsequent rapid melting of the polarized solid 3He to avoid depolarization. In this report, the present and future steps of our project are outlined with some of the latest experimental results.
The role of the optimization process in illumination design
NASA Astrophysics Data System (ADS)
Gauvin, Michael A.; Jacobsen, David; Byrne, David J.
2015-07-01
This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.
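A small sketch of the two strategies named above using SciPy's brute-force grid search and Nelder-Mead (downhill simplex) routines; the merit function is a smooth stand-in, not a ray-traced illuminance metric:

    # Brute-force grid search to build a good starting point, then downhill simplex.
    import numpy as np
    from scipy import optimize

    def merit(x):
        # hypothetical smooth merit function with a minimum near (1.2, -0.7)
        return (x[0] - 1.2) ** 2 + (x[1] + 0.7) ** 2 + 0.1 * np.sin(5 * x[0])

    # brute force: evaluate the merit function on a coarse grid
    x_bf = optimize.brute(merit, ranges=((-2, 2), (-2, 2)), Ns=41, finish=None)

    # downhill simplex (Nelder-Mead), started from the brute-force point
    res = optimize.minimize(merit, x0=x_bf, method="Nelder-Mead")
    print("brute-force start:", x_bf, "refined:", res.x)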
Permeation profiles of Antibiotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez Bautista, Cesar Augusto; Gnanakaran, Sandrasegaram
Presentation describes motivation: Combating bacteria's inherent resistance; Drug development mainly uses brute force rather than rational design; Current experimental approaches lack molecular detail.
Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology
Palaparthi, Anil; Riede, Tobias
2017-01-01
Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection
2008-05-01
with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( [Σ_m Σ_n (A_mn − Ā)²] [Σ_m Σ_n (B_mn − B̄)²] ) ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
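A toy sketch of the selection criterion: for each candidate experiment, compute the Shannon entropy of the outcomes predicted by a set of probable models and keep the most informative one (the nested-sampling refinement is not reproduced; the forward model is hypothetical):

    # Entropy-based experiment selection over a 1-D parameterized experiment space.
    import numpy as np

    rng = np.random.default_rng(0)
    model_params = rng.normal(1.0, 0.3, size=50)          # posterior samples of one parameter
    candidate_x = np.linspace(0.0, 10.0, 101)             # parameterized experiments

    def predicted_outcome(theta, x):
        return np.sin(theta * x)                          # hypothetical forward model

    def outcome_entropy(x, bins=10):
        outcomes = predicted_outcome(model_params, x)
        p, _ = np.histogram(outcomes, bins=bins)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))                     # Shannon entropy of predicted outcomes

    best = max(candidate_x, key=outcome_entropy)
    print("most informative experiment location:", best)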
Strategy for reflector pattern calculation - Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.
1986-01-01
Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with a brute force FFT, as described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute force FFT.
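A schematic of the brute force FFT idea, assuming the aperture field is sampled on a regular grid and I(u,v) is approximated by a zero-padded 2-D FFT; the uniform circular aperture is a placeholder:

    # Zero-padded 2-D FFT of a sampled aperture field as a stand-in for I(u,v).
    import numpy as np

    n, pad = 128, 512
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = np.where(X**2 + Y**2 <= 1.0, 1.0, 0.0)     # uniform circular aperture (placeholder)

    field = np.zeros((pad, pad), dtype=complex)
    field[:n, :n] = aperture
    I_uv = np.fft.fftshift(np.fft.fft2(field))            # secondary pattern samples I(u,v)
    pattern_dB = 20 * np.log10(np.abs(I_uv) / np.abs(I_uv).max() + 1e-12)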
Strategy for reflector pattern calculation: Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.
1985-01-01
Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with a brute force FFT, as described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute force FFT.
Shipboard Fluid System Diagnostics Using Non-Intrusive Load Monitoring
2007-06-01
brute.s(3).data; tDPP = brute.s(3).time; FL = brute.s(4).data; tFL = brute.s(4).time; RM = brute.s(5).data; tRM = brute.s(5).time; DPF = brute.s ... s', max(tP1), files(n).name)); ylabel('Power'); axis tight; grid on; subplot(4,1,2); plot(tDPP, DPP, tDPF, DPF); ylabel('DP Gauges'); axis
2011-10-14
landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy ... experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on
Fast optimization algorithms and the cosmological constant
NASA Astrophysics Data System (ADS)
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentrations and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
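A generic illustration of the brute-force finite-difference gradients used here as the reference; the coefficient function is a stand-in, since the aeroelastic model itself is not available from the abstract:

    # Central finite-difference gradient of a hypothetical force-coefficient function.
    import numpy as np

    def lift_coefficient(p):
        # hypothetical smooth dependence on two design parameters
        return 0.8 + 0.05 * p[0] - 0.02 * p[0] * p[1] + 0.01 * p[1] ** 2

    def finite_difference_gradient(f, p, h=1e-6):
        p = np.asarray(p, dtype=float)
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = h
            grad[i] = (f(p + dp) - f(p - dp)) / (2 * h)   # central difference
        return grad

    print(finite_difference_gradient(lift_coefficient, [2.0, 5.0]))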
Grover Search and the No-Signaling Principle
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Jordan, Stephen P.
2016-09-01
Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a ``brute force'' approach.
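A sketch of the direct ("brute force") entropy estimate that serves as the baseline, computed on synthetic binary spike words; the bias corrections at issue in the paper are omitted:

    # Direct entropy estimate over temporal "words" of binned spikes (synthetic data).
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)
    spikes = (rng.random((1000, 8)) < 0.2).astype(int)    # 1000 trials, 8 time bins

    words = Counter(tuple(row) for row in spikes)         # temporal words and their counts
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    entropy_bits = -np.sum(p * np.log2(p))
    print(f"direct entropy estimate: {entropy_bits:.2f} bits per word")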
Geradts, Z J; Bijhold, J; Hermsen, R; Murtagh, F
2001-06-01
On the market several systems exist for collecting spent ammunition data for forensic investigation. These databases store images of cartridge cases and the marks on them. Image matching is used to create hit lists that show which marks on a cartridge case are most similar to those on another cartridge case. The research in this paper is focused on the different methods of feature selection and pattern recognition that can be used for optimizing the results of image matching. The images are acquired with side light for the breech face marks and with ring light for the firing pin impression. For these images, a standard way of digitizing is used. For the side light and ring light images this means that the user has to position the cartridge case in the same position according to a protocol. The positioning is important for the side light images, since the image obtained of a striation mark depends heavily on the angle of incidence of the light. In practice, it appears that the user positions the cartridge case with +/-10 degrees accuracy. We tested our algorithms using 49 cartridge cases from 19 different firearms, for which the examiner had determined which ones were shot with the same firearm. For testing, these images were mixed with a database consisting of approximately 4900 images of different calibers that were available from the Drugfire database. In cases where the registration and the light conditions among the matching pairs were good, a simple computation of the standard deviation of the subtracted gray levels delivered the best-matched images. For images that were rotated and shifted, we implemented a "brute force" way of registration: the images are translated and rotated until the minimum of the standard deviation of the difference is found. This method did not place all relevant matches in the top position, because shadows and highlights are compared in intensity; since the angle of incidence of the light gives a different intensity profile, this method is not optimal. For this reason, preprocessing of the images was required. It appeared that the third scale of the "à trous" wavelet transform gives the best results in combination with brute force, since matching the contents of the images is then less sensitive to the variation of the lighting. The problem with the brute force method is, however, that comparing the 49 cartridge cases among themselves takes over one month of computing time on a 333 MHz Pentium II computer. For this reason a faster approach was implemented: correlation in log-polar coordinates. This gave results similar to the brute force calculation, but was computed in 24 h for the complete database of 4900 images. A fast pre-selection method based on signatures was also carried out, derived from the Kanade-Lucas-Tomasi (KLT) equation. The positions of the points computed with this method are compared. In this way, 11 of the 49 images were in the top position in combination with the third scale of the à trous transform. Whether correct matches are found in the top-ranked position depends, however, on the light conditions and the prominence of the marks. All images were retrieved in the top 5% of the database. This method takes only a few minutes for the complete database, and can be optimized for comparison in seconds if the locations of the points are stored in files.
For further improvement, it is useful to have a refinement in which the user selects the areas of the cartridge case that contain the relevant marks. This is necessary if the cartridge case is damaged and marks that are not from the firearm appear on it.
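A simplified version of the brute-force registration described above: rotate and shift one image over a coarse grid and keep the transform minimizing the standard deviation of the grey-level difference. Search ranges, step sizes, and image sizes are illustrative only:

    # Brute-force rotation/translation registration by minimizing std of the difference.
    import numpy as np
    from scipy import ndimage

    def brute_force_register(ref, img, angles=range(-10, 11, 2), shifts=range(-5, 6, 2)):
        # angles in degrees, shifts in pixels; both ranges are illustrative
        best = (None, np.inf)
        for a in angles:
            rot = ndimage.rotate(img, a, reshape=False, order=1)
            for dy in shifts:
                for dx in shifts:
                    moved = ndimage.shift(rot, (dy, dx), order=1)
                    score = np.std(ref - moved)
                    if score < best[1]:
                        best = ((a, dy, dx), score)
        return best  # ((angle, dy, dx), residual std)

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    img = ndimage.shift(ndimage.rotate(ref, 4, reshape=False, order=1), (2, -2), order=1)
    print(brute_force_register(ref, img))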
TEAM: efficient two-locus epistasis tests in human genome-wide association study.
Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei
2010-06-15
As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable to single-locus study since many diseases are known to be complex traits. A brute force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotypes may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate controlling. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and that it achieves at least an order of magnitude speed-up over the brute force approach.
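A minimal illustration of the contingency-table test that TEAM updates incrementally, here evaluated directly for a single SNP pair on synthetic genotypes (the tree-based incremental update is the paper's contribution and is not shown):

    # Chi-square test on the 9 x 2 joint-genotype vs phenotype table for one SNP pair.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n = 500                                       # synthetic cohort
    snp_a = rng.integers(0, 3, n)                 # genotypes coded 0/1/2
    snp_b = rng.integers(0, 3, n)
    phenotype = rng.integers(0, 2, n)             # case/control labels

    table = np.zeros((9, 2))
    for a, b, y in zip(snp_a, snp_b, phenotype):
        table[3 * a + b, y] += 1                  # joint genotype vs phenotype counts

    chi2, p_value, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3g}")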
Reconstructing the evolution of first-row transition metal minerals by GeoDeepDive
NASA Astrophysics Data System (ADS)
Liu, C.; Peters, S. E.; Ross, I.; Golden, J. J.; Downs, R. T.; Hazen, R. M.
2016-12-01
Terrestrial mineralogy evolves as a consequence of a range of physical, chemical, and biological processes [1]. The evolution of the first-row transition metal minerals could mirror the evolution of Earth's oxidation state and life, since these elements are mostly redox-sensitive and/or play critical roles in biology. The fundamental building blocks needed to reconstruct mineral evolution are the mineral species, locality, and age data, which are typically dispersed in sentences in scientific and technical publications. These data can be tracked down in a brute-force way, i.e., by humans retrieving, reading, and recording all relevant literature. Alternatively, they can be extracted automatically by GeoDeepDive. In GeoDeepDive, scientific and technical articles from publishers including Elsevier, Wiley, USGS, SEPM, GSA and Canada Science Publishing have been parsed into a Javascript database with NLP tags. Sentences containing mineral names, locations, and ages can be recognized and extracted by user-developed applications. In a preliminary search for cobalt mineral ages, we successfully extracted 678 citations with >1000 mentions of cobalt minerals, their locations, and ages. The extracted results are in agreement with brute-force search results. What is more, GeoDeepDive provides 40 additional data points that were not recovered by the brute-force approach. The extracted mineral locality-age data suggest that the evolution of Co minerals is controlled by global supercontinent cycles, i.e., more Co minerals form during episodes of supercontinent assembly. The mineral evolution of other first-row transition elements is being investigated through GeoDeepDive. References: [1] Hazen et al. (2008) Mineral evolution. American Mineralogist, 93, 1693-1720.
Finding All Solutions to the Magic Hexagram
ERIC Educational Resources Information Center
Holland, Jason; Karabegov, Alexander
2008-01-01
In this article, a systematic approach is given for solving a magic star puzzle that usually is accomplished by trial and error or "brute force." A connection is made to the symmetries of a cube, thus the name Magic Hexahedron.
Use of EPANET solver to manage water distribution in Smart City
NASA Astrophysics Data System (ADS)
Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.
2018-02-01
This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows performing both single and cyclic simulations with a specified step for changing the values of selected process variables. The architecture of the application is shown in the paper. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
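A hedged sketch of the optimization layer: SciPy provides SLSQP and (modified) Powell methods of the kind named above; the objective below is a stand-in for an EPANET-evaluated cost, not a call into the actual application:

    # SLSQP and Powell applied to a placeholder pump-scheduling objective.
    import numpy as np
    from scipy import optimize

    def objective(pump_speeds):
        # hypothetical: energy cost plus a penalty for missing a target pressure
        energy = np.sum(pump_speeds ** 2)
        pressure = 30.0 + 10.0 * np.sum(pump_speeds)      # placeholder hydraulic response
        return energy + 5.0 * (pressure - 50.0) ** 2

    x0 = np.array([0.5, 0.5, 0.5])
    bounds = [(0.0, 1.5)] * 3

    res_slsqp = optimize.minimize(objective, x0, method="SLSQP", bounds=bounds)
    res_powell = optimize.minimize(objective, x0, method="Powell", bounds=bounds)
    print(res_slsqp.x, res_powell.x)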
Brute force absorption contrast microtomography
NASA Astrophysics Data System (ADS)
Davis, Graham R.; Mills, David
2014-09-01
In laboratory X-ray microtomography (XMT) systems, the signal-to-noise ratio (SNR) is typically determined by the X-ray exposure due to the low flux associated with microfocus X-ray tubes. As the exposure time is increased, the SNR improves up to a point where other sources of variability dominate, such as differences in the sensitivities of adjacent X-ray detector elements. Linear time-delay integration (TDI) readout averages out detector sensitivities on the critical horizontal direction and equiangular TDI also averages out the X-ray field. This allows the SNR to be increased further with increasing exposure. This has been used in dentistry to great effect, allowing subtle variations in dentine mineralisation to be visualised in 3 dimensions. It has also been used to detect ink in ancient parchments that are too damaged to physically unroll. If sufficient contrast between the ink and parchment exists, it is possible to virtually unroll the tomographic image of the scroll in order that the text can be read. Following on from this work, a feasibility test was carried out to determine if it might be possible to recover images from decaying film reels. A successful attempt was made to re-create a short film sequence from a rolled length of 16mm film using XMT. However, the "brute force" method of scaling this up to allow an entire film reel to be imaged presents a significant challenge.
Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro
2016-08-15
Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape; all of these measurements were obtained on CT images. A brute force procedure was first performed for a training dataset of 20 patients using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide search, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for exclusion of false variant identifications from the search results involving analysis of modifications mimicking single amino acid substitutions. Also, we propose a de novo based scoring scheme for assessment of identified point mutations. In the scheme, the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
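A simple illustration of the brute-force side of the variant search: enumerate every single amino acid substitution of a peptide so the expanded list can be matched against spectra; scoring and the modification-versus-mutation filtering proposed above are not reproduced:

    # Enumerate single-substitution variant peptides (illustration only, no scoring).
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def single_substitution_variants(peptide):
        variants = set()
        for i, original in enumerate(peptide):
            for aa in AMINO_ACIDS:
                if aa != original:
                    variants.add(peptide[:i] + aa + peptide[i + 1:])
        return variants

    print(len(single_substitution_variants("PEPTIDER")))   # 8 positions x 19 substitutions = 152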
Studies on a Spatialized Audio Interface for Sonar
2011-10-03
addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be...than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of a
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest or the output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified from the ranking of the total effect sensitivity indices. The results of the present research show that the brute force method is best for the wind assessment purpose and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of the accuracy of (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors see could change the wind energy field tremendously.
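A toy illustration of a brute-force first-order sensitivity estimate: the variance of the conditional mean of the output, binned on one input, divided by the total output variance. The model is a placeholder, not the lifetime energy production model of the case study:

    # Brute-force first-order sensitivity indices via conditional means (toy model).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    shape = rng.normal(2.0, 0.2, n)        # Weibull shape (placeholder distribution)
    scale = rng.normal(8.0, 1.0, n)        # Weibull scale (placeholder distribution)
    losses = rng.uniform(0.05, 0.15, n)

    output = scale ** 3 * (1.0 - losses) * (1.0 + 0.1 * shape)   # placeholder "energy"

    def first_order_index(x, y, bins=50):
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        return cond_means.var() / y.var()   # Var_x[E(y|x)] / Var(y)

    for name, x in [("shape", shape), ("scale", scale), ("losses", losses)]:
        print(name, round(first_order_index(x, output), 3))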
The End of Flat Earth Economics & the Transition to Renewable Resource Societies.
ERIC Educational Resources Information Center
Henderson, Hazel
1978-01-01
A post-industrial revolution is predicted for the future, with an accompanying shift of focus from simple, brute force technologies, based on cheap, accessible resources and energy, to a second generation of more subtle, refined technologies grounded in a much deeper understanding of biological and ecological realities. (Author/BB)
Faint Debris Detection by Particle Based Track-Before-Detect Method
NASA Astrophysics Data System (ADS)
Uetsuhara, M.; Ikoma, N.
2014-09-01
This study proposes a particle method to detect faint debris, which can hardly be seen in a single frame, from an image sequence, based on the concept of track-before-detect (TBD). The most widely used detection method is detect-before-track (DBT), which first detects target signals in each frame by distinguishing the intensity difference between foreground and background, and then associates the signals for each target between frames. DBT is capable of tracking bright targets but is limited: it must consider the presence of false signals and has difficulty recovering from false associations. On the other hand, TBD methods try to track targets without explicitly detecting the signals, followed by evaluating the goodness of each track to obtain the detection results. TBD has an advantage over DBT in detecting weak signals around the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To remove these significant drawbacks of brute-force search and a not fully automated process, this study proposes a faint debris detection algorithm based on a particle method consisting of a sequential update of the target state and a heuristic search for the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update process is implemented by a particle filter (PF). The PF is an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for the initial distribution. The EA iteratively applies propagation and likelihood evaluation of particles to the same image sequences, and the resulting set of particles is used as the initial distribution for the PF. This paper describes the algorithm of the proposed faint debris detection method. The algorithm's performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which are expected to contain a sufficient number of faint debris images. The results indicate the proposed method is capable of tracking faint debris with moderate computational costs at an operational level.
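A compact sketch of the particle-filter update at the core of such a TBD scheme, with particles carrying (x, y, vx, vy), weighted by pixel intensity at their predicted positions and resampled; the EA-based initial-state search and the size state are omitted:

    # One particle-filter step: predict, weight by image intensity, resample (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def pf_step(particles, frame, motion_noise=1.0):
        n = len(particles)
        particles[:, :2] += particles[:, 2:]                      # constant-velocity prediction
        particles[:, :2] += rng.normal(0, motion_noise, (n, 2))   # process noise
        xi = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
        yi = np.clip(particles[:, 1].astype(int), 0, frame.shape[0] - 1)
        weights = frame[yi, xi] + 1e-9                            # likelihood ~ pixel intensity
        weights /= weights.sum()
        return particles[rng.choice(n, size=n, p=weights)]        # multinomial resampling

    particles = np.column_stack([rng.uniform(0, 64, (500, 2)),    # positions
                                 rng.normal(0, 0.5, (500, 2))])   # velocities
    frame = rng.random((64, 64)) * 0.1
    frame[30:33, 40:43] += 1.0                                    # faint synthetic target
    particles = pf_step(particles, frame)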
A nonperturbative approximation for the moderate Reynolds number Navier–Stokes equations
Roper, Marcus; Brenner, Michael P.
2009-01-01
The nonlinearity of the Navier–Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier–Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes. PMID:19211800
A nonperturbative approximation for the moderate Reynolds number Navier-Stokes equations.
Roper, Marcus; Brenner, Michael P
2009-03-03
The nonlinearity of the Navier-Stokes equations makes predicting the flow of fluid around rapidly moving small bodies highly resistant to all approaches save careful experiments or brute force computation. Here, we show how a linearization of the Navier-Stokes equations captures the drag-determining features of the flow and allows simplified or analytical computation of the drag on bodies up to Reynolds number of order 100. We illustrate the utility of this linearization in 2 practical problems that normally can only be tackled with sophisticated numerical methods: understanding flow separation in the flow around a bluff body and finding drag-minimizing shapes.
A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design
NASA Technical Reports Server (NTRS)
Wallace, Mark S.
2015-01-01
The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
Dashti, Ali; Komarov, Ivan; D'Souza, Roshan M
2013-01-01
This paper presents an implementation of the brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data cloud. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multi-levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible [Formula: see text]-NNG generation for a dataset of twenty million images with 15 k dimensionality into the realm of practical possibility.
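A single-node NumPy version of the brute-force k-NNG construction for reference; the paper's contribution is the multi-GPU, multi-node scaling, which is not shown here:

    # Single-node reference: brute-force k-NNG via a full pairwise distance matrix.
    import numpy as np

    def knn_graph(points, k):
        # squared Euclidean distances via (a - b)^2 = a^2 + b^2 - 2ab
        sq = np.sum(points ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
        np.fill_diagonal(d2, np.inf)                     # exclude self-matches
        return np.argsort(d2, axis=1)[:, :k]             # indices of the k nearest neighbors

    rng = np.random.default_rng(0)
    data = rng.random((2000, 64)).astype(np.float32)     # 2000 points, 64 dimensions
    print(knn_graph(data, k=10).shape)                   # (2000, 10)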
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]
Social Epistemology, the Reason of "Reason" and the Curriculum Studies
ERIC Educational Resources Information Center
Popkewitz, Thomas S.
2014-01-01
Notwithstanding the current topoi of the Knowledge Society, a particular "fact" of modernity is that power is exercised less through brute force and more through systems of reason that order and classify what is known and acted on. This article explored the system of reason that orders and classifies what is talked about, thought and…
Managing conflicts in systems development.
Barnett, E
1997-05-01
Conflict in systems development is nothing new. It can vary in intensity, but there will always be two possible outcomes--one constructive and the other destructive. The common approach to conflict management is to draw the battle lines and apply brute force. However, there are other ways to deal with conflict that are more effective and more people oriented.
Code White: A Signed Code Protection Mechanism for Smartphones
2010-09-01
analogous to computer security is the use of antivirus (AV) software. AV software is a brute force approach to security. The software ... these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not ... 2.3.1 Antivirus and Mobile Phones ... 2.3.2
Tag SNP selection via a genetic algorithm.
Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh
2010-10-01
Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms for constructing full haplotype patterns from small available data through computational methods, the tag SNP selection problem, are convenient and attractive. This problem is proven to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find a reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm based on a brute-force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
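A minimal genetic-algorithm sketch for tag SNP selection under a simplified coverage model (a SNP counts as covered when some selected tag has r² ≥ 0.8 with it); the data, operators, and parameters are illustrative, not those of the published method:

    # Toy GA for tag SNP selection; haplotypes, fitness and operators are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ind, n_snp, n_blocks = 120, 40, 8
    blocks = rng.integers(0, 2, (n_ind, n_blocks))            # underlying haplotype blocks
    haplotypes = blocks[:, rng.integers(0, n_blocks, n_snp)].astype(float)
    flip = rng.random((n_ind, n_snp)) < 0.05                   # small amount of noise
    haplotypes[flip] = 1.0 - haplotypes[flip]
    r2 = np.corrcoef(haplotypes.T) ** 2                        # pairwise LD proxy

    def fitness(mask):
        if not mask.any():
            return -1e9
        covered = (r2[:, mask] >= 0.8).any(axis=1)             # is every SNP tagged?
        return 10 * covered.sum() - mask.sum()                 # reward coverage, penalize size

    pop = rng.random((60, n_snp)) < 0.2                        # initial random tag sets
    for _ in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-30:]]                # truncation selection
        cuts = rng.integers(1, n_snp, 30)
        children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 30][c:]])
                             for i, c in enumerate(cuts)])     # one-point crossover
        children ^= rng.random(children.shape) < 0.01          # bit-flip mutation
        pop = np.vstack([parents, children])

    best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
    print("selected tag SNPs:", np.flatnonzero(best))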
Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D
2018-02-22
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
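A short sketch of one of the metrics listed, the power spectral entropy (PSE): the normalized power spectral density is treated as a probability distribution and its Shannon entropy is computed; the signal below is synthetic:

    # Power spectral entropy of a synthetic narrowband signal in noise.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1.0e6                                         # 1 MS/s, arbitrary sample rate
    t = np.arange(4096) / fs
    signal = np.sin(2 * np.pi * 150e3 * t) + 0.1 * rng.standard_normal(t.size)

    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    pse = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # low PSE -> spectrum dominated by few tones
    print(f"power spectral entropy: {pse:.2f} bits")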
A comparison of approaches for finding minimum identifying codes on graphs
NASA Astrophysics Data System (ADS)
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult using a standard brute-force search on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored, consisting of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
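As a concrete point of reference for the brute-force baseline mentioned above, the sketch below (an illustration written for this summary, not code from the paper) enumerates vertex subsets of a small graph in order of increasing size and returns the first subset whose closed-neighborhood signatures are nonempty and pairwise distinct, i.e., a minimum identifying code. The exponential cost of this enumeration is exactly what motivates the parallel, quantum annealing, and SMT approaches.

    from itertools import combinations

    # Small example graph given as an adjacency list (undirected 6-cycle).
    adj = {0: {1, 5}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 0}}

    def closed_neighborhood(v):
        return adj[v] | {v}

    def is_identifying_code(code):
        # Every vertex must have a nonempty, unique signature N[v] ∩ C.
        signatures = {}
        for v in adj:
            sig = frozenset(closed_neighborhood(v) & code)
            if not sig or sig in signatures.values():
                return False
            signatures[v] = sig
        return True

    def minimum_identifying_code():
        vertices = list(adj)
        for size in range(1, len(vertices) + 1):         # smallest subsets first
            for subset in combinations(vertices, size):  # brute-force enumeration
                if is_identifying_code(set(subset)):
                    return set(subset)
        return None  # graphs with twin vertices admit no identifying code

    print(minimum_identifying_code())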
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2014-01-29
We demonstrate that porous graphene can efficiently separate gases according to their molecular sizes using molecular dynamics (MD) simulations. The flux sequence from the classical MD simulation is H2 > CO2 >> N2 > Ar > CH4, which generally follows the trend in the kinetic diameters. Moreover, this trend is also confirmed from the fluxes based on the computed free energy barriers for gas permeation using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.
1993-04-23
mechanisms that take into account this new reality. TERRORISM Lastly is the question of terrorism. There can be no two opinions on this most heinous crime ...the notion of an empire "essentially based on force" that had to be maintained, if necessary, "by brute force" see Suhash Chakravarty, The Raj Syndrome ...over power to the National League for Democracy (NLD) led by Aung San Suu Kyi, the daughter of Burma's independence leader, Aung San. Since then, the
Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz
NASA Astrophysics Data System (ADS)
Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.
2017-03-01
We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations of two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to replace the current brute-force parameter search with a heuristic. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass. Gaussian mass as defined in this paper is invariant under rotation and translation and is capable of describing edge, topology and shape information. Simulation results show that Gaussian mass provides a promising heuristic optimization boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.
Constraint Optimization Literature Review
2015-11-01
COPs. Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint satisfaction. Contents cover constraint satisfaction problems, constraint optimization problems, and constraint satisfaction/optimization algorithms, including brute-force search, constraint propagation, depth-first search, and local search.
Strategic Studies Quarterly. Volume 9, Number 2. Summer 2015
2015-01-01
disrupting financial markets. Among other indicators, China's already deployed and future Type 094 Jin-class nuclear ballistic missile submarines (SSBN...on agility instead of brute force reinforces traditional Chinese military thinking. Since Sun Tzu, the acme of skill has been winning without... mechanical (both political and technical) nature of digital developments. Given this, the nature of system constraints under a different future
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
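For readers unfamiliar with the trade-off in the snippet above, the following sketch (a generic illustration, not the report's OCR classifier; the feature dimensions and data are invented) contrasts exact brute-force k-nearest-neighbor search with a k-d tree query using SciPy.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    train = rng.normal(size=(5000, 16))    # stand-in feature vectors
    query = rng.normal(size=(16,))
    k = 5

    # Brute force: compute all distances to the query, keep the k smallest.
    d = np.linalg.norm(train - query, axis=1)
    brute_idx = np.argsort(d)[:k]

    # k-d tree: build once, then answer queries without touching every point.
    tree = cKDTree(train)
    _, tree_idx = tree.query(query, k=k)

    assert set(brute_idx) == set(tree_idx)  # same neighbors, different cost profile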
Zhou, Y.; Ojeda-May, P.; Nagaraju, M.; Pu, J.
2016-01-01
Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on QM/MM potential. A case study shows that brute force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path–force matching (RP–FM) has been developed. In RP–FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP–FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639
An N-body Integrator for Planetary Rings
NASA Astrophysics Data System (ADS)
Hahn, Joseph M.
2011-04-01
A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of the particle's radial distance to all streamlines. And because particles respond to smooth gravitating streamlines, rather than to discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges execute in typically ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
Competitive code-based fast palmprint identification using a set of cover trees
NASA Astrophysics Data System (ADS)
Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan
2009-06-01
A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor from among all the templates in a database. When applied on a large-scale identification system, it is often necessary to speed up the nearest-neighbor searching process. We use competitive code, which has very fast feature extraction and matching speed, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose to use a set of cover trees to facilitate the fast and accurate nearest-neighbor searching. We can use the cover tree method because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method searches for nearest neighbors faster than brute force searching.
ERIC Educational Resources Information Center
Van Name, Barry
2012-01-01
There is a battlefield where no quarter is given, no mercy shown, but not a single drop of blood is spilled. It is an arena that witnesses the bringing together of high-tech design and manufacture with the outpouring of brute force, under the remotely accessed command of some of today's brightest students. This is the world of battling robots, or…
1994-06-27
success. The key ideas behind the algorithm are: 1. Stopping when one alternative is clearly better than all the others, and 2. Focusing the search on...search algorithm has been implemented on the chess machine Hitech. En route we have developed effective techniques for: * Dealing with independence of...report describes the implementation, and the results of tests including games played against brute-force programs. The data indicate that B* Hitech is a
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
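The paper's actual objective (the angular two-point correlation functions) is not reproduced here, but the general pattern of building an integer linear program dynamically in Python and handing it to an off-the-shelf solver can be sketched with a toy assignment problem. The sketch below uses the open-source PuLP/CBC stack as a stand-in for Gurobi; the costs and capacities are invented for illustration.

    import pulp

    # Toy problem (illustrative only): assign each source to one redshift bin so
    # that a linear cost is minimized and no bin exceeds its capacity.
    sources = ["s1", "s2", "s3", "s4"]
    bins = ["z_low", "z_mid", "z_high"]
    cost = {("s1", "z_low"): 1, ("s1", "z_mid"): 3, ("s1", "z_high"): 5,
            ("s2", "z_low"): 4, ("s2", "z_mid"): 1, ("s2", "z_high"): 2,
            ("s3", "z_low"): 2, ("s3", "z_mid"): 2, ("s3", "z_high"): 1,
            ("s4", "z_low"): 3, ("s4", "z_mid"): 1, ("s4", "z_high"): 4}
    capacity = 2

    prob = pulp.LpProblem("toy_redshift_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (sources, bins), cat="Binary")

    # Objective built dynamically from the cost table.
    prob += pulp.lpSum(cost[s, b] * x[s][b] for s in sources for b in bins)

    for s in sources:                       # each source gets exactly one bin
        prob += pulp.lpSum(x[s][b] for b in bins) == 1
    for b in bins:                          # bin capacity constraint
        prob += pulp.lpSum(x[s][b] for s in sources) <= capacity

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({s: next(b for b in bins if x[s][b].value() > 0.5) for s in sources})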
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Alexakis, Alexandros
2018-02-01
The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We here propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk part of a cumulative probability function) and (ii) the measurement of rare fluctuations (the tail part of the probability function) by employing dynamics in which a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for high Reynolds numbers up to Re = 2500, where getting enough statistics by brute-force calculations is difficult.
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2) while full hierarchy searches can be as expensive as O(N^3), making analysis highly inefficient for multiple data sets with large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
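For context, the O(N^2) brute-force binary search that companion improves upon can be written in a few lines: every pair is tested for a negative two-body orbital energy. The sketch below is an illustrative reimplementation of that baseline with arbitrary test data, not the companion code itself.

    import numpy as np

    G = 6.674e-11  # SI units; any consistent unit system works

    def bound_pairs(mass, pos, vel):
        """Brute-force O(N^2) search for mutually bound particle pairs."""
        pairs = []
        n = len(mass)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(pos[i] - pos[j])
                v2 = np.sum((vel[i] - vel[j]) ** 2)
                mu = mass[i] * mass[j] / (mass[i] + mass[j])   # reduced mass
                energy = 0.5 * mu * v2 - G * mass[i] * mass[j] / r
                if energy < 0.0:                               # negative => bound
                    pairs.append((i, j))
        return pairs

    rng = np.random.default_rng(1)
    m = rng.uniform(1e10, 1e12, size=200)          # toy masses
    x = rng.normal(scale=1e5, size=(200, 3))       # toy positions
    v = rng.normal(scale=0.05, size=(200, 3))      # toy velocities
    print(len(bound_pairs(m, x, v)), "bound pairs found")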
Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain
2015-06-09
Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
NASA Astrophysics Data System (ADS)
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as a weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
NASA Astrophysics Data System (ADS)
Zhou, Xingyu; Zhuge, Qunbi; Qiu, Meng; Xiang, Meng; Zhang, Fangyuan; Wu, Baojian; Qiu, Kun; Plant, David V.
2018-02-01
We investigate the capacity improvement achieved by bandwidth variable transceivers (BVT) in meshed optical networks with cascaded ROADM filtering at fixed channel spacing, and then propose an artificial neural network (ANN)-aided provisioning scheme to select the optimal symbol rate and modulation format for the BVTs in this scenario. Compared with a fixed symbol rate transceiver with standard QAMs, it is shown by both experiments and simulations that BVTs can increase the average capacity by more than 17%. The ANN-aided BVT provisioning method uses parameters monitored from a coherent receiver and then employs a trained ANN to transform these parameters into the desired configuration. It is verified by simulation that the BVT with the proposed provisioning method can approach the upper limit of the system capacity obtained by brute-force search under various degrees of flexibility.
Aspects of warped AdS3/CFT2 correspondence
NASA Astrophysics Data System (ADS)
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole, the spacelike and the null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with $c_R = c_L = \frac{24}{G m \beta^2 \sqrt{2(21-4\beta^2)}}$.
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
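A minimal version of the underlying task can be checked numerically for the special case of a uniform field, where A = ½ B × r is a valid potential. The toy script below (written for this summary, on a collocated rather than staggered grid) verifies curl A = B with finite differences; it is not the brute-force or linear-algebra solver described above.

    import numpy as np

    Bx, By, Bz = 0.3, -1.2, 0.7              # components of the uniform field
    h = 0.1
    grid = np.arange(0.0, 2.0, h)
    x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")

    # A = 0.5 * B x r  =>  curl(A) = B for a constant field B.
    Ax = 0.5 * (By * z - Bz * y)
    Ay = 0.5 * (Bz * x - Bx * z)
    Az = 0.5 * (Bx * y - By * x)

    # Finite-difference curl; axis 0 = x, 1 = y, 2 = z because indexing="ij".
    curl_x = np.gradient(Az, h, axis=1) - np.gradient(Ay, h, axis=2)
    curl_y = np.gradient(Ax, h, axis=2) - np.gradient(Az, h, axis=0)
    curl_z = np.gradient(Ay, h, axis=0) - np.gradient(Ax, h, axis=1)

    print(np.allclose(curl_x, Bx), np.allclose(curl_y, By), np.allclose(curl_z, Bz))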
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.
1991-01-01
This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter to spacecraft attitude determination based on vector measurements. Three new normalization schemes are introduced. They are compared with one another and with the known brute-force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all four schemes.
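The brute-force normalization referred to above amounts to rescaling the estimated quaternion by its Euclidean norm after each filter update; a minimal sketch (an illustration, not the paper's filter code):

    import numpy as np

    def brute_force_normalize(q):
        """Rescale an attitude quaternion estimate to unit length."""
        return q / np.linalg.norm(q)

    q = np.array([0.71, 0.02, -0.03, 0.69])   # slightly denormalized estimate
    q_hat = brute_force_normalize(q)
    print(q_hat, np.linalg.norm(q_hat))        # norm is now 1 to machine precision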
An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks
NASA Astrophysics Data System (ADS)
Jia, F.; Lichti, D. D.
2018-05-01
Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity of reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the targeted object to be scanned is simplified into discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. The efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search within the entire workspace. The experiment environments in this paper were simulated from two buildings located on the University of Calgary campus. Compared with the "brute force" strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70% time saving.
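The stated design goal, covering all wall segments with as few viewpoints as possible, is a set-cover problem. As a simple baseline for comparison (not the paper's hierarchical strategy), a standard greedy heuristic repeatedly picks the viewpoint whose score-table entry covers the most still-uncovered segments, as in the sketch below with an invented score table.

    def greedy_viewpoints(visibility, all_segments):
        """visibility: dict viewpoint -> set of wall segments it can scan."""
        uncovered = set(all_segments)
        chosen = []
        while uncovered:
            # Pick the viewpoint covering the most still-uncovered segments.
            best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
            gain = visibility[best] & uncovered
            if not gain:                 # remaining segments cannot be covered
                break
            chosen.append(best)
            uncovered -= gain
        return chosen, uncovered

    # Toy score table: 5 candidate viewpoints, 8 wall segments.
    vis = {"P1": {1, 2, 3}, "P2": {3, 4, 5}, "P3": {5, 6},
           "P4": {6, 7, 8}, "P5": {2, 4, 6, 8}}
    plan, missed = greedy_viewpoints(vis, range(1, 9))
    print(plan, missed)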
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
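The "95% data range about the median" from brute-force Monte Carlo sampling amounts to taking percentiles of the output ensemble; the sketch below illustrates the bookkeeping with a stand-in response function (the actual quantity in the slides is the PAGOSA jet tip velocity, which is not reproduced here).

    import numpy as np

    rng = np.random.default_rng(42)
    n_samples = 100_000

    # Stand-in model: a toy "jet tip velocity" as a function of two uncertain inputs.
    a = rng.normal(1.0, 0.05, n_samples)     # e.g. an uncertain JWL-like parameter
    b = rng.uniform(0.9, 1.1, n_samples)     # e.g. an uncertain initial density
    output = 8.0 * a * np.sqrt(b)            # hypothetical response, km/s

    median = np.median(output)
    lo, hi = np.percentile(output, [2.5, 97.5])   # 95% range about the median
    print(f"median = {median:.3f}, 95% range = [{lo:.3f}, {hi:.3f}] km/s")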
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
Nonconservative dynamics in long atomic wires
NASA Astrophysics Data System (ADS)
Cunningham, Brian; Todorov, Tchavdar N.; Dundas, Daniel
2014-09-01
The effect of nonconservative current-induced forces on the ions in a defect-free metallic nanowire is investigated using both steady-state calculations and dynamical simulations. Nonconservative forces were found to have a major influence on the ion dynamics in these systems, but their role in increasing the kinetic energy of the ions decreases with increasing system length. The results illustrate the importance of nonconservative effects in short nanowires and the scaling of these effects with system size. The dependence on bias and ion mass can be understood with the help of a simple pen and paper model. This material highlights the benefit of simple preliminary steady-state calculations in anticipating aspects of brute-force dynamical simulations, and provides rule of thumb criteria for the design of stable quantum wires.
Making Classical Ground State Spin Computing Fault-Tolerant
2010-06-24
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
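The decomposition idea, in which the source spectrum, the transport through the scene, and the detector response are simulated independently and then composed, can be illustrated with coarse energy-group matrices; the numbers below are invented and the sketch is not the transport methodology described in the paper.

    import numpy as np

    # Four coarse photon energy groups (toy resolution).
    source = np.array([1.0, 0.5, 0.2, 0.05])  # hypothetical source emission spectrum

    # Green's function for transport through cargo: redistribution plus attenuation.
    transport = np.array([[0.60, 0.00, 0.00, 0.00],
                          [0.20, 0.50, 0.00, 0.00],
                          [0.10, 0.25, 0.40, 0.00],
                          [0.05, 0.10, 0.20, 0.30]])

    # Detector response matrix: incident energy -> deposited-energy channel.
    response = np.array([[0.70, 0.20, 0.05, 0.02],
                         [0.20, 0.60, 0.15, 0.05],
                         [0.05, 0.15, 0.60, 0.13],
                         [0.01, 0.05, 0.20, 0.80]])

    # Each stage is simulated once; new scenarios are composed by multiplication
    # instead of rerunning a full brute-force transport calculation.
    detector_signal = response @ (transport @ source)
    print(detector_signal)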
Human problem solving performance in a fault diagnosis task
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1978-01-01
It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
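The contrast between quasi-analytical sensitivities and the brute-force finite-difference approach can be seen on a toy objective; the function below is chosen purely for illustration and has nothing to do with the Euler-based analysis in the paper.

    import numpy as np

    def f(x):
        return x[0] ** 2 * x[1] + np.sin(x[1])

    def grad_analytic(x):
        # df/dx0 = 2*x0*x1 ;  df/dx1 = x0**2 + cos(x1)
        return np.array([2 * x[0] * x[1], x[0] ** 2 + np.cos(x[1])])

    def grad_finite_difference(x, eps=1e-6):
        # Brute-force approach: one extra function evaluation per design variable.
        g = np.zeros_like(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += eps
            g[i] = (f(xp) - f(x)) / eps
        return g

    x = np.array([1.3, 0.7])
    print(grad_analytic(x))
    print(grad_finite_difference(x))   # agrees to several significant digits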
The Trailwatcher: A Collection of Colonel Mike Malone’s Writings
1982-06-21
2000-09-21
Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer work stations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
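An exhaustive brute-force sensor selection of the kind mentioned above can be sketched as scoring every fixed-size subset of rows of a sensitivity matrix. The score used below (smallest singular value of the selected rows) is one common observability-style choice and is an assumption of this sketch, not necessarily the criterion used in the paper.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(7)
    n_sensors, n_params = 8, 3
    S = rng.normal(size=(n_sensors, n_params))   # toy sensitivity matrix

    def score(rows):
        # Larger smallest singular value => parameters better resolved by these sensors.
        return np.linalg.svd(S[list(rows)], compute_uv=False)[-1]

    k = 4                                         # desired number of sensors
    best = max(combinations(range(n_sensors), k), key=score)   # exhaustive search
    print("best sensor subset:", best, "score:", round(score(best), 3))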
Neural-network quantum state tomography
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe
2018-05-01
The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].
A linear-RBF multikernel SVM to classify big text corpora.
Romero, R; Iglesias, E L; Borrajo, L
2015-01-01
Support vector machine (SVM) is a powerful technique for classification. However, SVM is not suitable for classification of large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on the SVM and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels because they provide greater flexibility. This paper presents a multikernel SVM to manage highly dimensional data, providing an automatic parameterization with low computational cost and improving results against SVMs parameterized under a brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while the training is significantly faster than that of several other SVM classifiers.
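The brute-force parameterization that the multikernel model is compared against is essentially an exhaustive grid search over SVM hyperparameters; a minimal scikit-learn version on synthetic data (not the text corpora used in the paper) looks as follows.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Brute-force parameterization: every (C, gamma) pair is trained and cross-validated.
    grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3)
    search.fit(X_tr, y_tr)

    print(search.best_params_, round(search.score(X_te, y_te), 3))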
High-order noise filtering in nontrivial quantum logic gates.
Green, Todd; Uys, Hermann; Biercuk, Michael J
2012-07-13
Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
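SciPy ships a Jacobian-free Newton-Krylov solver that conveys the flavor of the approach; the sketch below applies it to a small artificial nonlinear residual as a stand-in for the coupled field/Vlasov residual, which is far more involved.

    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        # Toy nonlinear residual; in a kinetic solver this would couple the field
        # equations and the gridded Vlasov update for the new time level.
        r = np.empty_like(u)
        r[0] = u[0] ** 2 + u[1] - 2.0
        r[1] = np.exp(u[0]) - u[1] - 1.0
        return r

    u0 = np.array([0.5, 0.5])               # initial guess (e.g. previous time step)
    u = newton_krylov(residual, u0, f_tol=1e-10)
    print(u, residual(u))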
Performance analysis of a dual-tree algorithm for computing spatial distance histograms
Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni
2011-01-01
Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
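The brute-force baseline discussed above simply computes all pairwise distances and bins them; a direct sketch with SciPy (the dual-tree algorithm itself is not reproduced) makes the quadratic cost explicit.

    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(3)
    points = rng.uniform(0.0, 100.0, size=(2000, 3))     # e.g. particle coordinates

    # Brute force: all N(N-1)/2 pairwise distances, then a fixed-width histogram.
    distances = pdist(points)                             # O(N^2) time and memory
    bucket_width = 5.0
    edges = np.arange(0.0, distances.max() + bucket_width, bucket_width)
    sdh, _ = np.histogram(distances, bins=edges)
    print(sdh[:10])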
Security enhanced BioEncoding for protecting iris codes
NASA Astrophysics Data System (ADS)
Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya
2011-06-01
Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. Besides, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against more smart attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. Firstly, resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Secondly, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attacks. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.
Transport and imaging of brute-force 13C hyperpolarization
NASA Astrophysics Data System (ADS)
Hirsch, Matthew L.; Smith, Bryce A.; Mattingly, Mark; Goloshevsky, Artem G.; Rosay, Melanie; Kempf, James G.
2015-12-01
We demonstrate transport of hyperpolarized frozen 1-13C pyruvic acid from its site of production to a nearby facility, where a time series of 13C images was acquired from the aqueous dissolution product. Transportability is tied to the hyperpolarization (HP) method we employ, which omits radical electron species used in other approaches that would otherwise relax away the HP before reaching the imaging center. In particular, we attained 13C HP by 'brute-force', i.e., using only low temperature and high-field (e.g., T < ∼2 K and B ∼ 14 T) to pre-polarize protons to a large Boltzmann value (∼0.4% 1H polarization). After polarizing the neat, frozen sample, ejection quickly (<1 s) passed it through a low field (B < 100 G) to establish the 1H pre-polarization spin temperature on 13C via the process known as low-field thermal mixing (yielding ∼0.1% 13C polarization). By avoiding polarization agents (a.k.a. relaxation agents) that are needed to hyperpolarize by the competing method of dissolution dynamic nuclear polarization (d-DNP), the 13C relaxation time was sufficient to transport the sample for ∼10 min before finally dissolving in warm water and obtaining a 13C image of the hyperpolarized, dilute, aqueous product (∼0.01% 13C polarization, a >100-fold gain over thermal signals in the 1 T scanner). An annealing step, prior to polarizing the sample, was also key for increasing T1 ∼ 30-fold during transport. In that time, HP was maintained using only modest cryogenics and field (T ∼ 60 K and B = 1.3 T), for T1(13C) near 5 min. Much greater time and distance (with much smaller losses) may be covered using more-complete annealing and only slight improvements on transport conditions (e.g., yielding T1 ∼ 5 h at 30 K, 2 T), whereas even intercity transfer is possible (T1 > 20 h) at reasonable conditions of 6 K and 2 T. Finally, it is possible to increase the overall enhancement near d-DNP levels (i.e., 102-fold more) by polarizing below 100 mK, where nanoparticle agents are known to hasten T1 buildup by 100-fold, and to yield very little impact on T1 losses at temperatures relevant to transport.
Security Analysis and Improvements to the PsychoPass Method
2013-01-01
Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used the brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the Psychopass method is that it requires the password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performances are limited by the inhomogeneous background in the sensor field of view, which impacts strongly on the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
Security analysis and improvements to the PsychoPass method.
Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko
2013-08-13
In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. We used the brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the Psychopass method is that it requires the password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing powers. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.
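Claims like "broken only in hundreds of years" follow from dividing the size of the password space by an assumed guessing rate; the back-of-the-envelope sketch below uses an invented attacker throughput and a printable-ASCII alphabet, both assumptions of this illustration rather than figures from the paper.

    # Expected exhaustive (brute-force) search time for a random password.
    alphabet = 94                 # printable ASCII characters reachable on a keyboard
    length = 10                   # keys pressed (with SHIFT/ALT-GR combinations, see text)
    guesses_per_second = 1e9      # assumed attacker throughput (hypothetical)

    keyspace = alphabet ** length
    seconds = keyspace / (2 * guesses_per_second)   # on average, half the keyspace is tried
    years = seconds / (3600 * 24 * 365)
    print(f"keyspace = {keyspace:.2e}, expected cracking time ~ {years:.0f} years")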
Password Cracking Using Sony Playstations
NASA Astrophysics Data System (ADS)
Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet
Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
DynaGuard: Armoring Canary-Based Protections against Brute-Force Attacks
2015-12-11
[Figure residue: slowdown measured across SPEC CPU2006 benchmarks (456.hmmer, 458.sjeng, 462.libquantum, 464.h264ref, 471.omnetpp, 473.astar, 483.xalancbmk) and server applications (Apache, Nginx, PostgreSQL, SQLite, MySQL).]
The new Mobile Command Center at KSC is important addition to emergency preparedness
NASA Technical Reports Server (NTRS)
2000-01-01
Charles Street, Roger Scheidt and Robert ZiBerna, the Emergency Preparedness team at KSC, sit in the conference room inside the Mobile Command Center, a specially equipped vehicle. Nicknamed “The Brute,” it also features computer work stations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.
Shortest path problem on a grid network with unordered intermediate points
NASA Astrophysics Data System (ADS)
Saw, Veekeong; Rahman, Amirah; Eng Ong, Wen
2017-10-01
We consider a shortest path problem with a single cost factor on a grid network with unordered intermediate points. A two-stage heuristic algorithm is proposed to find a feasible solution path within a reasonable amount of time. To evaluate the performance of the proposed algorithm, computational experiments are performed on grid maps of varying size and number of intermediate points. Preliminary results for the problem are reported. Numerical comparisons against brute-force search show that the proposed algorithm consistently yields solutions that are within 10% of the optimal solution and uses significantly less computation time.
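The brute-force reference against which the heuristic is compared can be written as an exhaustive search over visiting orders of the intermediate points. The sketch below assumes an obstacle-free grid, so the distance between consecutive stops reduces to the Manhattan distance; it does not reproduce the paper's two-stage heuristic.

    from itertools import permutations

    def manhattan(a, b):
        # Shortest path length between two cells of an obstacle-free grid.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def brute_force_route(start, goal, intermediates):
        best_order, best_len = None, float("inf")
        for order in permutations(intermediates):        # O(k!) visiting orders
            stops = [start, *order, goal]
            length = sum(manhattan(p, q) for p, q in zip(stops, stops[1:]))
            if length < best_len:
                best_order, best_len = order, length
        return best_order, best_len

    start, goal = (0, 0), (9, 9)
    intermediates = [(2, 7), (5, 1), (8, 4), (3, 3)]
    print(brute_force_route(start, goal, intermediates))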
Securing Digital Audio using Complex Quadratic Map
NASA Astrophysics Data System (ADS)
Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi
2018-03-01
In this digital era, exchanging data is common and easy to do, and it is therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. We therefore need a data-securing method that is both robust and fast. One of the methods that matches all of these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). There are parameter values for which the key stream generated by the CQM passes all 15 NIST tests, which means that the key stream generated using this CQM is proven to be random. In addition, samples of the encrypted digital sound, when tested using a goodness-of-fit test, are proven to be uniform, so securing digital audio using this method is not vulnerable to frequency analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10; therefore this method is also not vulnerable to brute-force attack. Finally, the processing speed for both the encryption and decryption process is on average about 450 times faster than the digital audio duration.
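The overall scheme, generating a keystream from iterates of the complex quadratic map z_{n+1} = z_n^2 + c and combining it with the audio samples, can be sketched as below. The parameter values, the fold that keeps the toy orbit bounded in floating point, and the byte-extraction rule are all assumptions of this illustration, not the paper's construction, and the sketch is not cryptographically vetted.

    import numpy as np

    def cqm_keystream(n_bytes, z0=0.1 + 0.2j, c=-0.77 + 0.22j):
        """Toy keystream from the complex quadratic map z <- z*z + c."""
        z, out = z0, np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            z = z * z + c
            if abs(z) > 2.0:                 # fold back so the toy orbit stays bounded
                z = z / (abs(z) ** 2)
            # Derive a byte from the orbit (illustrative extraction rule only).
            out[i] = int((abs(z.real) + abs(z.imag)) * 1e6) % 256
        return out

    rng = np.random.default_rng(0)
    audio = rng.integers(0, 256, size=32, dtype=np.uint8)   # stand-in for PCM bytes
    ks = cqm_keystream(audio.size)
    cipher = audio ^ ks                      # encryption
    plain = cipher ^ ks                      # decryption recovers the original bytes
    print(np.array_equal(plain, audio))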
Development testing of large volume water sprays for warm fog dispersal
NASA Technical Reports Server (NTRS)
Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.
1986-01-01
A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.
"The Et Tu Brute Complex" Compulsive Self Betrayal
ERIC Educational Resources Information Center
Antus, Robert Lawrence
2006-01-01
In this article, the author discusses "The Et Tu Brute Complex." More specifically, this phenomenon occurs when a person, instead of supporting and befriending himself, orally condemns himself in front of other people and becomes his own worst enemy. This is a form of compulsive self-hatred. Most often, the victim of this complex is unaware of the…
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-01
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
Automatic Design of Digital Synthetic Gene Circuits
Marchisio, Mario A.; Stelling, Jörg
2011-01-01
De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
Clustering biomolecular complexes by residue contacts similarity.
Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2012-07-01
Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
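The following sketch illustrates the contact-based similarity idea with made-up coordinates: inter-chain residue contacts are collected within a distance cutoff and the fraction of contacts shared between two models is reported. The cutoff, the choice of representative coordinates, and the normalization are assumptions, not the authors' implementation.

```python
# Fraction of common contacts (FCC) between two models of the same complex.
# A "contact" here is any inter-chain residue pair whose representative
# coordinates lie within a cutoff; the FCC of model B with respect to model A
# is the fraction of A's contacts also present in B.
import numpy as np

def residue_contacts(coords_by_chain, cutoff=5.0):
    """coords_by_chain: dict chain_id -> (n_residues, 3) array of coordinates."""
    contacts, chains = set(), sorted(coords_by_chain)
    for i, ci in enumerate(chains):
        for cj in chains[i + 1:]:
            a, b = coords_by_chain[ci], coords_by_chain[cj]
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            for ri, rj in zip(*np.where(d < cutoff)):
                contacts.add((ci, int(ri), cj, int(rj)))
    return contacts

def fcc(contacts_a, contacts_b):
    return len(contacts_a & contacts_b) / max(len(contacts_a), 1)

# Toy example: two slightly different poses of a two-chain complex.
rng = np.random.default_rng(0)
model_a = {"A": rng.normal(size=(20, 3)) * 4, "B": rng.normal(size=(20, 3)) * 4 + 3}
model_b = {k: v + rng.normal(scale=0.3, size=v.shape) for k, v in model_a.items()}
print(fcc(residue_contacts(model_a), residue_contacts(model_b)))
```

Because the measure works on sets of contacts rather than fitted coordinates, it is naturally insensitive to chain permutations in symmetrical assemblies, which is the property highlighted in the abstract.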
Virtual ellipsometry on layered micro-facet surfaces.
Wang, Chi; Wilkie, Alexander; Harcuba, Petr; Novosad, Lukas
2017-09-18
Microfacet-based BRDF models are a common tool to describe light scattering from glossy surfaces. Apart from their wide-ranging applications in optics, such models also play a significant role in computer graphics for photorealistic rendering purposes. In this paper, we mainly investigate the computer graphics aspect of this technology, and present a polarisation-aware brute force simulation of light interaction with both single and multiple layered micro-facet surfaces. Such surface models are commonly used in computer graphics, but the resulting BRDF is ultimately often only approximated. Recently, there has been work to try to make these approximations more accurate, and to better understand the behaviour of existing analytical models. However, these brute force verification attempts still omitted the polarisation state of light and, as we found out, this renders them prone to mis-estimating the shape of the resulting BRDF lobe for some particular material types, such as smooth layered dielectric surfaces. For these materials, non-polarising computations can mis-estimate some areas of the resulting BRDF shape by up to 23%. But we also identified some other material types, such as dielectric layers over rough conductors, for which the difference turned out to be almost negligible. The main contribution of our work is to clearly demonstrate that the effect of polarisation is important for accurate simulation of certain material types, and that there are also other common materials for which it can apparently be ignored. As this required a BRDF simulator that we could rely on, a secondary contribution is that we went to considerable lengths to validate our software. We compare it against a state-of-the-art model from graphics, a library from optics, and also against ellipsometric measurements of real surface samples.
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, where the positive samples make up only a small minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former approach do not scale to larger datasets, whereas the latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets where the former approach fails. We also find the latter methods more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible performance of the classifier and a shorter run time compared to the brute-force method. PMID:28753613
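For context, a minimal self-contained SMOTE is sketched below; the number of synthetic samples and the neighbourhood size k are exactly the kind of knobs the paper tunes with particle swarm and bat algorithms, but the swarm tuning loop and the adaptive segmentation are omitted here.

```python
# Minimal SMOTE: create synthetic minority samples by interpolating between a
# minority point and one of its k nearest minority neighbours. The two exposed
# knobs (number of synthetic samples and k) illustrate the parameters the
# paper optimizes with meta-heuristics; the tuning itself is not shown.
import numpy as np

def smote(X_min: np.ndarray, n_synthetic: int, k: int = 5, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]          # k nearest minority neighbours
    synthetic = np.empty((n_synthetic, X_min.shape[1]))
    for i in range(n_synthetic):
        j = rng.integers(len(X_min))                   # pick a minority sample
        nb = X_min[rng.choice(neighbours[j])]          # and one of its neighbours
        lam = rng.random()                             # interpolate between them
        synthetic[i] = X_min[j] + lam * (nb - X_min[j])
    return synthetic

# Toy imbalanced data: 10 positives, rebalanced by adding 40 synthetic positives.
X_minority = np.random.default_rng(1).normal(size=(10, 4))
X_augmented = np.vstack([X_minority, smote(X_minority, n_synthetic=40, k=3)])
print(X_augmented.shape)  # (50, 4)
```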
Defect-free atomic array formation using the Hungarian matching algorithm
NASA Astrophysics Data System (ADS)
Lee, Woojun; Kim, Hyosub; Ahn, Jaewook
2017-05-01
Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7 ×7 lattice to a 3 ×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
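The assignment step itself can be reproduced with a standard Hungarian solver; the sketch below pairs hypothetical loaded atoms with a 3 × 3 target block by minimizing total travel distance using scipy's linear_sum_assignment. The collision-aware cost terms described in the paper are not included.

```python
# Pair probabilistically loaded atoms with target lattice sites so that the
# total move distance is minimal, using the Hungarian algorithm. The extra
# cost terms that penalize colliding/trespassing guide paths are omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Occupied sites of a partially loaded 7x7 lattice (atom positions, in site units).
lattice = np.array([(i, j) for i in range(7) for j in range(7)], dtype=float)
atoms = lattice[rng.choice(len(lattice), size=25, replace=False)]

# Target: the central 3x3 block.
targets = np.array([(i, j) for i in range(2, 5) for j in range(2, 5)], dtype=float)

# Cost matrix: Euclidean distance from every atom to every target site.
cost = np.linalg.norm(atoms[:, None, :] - targets[None, :, :], axis=-1)
atom_idx, target_idx = linear_sum_assignment(cost)     # optimal pairing

print("total travel distance:", cost[atom_idx, target_idx].sum())
for a, t in zip(atom_idx, target_idx):
    print(f"atom at {tuple(atoms[a])} -> site {tuple(targets[t])}")
```

Unlike a brute-force search over all pairings, the Hungarian algorithm finds the provably optimal assignment in polynomial time, which is the point made in the abstract.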
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
Computational exploration of neuron and neural network models in neurobiology.
Prinz, Astrid A
2007-01-01
The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters, such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse, influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
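A toy version of such a model database is sketched below: a two-parameter grid over a leaky integrate-and-fire neuron is brute-forced, the firing rate of every combination is stored, and the database is then queried for parameter sets that produce a target rate. The model, parameter ranges, and target behaviour are illustrative assumptions; real studies use conductance-based models with many more parameters.

```python
# Toy "model database": brute-force a 2-D grid of parameters of a leaky
# integrate-and-fire neuron, store the firing rate of every combination, and
# query the database for parameter sets yielding a target behaviour.
import itertools
import numpy as np

def firing_rate(g_leak, i_input, v_thresh=1.0, dt=1e-4, t_max=1.0, c_m=1.0):
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / c_m * (-g_leak * v + i_input)   # leaky integration
        if v >= v_thresh:                         # threshold crossing = spike
            spikes += 1
            v = 0.0                               # reset
    return spikes / t_max

g_values = np.linspace(1.0, 20.0, 10)             # leak conductance grid
i_values = np.linspace(0.5, 10.0, 10)             # input current grid
database = {(g, i): firing_rate(g, i) for g, i in itertools.product(g_values, i_values)}

# Which parameter combinations fire between 5 and 10 Hz?
functional = [(g, i) for (g, i), rate in database.items() if 5 <= rate <= 10]
print(f"{len(functional)} of {len(database)} parameter sets produce the target rate")
```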
The new Mobile Command Center at KSC is important addition to emergency preparedness
NASA Technical Reports Server (NTRS)
2000-01-01
Charles Street, part of the Emergency Preparedness team at KSC, uses a phone on the specially equipped emergency response vehicle. The vehicle, nicknamed "The Brute," serves as a mobile command center for emergency preparedness staff and other support personnel when needed. It features a conference room, computer work stations, mobile telephones and a fax machine. It also can generate power with its onboard generator. Besides being ready to respond in case of emergencies during launches, the vehicle must be ready to help address fires, security threats, chemical spills, terrorist attacks, weather damage or other critical situations that might face KSC or Cape Canaveral Air Force Station.
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome the unresolved issues in the S-FDTD (stochastic finite-difference time-domain) method for obtaining ensemble average field values, recently reported by Smith and Furse as an attempt to replace the brute-force multiple-realization (also known as Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.
Develop a solution for protecting and securing enterprise networks from malicious attacks
NASA Astrophysics Data System (ADS)
Kamuru, Harshitha; Nijim, Mais
2014-05-01
In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has huge malicious intent." A firewall acts as one measure to secure a device against unauthorized incoming traffic. There are countless computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user. From the network's perspective, there are numerous types of attack. All the attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. Juniper firewalls have the capability to protect against both types of attack. Denial of Service (DoS) attacks are one of the most well-known network security threats among brute-force attacks, largely due to the high-profile way in which they can affect networks. Over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus, namely, to cause the services running on a particular host or network to become unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack. Others flood a network or device with traffic so that no resources are left to handle legitimate traffic. Precision attacks typically involve multiple phases and often require more thought than brute-force attacks, all the way from reconnaissance to machine ownership. Before a precision attack is launched, information about the victim needs to be gathered. This information gathering typically takes the form of various types of scans to determine available hosts, networks, and ports. The hosts available on a network can be determined by ping sweeps. The available ports on a machine can be located by port scans. Screens cover a wide variety of attack traffic, as they are configured on a per-zone basis. Depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. A Juniper firewall handles traffic on a per-flow basis. We can use flows or sessions as a way to determine whether traffic attempting to traverse the firewall is legitimate. We control the state-checking components resident in a Juniper firewall by configuring "flow" settings. These settings allow state checking to be configured for various conditions on the device. Flow settings can be used to protect against TCP hijacking, and to generally ensure that the firewall performs full state processing when desired. We present a case study of an attack on a network and study the detection of the malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
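The core computation can be illustrated with a toy example: estimate each model's marginal likelihood by Monte Carlo integration over its prior and take the ratio. The sketch below uses two binomial models on the CPU rather than the LBA on a GPU; everything in it is an illustrative assumption, not the authors' code.

```python
# Bayes factor by brute-force Monte Carlo integration of the marginal
# likelihood p(data | model) = E_prior[ p(data | theta) ], shown for two toy
# binomial models of the same data set.
import numpy as np

rng = np.random.default_rng(0)
k, n = 63, 100                      # observed: 63 successes in 100 trials

def log_binom_lik(theta, k, n):
    # Binomial log-likelihood up to a constant that cancels in the ratio.
    return k * np.log(theta) + (n - k) * np.log(1 - theta)

def log_marginal_likelihood(prior_sampler, n_draws=200_000):
    theta = prior_sampler(n_draws)                      # draws from the prior
    log_lik = log_binom_lik(theta, k, n)
    m = log_lik.max()                                   # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_lik - m)))

# Model 1: theta ~ Uniform(0, 1).  Model 2: theta ~ Beta(20, 20), centred on 0.5.
log_ml_1 = log_marginal_likelihood(lambda size: rng.uniform(1e-6, 1 - 1e-6, size))
log_ml_2 = log_marginal_likelihood(lambda size: rng.beta(20, 20, size))
print("Bayes factor BF12 =", np.exp(log_ml_1 - log_ml_2))
```

The same estimator applied to the LBA requires one likelihood evaluation per prior draw per trial, which is the embarrassingly parallel workload the paper offloads to the GPU.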
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective is to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
NASA Astrophysics Data System (ADS)
Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming
2015-12-01
Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated by using a PC-based human health benefits software tool. The economic loss is assessed by using the willingness to pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than primary PM2.5 in summer.
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP 430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, this idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model that is derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve these equations. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute force look-up table based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
Pancoska, Petr; Moravek, Zdenek; Moll, Ute M
2004-01-01
Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. Melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
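The naive nested-grid brute-force fit used as one of the benchmarks can be sketched as follows, with a toy two-parameter arctangent rotation curve standing in for the full kinematic model fitted by GBKFIT.

```python
# Nested-grid brute-force fitting: evaluate chi^2 on a coarse parameter grid,
# then zoom in around the best cell and repeat. The model here is a toy
# arctangent rotation curve with two parameters (v_max, turnover radius).
import numpy as np

def model(r, v_max, r_t):
    return v_max * (2.0 / np.pi) * np.arctan(r / r_t)

rng = np.random.default_rng(1)
r_obs = np.linspace(0.5, 20, 40)
v_obs = model(r_obs, 210.0, 3.0) + rng.normal(scale=5.0, size=r_obs.size)

def nested_grid_fit(bounds, levels=4, n=25):
    (v_lo, v_hi), (r_lo, r_hi) = bounds
    for _ in range(levels):
        v_grid = np.linspace(v_lo, v_hi, n)
        r_grid = np.linspace(r_lo, r_hi, n)
        chi2 = np.array([[np.sum((v_obs - model(r_obs, v, rt)) ** 2)
                          for rt in r_grid] for v in v_grid])
        iv, ir = np.unravel_index(np.argmin(chi2), chi2.shape)
        # Shrink the search box around the current best cell before refining.
        dv, dr = (v_hi - v_lo) / n, (r_hi - r_lo) / n
        v_lo, v_hi = v_grid[iv] - dv, v_grid[iv] + dv
        r_lo, r_hi = r_grid[ir] - dr, r_grid[ir] + dr
    return v_grid[iv], r_grid[ir]

print(nested_grid_fit(bounds=((50, 400), (0.5, 10))))
```

Every grid cell is independent of the others, which is why this "naive" approach maps so well onto a GPU.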
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
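The acceleration rests on a simple identity: once the kernel is fitted by a sum of complex exponentials, the running convolution can be updated from the previous step alone instead of re-summing the whole history. The toy comparison below uses arbitrary illustrative exponents and excitation, not a fitted Earth impedance.

```python
# Recursive convolution: when the kernel is h(t) = sum_i c_i * exp(a_i * t),
# the partial sums s_i[n] = sum_m c_i * exp(a_i*(n-m)*dt) * x[m] * dt obey
# s_i[n] = exp(a_i*dt) * s_i[n-1] + c_i * dt * x[n], so each step is O(1)
# instead of the O(n) brute-force sum over the full history.
import numpy as np

dt, n_steps = 1e-9, 4000
c = np.array([1.0 + 0.5j, 0.3 - 0.2j])          # illustrative expansion coefficients
a = np.array([-2e8 + 1e8j, -5e7 - 3e7j])        # decaying complex exponents (1/s)

t = np.arange(n_steps) * dt
x = np.sin(2 * np.pi * 5e7 * t) * np.exp(-t / 1e-6)   # toy excitation waveform

# Brute-force convolution: re-sum the full history at every step.
h = (c[:, None] * np.exp(np.outer(a, t))).sum(axis=0)
y_brute = np.array([np.sum(h[: n + 1][::-1] * x[: n + 1]) * dt for n in range(n_steps)])

# Recursive update: one complex state variable per exponential term.
decay = np.exp(a * dt)
state = np.zeros_like(c)
y_rec = np.empty(n_steps, dtype=complex)
for n in range(n_steps):
    state = decay * state + c * dt * x[n]
    y_rec[n] = state.sum()

print("max abs difference:", np.abs(y_brute - y_rec).max())   # ~ round-off level
```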
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
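The essence of the approach can be sketched with a toy "model cycle": treat one simulated cycle as a map Phi(x) and hand the fixed-point residual Phi(x) - x to a matrix-free Newton-Krylov solver, instead of brute-force time-stepping until the drift disappears. The relaxation model below is a stand-in for a real ocean tracer year.

```python
# Spin-up as a fixed-point problem: the equilibrated tracer state satisfies
# Phi(x) = x, where Phi advances the tracer by one model cycle. A matrix-free
# Newton-Krylov solver finds it without thousands of brute-force cycles.
# The toy Phi below is a slow relaxation toward a target profile with a
# little mixing, standing in for an online ocean model.
import numpy as np
from scipy.optimize import newton_krylov

n = 200
target = np.sin(np.linspace(0, 3 * np.pi, n))    # "equilibrium" tracer profile
relax = 0.05                                     # slow relaxation => slow brute-force spin-up

def one_model_cycle(x):
    # One cycle mixes neighbouring cells and nudges the tracer 5% of the way
    # toward its equilibrium profile.
    mixed = 0.5 * x + 0.25 * np.roll(x, 1) + 0.25 * np.roll(x, -1)
    return mixed + relax * (target - mixed)

residual = lambda x: one_model_cycle(x) - x

x_spun_up = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
print("max drift after one more cycle:",
      np.abs(one_model_cycle(x_spun_up) - x_spun_up).max())
```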
Tinghög, Gustav; Andersson, David; Västfjäll, Daniel
2017-01-01
According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals’ deliberate and fully informed choices (i.e., option luck) while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals’ fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals (n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to larger extent (p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preference not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign individual responsibility for life outcomes, i.e., health and wealth. PMID:28424641
Tinghög, Gustav; Andersson, David; Västfjäll, Daniel
2017-01-01
According to luck egalitarianism, inequalities should be deemed fair as long as they follow from individuals' deliberate and fully informed choices (i.e., option luck) while inequalities should be deemed unfair if they follow from choices over which the individual has no control (i.e., brute luck). This study investigates if individuals' fairness preferences correspond with the luck egalitarian fairness position. More specifically, in a laboratory experiment we test how individuals choose to redistribute gains and losses that stem from option luck compared to brute luck. A two-stage experimental design with real incentives was employed. We show that individuals ( n = 226) change their action associated with re-allocation depending on the underlying conception of luck. Subjects in the brute luck treatment equalized outcomes to larger extent ( p = 0.0069). Thus, subjects redistributed a larger amount to unlucky losers and a smaller amount to lucky winners compared to equivalent choices made in the option luck treatment. The effect is less pronounced when conducting the experiment with third-party dictators, indicating that there is some self-serving bias at play. We conclude that people have fairness preference not just for outcomes, but also for how those outcomes are reached. Our findings are potentially important for understanding the role citizens assign individual responsibility for life outcomes, i.e., health and wealth.
Edge Modeling by Two Blur Parameters in Varying Contrasts.
Seo, Suyoung
2018-06-01
This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
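A minimal version of the brute-force estimation step is sketched below: an edge profile with separate dark-side and light-side blur widths is fitted by scanning a two-dimensional parameter grid for the global RMSE minimum. The piecewise-erf edge form is an illustrative assumption, not the paper's model.

```python
# Brute-force estimation of a two-blur-parameter edge profile: one blur width
# on the dark side of the edge, another on the light side, and a 2-D grid
# search for the (sigma_dark, sigma_light) pair with minimum RMSE.
import numpy as np
from scipy.special import erf

def edge_model(x, dark, light, sigma_dark, sigma_light):
    sigma = np.where(x < 0, sigma_dark, sigma_light)       # side-dependent blur
    return dark + (light - dark) * 0.5 * (1 + erf(x / (np.sqrt(2) * sigma)))

# Synthetic "observed" profile with known parameters plus noise.
x = np.linspace(-10, 10, 201)
rng = np.random.default_rng(2)
observed = edge_model(x, dark=30, light=200, sigma_dark=1.2, sigma_light=2.4)
observed += rng.normal(scale=2.0, size=x.size)

# Brute-force grid search over the two blur parameters.
grid = np.linspace(0.2, 4.0, 80)
rmse = np.array([[np.sqrt(np.mean((observed - edge_model(x, 30, 200, sd, sl)) ** 2))
                  for sl in grid] for sd in grid])
i_d, i_l = np.unravel_index(np.argmin(rmse), rmse.shape)
print("estimated (sigma_dark, sigma_light):", grid[i_d], grid[i_l])
```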
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by means of brute-force search algorithms costs too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for the optimal parameters of the IMFs in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which suggests its potential for future clinical PA imaging de-noising.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration has been fundamental to all types of hydro-system modeling since its beginnings, as a way to approximate the parameters that can mimic overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model that represents a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimizing the objective function with the different candidate optimization methods reveals a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Still others yield parameter solutions that fall outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for Brute Force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn
2016-01-01
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i. e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6 k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start with demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features as well as higher-level features related to intelligibility, obtained from an Automatic Speech Recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i. e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, which reaches up to 62.3% average recall for multi-way classification of the eating condition, i. e., discriminating the six types of food, as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to 56.2% determination coefficient. PMID:27176486
Uncovering molecular processes in crystal nucleation and growth by using molecular simulation.
Anwar, Jamshed; Zahn, Dirk
2011-02-25
Exploring nucleation processes by molecular simulation provides a mechanistic understanding at the atomic level and also enables kinetic and thermodynamic quantities to be estimated. However, whilst the potential for modeling crystal nucleation and growth processes is immense, there are specific technical challenges to modeling. In general, rare events, such as nucleation cannot be simulated using a direct "brute force" molecular dynamics approach. The limited time and length scales that are accessible by conventional molecular dynamics simulations have inspired a number of advances to tackle problems that were considered outside the scope of molecular simulation. While general insights and features could be explored from efficient generic models, new methods paved the way to realistic crystal nucleation scenarios. The association of single ions in solvent environments, the mechanisms of motif formation, ripening reactions, and the self-organization of nanocrystals can now be investigated at the molecular level. The analysis of interactions with growth-controlling additives gives a new understanding of functionalized nanocrystals and the precipitation of composite materials. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
Exhaustively sampling peptide adsorption with metadynamics.
Deighan, Michael; Pfaendtner, Jim
2013-06-25
Simulating the adsorption of a peptide or protein and obtaining quantitative estimates of thermodynamic observables remains challenging for many reasons. One reason is the dearth of molecular scale experimental data available for validating such computational models. We also lack simulation methodologies that effectively address the dual challenges of simulating protein adsorption: overcoming strong surface binding and sampling conformational changes. Unbiased classical simulations do not address either of these challenges. Previous attempts that apply enhanced sampling generally focus on only one of the two issues, leaving the other to chance or brute force computing. To improve our ability to accurately resolve adsorbed protein orientation and conformational states, we have applied the Parallel Tempering Metadynamics in the Well-Tempered Ensemble (PTMetaD-WTE) method to several explicitly solvated protein/surface systems. We simulated the adsorption behavior of two peptides, LKα14 and LKβ15, onto two self-assembled monolayer (SAM) surfaces with carboxyl and methyl terminal functionalities. PTMetaD-WTE proved effective at achieving rapid convergence of the simulations, whose results elucidated different aspects of peptide adsorption including: binding free energies, side chain orientations, and preferred conformations. We investigated how specific molecular features of the surface/protein interface change the shape of the multidimensional peptide binding free energy landscape. Additionally, we compared our enhanced sampling technique with umbrella sampling and also evaluated three commonly used molecular dynamics force fields.
Quaternion normalization in additive EKF for spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Bar-Itzhack, I. Y.; Deutschmann, J.; Markley, F. L.
1991-01-01
This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination, which is based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general and hold for attitude determination of any three dimensional body when based on vector measurements, and use an additive EKF for estimation, and the quaternion for specifying the attitude.
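For reference, the brute-force scheme referred to above amounts to rescaling the estimated quaternion to unit norm after each filter update, as in the sketch below; the two new normalization schemes introduced in the work are not reproduced here.

```python
# Brute-force quaternion normalization: after an additive EKF measurement
# update the attitude quaternion may drift slightly off the unit sphere, so
# it is simply rescaled to unit norm. Only this plain rescaling is shown.
import numpy as np

def normalize_quaternion(q: np.ndarray) -> np.ndarray:
    norm = np.linalg.norm(q)
    if norm == 0.0:
        raise ValueError("zero quaternion cannot represent an attitude")
    return q / norm

q_updated = np.array([0.71, 0.02, -0.01, 0.70])   # hypothetical post-update estimate
q_hat = normalize_quaternion(q_updated)
print(q_hat, np.linalg.norm(q_hat))               # unit-norm attitude estimate
```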
Morphodynamic data assimilation used to understand changing coasts
Plant, Nathaniel G.; Long, Joseph W.
2015-01-01
Morphodynamic data assimilation blends observations with model predictions and comes in many forms, including linear regression, Kalman filter, brute-force parameter estimation, variational assimilation, and Bayesian analysis. Importantly, data assimilation can be used to identify sources of prediction errors that lead to improved fundamental understanding. Overall, models incorporating data assimilation yield better information to the people who must make decisions impacting safety and wellbeing in coastal regions that experience hazards due to storms, sea-level rise, and erosion. We present examples of data assimilation associated with morphologic change. We conclude that enough morphodynamic predictive capability is available now to be useful to people, and that we will increase our understanding and the level of detail of our predictions through assimilation of observations and numerical-statistical models.
NASA Astrophysics Data System (ADS)
Basri, M.; Mawengkang, H.; Zamzami, E. M.
2018-03-01
Limited storage capacity is one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important, and one way to maintain them is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the final 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm, from which the DES cipher result can later be extracted and merged back into one.
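A minimal sketch of the LSB step, with a random stand-in for one DES cipher block (the DES encryption itself and the 8-image splitting scheme are not reproduced):

```python
# LSB hiding of one cipher block in an 8-bit grayscale image: each bit of the
# block replaces the least significant bit of one pixel, changing pixel values
# by at most 1. The 8-byte block is a random stand-in for a DES cipher block.
import numpy as np

def embed_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bytes: int) -> bytes:
    bits = image.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)          # random "image"
cipher_block = bytes(rng.integers(0, 256, size=8, dtype=np.uint8))   # stand-in DES block

stego = embed_lsb(cover, cipher_block)
assert extract_lsb(stego, len(cipher_block)) == cipher_block
print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```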
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients
NASA Astrophysics Data System (ADS)
Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.
2017-09-01
We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.
A Formal Algorithm for Routing Traces on a Printed Circuit Board
NASA Technical Reports Server (NTRS)
Hedgley, David R., Jr.
1996-01-01
This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by a computer by means other than brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which these solutions are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and for the first time speaks to the question eloquently by way of symbolic statements.
Zimmerman, M I; Bowman, G R
2016-01-01
Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.
Enhanced Sampling Methods for the Computation of Conformational Kinetics in Macromolecules
NASA Astrophysics Data System (ADS)
Grazioli, Gianmarc
Calculating the kinetics of conformational changes in macromolecules, such as proteins and nucleic acids, is still very much an open problem in theoretical chemistry and computational biophysics. If it were feasible to run large sets of molecular dynamics trajectories that begin in one configuration and terminate when reaching another configuration of interest, calculating kinetics from molecular dynamics simulations would be simple, but in practice, configuration spaces encompassing all possible configurations for even the simplest of macromolecules are far too vast for such a brute force approach. In fact, many problems related to searches of configuration spaces, such as protein structure prediction, are considered to be NP-hard. Two approaches to addressing this problem are to either develop methods for enhanced sampling of trajectories that confine the search to productive trajectories without loss of temporal information, or coarse-grained methodologies that recast the problem in reduced spaces that can be exhaustively searched. This thesis will begin with a description of work carried out in the vein of the second approach, where a Smoluchowski diffusion equation model was developed that accurately reproduces the rate vs. force relationship observed in the mechano-catalytic disulphide bond cleavage observed in thioredoxin-catalyzed reduction of disulphide bonds. Next, three different novel enhanced sampling methods developed in the vein of the first approach will be described, which can be employed either separately or in conjunction with each other to autonomously define a set of energetically relevant subspaces in configuration space, accelerate trajectories between the interfaces dividing the subspaces while preserving the distribution of unassisted transition times between subspaces, and approximate time correlation functions from the kinetic data collected from the transitions between interfaces.
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
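The backward construction itself is beyond a short example, but the brute-force baseline it is compared against is easy to state: integrate the forward stochastic dynamics for many particles and count the fraction that end up in a runaway state. A minimal sketch on a toy one-dimensional drift-diffusion model follows; the drift, diffusion and threshold are invented for illustration and are not the bounce-averaged Fokker-Planck coefficients of the paper.

    import numpy as np

    def runaway_probability_forward_mc(p0, drift, diffusion, p_run,
                                       t_final=3.0, dt=1e-3, n_particles=50_000,
                                       seed=0):
        """Brute-force estimate of the probability that a particle starting at
        momentum p0 exceeds the runaway threshold p_run before t_final.
        drift(p) and diffusion(p) define a toy 1D Langevin model (Euler-Maruyama)."""
        rng = np.random.default_rng(seed)
        p = np.full(n_particles, p0, dtype=float)
        alive = np.ones(n_particles, dtype=bool)      # not yet run away
        for _ in range(int(t_final / dt)):
            dW = rng.normal(0.0, np.sqrt(dt), size=alive.sum())
            p[alive] += drift(p[alive]) * dt + diffusion(p[alive]) * dW
            alive &= (p < p_run)                      # particles crossing p_run are runaways
        return 1.0 - alive.mean()

    # Toy coefficients: collisional drag plus a weak accelerating field.
    prob = runaway_probability_forward_mc(
        p0=1.0,
        drift=lambda p: 0.3 - 1.0 / (1.0 + p**2),
        diffusion=lambda p: 0.5 * np.ones_like(p),
        p_run=3.0)
    print(f"estimated runaway probability: {prob:.4f}")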
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
NASA Astrophysics Data System (ADS)
Matoza, Robin S.; Green, David N.; Le Pichon, Alexis; Shearer, Peter M.; Fee, David; Mialle, Pierrick; Ceranna, Lars
2017-04-01
We experiment with a new method to search systematically through multiyear data from the International Monitoring System (IMS) infrasound network to identify explosive volcanic eruption signals originating anywhere on Earth. Detecting, quantifying, and cataloging the global occurrence of explosive volcanism helps toward several goals in Earth sciences and has direct applications in volcanic hazard mitigation. We combine infrasound signal association across multiple stations with source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent unwanted infrasound signals (clutter) in a global grid, without needing to screen array processing detection lists from individual stations prior to association. We develop the algorithm using case studies of explosive eruptions: 2008 Kasatochi, Alaska; 2009 Sarychev Peak, Kurile Islands; and 2010 Eyjafjallajökull, Iceland. We apply the method to global IMS infrasound data from 2005-2010 to construct a preliminary acoustic catalog that emphasizes sustained explosive volcanic activity (long-duration signals or sequences of impulsive transients lasting hours to days). This work represents a step toward the goal of integrating IMS infrasound data products into global volcanic eruption early warning and notification systems. Additionally, a better understanding of volcanic signal detection and location with the IMS helps improve operational event detection, discrimination, and association capabilities.
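A schematic of the cross-bearings grid search described above is easy to write down. The sketch below scores every node of a flat 2D grid against observed back-azimuths; the station coordinates and azimuths are invented, and the clutter-prior correction and spherical (great-circle) geometry of the real algorithm are omitted.

    import numpy as np

    def locate_by_cross_bearings(stations, observed_az_deg, grid_x, grid_y):
        """Brute-force grid search: score every grid node by the summed squared
        difference between observed back-azimuths and the azimuth from each
        station to that node. stations is an (N, 2) array of (x, y) positions."""
        X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
        misfit = np.zeros_like(X)
        for (sx, sy), az_obs in zip(stations, observed_az_deg):
            az_pred = np.degrees(np.arctan2(X - sx, Y - sy)) % 360.0  # clockwise from +y ("north")
            dAz = (az_pred - az_obs + 180.0) % 360.0 - 180.0           # wrap to [-180, 180)
            misfit += dAz**2
        i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
        return grid_x[i], grid_y[j], misfit

    # Three hypothetical stations observing a source near (40, 60).
    stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    true_src = np.array([40.0, 60.0])
    obs = np.degrees(np.arctan2(true_src[0] - stations[:, 0],
                                true_src[1] - stations[:, 1])) % 360.0
    x_best, y_best, _ = locate_by_cross_bearings(stations, obs,
                                                 np.linspace(0, 100, 201),
                                                 np.linspace(0, 100, 201))
    print(x_best, y_best)   # close to (40, 60)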
Speeding Up the Bilateral Filter: A Joint Acceleration Way.
Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng
2016-06-01
Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is independent of the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Each of these techniques has its own accuracy and efficiency problems, yet previous algorithm designers typically adopted only one of them when assembling fast implementations, because combining them is difficult. Hence, no joint exploitation of these techniques had been proposed to construct a new cutting-edge implementation that overcomes these problems. Jointly employing five techniques, namely kernel truncation and best N-term approximation together with the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and can therefore draw upon their complementary strengths to overcome their individual deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing running-time efficiency.
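For reference, the brute-force bilateral filter that all of these acceleration techniques are measured against can be written down directly. The sketch below is a plain NumPy implementation for a grayscale image; the window radius and kernel widths are illustrative, not values from the paper.

    import numpy as np

    def bilateral_filter_brute_force(img, sigma_s=3.0, sigma_r=0.1, radius=None):
        """O(N * k^2) reference implementation for a grayscale image in [0, 1].
        Each output pixel is a weighted mean of its neighbours, the weights being
        the product of a spatial Gaussian and a range (intensity) Gaussian."""
        if radius is None:
            radius = int(3 * sigma_s)
        H, W = img.shape
        padded = np.pad(img, radius, mode="reflect")
        # Precompute the spatial kernel once; it depends only on the offset.
        ax = np.arange(-radius, radius + 1)
        dx, dy = np.meshgrid(ax, ax, indexing="ij")
        spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
        out = np.empty_like(img)
        for i in range(H):
            for j in range(W):
                patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
                w = spatial * rng_w
                out[i, j] = (w * patch).sum() / w.sum()
        return out

    # Example: smooth a noisy step edge while preserving the discontinuity.
    img = np.clip(np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
                  + 0.05 * np.random.default_rng(0).normal(size=(32, 32)), 0, 1)
    smoothed = bilateral_filter_brute_force(img)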
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there are several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
Smiley, CalvinJohn; Fakunle, David
The synonymy of Blackness with criminality is not a new phenomenon in America. Documented historical accounts have shown how myths, stereotypes, and racist ideologies led to discriminatory policies and court rulings that fueled racial violence in the post-Reconstruction era and culminated in the exponential increase of Black male incarceration today. Misconceptions and prejudices manufactured and disseminated through various channels such as the media included references to a "brute" image of Black males. In the 21st century, this negative imagery of Black males has frequently utilized the negative connotation of the terminology "thug." In recent years, law enforcement agencies have unreasonably used deadly force on Black males allegedly considered to be "suspects" or "persons of interest." The exploitation of these often-targeted victims' criminal records, physical appearances, or misperceived attributes has been used to justify their unlawful deaths. Despite the connection between disproportionate criminality and Black masculinity, little research has been done on how unarmed Black male victims, particularly but not exclusively at the hands of law enforcement, have been posthumously criminalized. This paper investigates the historical criminalization of Black males and its connection to contemporary unarmed victims of law enforcement. Action research methodology in the data collection process is utilized to interpret how Black male victims are portrayed by traditional mass media, particularly through the use of language, in ways that marginalize and de-victimize these individuals. This study also aims to elucidate a contemporary understanding of race relations, racism, and the plight of the Black male in a 21st-century "post-racial" America.
Horsch, Martin; Vrabec, Jadran; Bernreuther, Martin; Grottel, Sebastian; Reina, Guido; Wix, Andrea; Schaber, Karlheinz; Hasse, Hans
2008-04-28
Molecular dynamics (MD) simulation is applied to the condensation process of supersaturated vapors of methane, ethane, and carbon dioxide. Simulations of systems with up to 10^6 particles were conducted with a massively parallel MD program. This leads to reliable statistics and makes nucleation rates down to the order of 10^30 m^-3 s^-1 accessible to the direct simulation approach. Simulation results are compared to the classical nucleation theory (CNT) as well as the modification of Laaksonen, Ford, and Kulmala (LFK), which introduces a size dependence of the specific surface energy. CNT describes the nucleation of ethane and carbon dioxide excellently over the entire studied temperature range, whereas LFK provides a better approach to methane at low temperatures.
Step to improve neural cryptography against flipping attacks.
Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold
2004-12-01
Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not sufficiently secure under the regular flipping attack (RFA) and are completely insecure under the majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of the neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), and the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining the probability of such rare events using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
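The subset simulation idea is easy to sketch on a toy problem. In the example below the rare event is a standard normal coordinate exceeding a high threshold, and the conditional samples come from a simple Metropolis chain restricted to the current failure domain; the mapping through the stochastic simulation algorithm described above is not reproduced, so all numbers are purely illustrative.

    import numpy as np

    def subset_simulation(g, d, b, n=2000, p0=0.1, seed=0):
        """Estimate P(g(x) > b) for x ~ N(0, I_d) as a product of conditional
        probabilities P(g > b_k | g > b_{k-1}), with intermediate levels b_k set
        to empirical (1 - p0) quantiles. Conditional samples are generated by a
        Metropolis chain that only accepts moves staying in the failure domain."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=(n, d))
        gx = np.array([g(xi) for xi in x])
        prob = 1.0
        while True:
            bk = np.quantile(gx, 1.0 - p0)
            if bk >= b:                                 # final level reached
                return prob * np.mean(gx > b)
            prob *= p0
            seeds = x[gx > bk]                          # top-p0 fraction become chain seeds
            chains = []
            steps = int(np.ceil(n / len(seeds)))
            for s in seeds:
                cur = s.copy()
                for _ in range(steps):
                    cand = cur + rng.uniform(-1.0, 1.0, size=d)
                    # Metropolis ratio for the standard normal target ...
                    if rng.random() < min(1.0, np.exp(0.5 * (cur @ cur - cand @ cand))):
                        # ... keeping only moves that remain above the level bk.
                        if g(cand) > bk:
                            cur = cand
                    chains.append(cur.copy())
            x = np.array(chains)[:n]
            gx = np.array([g(xi) for xi in x])

    # Rare event: the first coordinate exceeds 4 (true probability about 3.2e-5).
    print(subset_simulation(lambda x: x[0], d=2, b=4.0))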
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Lary, D. J.; Gencaga, D.; Albayrak, A.; Wei, J.
2013-08-01
Measurements made by the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and by the globally distributed Aerosol Robotic Network (AERONET) are compared. Comparison of the aerosol optical depth values from the two data products shows that there are biases between them. In this paper, we present a general framework for identifying the set of variables most relevant to the observed bias; candidate factors include measurement conditions such as the solar and sensor zenith angles, the solar and sensor azimuths, scattering angles, and surface reflectivity at the various measured wavelengths. Specifically, we performed the analysis for the remote sensing Aqua-Land data set and used a machine learning technique, a neural network in this case, to perform multivariate regression between the ground truth and the training data sets. Finally, we used the mutual information between the observed and the predicted values as the measure of similarity to identify the most relevant set of variables. The search is a brute force method, since all possible combinations of variables must be considered. The computation involves a huge amount of number crunching, and we implemented it as a job-parallel program.
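The brute-force layer of such a search is straightforward to sketch: enumerate every combination of candidate predictors and score each one. The example below uses cross-validated linear regression as a stand-in for the neural-network-plus-mutual-information scoring described above; the variable names and data are invented, and each subset could be scored on a separate worker exactly as in a job-parallel setup.

    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def brute_force_variable_search(X, y, names, max_size=None):
        """Enumerate all non-empty subsets of the candidate variables, score each
        subset, and return the results sorted from best to worst."""
        n_vars = X.shape[1]
        max_size = max_size or n_vars
        results = []
        for k in range(1, max_size + 1):
            for idx in combinations(range(n_vars), k):
                score = cross_val_score(LinearRegression(), X[:, list(idx)], y,
                                        cv=5, scoring="r2").mean()
                results.append((score, [names[i] for i in idx]))
        return sorted(results, reverse=True)

    # Toy data: y depends on two of five candidate "measurement condition" variables.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 5))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=300)
    best = brute_force_variable_search(X, y, names=list("ABCDE"))[0]
    print(best)   # the best-scoring subsets all contain A and C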
Artificial immune system algorithm in VLSI circuit configuration
NASA Astrophysics Data System (ADS)
Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd
2017-08-01
In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving many constraint optimization, anomaly detection, and pattern recognition problems. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to match the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compared the artificial immune system algorithm (HNN-3SATAIS) with the brute force algorithm incorporated into a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.
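The exhaustive baseline that HNN-3SATBF is compared against amounts to trying every truth assignment of the 3-SAT instance. A minimal sketch of that brute-force check follows; the clause encoding is a generic DIMACS-style one, not the paper's Hopfield energy mapping.

    from itertools import product

    def brute_force_3sat(clauses, n_vars):
        """Try all 2^n truth assignments. Each clause is a tuple of three non-zero
        integers: +i means variable i is true, -i means variable i is negated."""
        for assignment in product([False, True], repeat=n_vars):
            def lit(l):
                return assignment[abs(l) - 1] if l > 0 else not assignment[abs(l) - 1]
            if all(any(lit(l) for l in clause) for clause in clauses):
                return assignment          # first satisfying assignment found
        return None                        # unsatisfiable

    # (x1 or ~x2 or x3) and (~x1 or x2 or ~x3) and (x2 or x3 or ~x1)
    print(brute_force_3sat([(1, -2, 3), (-1, 2, -3), (2, 3, -1)], n_vars=3))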
Thirty years since diffuse sound reflection by maximum length sequences
NASA Astrophysics Data System (ADS)
Cox, Trevor J.; D'Antonio, Peter
2005-09-01
This year celebrates the 30th anniversary of Schroeder's seminal paper on sound scattering from maximum length sequences. This paper, along with Schroeder's subsequent publication on quadratic residue diffusers, broke new ground, because they contained simple recipes for designing diffusers with known acoustic performance. So, what has happened in the intervening years? As with most areas of engineering, the room acoustic diffuser has been greatly influenced by the rise of digital computing technologies. Numerical methods have become much more powerful, and this has enabled predictions of surface scattering to greater accuracy and for larger scale surfaces than previously possible. Architecture has also gone through a revolution where the forms of buildings have become more extreme and sculptural. Acoustic diffuser designs have had to keep pace with this to produce shapes and forms that are desirable to architects. To achieve this, design methodologies have moved away from Schroeder's simple equations to brute force optimization algorithms. This paper will look back at the past development of the modern diffuser, explaining how the principles of diffuser design have been devised and revised over the decades. The paper will also look at the present state of the art, and dreams for the future.
NASA Astrophysics Data System (ADS)
Dittmar, Harro R.; Kusalik, Peter G.
2016-10-01
As shown previously, it is possible to apply configurational and kinetic thermostats simultaneously in order to induce a steady thermal flux in molecular dynamics simulations of many-particle systems. This flux appears to promote motion along potential gradients and can be utilized to enhance the sampling of ordered arrangements, i.e., it can facilitate the formation of a critical nucleus. Here we demonstrate that the same approach can be applied to molecular systems, and report a significant enhancement of the homogeneous crystal nucleation of a carbon dioxide (EPM2 model) system. Quantitative ordering effects and reduction of the particle mobilities were observed in water (TIP4P-2005 model) and carbon dioxide systems. The enhancement of the crystal nucleation of carbon dioxide was achieved with relatively small conjugate thermal fields. The effect is many orders of magnitude bigger at milder supercooling, where the forward flux sampling method was employed, than at a lower temperature that enabled brute force simulations of nucleation events. The behaviour exhibited implies that the effective free energy barrier of nucleation must have been reduced by the conjugate thermal field in line with our interpretation of previous results for atomic systems.
Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay
2014-01-01
We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.
NASA Astrophysics Data System (ADS)
Tong, Xiaojun; Cui, Minggen; Wang, Zhu
2009-07-01
A new compound two-dimensional chaotic function is designed by exploiting two one-dimensional chaotic functions that switch randomly, and the construction is used as a chaotic sequence generator whose chaoticity is proved according to Devaney's definition. The properties of compound chaotic functions are also proved rigorously. To improve robustness against differential cryptanalysis and to produce an avalanche effect, a new feedback image encryption scheme is proposed that uses the compound chaos to randomly select one of the two one-dimensional chaotic functions; a new pixel permutation and substitution method, controlled by random row and column arrays derived from the compound chaos, is designed in detail. The results of entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and key and plaintext sensitivity analysis show that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks, while accelerating encryption and achieving a higher level of security. By means of dynamical compound chaos and perturbation technology, the paper also addresses the low finite-precision problem that affects one-dimensional chaotic functions on computers.
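The paper's exact compound map is not reproduced here, but the general construction, a keystream obtained by randomly switching between two one-dimensional chaotic maps and combined with the plaintext, can be illustrated with a short sketch. The logistic and tent maps and all parameter values below are stand-ins chosen for illustration only.

    import numpy as np

    def chaotic_keystream(n_bytes, x0=0.37, y0=0.52):
        """Generate n_bytes of keystream by iterating two 1D chaotic maps
        (logistic and tent, as stand-ins for the paper's compound function) and
        switching between them according to a control sequence from a third map."""
        x, y, c = x0, y0, 0.61
        out = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            c = 3.99 * c * (1.0 - c)                  # control map decides the switch
            if c < 0.5:
                x = 3.99 * x * (1.0 - x)              # logistic map
                sample = x
            else:
                y = 1.99 * min(y, 1.0 - y)            # tent map
                sample = y
            out[i] = int(sample * 256) % 256
        return out

    def xor_cipher(data: bytes, key_x0: float) -> bytes:
        ks = chaotic_keystream(len(data), x0=key_x0)
        return bytes(b ^ int(k) for b, k in zip(data, ks))

    ciphertext = xor_cipher(b"example image bytes", key_x0=0.123456789)
    plaintext = xor_cipher(ciphertext, key_x0=0.123456789)   # same key recovers the data
    print(plaintext)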
Optimal heavy tail estimation - Part 1: Order selection
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel A.
2017-12-01
The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
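The estimation step underneath such a selector is the classical Hill estimator evaluated at a candidate order k (the number of largest values used). The sketch below sweeps k by brute force on synthetic Pareto data; the paper's simulation-based, data-adaptive rule for picking k is only indicated in the docstring, not reproduced.

    import numpy as np

    def hill_estimator(x, k):
        """Hill estimate of the tail index alpha from the k largest observations."""
        xs = np.sort(np.asarray(x))[::-1]            # descending order statistics
        logs = np.log(xs[:k]) - np.log(xs[k])        # log-spacings relative to x_(k+1)
        return 1.0 / logs.mean()

    def hill_sweep(x, k_values):
        """Brute-force sweep over candidate orders k; the data-adaptive selector of
        the paper chooses k by minimizing a simulation-estimated error, which is
        not reproduced in this sketch."""
        return {k: hill_estimator(x, k) for k in k_values}

    # Pareto-distributed sample with true alpha = 1.5.
    rng = np.random.default_rng(0)
    x = rng.pareto(1.5, size=5000) + 1.0
    print(hill_sweep(x, k_values=[50, 100, 200, 400]))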
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods address this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement is its relevance to augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, generated by projecting the 3D point cloud back to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
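The abstract stops short of the matching step itself, but the brute-force matching referred to in the title typically amounts to an exhaustive nearest-neighbour search between keypoint descriptors extracted from the real and synthetic images. The sketch below shows that generic step with a ratio test; it is not the authors' specific pipeline, and the descriptors are random stand-ins.

    import numpy as np

    def brute_force_match(desc_real, desc_synth, ratio=0.8):
        """Exhaustive nearest-neighbour matching between two descriptor sets
        (one descriptor per row), with Lowe's ratio test to reject ambiguous
        matches. Generic sketch, not the authors' exact method."""
        matches = []
        for i, d in enumerate(desc_real):
            dist = np.linalg.norm(desc_synth - d, axis=1)
            j1, j2 = np.argsort(dist)[:2]
            if dist[j1] < ratio * dist[j2]:
                matches.append((i, j1, dist[j1]))
        return matches

    # Toy example: 64-dimensional descriptors, the synthetic set is a noisy copy.
    rng = np.random.default_rng(0)
    real = rng.normal(size=(100, 64))
    synth = real + 0.05 * rng.normal(size=(100, 64))
    print(len(brute_force_match(real, synth)))   # most of the 100 points are matched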
Fast equilibration protocol for million atom systems of highly entangled linear polyethylene chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sliozberg, Yelena R.; TKC Global, Inc., Aberdeen Proving Ground, Maryland 21005; Kröger, Martin
Equilibrated systems of entangled polymer melts cannot be produced using direct brute force equilibration due to the slow reptation dynamics exhibited by high molecular weight chains. Instead, these dense systems are produced using computational techniques such as Monte Carlo-Molecular Dynamics hybrid algorithms, though the use of soft potentials has also shown promise mainly for coarse-grained polymeric systems. Through the use of soft potentials, the melt can be equilibrated via molecular dynamics at intermediate and long length scales prior to switching to a Lennard-Jones potential. We will outline two different equilibration protocols, which use various degrees of information to produce the starting configurations. In one protocol, we use only the equilibrium bond angle, bond length, and target density during the construction of the simulation cell, where the information is obtained from available experimental data and extracted from the force field without performing any prior simulation. In the second protocol, we moreover utilize the equilibrium radial distribution function and dihedral angle distribution. This information can be obtained from experimental data or from a simulation of short unentangled chains. Both methods can be used to prepare equilibrated and highly entangled systems, but the second protocol is much more computationally efficient. These systems can be strictly monodisperse or optionally polydisperse depending on the starting chain distribution. Our protocols, which utilize a soft-core harmonic potential, will be applied for the first time to equilibrate a million particle system of polyethylene chains consisting of 1000 united atoms at various temperatures. Calculations of structural and entanglement properties demonstrate that this method can be used as an alternative towards the generation of entangled equilibrium structures.
SIMBAD : a sequence-independent molecular-replacement pipeline
Simpkin, Adam J.; Simkovic, Felix; Thomas, Jens M. H.; ...
2018-06-08
The conventional approach to finding structurally similar search models for use in molecular replacement (MR) is to use the sequence of the target to search against those of a set of known structures. Sequence similarity often correlates with structure similarity. Given sufficient similarity, a known structure correctly positioned in the target cell by the MR process can provide an approximation to the unknown phases of the target. An alternative approach to identifying homologous structures suitable for MR is to exploit the measured data directly, comparing the lattice parameters or the experimentally derived structure-factor amplitudes with those of known structures. Here, SIMBAD, a new sequence-independent MR pipeline which implements these approaches, is presented. SIMBAD can identify cases of contaminant crystallization and other mishaps such as mistaken identity (swapped crystallization trays), as well as solving unsequenced targets and providing a brute-force approach where sequence-dependent search-model identification may be nontrivial, for example because of conformational diversity among identifiable homologues. The program implements a three-step pipeline to efficiently identify a suitable search model in a database of known structures. The first step performs a lattice-parameter search against the entire Protein Data Bank (PDB), rapidly determining whether or not a homologue exists in the same crystal form. The second step is designed to screen the target data for the presence of a crystallized contaminant, a not uncommon occurrence in macromolecular crystallography. Solving structures with MR in such cases can remain problematic for many years, since the search models, which are assumed to be similar to the structure of interest, are not necessarily related to the structures that have actually crystallized. To cater for this eventuality, SIMBAD rapidly screens the data against a database of known contaminant structures. Where the first two steps fail to yield a solution, a final step in SIMBAD can be invoked to perform a brute-force search of a nonredundant PDB database provided by the MoRDa MR software. Through early-access usage of SIMBAD, this approach has solved novel cases that have otherwise proved difficult to solve.
Low-field thermal mixing in [1-13C] pyruvic acid for brute-force hyperpolarization.
Peat, David T; Hirsch, Matthew L; Gadian, David G; Horsewill, Anthony J; Owers-Bradley, John R; Kempf, James G
2016-07-28
We detail the process of low-field thermal mixing (LFTM) between 1H and 13C nuclei in neat [1-13C] pyruvic acid at cryogenic temperatures (4-15 K). Using fast-field-cycling NMR, 1H nuclei in the molecule were polarized at modest high field (2 T) and then equilibrated with 13C nuclei by fast cycling (∼300-400 ms) to a low field (0-300 G) that activates thermal mixing. The 13C NMR spectrum was recorded after fast cycling back to 2 T. The 13C signal derives from 1H polarization via LFTM, in which the polarized ('cold') proton bath contacts the unpolarised ('hot') 13C bath at a field so low that Zeeman and dipolar interactions are similar-sized and fluctuations in the latter drive 1H-13C equilibration. By varying mixing time (t_mix) and field (B_mix), we determined field-dependent rates of polarization transfer (1/τ) and decay (1/T_1m) during mixing. This defines conditions for effective mixing, as utilized in 'brute-force' hyperpolarization of low-γ nuclei like 13C using Boltzmann polarization from nearby protons. For neat pyruvic acid, near-optimum mixing occurs for t_mix ∼ 100-300 ms and B_mix ∼ 30-60 G. Three forms of frozen neat pyruvic acid were tested: two glassy samples (one well-deoxygenated, the other O2-exposed) and one sample pre-treated by annealing (also well-deoxygenated). Both annealing and the presence of O2 are known to dramatically alter high-field longitudinal relaxation (T_1) of 1H and 13C (up to 10^2-10^3-fold effects). Here, we found smaller, but still critical, factors of ∼(2-5)× on both τ and T_1m. Annealed, well-deoxygenated samples exhibit the longest time constants, e.g., τ ∼ 30-70 ms and T_1m ∼ 1-20 s, each growing vs. B_mix. Mixing 'turns off' for B_mix > ∼100 G. That T_1m ≫ τ is consistent with earlier success with polarization transfer from 1H to 13C by LFTM.
Galaxy two-point covariance matrix estimation for next generation surveys
NASA Astrophysics Data System (ADS)
Howlett, Cullan; Percival, Will J.
2017-12-01
We perform a detailed analysis of the covariance matrix of the spherically averaged galaxy power spectrum and present a new, practical method for estimating this within an arbitrary survey without the need for running mock galaxy simulations that cover the full survey volume. The method uses theoretical arguments to modify the covariance matrix measured from a set of small-volume cubic galaxy simulations, which are computationally cheap to produce compared to larger simulations and match the measured small-scale galaxy clustering more accurately than is possible using theoretical modelling. We include prescriptions to analytically account for the window function of the survey, which convolves the measured covariance matrix in a non-trivial way. We also present a new method to include the effects of super-sample covariance and modes outside the small simulation volume which requires no additional simulations and still allows us to scale the covariance matrix. As validation, we compare the covariance matrix estimated using our new method to that from a brute-force calculation using 500 simulations originally created for analysis of the Sloan Digital Sky Survey Main Galaxy Sample. We find excellent agreement on all scales of interest for large-scale structure analysis, including those dominated by the effects of the survey window, and on scales where theoretical models of the clustering normally break down, but the new method produces a covariance matrix with significantly better signal-to-noise ratio. Although only formally correct in real space, we also discuss how our method can be extended to incorporate the effects of redshift space distortions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villarreal, Oscar D.; Yu, Lili; Department of Laboratory Medicine, Yancheng Vocational Institute of Health Sciences, Yancheng, Jiangsu 224006
Computing the ligand-protein binding affinity (or the Gibbs free energy) with chemical accuracy has long been a challenge for which many methods/approaches have been developed and refined with various successful applications. False positives and, even more harmful, false negatives have been and still are a common occurrence in practical applications. Inevitable in all approaches are the errors in the force field parameters we obtain from quantum mechanical computation and/or empirical fittings for the intra- and inter-molecular interactions. These errors propagate to the final results of the computed binding affinities even if we were able to perfectly implement the statistical mechanics of all the processes relevant to a given problem. And they are actually amplified to various degrees even in the mature, sophisticated computational approaches. In particular, the free energy perturbation (alchemical) approaches amplify the errors in the force field parameters because they rely on extracting the small differences between similarly large numbers. In this paper, we develop a hybrid steered molecular dynamics (hSMD) approach to the difficult binding problems of a ligand buried deep inside a protein. Sampling the transition along a physical (not alchemical) dissociation path of opening up the binding cavity, pulling out the ligand, and closing back the cavity, we can avoid the problem of error amplifications by not relying on small differences between similar numbers. We tested this new form of hSMD on retinol inside cellular retinol-binding protein 1 and three cases of a ligand (a benzylacetate, a 2-nitrothiophene, and a benzene) inside a T4 lysozyme L99A/M102Q(H) double mutant. In all cases, we obtained binding free energies in close agreement with the experimentally measured values. This indicates that the force field parameters we employed are accurate and that hSMD (a brute force, unsophisticated approach) is free from the problem of error amplification suffered by many sophisticated approaches in the literature.
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
Comparison of two laryngeal tissue fiber constitutive models
NASA Astrophysics Data System (ADS)
Hunter, Eric J.; Palaparthi, Anil Kumar Reddy; Siegmund, Thomas; Chan, Roger W.
2014-02-01
Biological tissues are complex time-dependent materials, and the best choice of the appropriate time-dependent constitutive description is not evident. This report reviews two constitutive models (a modified Kelvin model and a two-network Ogden-Boyce model) in the characterization of the passive stress-strain properties of laryngeal tissue under tensile deformation. The two models are compared, as are the automated methods for parameterization of tissue stress-strain data (a brute force vs. a common optimization method). Sensitivities (error curves) of the parameters from both models and the optimized parameter sets are calculated and contrasted by optimizing to the same tissue stress-strain data. Both models adequately characterized empirical stress-strain datasets and could be used to recreate a good likeness of the data. Nevertheless, parameters in both models were sensitive to measurement errors or uncertainties in the stress-strain data, which would greatly hinder the confidence in those parameters. The modified Kelvin model emerges as a potentially better choice for phonation models which use a tissue model as one component, or for general comparisons of the mechanical properties of one type of tissue to another (e.g., axial stress nonlinearity). In contrast, the Ogden-Boyce model would be more appropriate to provide a basic understanding of the tissue's mechanical response with better insights into the tissue's physical characteristics in terms of standard engineering metrics such as shear modulus and viscosity.
Kaija, A R; Wilmer, C E
2017-09-08
Designing better porous materials for gas storage or separations applications frequently leverages known structure-property relationships. Reliable structure-property relationships, however, only reveal themselves when adsorption data on many porous materials are aggregated and compared. Gathering enough data experimentally is prohibitively time consuming, and even approaches based on large-scale computer simulations face challenges. Brute force computational screening approaches that do not efficiently sample the space of porous materials may be ineffective when the number of possible materials is too large. Here we describe a general and efficient computational method for mapping structure-property spaces of porous materials that can be useful for adsorption related applications. We describe an algorithm that generates random porous "pseudomaterials", for which we calculate structural characteristics (e.g., surface area, pore size and void fraction) and also gas adsorption properties via molecular simulations. Here we chose to focus on void fraction and Xe adsorption at 1 bar, 5 bar, and 10 bar. The algorithm then identifies pseudomaterials with rare combinations of void fraction and Xe adsorption and mutates them to generate new pseudomaterials, thereby selectively adding data only to those parts of the structure-property map that are the least explored. Use of this method can help guide the design of new porous materials for gas storage and separations applications in the future.
Detecting rare, abnormally large grains by x-ray diffraction
Boyce, Brad L.; Furnish, Timothy Allen; Padilla, H. A.; ...
2015-07-16
Bimodal grain structures are common in many alloys, arising from a number of different causes including incomplete recrystallization and abnormal grain growth. These bimodal grain structures have important technological implications, such as the well-known Goss texture which is now a cornerstone for electrical steels. Yet our ability to detect bimodal grain distributions is largely confined to brute force cross-sectional metallography. The present study presents a new method for rapid detection of unusually large grains embedded in a sea of much finer grains. Traditional X-ray diffraction-based grain size measurement techniques such as Scherrer, Williamson–Hall, or Warren–Averbach rely on peak breadth and shape to extract information regarding the average crystallite size. However, these line broadening techniques are not well suited to identify a very small fraction of abnormally large grains. The present method utilizes statistically anomalous intensity spikes in the Bragg peak to identify regions where abnormally large grains are contributing to diffraction. This needle-in-a-haystack technique is demonstrated on a nanocrystalline Ni–Fe alloy which has undergone fatigue-induced abnormal grain growth. In this demonstration, the technique readily identifies a few large grains that occupy <0.00001 % of the interrogation volume. Finally, while the technique is demonstrated in the current study on nanocrystalline metal, it would likely apply to any bimodal polycrystal including ultrafine grained and fine microcrystalline materials with sufficiently distinct bimodal grain statistics.
Sufi, Fahim; Khalil, Ibrahim
2009-04-01
With cardiovascular disease as the number one killer of the modern era, the Electrocardiogram (ECG) is collected, stored and transmitted with greater frequency than ever before. However, in reality, ECG is rarely transmitted and stored in a secure manner. Recent research shows that an eavesdropper can reveal the identity and cardiovascular condition from an intercepted ECG. Therefore, ECG data must be anonymized before transmission over the network and also stored as such in medical repositories. To achieve this, first of all, this paper presents a new ECG feature detection mechanism, which was compared against existing cross correlation (CC) based template matching algorithms. Two types of CC methods were used for comparison. Compared to the CC based approaches, which had 40% and 53% misclassification rates, the proposed detection algorithm did not produce a single misclassification. Secondly, a new ECG obfuscation method was designed and implemented on 15 subjects using added noise corresponding to each of the ECG features. This obfuscated ECG can be freely distributed over the internet without the necessity of encryption, since the original features needed to identify personal information of the patient remain concealed. Only authorized personnel possessing a secret key will be able to reconstruct the original ECG from the obfuscated ECG. The distributed obfuscated ECG would appear as a regular ECG without encryption. Therefore, traditional decryption techniques, including powerful brute force attacks, are useless against this obfuscation.
RCNP Project on Polarized 3He Ion Sources - From Optical Pumping to Cryogenic Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, M.; Inomata, T.; Takahashi, Y.
2009-08-04
A polarized 3He ion source has been developed at RCNP for intermediate and high energy spin physics. Though we started with an OPPIS (Optical Pumping Polarized Ion Source), it could not provide a highly polarized 3He beam because of fundamental difficulties. Following this unsuccessful result, we examined novel types of polarized 3He ion sources, i.e., an EPPIS (Electron Pumping Polarized Ion Source) and an ECRPIS (ECR Polarized Ion Source), experimentally and theoretically, respectively. However, the attainable 3He polarization degrees and beam intensities were still insufficient for practical use. A few years later, we proposed a new idea for the polarized 3He ion source, SEPIS (Spin Exchange Polarized Ion Source), which is based on enhanced spin-exchange cross sections at low incident energies for 3He+ + Rb, and its feasibility was experimentally examined. Recently, we started a project on polarized 3He gas generated by the brute force method with low temperature (~4 mK) and strong magnetic field (~17 T), followed by rapid melting of the highly polarized solid 3He and gasification. When this project is successful, highly polarized 3He gas will hopefully be used for a new type of polarized 3He ion source.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
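A simplified, serial sketch of the cell-by-cell exploration described above follows: start from a seed cell, always expand the most likely unexpanded cell next, and stop expanding cells that fall more than a fixed number of log-likelihood units below the best cell found. The grid, likelihood and threshold are toy choices, and the real code additionally parallelizes the cell evaluations.

    import heapq
    import itertools

    import numpy as np

    def snake_map(loglike, start_idx, grid_axes, drop=10.0):
        """Map out a gridded log-likelihood cell by cell in order of decreasing
        likelihood, starting from start_idx, and stop expanding once a cell falls
        more than `drop` log-units below the best cell seen. Returns a dict
        {grid index: log-likelihood}. Serial, simplified sketch of the idea."""
        dim = len(grid_axes)
        shape = tuple(len(a) for a in grid_axes)

        def point(idx):
            return np.array([grid_axes[d][idx[d]] for d in range(dim)])

        visited = {start_idx: loglike(point(start_idx))}
        best = visited[start_idx]
        heap = [(-visited[start_idx], start_idx)]
        while heap:
            neg_ll, idx = heapq.heappop(heap)
            if -neg_ll < best - drop:
                continue                                # below threshold: do not expand
            best = max(best, -neg_ll)
            for d, step in itertools.product(range(dim), (-1, 1)):
                nb = list(idx)
                nb[d] += step
                nb = tuple(nb)
                if all(0 <= nb[k] < shape[k] for k in range(dim)) and nb not in visited:
                    visited[nb] = loglike(point(nb))
                    heapq.heappush(heap, (-visited[nb], nb))
        return visited

    # Toy 2D Gaussian likelihood on a 101 x 101 grid.
    axes = [np.linspace(-5, 5, 101)] * 2
    cells = snake_map(lambda p: -0.5 * p @ p, start_idx=(50, 50), grid_axes=axes)
    print(len(cells), "of", 101 * 101, "cells evaluated")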
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε, where ε ≫ n^-1. Thus, the possibility of brute-force multiplication of P by a column vector is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
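The first (Markov chain Monte Carlo) idea can be illustrated with a tiny random-walk estimator: instead of multiplying p^T by P, run many short walks on the graph and count where they end up. This is a generic restart-walk sketch, not the authors' exact algorithm or bounds; the graph and parameters are invented.

    import numpy as np

    def pagerank_mc(adj_lists, n_nodes, n_walks=20000, damping=0.85, seed=0):
        """Estimate the PageRank vector by simulating random walks with restarts.
        adj_lists[u] is the list of out-neighbours of node u; each walk follows a
        random out-link with probability `damping`, otherwise it terminates, and
        the distribution of walk endpoints approximates the PageRank vector."""
        rng = np.random.default_rng(seed)
        counts = np.zeros(n_nodes)
        for _ in range(n_walks):
            u = rng.integers(n_nodes)                 # restart at a uniformly random node
            while rng.random() < damping:
                nbrs = adj_lists[u]
                if not nbrs:                          # dangling node: jump uniformly
                    u = rng.integers(n_nodes)
                else:
                    u = nbrs[rng.integers(len(nbrs))]
            counts[u] += 1
        return counts / counts.sum()

    # Small 4-node example: node 0 receives links from all other nodes.
    adj = [[1], [0, 2], [0], [0]]
    print(pagerank_mc(adj, n_nodes=4))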
NASA Astrophysics Data System (ADS)
Zermeño, Víctor M. R.; Habelok, Krzysztof; Stępień, Mariusz; Grilli, Francesco
2017-03-01
The estimation of the critical current (I_c) and AC losses of high-temperature superconductor devices through modeling and simulation requires the knowledge of the critical current density (J_c) of the superconducting material. This J_c is in general not constant and depends both on the magnitude (B_loc) and the direction (θ, relative to the tape) of the local magnetic flux density. In principle, J_c(B_loc, θ) can be obtained from the experimentally measured critical current I_c(B_a, θ), where B_a is the magnitude of the applied magnetic field. However, for applications where the superconducting materials experience a local field that is close to the self-field of an isolated conductor, obtaining J_c(B_loc, θ) from I_c(B_a, θ) is not a trivial task. It is necessary to solve an inverse problem to correct for the contribution derived from the self-field. The methods presented in the literature comprise a series of approaches dealing with different degrees of mathematical regularization to fit the parameters of preconceived nonlinear formulas by means of brute force or optimization methods. In this contribution, we present a parameter-free method that provides excellent reproduction of experimental data and requires no human interaction or preconception of the J_c dependence with respect to the magnetic field. In particular, it allows going from the experimental data to a ready-to-run J_c(B_loc, θ) model in a few minutes.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
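The brute-force reference used above is conceptually simple: BME is the prior-weighted average of the likelihood, so it can be estimated by drawing parameter samples from the prior and averaging their likelihoods. The sketch below does this for a made-up one-parameter model (the data, prior and sample size are illustrative); the very large number of samples typically needed for a stable estimate is exactly why the cheaper ICs are attractive.

    import numpy as np
    from scipy import stats

    def bme_brute_force_mc(log_likelihood, prior_sampler, n_samples=50_000, seed=0):
        """Brute-force Monte Carlo estimate of Bayesian model evidence:
        BME = E_prior[L(theta)], computed with a log-sum-exp for stability."""
        rng = np.random.default_rng(seed)
        theta = prior_sampler(rng, n_samples)
        ll = np.array([log_likelihood(t) for t in theta])
        return np.exp(np.logaddexp.reduce(ll) - np.log(n_samples))   # mean of exp(ll)

    # Toy model: y ~ N(theta, 1), five observations, prior theta ~ N(0, 2^2).
    y = np.array([0.3, -0.1, 0.5, 0.2, 0.0])
    log_like = lambda t: stats.norm.logpdf(y, loc=t, scale=1.0).sum()
    prior = lambda rng, n: rng.normal(0.0, 2.0, size=n)
    print(bme_brute_force_mc(log_like, prior))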
Bayesian Model Selection in Geophysics: The evidence
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2016-12-01
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet it is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method and the Laplace-Metropolis method.
Crystal nucleation and metastable bcc phase in charged colloids: A molecular dynamics study
NASA Astrophysics Data System (ADS)
Ji, Xinqiang; Sun, Zhiwei; Ouyang, Wenze; Xu, Shenghua
2018-05-01
The dynamic process of homogenous nucleation in charged colloids is investigated by brute-force molecular dynamics simulation. To check if the liquid-solid transition will pass through metastable bcc, simulations are performed at the state points that definitely lie in the phase region of thermodynamically stable fcc. The simulation results confirm that, in all of these cases, the preordered precursors, acting as the seeds of nucleation, always have predominant bcc symmetry consistent with Ostwald's step rule and the Alexander-McTague mechanism. However, the polymorph selection is not straightforward because the crystal structures formed are not often determined by the symmetry of intermediate precursors but have different characters under different state points. The region of the state point where bcc crystal structures of large enough size are formed during crystallization is narrow, which gives a reasonable explanation as to why the metastable bcc phase in charged colloidal suspensions is rarely detected in macroscopic experiments.
Ab Initio Effective Rovibrational Hamiltonians for Non-Rigid Molecules via Curvilinear VMP2
NASA Astrophysics Data System (ADS)
Changala, Bryan; Baraban, Joshua H.
2017-06-01
Accurate predictions of spectroscopic constants for non-rigid molecules are particularly challenging for ab initio theory. For all but the smallest systems, ``brute force'' diagonalization of the full rovibrational Hamiltonian is computationally prohibitive, leaving us at the mercy of perturbative approaches. However, standard perturbative techniques, such as second order vibrational perturbation theory (VPT2), are based on the approximation that a molecule makes small amplitude vibrations about a well defined equilibrium structure. Such assumptions are physically inappropriate for non-rigid systems. In this talk, we will describe extensions to curvilinear vibrational Møller-Plesset perturbation theory (VMP2) that account for rotational and rovibrational effects in the molecular Hamiltonian. Through several examples, we will show that this approach provides predictions to nearly microwave accuracy of molecular constants including rotational and centrifugal distortion parameters, Coriolis coupling constants, and anharmonic vibrational and tunneling frequencies.
Chemical reaction mechanisms in solution from brute force computational Arrhenius plots.
Kazemi, Masoud; Åqvist, Johan
2015-06-01
Decomposition of activation free energies of chemical reactions into enthalpic and entropic components can provide invaluable signatures of mechanistic pathways both in solution and in enzymes. Owing to the large number of degrees of freedom involved in such condensed-phase reactions, the extensive configurational sampling needed for reliable entropy estimates is still beyond the scope of quantum chemical calculations. Here we show, for the hydrolytic deamination of cytidine and dihydrocytidine in water, how direct computer simulations of the temperature dependence of free energy profiles can be used to extract very accurate thermodynamic activation parameters. The simulations are based on empirical valence bond models, and we demonstrate that the energetics obtained is insensitive to whether these are calibrated by quantum mechanical calculations or experimental data. The thermodynamic activation parameters are in remarkable agreement with experimental results and allow discrimination among alternative mechanisms, as well as rationalization of their different activation enthalpies and entropies.
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
Unsteady flow sensing and optimal sensor placement using machine learning
NASA Astrophysics Data System (ADS)
Semaan, Richard
2016-11-01
Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be used beyond its typical role in fluid mechanics of estimating the flow state, namely to determine the optimal sensor placement. The results are compared against the current de facto standard of the maximum modal amplitude location and against a brute force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of flow and to determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
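As a loose illustration of the idea (hypothetical snapshot data, scikit-learn as a stand-in for the authors' implementation), candidate sensor positions can be ranked by the importance a random forest assigns them when predicting the separation state:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical data: rows are flow snapshots, columns are candidate sensor locations
    # (e.g. surface pressure probes); y labels the separation state of each snapshot.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))                            # 500 snapshots, 40 candidate sensors
    y = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)            # toy "separated / attached" label

    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranking = np.argsort(forest.feature_importances_)[::-1]   # most informative sensors first
    print("Best sensor candidates:", ranking[:5])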
Decision and function problems based on boson sampling
NASA Astrophysics Data System (ADS)
Nikolopoulos, Georgios M.; Brougham, Thomas
2016-07-01
Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.
An investigation of school violence through Turkish children's drawings.
Yurtal, Filiz; Artut, Kazim
2010-01-01
This study investigates Turkish children's perception of violence in school as represented through drawings and narratives. In all, 66 students (12 to 13 years old) from the middle socioeconomic class participated. To elicit children's perception of violence, they were asked to draw a picture of a violent incident they had heard of, experienced, or witnessed. Children mostly drew pictures of violent events among children (33 pictures). Also, there were pictures of violent incidents perpetrated by teachers and directors against children. It was observed that violence influenced children. Violence was mostly depicted in school gardens (38 pictures), but there were violent incidents everywhere, such as in classrooms, corridors, and school stores as well. Moreover, it was found that brute force was the most frequently depicted form of violence in the children's drawings (38 pictures). In conclusion, children clearly indicated that there was violence in schools and they were affected by it.
Advances in atmospheric light scattering theory and remote-sensing techniques
NASA Astrophysics Data System (ADS)
Videen, Gorden; Sun, Wenbo; Gong, Wei
2017-02-01
This issue focuses especially on characterizing particles in the Earth-atmosphere system. The significant role of aerosol particles in this system was recognized in the mid-1970s [1]. Since that time, our appreciation for the role they play has only increased. It has been and continues to be one of the greatest unknown factors in the Earth-atmosphere system as evidenced by the most recent Intergovernmental Panel on Climate Change (IPCC) assessments [2]. With increased computational capabilities, in terms of both advanced algorithms and in brute-force computational power, more researchers have the tools available to address different aspects of the role of aerosols in the atmosphere. In this issue, we focus on recent advances in this topical area, especially the role of light scattering and remote sensing. This issue follows on the heels of four previous topical issues on this subject matter that have graced the pages of this journal [3-6].
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial and transform domains. First, the low-pass subband coefficients of the image DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy practical requirements.
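A minimal sketch of the spatial-domain diffusion step, using a 1D logistic-map keystream XORed with the pixel bytes (parameters and the stand-in image are illustrative; the paper's scheme additionally permutes DWT coefficients with a PWLCM system and uses a 2D Logistic map):

    import numpy as np

    def logistic_keystream(n, x0=0.3456, r=3.99):
        """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def xor_diffuse(image, x0=0.3456, r=3.99):
        """XOR every pixel with the chaotic keystream; applying it twice decrypts."""
        flat = image.astype(np.uint8).ravel()
        return (flat ^ logistic_keystream(flat.size, x0, r)).reshape(image.shape)

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # stand-in image
    cipher = xor_diffuse(img)
    assert np.array_equal(xor_diffuse(cipher), img)              # XOR is its own inverse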
A one-time pad color image cryptosystem based on SHA-3 and multiple chaotic systems
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Wang, Siwei; Zhang, Yingqian; Luo, Chao
2018-04-01
A novel image encryption algorithm is proposed that combines the SHA-3 hash function and two chaotic systems: the hyper-chaotic Lorenz and Chen systems. First, 384 bit keystream hash values are obtained by applying SHA-3 to plaintext. The sensitivity of the SHA-3 algorithm and chaotic systems ensures the effect of a one-time pad. Second, the color image is expanded into three-dimensional space. During permutation, it undergoes plane-plane displacements in the x, y and z dimensions. During diffusion, we use the adjacent pixel dataset and corresponding chaotic value to encrypt each pixel. Finally, the structure of alternating between permutation and diffusion is applied to enhance the level of security. Furthermore, we design techniques to improve the algorithm's encryption speed. Our experimental simulations show that the proposed cryptosystem achieves excellent encryption performance and can resist brute-force, statistical, and chosen-plaintext attacks.
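The SHA-3 step can be illustrated with Python's standard hashlib: the 384-bit digest of the plaintext is split into words that seed the chaotic systems, so any change to the plaintext changes the entire keystream. This is a sketch of the key-derivation idea only, not the full permutation-diffusion cipher:

    import hashlib
    import numpy as np

    def derive_chaos_seeds(plaintext_bytes):
        """Map the SHA3-384 digest of the plaintext to six seeds in [0, 1)
        that can initialize the hyper-chaotic Lorenz and Chen systems."""
        digest = hashlib.sha3_384(plaintext_bytes).digest()               # 48 bytes = 384 bits
        words = [int.from_bytes(digest[i:i + 8], "big") for i in range(0, 48, 8)]
        return [w / 2**64 for w in words]

    img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)          # stand-in color image
    print(derive_chaos_seeds(img.tobytes()))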
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
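The brute-force enumeration baseline can be sketched with statsmodels, whose ARIMA fit also uses Kalman-filter maximum likelihood; the series and the order range below are stand-ins, not the paper's data sets:

    import itertools
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Stand-in stationary series: an AR(2) process
    rng = np.random.default_rng(1)
    y = np.zeros(400)
    for t in range(2, 400):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

    best = None
    for p, q in itertools.product(range(4), range(4)):       # enumerate ARMA(p, q) orders
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()             # Kalman-filter ML estimation
        except Exception:
            continue
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)

    print("Best order by AIC:", best[1:], "AIC =", round(best[0], 1))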
Optical image encryption system using nonlinear approach based on biometric authentication
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Sinha, Aloka
2017-07-01
A nonlinear image encryption scheme using phase-truncated Fourier transform (PTFT) and natural logarithms is proposed in this paper. With the help of the PTFT, the input image is truncated into phase and amplitude parts at the Fourier plane. The phase-only information is kept as the secret key for the decryption, and the amplitude distribution is modulated by adding an undercover amplitude random mask in the encryption process. Furthermore, the encrypted data is kept hidden inside the face biometric-based phase mask key using the base changing rule of logarithms for secure transmission. This phase mask is generated through principal component analysis. Numerical experiments show the feasibility and the validity of the proposed nonlinear scheme. The performance of the proposed scheme has been studied against the brute force attacks and the amplitude-phase retrieval attack. Simulation results are presented to illustrate the enhanced system performance with desired advantages in comparison to the linear cryptosystem.
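A compact numpy sketch of the phase-truncation step at the Fourier plane (the biometric phase-mask hiding and the logarithm-based embedding are omitted; arrays are illustrative):

    import numpy as np

    def ptft_encrypt(image, random_phase):
        """Phase-truncated Fourier transform: attach a random phase mask, transform,
        then split the spectrum into an amplitude part and a phase-only key."""
        spectrum = np.fft.fft2(image * np.exp(1j * random_phase))
        return np.abs(spectrum), np.angle(spectrum)        # (ciphertext amplitude, secret phase key)

    def ptft_decrypt(amplitude, phase_key):
        return np.abs(np.fft.ifft2(amplitude * np.exp(1j * phase_key)))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    rpm = rng.uniform(0.0, 2.0 * np.pi, (64, 64))          # random phase mask
    amp, key = ptft_encrypt(img, rpm)
    assert np.allclose(ptft_decrypt(amp, key), img)        # phase key recovers the image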
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
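The difference between the first two outage-selection schemes can be sketched in a few lines; the unit data are hypothetical and scipy's Latin Hypercube sampler stands in for the paper's implementation:

    import numpy as np
    from scipy.stats import qmc

    capacity = np.array([400.0, 350.0, 300.0, 250.0, 200.0])     # MW, hypothetical units
    forced_outage_rate = np.array([0.08, 0.06, 0.10, 0.05, 0.12])
    n_draws = 10_000

    # Brute-force Monte Carlo: independent uniform draws per unit and per replication
    rng = np.random.default_rng(0)
    u_mc = rng.random((n_draws, capacity.size))

    # Latin Hypercube sampling: stratifies each unit's marginal for faster convergence
    u_lhs = qmc.LatinHypercube(d=capacity.size, seed=0).random(n_draws)

    for name, u in [("brute force", u_mc), ("LHS", u_lhs)]:
        available = u >= forced_outage_rate                       # True if the unit stays on line
        mean_cap = (available * capacity).sum(axis=1).mean()
        print(f"{name}: expected available capacity = {mean_cap:.1f} MW")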
Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F
2012-09-14
We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
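The low-rank idea can be illustrated with a plain SVD of a two-coordinate toy function, a crude stand-in for the six-dimensional pair function handled by the multiresolution code:

    import numpy as np

    # Toy two-coordinate function f(x1, x2) sampled on a grid; a smooth correlated
    # function like this compresses well under a truncated SVD.
    x = np.linspace(-3, 3, 200)
    X1, X2 = np.meshgrid(x, x)
    F = np.exp(-(X1**2 + X2**2)) * np.exp(-0.5 * np.abs(X1 - X2))

    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    rank = int(np.sum(s > 1e-10 * s[0]))                  # discard negligible singular values
    F_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    print(f"kept {rank}/{len(s)} singular values, "
          f"max error = {np.abs(F - F_lowrank).max():.2e}")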
Dynamics of neural cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
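For readers unfamiliar with the model, a tree parity machine and the Hebbian synchronization step can be sketched as follows (small K, N, L chosen for speed; a toy illustration, not the authors' simulation code):

    import numpy as np

    K, N, L = 3, 10, 3                     # hidden units, inputs per unit, weight bound
    rng = np.random.default_rng(0)

    class TreeParityMachine:
        def __init__(self):
            self.w = rng.integers(-L, L + 1, size=(K, N))

        def output(self, x):
            # sigma_k is the sign of each hidden unit's local field (sign(0) mapped to -1)
            self.sigma = np.where((self.w * x).sum(axis=1) > 0, 1, -1)
            return int(self.sigma.prod())                  # public output tau

        def update(self, x, tau):
            # Hebbian rule: only hidden units agreeing with tau move; weights stay in [-L, L]
            for k in range(K):
                if self.sigma[k] == tau:
                    self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

    A, B = TreeParityMachine(), TreeParityMachine()
    for step in range(1, 100_001):
        x = rng.choice([-1, 1], size=(K, N))
        tau_a, tau_b = A.output(x), B.output(x)
        if tau_a == tau_b:                                 # only matching outputs trigger updates
            A.update(x, tau_a)
            B.update(x, tau_b)
        if np.array_equal(A.w, B.w):
            print("synchronized after", step, "exchanged inputs")
            break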
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory cost, we implement a subpixel shifting method based on the FFT. With the original images, subpixel shifting can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time-consuming because every candidate shift requires new calculations. The algorithm, however, is highly parallelizable and very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by first obtaining a pixel-level estimate from FFT-based correlation and then refining it with the subpixel technique described above. We consider this a `brute force' method. We present a benchmark of the algorithm consisting of the first pixel-resolution approach followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in a few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
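The two stages described above, a pixel-level FFT correlation followed by a brute-force scan of subpixel phase ramps, can be sketched in numpy (a simplified CPU illustration, not the benchmarked GPU code):

    import numpy as np

    def fourier_shift(img, dy, dx):
        """Shift an image by (dy, dx) pixels by multiplying its spectrum by a linear phase."""
        ny, nx = img.shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        return np.fft.ifft2(np.fft.fft2(img) * np.exp(-2j * np.pi * (ky * dy + kx * dx))).real

    def integer_shift(a, b):
        """Coarse estimate of the shift d (b ~ a shifted by d) from the correlation peak."""
        corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a)))
        iy, ix = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        ny, nx = a.shape
        return ((iy + ny // 2) % ny - ny // 2, (ix + nx // 2) % nx - nx // 2)

    def subpixel_shift(a, b, step=0.05, span=1.0):
        """Brute-force refinement: test a grid of candidate subpixel shifts around the coarse one."""
        cy, cx = integer_shift(a, b)
        best, best_score = (cy, cx), -np.inf
        for oy in np.arange(-span, span + step, step):
            for ox in np.arange(-span, span + step, step):
                score = np.sum(fourier_shift(a, cy + oy, cx + ox) * b)
                if score > best_score:
                    best, best_score = (cy + oy, cx + ox), score
        return best

    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = fourier_shift(a, 3.3, -1.7)            # synthetic ground-truth displacement
    print(subpixel_shift(a, b))                # expected close to (3.30, -1.70)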
Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS
NASA Astrophysics Data System (ADS)
Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.
2011-12-01
Organizing online data presents a number of challenges, among which is keeping their inventories current. It is preferable to have these descriptions built and maintained by automated systems because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While building inventories for online data can be done using a brute-force algorithm to read information from each granule in the data set, that ignores some important aspects of these data sets, and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct-reads from selected granules and analysis of the URLs results in improvements of several to many orders of magnitude, depending on the data set organization. We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.
NASA Astrophysics Data System (ADS)
Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.
2013-12-01
Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to the lack of information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore handle directly the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of the model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become very feasible for our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2] we overcome this drawback with small computational costs. The iteration successively improves the accuracy of the expansion around the current estimation of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion frame by frame using stereo images. Feature point extraction and matching is one of the key steps of robotic motion estimation and largely determines its precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or a planetary surface, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the Space Intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the feature points of the next left image are matched with those of the current left image, the EDC and RANSAC are performed again. Even after these steps, some mismatched points may remain in certain cases, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
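The matching and outlier-rejection pipeline maps closely onto OpenCV primitives; a condensed sketch is given below (rectification, space intersection, and the pose estimation itself are omitted, and the EDC threshold value is illustrative):

    import cv2
    import numpy as np

    def match_and_filter(img_left, img_right, edc_thresh=60.0):
        """ORB + brute-force matching followed by an EDC check and RANSAC filtering."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img_left, None)
        kp2, des2 = orb.detectAndCompute(img_right, None)

        # Brute-force Hamming matcher with cross-checking
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = bf.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Euclidean Distance Constraint: drop correspondences whose image-space
        # displacement exceeds a threshold (value here is illustrative)
        keep = np.linalg.norm(pts1 - pts2, axis=1) < edc_thresh
        pts1, pts2 = pts1[keep], pts2[keep]

        # RANSAC on the fundamental matrix removes the remaining outliers
        F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        if inlier_mask is None:
            return pts1, pts2
        inliers = inlier_mask.ravel().astype(bool)
        return pts1[inliers], pts2[inliers]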
Tenti, Lorenzo; Maynau, Daniel; Angeli, Celestino; Calzado, Carmen J
2016-07-21
A new strategy based on orthogonal valence-bond analysis of the wave function combined with intermediate Hamiltonian theory has been applied to the evaluation of the magnetic coupling constants in two AF systems. This approach provides both a quantitative estimate of the J value and a detailed analysis of the main physical mechanisms controlling the coupling, using a combined perturbative + variational scheme. The procedure requires a selection of the dominant excitations to be treated variationally. Two methods have been employed: a brute-force selection, using a logic similar to that of the CIPSI approach, or entanglement measures, which identify the most interacting orbitals in the system. Once a reduced set of excitations (about 300 determinants) is established, the interaction matrix is dressed at the second-order of perturbation by the remaining excitations of the CI space. The diagonalization of the dressed matrix provides J values in good agreement with experimental ones, at a very low-cost. This approach demonstrates the key role of d → d* excitations in the quantitative description of the magnetic coupling, as well as the importance of using an extended active space, including the bridging ligand orbitals, for the binuclear model of the intermediates of multicopper oxidases. The method is a promising tool for dealing with complex systems containing several active centers, as an alternative to both pure variational and DFT approaches.
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
A new class of enhanced kinetic sampling methods for building Markov state models
NASA Astrophysics Data System (ADS)
Bhoutekar, Arti; Ghosh, Susmita; Bhattacharya, Swati; Chatterjee, Abhijit
2017-10-01
Markov state models (MSMs) and other related kinetic network models are frequently used to study the long-timescale dynamical behavior of biomolecular and materials systems. MSMs are often constructed bottom-up using brute-force molecular dynamics (MD) simulations when the model contains a large number of states and kinetic pathways that are not known a priori. However, the resulting network generally encompasses only parts of the configurational space, and regardless of any additional MD performed, several states and pathways will still remain missing. This implies that the duration for which the MSM can faithfully capture the true dynamics, which we term as the validity time for the MSM, is always finite and unfortunately much shorter than the MD time invested to construct the model. A general framework that relates the kinetic uncertainty in the model to the validity time, missing states and pathways, network topology, and statistical sampling is presented. Performing additional calculations for frequently-sampled states/pathways may not alter the MSM validity time. A new class of enhanced kinetic sampling techniques is introduced that aims at targeting rare states/pathways that contribute most to the uncertainty so that the validity time is boosted in an effective manner. Examples including straightforward 1D energy landscapes, lattice models, and biomolecular systems are provided to illustrate the application of the method. Developments presented here will be of interest to the kinetic Monte Carlo community as well.
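For background on the basic object being built, an MSM transition matrix can be estimated from a discretized trajectory by counting transitions at a chosen lag time and row-normalizing; the bare-bones sketch below ignores the reversibility constraints and uncertainty analysis discussed in the paper:

    import numpy as np

    def estimate_msm(dtraj, n_states, lag=1):
        """Maximum-likelihood (non-reversible) MSM from a discrete state trajectory."""
        counts = np.zeros((n_states, n_states))
        for i, j in zip(dtraj[:-lag], dtraj[lag:]):
            counts[i, j] += 1.0
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    # Toy 3-state trajectory standing in for brute-force MD state assignments
    rng = np.random.default_rng(0)
    dtraj = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])
    print(estimate_msm(dtraj, 3, lag=1))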
NASA Astrophysics Data System (ADS)
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
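The ESS used to score how well the mixture proposal resembles the posterior is a one-line function of the normalized importance weights (the generic definition, not the paper's adaptation loop):

    import numpy as np

    def effective_sample_size(log_weights):
        """ESS = 1 / sum(w_i^2) for normalized importance weights w_i;
        it equals the number of draws when the proposal matches the posterior exactly."""
        w = np.exp(log_weights - np.max(log_weights))
        w /= w.sum()
        return 1.0 / np.sum(w ** 2)

    print(effective_sample_size(np.zeros(1000)))           # uniform weights -> ESS = 1000
    print(effective_sample_size(np.array([0.0, -50.0])))   # one dominant draw -> ESS ~ 1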
Hogan, John D; Klein, Joshua A; Wu, Jiandong; Chopra, Pradeep; Boons, Geert-Jan; Carvalho, Luis; Lin, Cheng; Zaia, Joseph
2018-04-03
Glycosaminoglycans (GAGs) covalently linked to proteoglycans (PGs) are characterized by repeating disaccharide units and variable sulfation patterns along the chain. GAG length and sulfation patterns impact disease etiology, cellular signaling, and structural support for cells. We and others have demonstrated the usefulness of tandem mass spectrometry (MS2) for assigning the structures of GAG saccharides; however, manual interpretation of tandem mass spectra is time-consuming, so computational methods must be employed. In the proteomics domain, the identification of monoisotopic peaks and charge states relies on algorithms that use averagine, or the average building block of the compound class being analyzed. While these methods perform well for protein and peptide spectra, they perform poorly on GAG tandem mass spectra, due to the fact that a single average building block does not characterize the variable sulfation of GAG disaccharide units. In addition, it is necessary to assign product ion isotope patterns in order to interpret the tandem mass spectra of GAG saccharides. To address these problems, we developed GAGfinder, the first tandem mass spectrum peak finding algorithm developed specifically for GAGs. We define peak finding as assigning experimental isotopic peaks directly to a given product ion composition, as opposed to deconvolution or peak picking, which are terms more accurately describing the existing methods previously mentioned. GAGfinder is a targeted, brute force approach to spectrum analysis that utilizes precursor composition information to generate all theoretical fragments. GAGfinder also performs peak isotope composition annotation, which is typically a subsequent step for averagine-based methods. Data are available via ProteomeXchange with identifier PXD009101.
Atomistic simulations of materials: Methods for accurate potentials and realistic time scales
NASA Astrophysics Data System (ADS)
Tiwary, Pratyush
This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well. The robustness of the algorithm with respect to the only free parameter it involves is ascertained. The method is then applied to perform tensile tests on gold nanopillars on strain rates as low as 100/s, bringing out the perils of high strain-rate molecular dynamics calculations. We also calculate temperature and stress dependence of activation free energy for surface nucleation of dislocations in pristine gold nanopillars under realistic loads. While maintaining fully atomistic resolution, we reach the fraction-of-a-second time scale regime. It is found that the activation free energy depends significantly and nonlinearly on the driving force (stress or strain) and temperature, leading to very high activation entropies for surface dislocation nucleation.
NASA Astrophysics Data System (ADS)
Matha, Denis; Sandner, Frank; Schlipf, David
2014-12-01
Design verification of wind turbines is performed by simulation of design load cases (DLC) defined in the IEC 61400-1 and -3 standards or equivalent guidelines. Due to the resulting large number of necessary load simulations, here a method is presented to reduce the computational effort for DLC simulations significantly by introducing a reduced nonlinear model and simplified hydro- and aerodynamics. The advantage of the formulation is that the nonlinear ODE system only contains basic mathematical operations and no iterations or internal loops, which makes it very computationally efficient. Global turbine extreme and fatigue loads such as rotor thrust, tower base bending moment and mooring line tension, as well as platform motions are outputs of the model. They can be used to identify critical and less critical load situations, which are then analysed with a higher fidelity tool, and so speed up the design process. Results from these reduced model DLC simulations are presented and compared to higher fidelity models. Results in the frequency and time domains as well as extreme and fatigue load predictions demonstrate that good agreement between the reduced and advanced model is achieved, allowing less critical DLC simulations to be efficiently excluded and the most critical subset of cases for a given design to be identified. Additionally, the model is applicable for brute force optimization of floater control system parameters.
Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.
Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David
2008-07-01
Some present day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close to the outgroups, prior to a local optimization heuristic. We apply this to reconstruct the predoubling ancestor of Saccharomyces cerevisiae and Candida glabrata, guided by the genomes of three other yeasts that diverged before the genome doubling event. We analyze the results in terms of (1) the minimum evolution criterion, (2) how close the genome halving result is to the final (local) minimum and (3) how close the final result is to an ancestor manually constructed by an expert with access to additional information. We also visualize the set of reconstructed ancestors using classic multidimensional scaling to see what aspects of the two doubled and three unduplicated genomes influence the differences among the reconstructions. The experimental software is available on request.
NASA Astrophysics Data System (ADS)
Kim, E.; Kim, S.; Kim, H. C.; Kim, B. U.; Cho, J. H.; Woo, J. H.
2017-12-01
In this study, we investigated the contributions of major emission source categories located upwind of South Korea to Particulate Matter (PM) in South Korea. In general, air quality in South Korea is affected by anthropogenic air pollutants emitted from foreign countries including China. Some studies reported that foreign emissions contributed 50 % of annual surface PM total mass concentrations in the Seoul Metropolitan Area, South Korea in 2014. Previous studies examined PM contributions of foreign emissions from all sectors considering meteorological variations. However, few studies have been conducted to assess the contributions of specific foreign source categories. Therefore, we attempted to estimate sectoral contributions of foreign emissions from China to South Korea PM using our air quality forecasting system. We used Model Inter-Comparison Study in Asia 2010 for foreign emissions and Clean Air Policy Support System 2010 emission inventories for domestic emissions. To quantify contributions of major emission sectors to South Korea PM, we applied the Community Multi-scale Air Quality system with the brute force method by perturbing emissions from the industrial, residential, fossil-fuel power plant, transportation, and agriculture sectors in China. We noted that the industrial sector was predominant over the region except during the cold season for primary PM, when residential emissions increase drastically due to heating demand. This study will benefit ensemble air quality forecasting and refined control strategy design by providing a quantitative assessment of the seasonal contributions of foreign emissions from major source categories.
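In the brute force (zero-out) approach, a sector's contribution is simply the difference between the base simulation and a simulation with that sector's emissions removed; schematically, with hypothetical gridded PM fields as placeholders for the CMAQ output:

    import numpy as np

    # Hypothetical gridded PM2.5 fields (ug/m3) from two CMAQ runs
    pm_base = np.random.default_rng(0).uniform(10, 60, size=(82, 67))
    pm_no_industry = pm_base * 0.55                      # stand-in for the zero-out run

    industry_contribution = pm_base - pm_no_industry     # absolute contribution
    industry_share = industry_contribution / pm_base     # relative contribution
    print(f"mean industrial share: {industry_share.mean():.0%}")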
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational based modeling and simulation. Ready access to super computing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman, et al., to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed-up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth resolved characterization of highly scattering samples, e.g., biological tissues.
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2015-04-15
By means of molecular dynamics (MD) simulations, we demonstrate that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulation is H2 > CO2 ≫ N2 > Ar > CH4, which generally follows the trend in the kinetic diameters. This trend is also confirmed from the fluxes based on the computed free energy barriers for gas permeation using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.
Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.
NASA Astrophysics Data System (ADS)
Hawary, A. F.; Razak, N. A.
2018-05-01
Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator is limited by line of sight and prone to errors due to carelessness and fatigue. A good alternative solution is to equip the UAV with semi-autonomous capabilities so that it is able to navigate along a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute force search method to re-optimize the route in the event of collisions detected using a range finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to testing on the flying site. The results showed that the range finder sensor provides real-time data to the algorithm to find a collision-free path and successfully re-optimize the route.
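A toy version of the nearest-neighbour re-planner is sketched below: when the range finder flags the next leg as blocked, the remaining waypoints are re-ordered greedily from the current position (waypoints and the blocked-edge test are hypothetical; the genetic-algorithm route optimizer is not reproduced):

    import numpy as np

    def nearest_neighbour_route(current, waypoints, blocked=lambda a, b: False):
        """Greedy re-planning: repeatedly fly to the nearest reachable waypoint."""
        route, remaining = [], list(waypoints)
        pos = np.asarray(current, dtype=float)
        while remaining:
            reachable = [w for w in remaining if not blocked(pos, w)] or remaining
            nxt = min(reachable, key=lambda w: np.linalg.norm(np.asarray(w) - pos))
            route.append(nxt)
            remaining.remove(nxt)
            pos = np.asarray(nxt, dtype=float)
        return route

    waypoints = [(0, 5), (4, 1), (2, 8), (7, 3)]                 # hypothetical survey points
    blocked = lambda a, b: np.allclose(b, (4, 1)) and a[1] > 4   # toy "range finder" rule
    print(nearest_neighbour_route((0, 0), waypoints, blocked))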
Evaluation of CMAQ and CAMx Ensemble Air Quality Forecasts during the 2015 MAPS-Seoul Field Campaign
NASA Astrophysics Data System (ADS)
Kim, E.; Kim, S.; Bae, C.; Kim, H. C.; Kim, B. U.
2015-12-01
The performance of air quality forecasts during the 2015 MAPS-Seoul Field Campaign was evaluated. A forecast system was operated to support the campaign's daily aircraft route decisions for airborne measurements of long-range transported plumes. We utilized two real-time ensemble systems based on the Weather Research and Forecasting (WRF)-Sparse Matrix Operator Kernel Emissions (SMOKE)-Comprehensive Air quality Model with extensions (CAMx) modeling framework and the WRF-SMOKE-Community Multiscale Air Quality (CMAQ) framework over northeastern Asia to simulate PM10 concentrations. The Global Forecast System (GFS) from the National Centers for Environmental Prediction (NCEP) was used to provide meteorological inputs for the forecasts. For an additional set of retrospective simulations, the ERA Interim Reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) was also utilized to assess forecast uncertainties from the meteorological data used. Model Inter-Comparison Study for Asia (MICS-Asia) and National Institute of Environment Research (NIER) Clean Air Policy Support System (CAPSS) emission inventories are used for foreign and domestic emissions, respectively. In the study, we evaluate the CMAQ and CAMx model performance during the campaign by comparing the results to the airborne and surface measurements. Contributions of foreign and domestic emissions are estimated using a brute force method. Analyses of model performance and emissions will be utilized to improve air quality forecasts for the upcoming KORUS-AQ field campaign planned in 2016.
Top-down constraints of regional emissions for KORUS-AQ 2016 field campaign
NASA Astrophysics Data System (ADS)
Bae, M.; Yoo, C.; Kim, H. C.; Kim, B. U.; Kim, S.
2017-12-01
Accurate estimations of emission rates from local and international sources are essential in regional air quality simulations, especially in assessing the relative contributions from international emission sources. While bottom-up constructions of emission inventories provide detailed information on specific emission types, they are limited in covering regions with rapid changes of anthropogenic emissions (e.g. China) or regions without enough socioeconomic information (e.g. North Korea). We utilized space-borne monitoring of major pollutant precursors to construct realistic emission inputs for chemistry transport models during the KORUS-AQ 2016 field campaign. The base simulation was conducted using the WRF, SMOKE, and CMAQ modeling framework with the CREATE 2015 (Asian countries) and CAPSS 2013 (South Korea) emission inventories. NOx, SO2 and VOC model emissions are adjusted using the ratios between modeled and observed NO2, SO2 and HCHO column densities and the model's emission-to-column-density conversion ratios. A brute force perturbation method was used to separate contributions from North Korea, China and South Korea for flight pathways during the field campaign. The Backward-Tracking Model Analyzer (BMA), based on the NOAA HYSPLIT trajectory and dispersion model, is also utilized to track histories of chemical processes and emission source apportionment. CMAQ simulations were conducted over East Asia (27-km) and over South and North Korea (9-km) during the KORUS-AQ campaign (1st May to 10th June 2016).
Contribution of regional-scale fire events to ozone and PM2.5 ...
Two specific fires from 2011 are tracked for local- to regional-scale contributions to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, the Sparse Matrix Operator Kernel Emissions (SMOKE) model, the Weather Research and Forecasting (WRF) meteorological model, and the Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contribution from a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable using the source sensitivity (brute-force zero-out) and source apportionment (Integrated Source Apportionment Method) approaches. The model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating the modeling system captures the range of O3 inhibition possible near fires and O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde, which are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas
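The brute-force zero-out sensitivity used above reduces to a difference between two model runs, and the reported O3 enhancement is normalized by CO. A short illustrative sketch follows; the array names are assumptions, and the concentrations themselves would come from the CMAQ runs, not from this snippet.

```python
import numpy as np

def fire_contribution(conc_with_fire, conc_without_fire):
    """Brute-force (zero-out) contribution at every grid cell and hour."""
    return np.asarray(conc_with_fire) - np.asarray(conc_without_fire)

def o3_enhancement_ratio(d_o3_ppb, d_co_ppb):
    """Delta-O3 / Delta-CO in the fire plume, a common metric for plume O3 production."""
    d_co = np.asarray(d_co_ppb, dtype=float)
    return np.where(np.abs(d_co) > 1e-6, np.asarray(d_o3_ppb) / d_co, np.nan)
```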
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but suffer a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. A method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635
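The brute-force correspondence search mentioned above can be pictured as pairing every detected SAR target center with every detected IR target center and keeping mutually nearest pairs inside a gate, before the RANSARC homography fit. The sketch below is an assumed illustration of that idea, not the paper's code; the gate distance is a placeholder.

```python
import numpy as np

def brute_force_correspondences(sar_centers, ir_centers, gate=20.0):
    """sar_centers, ir_centers: (N,2)/(M,2) arrays of detected target centers in pixels."""
    sar = np.asarray(sar_centers, float)
    ir = np.asarray(ir_centers, float)
    d = np.linalg.norm(sar[:, None, :] - ir[None, :, :], axis=2)  # all N*M pairwise distances
    pairs = []
    for i in range(d.shape[0]):
        j = int(np.argmin(d[i]))
        # keep only mutual nearest neighbours that fall inside the distance gate
        if d[i, j] <= gate and int(np.argmin(d[:, j])) == i:
            pairs.append((i, j))
    return pairs  # candidate matches handed to the homography estimation step
```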
A chaotic cryptosystem for images based on Henon and Arnold cat map.
Soleymani, Ali; Nordin, Md Jan; Sundararajan, Elankovan
2014-01-01
The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses Arnold cat map for bit- and pixel-level permutations on plain and secret images, while Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute force and differential attacks. The evaluated running time for both encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
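For orientation, the pixel-level Arnold cat map permutation used as one ingredient of such schemes can be sketched as follows; the iteration count and the restriction to square images are the usual assumptions, and this is not the authors' full cryptosystem.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Permute the pixels of a square N x N image with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `iterations` times."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```

One iteration is undone by the inverse map (x, y) -> ((2x - y) mod N, (y - x) mod N), which is what the decryption side would apply the same number of times.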
Diagnosing the decline in pharmaceutical R&D efficiency.
Scannell, Jack W; Blanckley, Alex; Boldon, Helen; Warrington, Brian
2012-03-01
The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
Artificial consciousness and the consciousness-attention dissociation.
Haladjian, Harry Haroutioun; Montemayor, Carlos
2016-10-01
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness. Copyright © 2016 Elsevier Inc. All rights reserved.
Verification Test of Automated Robotic Assembly of Space Truss Structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1995-01-01
A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level and continued development at an enhanced level is warranted.
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level and continued development at an enhanced level is warranted.
NASA Astrophysics Data System (ADS)
Millour, Florentin A.; Vannier, Martin; Meilland, Anthony
2012-07-01
We present here three recipes for obtaining better images with optical interferometers. Two of them, Low-Frequencies Filling and Brute-Force Monte Carlo, were used in our participation in the Interferometry Beauty Contest this year and can be applied to classical imaging using V2 and closure phases. These two additions to image reconstruction provide a way of obtaining more reliable images. The last recipe is similar in principle to the self-calibration technique used in radio interferometry. We also call it self-calibration, but it uses the wavelength-differential phase as a proxy of the object phase to build up a full-featured complex visibility set of the observed object. This technique needs a first image-reconstruction run with an available software package, using closure phases and squared visibilities only. We have used it for two scientific papers with great success. We discuss here the pros and cons of this imaging technique.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computational times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise due to the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation into the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
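A one-dimensional illustration of the imbalance problem (in the spirit of the simple 1-D model mentioned above, not the NGA implementation) is sketched below: interface cells are assigned a higher cost, and a cost-weighted contiguous partition is compared against the naive equal-cell split. The cost weighting and rank count are assumptions.

```python
import numpy as np

def weighted_partition(cell_cost, n_ranks):
    """Split cells into n_ranks contiguous blocks with roughly equal total cost."""
    cum = np.cumsum(cell_cost)
    targets = cum[-1] * (np.arange(1, n_ranks) / n_ranks)
    cuts = np.searchsorted(cum, targets)
    return np.split(np.arange(len(cell_cost)), cuts)

# example: 1000 cells, interface region (cells 480-519) is 10x more expensive
cost = np.ones(1000)
cost[480:520] = 10.0
naive = np.split(np.arange(1000), 8)           # equal number of cells per rank
weighted = weighted_partition(cost, n_ranks=8)  # equal cost per rank
ideal = cost.sum() / 8
print("naive imbalance factor:   ", max(cost[b].sum() for b in naive) / ideal)
print("weighted imbalance factor:", max(cost[b].sum() for b in weighted) / ideal)
```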
Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters
NASA Astrophysics Data System (ADS)
Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bin, E-mail: bins@ieee.org
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
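The effective-sample-size criterion used above has a standard closed form for normalized importance weights. The following short sketch shows that computation; the vectorized log-density callables are assumed placeholders for the posterior and the mixture proposal.

```python
import numpy as np

def effective_sample_size(log_target, log_proposal, draws):
    """ESS = (sum w)^2 / sum(w^2) for importance weights w = target/proposal.

    log_target, log_proposal : vectorized callables returning log densities for `draws`
    draws                    : samples drawn from the proposal
    """
    log_w = log_target(draws) - log_proposal(draws)
    log_w -= np.max(log_w)                   # stabilize the exponentials
    w = np.exp(log_w)
    return (w.sum() ** 2) / np.sum(w ** 2)   # ranges from 1 (poor) to len(draws) (perfect match)
```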
Impact-Actuated Digging Tool for Lunar Excavation
NASA Technical Reports Server (NTRS)
Wilson, Jak; Chu, Philip; Craft, Jack; Zacny, Kris; Santoro, Chris
2013-01-01
NASA's plans for a lunar outpost require extensive excavation. The Lunar Surface Systems Project Office projects that thousands of tons of lunar soil will need to be moved. Conventional excavators dig through soil by brute force and depend upon their substantial weight to react to the forces generated. This approach will not be feasible on the Moon for two reasons: (1) gravity is 1/6th that on Earth, which means that a kilogram on the Moon will supply 1/6 the down force that it does on Earth, and (2) transportation costs (at the time of this reporting) of $50K to $100K per kg make massive excavators economically unattractive. A percussive excavation system was developed for use in vacuum or near-vacuum environments. It reduces the down force needed for excavation by an order of magnitude by using percussion to assist in soil penetration and digging. The novelty of this excavator is that it incorporates a percussive mechanism suited to sustained operation in a vacuum environment. A percussive digger breadboard was designed, built, and successfully tested under both ambient and vacuum conditions. The breadboard was run in vacuum to more than 2 times the lifetime of the Apollo Lunar Surface Drill, throughout which the mechanism performed and held up well. The percussive digger was demonstrated to reduce the force necessary for digging in lunar soil simulant by an order of magnitude, providing reductions as high as 45:1. This is an enabling technology for lunar site preparation and ISRU (In Situ Resource Utilization) mining activities. At transportation costs of $50K to $100K per kg, reducing digging forces by an order of magnitude translates into billions of dollars saved by not launching heavier systems to accomplish the excavation tasks necessary for the establishment of a lunar outpost. Applications on the lunar surface include excavation for habitats, construction of roads, landing pads, berms, foundations, habitat shielding, and ISRU.
Autonomous entropy-based intelligent experimental design
NASA Astrophysics Data System (ADS)
Malakar, Nabin Kumar
2011-07-01
The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward the same goal in an automated fashion.
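The selection rule at the heart of this approach, choosing the candidate experiment whose predicted-outcome distribution has maximum entropy, can be sketched as below. The histogram-based entropy estimate and the predict() callable are assumptions; the thesis's nested entropy sampling replaces this exhaustive loop with a more efficient search.

```python
import numpy as np

def outcome_entropy(predicted_outcomes, bins=20):
    """Histogram estimate of the entropy of a set of predicted measurements."""
    counts, _ = np.histogram(predicted_outcomes, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def select_experiment(candidate_experiments, posterior_samples, predict):
    """predict(theta, experiment) -> simulated measurement for one posterior sample theta."""
    entropies = []
    for e in candidate_experiments:
        outcomes = np.array([predict(theta, e) for theta in posterior_samples])
        entropies.append(outcome_entropy(outcomes))
    return candidate_experiments[int(np.argmax(entropies))]  # maximally informative experiment
```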
Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty
A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...
Arm retraction dynamics of entangled star polymers: A forward flux sampling method study
NASA Astrophysics Data System (ADS)
Zhu, Jian; Likhtman, Alexei E.; Wang, Zuowei
2017-07-01
The study of dynamics and rheology of well-entangled branched polymers remains a challenge for computer simulations due to the exponentially growing terminal relaxation times of these polymers with increasing molecular weights. We present an efficient simulation algorithm for studying the arm retraction dynamics of entangled star polymers by combining the coarse-grained slip-spring (SS) model with the forward flux sampling (FFS) method. This algorithm is first applied to simulate symmetric star polymers in the absence of constraint release (CR). The reaction coordinate for the FFS method is determined by finding good agreement of the simulation results on the terminal relaxation times of mildly entangled stars with those obtained from direct shooting SS model simulations with the relative difference between them less than 5%. The FFS simulations are then carried out for strongly entangled stars with arm lengths up to 16 entanglements that are far beyond the accessibility of brute force simulations in the non-CR condition. Apart from the terminal relaxation times, the same method can also be applied to generate the relaxation spectra of all entanglements along the arms which are desired for the development of quantitative theories of entangled branched polymers. Furthermore, we propose a numerical route to construct the experimentally measurable relaxation correlation functions by effectively linking the data stored at each interface during the FFS runs. The obtained star arm end-to-end vector relaxation functions Φ (t ) and the stress relaxation function G(t) are found to be in reasonably good agreement with standard SS simulation results in the terminal regime. Finally, we demonstrate that this simulation method can be conveniently extended to study the arm-retraction problem in entangled star polymer melts with CR by modifying the definition of the reaction coordinate, while the computational efficiency will depend on the particular slip-spring or slip-link model employed.
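The FFS bookkeeping named above estimates the rare-event rate as the flux through the first interface times the product of interface-to-interface crossing probabilities. A minimal sketch of that bookkeeping follows; the numbers in the usage line are placeholders, not results from the slip-spring simulations.

```python
import numpy as np

def ffs_rate(flux_A_to_lambda0, successes_per_interface, trials_per_interface):
    """k(A->B) = Phi_{A,0} * prod_i P(lambda_{i+1} | lambda_i).

    flux_A_to_lambda0       : measured flux of trajectories from A through the first interface
    successes/trials        : per-interface counts of trial runs that reach the next interface
    """
    p = (np.asarray(successes_per_interface, float)
         / np.asarray(trials_per_interface, float))
    return flux_A_to_lambda0 * np.prod(p), p

# placeholder bookkeeping for 5 interfaces with 1000 trial runs each
k, p = ffs_rate(2.0e-3, [120, 80, 60, 45, 30], [1000] * 5)
print("estimated rate:", k, "per-interface probabilities:", p)
```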
The Movable Type Method Applied to Protein-Ligand Binding.
Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M
2013-12-10
Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and then creating two databases: one with their associated pairwise distance-dependent energies and another associated with the probability of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases coupled with the principles of statistical mechanics allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (the protein active site here) in one shot without resorting to brute force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the free energy surface, eliminating the need to estimate the enthalpy and entropy components individually. Finally, low free energy structures can be obtained via a free energy minimization procedure, yielding all low free energy poses on a given free energy surface. Besides revolutionizing the protein-ligand docking and scoring problem, this approach can be utilized in a wide range of applications in computational biology which involve the computation of free energies for systems with extensive phase spaces, including protein folding, protein-protein docking and protein design.
Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z
2017-01-01
Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing at a high rate. In parallel, security-related threats and attacks are also increasing at a greater rate on these devices. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen-size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full-sized keyboards. In this paper, we propose a new screen-size-independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)-a.k.a. Force Touch in Apple's MacBook, Apple Watch, and ZTE's Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on-is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey have been conducted on 105 participants to demonstrate the effectiveness of the proposed scheme.
Medical data sheet in safe havens - A tri-layer cryptic solution.
Praveenkumar, Padmapriya; Amirtharajan, Rengarajan; Thenmozhi, K; Balaguru Rayappan, John Bosco
2015-07-01
Secured sharing of the diagnostic reports and scan images of patients among doctors with complementary expertise for collaborative treatment will help to provide maximum care through faster and more decisive decisions. In this context, a tri-layer cryptic solution has been proposed and implemented on Digital Imaging and Communications in Medicine (DICOM) images to establish secured communication for effective referrals among peers without compromising the privacy of patients. In this approach, a blend of three cryptic schemes, namely the Latin square image cipher (LSIC), the discrete Gould transform (DGT) and Rubik's encryption, has been adopted. Among them, LSIC provides better substitution, confusion and shuffling of the image blocks; DGT incorporates tamper proofing with authentication; and Rubik's renders a permutation of DICOM image pixels. The developed algorithm has been successfully implemented and tested in both software (MATLAB 7) and hardware Universal Software Radio Peripheral (USRP) environments. Specifically, the encrypted data were tested by transmitting them through an additive white Gaussian noise (AWGN) channel model. Furthermore, the sternness of the implemented algorithm was validated by employing standard metrics such as the unified average changing intensity (UACI), number of pixels change rate (NPCR), correlation values and histograms. The estimated metrics have also been compared with those of existing methods and dominate in terms of a large key space to defy brute force attack, resistance to cropping attack, strong key sensitivity and a uniform pixel value distribution on encryption. Copyright © 2015 Elsevier Ltd. All rights reserved.
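For reference, the NPCR and UACI metrics quoted above have simple definitions for 8-bit images, sketched below under the assumption that the two inputs are cipher images obtained from plain images differing in a single pixel.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR (% of differing pixel positions) and UACI (mean normalized intensity change)
    between two 8-bit cipher images of the same shape."""
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci
```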
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
Quantum Heterogeneous Computing for Satellite Positioning Optimization
NASA Astrophysics Data System (ADS)
Bass, G.; Kumar, V.; Dulny, J., III
2016-12-01
Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
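The classical half of the heterogeneous workflow described above can be illustrated with a compact genetic algorithm over bit strings whose best individual would then seed the annealer stage. Everything here, including the toy QUBO-style objective standing in for the satellite-positioning cost, is an assumed placeholder rather than the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_search(cost, n_bits, pop_size=64, generations=200, p_mut=0.02):
    """Classical GA over bit strings; lower cost is better."""
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    best = min(pop, key=cost).copy()
    for _ in range(generations):
        # binary tournament selection on cost
        a = pop[rng.integers(0, pop_size, pop_size)]
        b = pop[rng.integers(0, pop_size, pop_size)]
        keep_a = np.array([cost(x) for x in a]) <= np.array([cost(x) for x in b])
        parents = np.where(keep_a[:, None], a, b)
        # one-point crossover between consecutive parent pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_bits)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best = gen_best.copy()
    return best  # candidate that would be refined further (e.g., by the annealer)

# toy symmetric QUBO-like objective standing in for the real positioning cost
Q = rng.normal(size=(32, 32))
Q = (Q + Q.T) / 2.0
solution = genetic_search(lambda x: float(x @ Q @ x), n_bits=32)
```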
NASA Astrophysics Data System (ADS)
Will, Clifford M.; Wiseman, Alan G.
1996-10-01
We derive the gravitational waveform and gravitational-wave energy flux generated by a binary star system of compact objects (neutron stars or black holes), accurate through second post-Newtonian order (O[(v/c)4]=O[(Gm/rc2)2]) beyond the lowest-order quadrupole approximation. We cast the Einstein equations into the form of a flat-spacetime wave equation together with a harmonic gauge condition, and solve it formally as a retarded integral over the past null cone of the chosen field point. The part of this integral that involves the matter sources and the near-zone gravitational field is evaluated in terms of multipole moments using standard techniques; the remainder of the retarded integral, extending over the radiation zone, is evaluated in a novel way. The result is a manifestly convergent and finite procedure for calculating gravitational radiation to arbitrary orders in a post-Newtonian expansion. Through second post-Newtonian order, the radiation is also shown to propagate toward the observer along true null rays of the asymptotically Schwarzschild spacetime, despite having been derived using flat-spacetime wave equations. The method cures defects that plagued previous ``brute-force'' slow-motion approaches to the generation of gravitational radiation, and yields results that agree perfectly with those recently obtained by a mixed post-Minkowskian post-Newtonian method. We display explicit formulas for the gravitational waveform and the energy flux for two-body systems, both in arbitrary orbits and in circular orbits. In an appendix, we extend the formalism to bodies with finite spatial extent, and derive the spin corrections to the waveform and energy loss.
On grey levels in random CAPTCHA generation
NASA Astrophysics Data System (ADS)
Newton, Fraser; Kouritzin, Michael A.
2011-06-01
A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
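The correlated random-field construction described above can be illustrated with an Ising-style Gibbs sampler: starting from independent background noise, repeated conditional pixel updates produce spatially correlated texture. The coupling form, parameters, and field size below are assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(7)

def gibbs_field(shape=(60, 200), beta=0.8, sweeps=30):
    """Evolve an initial random +/-1 field with Gibbs sampling of an Ising-like model."""
    field = rng.choice([-1, 1], size=shape)
    rows, cols = shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                # sum of the 4-neighbourhood (free boundaries)
                s = ((field[i - 1, j] if i > 0 else 0)
                     + (field[i + 1, j] if i < rows - 1 else 0)
                     + (field[i, j - 1] if j > 0 else 0)
                     + (field[i, j + 1] if j < cols - 1 else 0))
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))   # P(pixel = +1 | neighbours)
                field[i, j] = 1 if rng.random() < p_plus else -1
    return (field + 1) // 2   # 0/1 image usable as a textured CAPTCHA background
```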
NASA Astrophysics Data System (ADS)
Müller, Rolf
2011-10-01
Bats have evolved one of the most capable and at the same time parsimonious sensory systems found in nature. Using active and passive biosonar as a major - and often sufficient - far sense, different bat species are able to master a wide variety of sensory tasks under very dissimilar sets of constraints. Given the limited computational resources of the bat's brain, this performance is unlikely to be explained as the result of brute-force, black-box-style computations. Instead, the animals must rely heavily on in-built physics knowledge in order to ensure that all required information is encoded reliably into the acoustic signals received at the ear drum. To this end, bats can manipulate the emitted and received signals in the physical domain: By diffracting the outgoing and incoming ultrasonic waves with intricate baffle shapes (i.e., noseleaves and outer ears), the animals can generate selectivity filters that are joint functions of space and frequency. To achieve this, bats employ structural features such as resonance cavities and diffracting ridges. In addition, some bat species can dynamically adjust the shape of their selectivity filters through muscular actuation.
A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher
NASA Astrophysics Data System (ADS)
Kumari, Manju; Gupta, Shailender
2018-03-01
As modern systems enable us to transmit large chunks of data, both in the form of text and images, there is a need to explore algorithms which can provide higher security without increasing the time complexity significantly. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is practical, since it has the lowest time complexity among its counterparts.
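The RC4 diffusion stage referred to above is standard and easy to sketch: a key-scheduling pass followed by a pseudo-random generation pass, with the keystream XORed against the permuted pixels. How the intertwining chaotic map would supply the key bytes is left as an assumption here.

```python
import numpy as np

def rc4_keystream(key: bytes, n: int) -> np.ndarray:
    """Generate n RC4 keystream bytes from the given key."""
    # key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation algorithm (PRGA)
    out = np.empty(n, dtype=np.uint8)
    i = j = 0
    for k in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out[k] = S[(S[i] + S[j]) % 256]
    return out

def diffuse(image, key: bytes):
    """XOR every pixel with the RC4 keystream (applied after the chaotic confusion stage)."""
    flat = np.asarray(image, dtype=np.uint8).ravel()
    ks = rc4_keystream(key, flat.size)
    return np.bitwise_xor(flat, ks).reshape(np.asarray(image).shape)
```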
Proteinortho: detection of (co-)orthologs in large-scale analysis.
Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J
2011-04-28
Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
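The reciprocal best alignment heuristic that Proteinortho extends can be reduced to the following sketch: two genes are ortholog candidates when each is the other's best hit. The hit dictionaries (e.g., parsed from BLAST tabular output) are assumed inputs; Proteinortho's actual pipeline adds e-value and coverage filters and graph clustering on top of this.

```python
def reciprocal_best_hits(best_hit_A_to_B, best_hit_B_to_A):
    """Return (a, b) pairs where a in genome A and b in genome B are each other's best hit.

    best_hit_A_to_B[a] = best-scoring gene in B for query a, and vice versa.
    """
    pairs = []
    for a, b in best_hit_A_to_B.items():
        if best_hit_B_to_A.get(b) == a:
            pairs.append((a, b))
    return pairs
```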
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Lilienfeld-Toal, Otto Anatole
2010-11-01
The design of new materials with specific physical, chemical, or biological properties is a central goal of much research in materials and medicinal sciences. Except for the simplest and most restricted cases, brute-force computational screening of all possible compounds for interesting properties is beyond any current capacity due to the combinatorial nature of chemical compound space (the set of stoichiometries and configurations). Consequently, when it comes to computationally optimizing more complex systems, reliable optimization algorithms must not only trade off sufficient accuracy and computational speed of the models involved, they must also aim for rapid convergence in terms of the number of compounds 'visited'. I will give an overview of recent progress on alchemical first-principles paths and gradients in compound space that appear to be promising ingredients for more efficient property optimizations. Specifically, based on molecular grand canonical density functional theory, an approach will be presented for the construction of high-dimensional yet analytical property gradients in chemical compound space. Thereafter, applications to molecular HOMO eigenvalues, catalyst design, and other problems and systems shall be discussed.
Can genetic algorithms help virus writers reshape their creations and avoid detection?
NASA Astrophysics Data System (ADS)
Abu Doush, Iyad; Al-Saleh, Mohammed I.
2017-11-01
Different attack and defence techniques have evolved over time as actions and reactions between the black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The Antivirus (AV) is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures. Basically, it flags a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to a virus so that it will appear normal when scanned by the AV. Brute-force search through all possible changes can be a computationally expensive task. Alternatively, this paper tries to apply a Genetic Algorithm to solve such a problem. Our proposed algorithm is tested on seven different malware instances. The results show that in all the tested malware instances only a small change in each instance was good enough to bypass the AV.
Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
NASA Astrophysics Data System (ADS)
Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
NASA Astrophysics Data System (ADS)
Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît
2018-01-01
Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
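The accept/reject step at the end of each neMD trajectory follows a standard Metropolis criterion on the accumulated nonequilibrium work. The sketch below shows only that step, with kT and the work bookkeeping as assumed inputs and the symmetric momentum reversal indicated by a comment; it is not the authors' implementation.

```python
import math
import random

def accept_nemd_move(work_W, kT):
    """Metropolis criterion on the nonequilibrium work: accept with probability min(1, exp(-W/kT))."""
    if work_W <= 0.0:
        return True
    return random.random() < math.exp(-work_W / kT)

# usage: if accepted, keep the configuration at the end of the neMD trajectory;
# otherwise restore the previous configuration (with the symmetric two-end momentum
# reversal) and continue equilibrium MD from that state.
```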
Molecular Dynamics Simulations of Protein-Ligand Complexes in Near Physiological Conditions
NASA Astrophysics Data System (ADS)
Wambo, Thierry Oscar
Proteins are important molecules for their key functions. However, under certain circumstances, the function of these proteins needs to be regulated to keep us healthy. Ligands are small molecules often used to modulate the function of proteins. The binding affinity is a quantitative measure of how strongly the ligand will modulate the function of the protein: a strong binding affinity will highly impact the performance of the protein. It becomes clear that it is critical to have appropriate techniques to accurately compute the binding affinity. The most difficult task in computer simulations is how to efficiently sample the space spanned by the ligand during the binding process. In this work, we have developed schemes to compute the binding affinity of a ligand to a protein, and of a metal ion to a protein. Application of these techniques to several complexes yields results in agreement with experimental values. These methods are a brute force approach and make no assumption other than that the complexes are governed by the force field used. Specifically, we computed the free energy of binding between (1) human carbonic anhydrase II and the drug acetazolamide (hcaII-AZM), (2) human carbonic anhydrase II and the zinc ion (hcaII-Zinc), and (3) beta-lactoglobulin and five fatty acids (BLG-FA complexes). We found the following free energies of binding, in kcal/mol: -12.96 +/- 2.44 (-15.74) for hcaII-Zinc, -5.76 +/- 0.76 (-5.57) for BLG-OCA, -4.44 +/- 1.08 (-5.22) for BLG-DKA, -6.89 +/- 1.25 (-7.24) for BLG-DAO, -8.57 +/- 0.82 (-8.14) for BLG-MYR, -8.99 +/- 0.87 (-8.72) for BLG-PLM, and -11.87 +/- 1.8 (-10.8) for hcaII-AZM. The values inside the parentheses are experimental results. The simulations and quantitative analysis of each system provide interesting insights into the interactions between each entity and help us to better understand the dynamics of these systems.
Predicting climate change: Uncertainties and prospects for surmounting them
NASA Astrophysics Data System (ADS)
Ghil, Michael
2008-03-01
General circulation models (GCMs) are among the most detailed and sophisticated models of natural phenomena in existence. Still, the lack of robust and efficient subgrid-scale parametrizations for GCMs, along with the inherent sensitivity to initial data and the complex nonlinearities involved, presents a major and persistent obstacle to narrowing the range of estimates for end-of-century warming. Estimating future changes in the distribution of climatic extrema is even more difficult. Brute-force tuning of the large number of GCM parameters does not appear to help reduce the uncertainties. Andronov and Pontryagin (1937) proposed structural stability as a way to evaluate model robustness. Unfortunately, many real-world systems have proved to be structurally unstable. We illustrate these concepts with a very simple model for the El Niño-Southern Oscillation (ENSO). Our model is governed by a differential delay equation with a single delay and periodic (seasonal) forcing. Like many of its more or less detailed and realistic precursors, this model exhibits a Devil's staircase. We study the model's structural stability, describe the mechanisms of the observed instabilities, and connect our findings to ENSO phenomenology. In the model's phase-parameter space, regions of smooth dependence on parameters alternate with rough, fractal ones. We then apply the tools of random dynamical systems and stochastic structural stability to the circle map and a torus map. The effect of noise with compact support on these maps is fairly intuitive: it is the most robust structures in phase-parameter space that survive the smoothing introduced by the noise. The nature of the stochastic forcing matters, thus suggesting that certain types of stochastic parametrizations might be better than others in achieving GCM robustness. This talk represents joint work with M. Chekroun, E. Simonnet and I. Zaliapin.
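A toy integration of a delay differential equation with a single delay and seasonal forcing, of the general form dh/dt = -tanh(kappa*h(t - tau)) + b*cos(2*pi*t), illustrates the kind of model discussed above. The specific functional form, parameter values, and the simple Euler-with-history scheme are assumptions for illustration only.

```python
import numpy as np

def integrate_dde(kappa=50.0, tau=0.5, b=1.0, dt=0.001, t_max=20.0):
    """Euler integration of a single-delay DDE with periodic forcing and constant history."""
    n_delay = int(round(tau / dt))
    n_total = n_delay + int(round(t_max / dt))
    h = np.zeros(n_total + 1)
    h[: n_delay + 1] = 0.1                     # constant initial history on [-tau, 0]
    for i in range(n_delay, n_total):
        t = (i - n_delay) * dt                 # current time t >= 0
        h[i + 1] = h[i] + dt * (-np.tanh(kappa * h[i - n_delay]) + b * np.cos(2 * np.pi * t))
    t_axis = (np.arange(n_total + 1) - n_delay) * dt
    return t_axis[n_delay:], h[n_delay:]

t, h = integrate_dde()
print("late-time amplitude:", h[-5000:].max() - h[-5000:].min())
```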
Chemical reactions induced by oscillating external fields in weak thermal environments
NASA Astrophysics Data System (ADS)
Craven, Galen T.; Bartsch, Thomas; Hernandez, Rigoberto
2015-02-01
Chemical reaction rates must increasingly be determined in systems that evolve under the control of external stimuli. In these systems, when a reactant population is induced to cross an energy barrier through forcing from a temporally varying external field, the transition state that the reaction must pass through during the transformation from reactant to product is no longer a fixed geometric structure, but is instead time-dependent. For a periodically forced model reaction, we develop a recrossing-free dividing surface that is attached to a transition state trajectory [T. Bartsch, R. Hernandez, and T. Uzer, Phys. Rev. Lett. 95, 058301 (2005)]. We have previously shown that for single-mode sinusoidal driving, the stability of the time-varying transition state directly determines the reaction rate [G. T. Craven, T. Bartsch, and R. Hernandez, J. Chem. Phys. 141, 041106 (2014)]. Here, we extend our previous work to the case of multi-mode driving waveforms. Excellent agreement is observed between the rates predicted by stability analysis and rates obtained through numerical calculation of the reactive flux. We also show that the optimal dividing surface and the resulting reaction rate for a reactive system driven by weak thermal noise can be approximated well using the transition state geometry of the underlying deterministic system. This agreement persists as long as the thermal driving strength is less than the order of that of the periodic driving. The power of this result is its simplicity. The surprising accuracy of the time-dependent noise-free geometry for obtaining transition state theory rates in chemical reactions driven by periodic fields reveals the dynamics without requiring the cost of brute-force calculations.
Birefringence study on 3-C/2-D: Barinas Basin (Venezuela)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donati, M.S.; Brown, R.J.
1995-12-31
P-SV data from the Barinas Basin (Venezuela) were processed with the goal of estimating the birefringence effect caused by an anisotropic layer. The target zone is a fractured carbonate reservoir at 3,000 m located in southwestern Venezuela. The time lag between fast and slow S-waves (S-wave splitting) and the angle between the line azimuth and the orientation of the natural coordinates are determined using the Harrison rotation method, based upon modeling of the crosscorrelation function between rotated radial and transverse field components. Due to the small statics observed on the brute stacks of the radial and transverse components, the time shift could be associated with splitting effects due to the carbonate reservoir in this area.
ERIC Educational Resources Information Center
Meier, Deborah
2009-01-01
In this article, the author talks about Ted Sizer and describes him as a "schoolman," a Mr. Chips figure with all the romance that surrounded that image. Accustomed to models of brute power, parents, teachers, bureaucrats, and even politicians were attracted to his message of common decency. There's a way of talking about, and to, school people…
Individual Choice and Unequal Participation in Higher Education
ERIC Educational Resources Information Center
Voigt, Kristin
2007-01-01
Does the unequal participation of non-traditional students in higher education indicate social injustice, even if it can be traced back to individuals' choices? Drawing on luck egalitarian approaches, this article suggests that an answer to this question must take into account the effects of unequal brute luck on educational choices. I use a…
Adaptive accelerated ReaxFF reactive dynamics with validation from simulating hydrogen combustion.
Cheng, Tao; Jaramillo-Botero, Andrés; Goddard, William A; Sun, Huai
2014-07-02
We develop here the methodology for dramatically accelerating the ReaxFF reactive force field based reactive molecular dynamics (RMD) simulations through use of the bond boost concept (BB), which we validate here for describing hydrogen combustion. The bond order, undercoordination, and overcoordination concepts of ReaxFF ensure that the BB correctly adapts to the instantaneous configurations in the reactive system to automatically identify the reactions appropriate to receive the bond boost. We refer to this as adaptive Accelerated ReaxFF Reactive Dynamics or aARRDyn. To validate the aARRDyn methodology, we determined the detailed sequence of reactions for hydrogen combustion with and without the BB. We validate that the kinetics and reaction mechanisms (that is, the detailed sequences of reactive intermediates and their subsequent transformations) for H2 oxidation obtained from aARRDyn agree well with the brute force reactive molecular dynamics (BF-RMD) at 2498 K. Using aARRDyn, we then extend our simulations to the whole range of combustion temperatures from ignition (798 K) to flame temperature (2998 K), and demonstrate that, over this full temperature range, the reaction rates predicted by aARRDyn agree well with the BF-RMD values, extrapolated to lower temperatures. For the aARRDyn simulation at 798 K we find that the time period for half the H2 to form H2O product is ∼538 s, whereas the computational cost was just 1289 ps, a speed increase of ∼0.42 trillion (10^12) over BF-RMD. In carrying out these RMD simulations we found that the ReaxFF-COH2008 version of the ReaxFF force field was not accurate for such intermediates as H3O. Consequently we reoptimized the fit to a quantum mechanics (QM) level, leading to the ReaxFF-OH2014 force field that was used in the simulations.
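As a quick check of the quoted acceleration, the ratio of the simulated physical time to the equivalent brute-force cost reproduces the stated factor:

# ~538 s of physical time reached for a cost equivalent to 1289 ps of BF-RMD
print(538.0 / 1289e-12)   # ~4.2e11, i.e. ~0.42 trillion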
Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L
2018-05-31
Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. We are making this approach available via our new Binding modes of ligands using enhanced sampling (BLUES) package which is freely available on GitHub.
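The move described above can be summarised in a short Python sketch: an instantaneous ligand rotation followed by a driven relaxation whose accumulated protocol work enters a Metropolis-type acceptance test. The system object and the two callables are hypothetical placeholders standing in for the simulation machinery, not the BLUES API.

import math
import random

def ncmc_move(system, kT, propose_rotation, driven_relaxation):
    """propose_rotation(system): applies an instantaneous rigid-body rotation of the ligand.
    driven_relaxation(system): runs the short alchemical/relaxation protocol and returns
    the accumulated protocol work w (same units as kT). Both are placeholders."""
    saved = system.copy()                      # state to restore on rejection
    propose_rotation(system)
    w = driven_relaxation(system)
    # nonequilibrium Metropolis test on the protocol work
    if w <= 0 or random.random() < math.exp(-w / kT):
        return system, True
    return saved, False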
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
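For intuition about this kind of pattern search, the following Python sketch brute-force scans periodic checkpoint intervals for a simplified single-level model (first-order waste C/T + (T/2 + R)/MTBF) and compares the result with the classical sqrt(2*C*MTBF) rule. The cost numbers are illustrative, and this stand-in is far simpler than the two-level analysis in the paper.

import numpy as np

C, R, mu = 60.0, 30.0, 24 * 3600.0          # checkpoint cost, recovery cost, MTBF (seconds)
periods = np.arange(100.0, 20000.0, 10.0)   # candidate checkpoint intervals T
waste = C / periods + (periods / 2.0 + R) / mu   # expected waste fraction per period
best = periods[np.argmin(waste)]
print(best, np.sqrt(2 * C * mu))            # scan result vs. the Young/Daly formula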
Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.
2016-12-01
Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, it is necessary for the RT models to be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to computer speed increases, and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information.
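The following Python sketch illustrates the general PCA idea described above: compress the spectrally varying optical properties, run the expensive multiple-scattering solver only for the mean state and a few EOF perturbations, and apply the resulting correction to a cheap per-wavelength calculation. The two solver callables are placeholders, and the first-order expansion here is a simplification of the published technique.

import numpy as np

def pca_fast_rt(props, expensive_rt, cheap_rt, n_pc=4):
    """props: (n_wavelengths, n_properties) optical-property matrix.
    expensive_rt / cheap_rt: placeholder callables mapping one property vector to a radiance."""
    mean = props.mean(axis=0)
    u, s, vt = np.linalg.svd(props - mean, full_matrices=False)
    scores, eofs = u[:, :n_pc] * s[:n_pc], vt[:n_pc]

    def log_ratio(p):                       # ln(expensive / cheap) at one optical state
        return np.log(expensive_rt(p) / cheap_rt(p))

    correction = np.full(props.shape[0], log_ratio(mean))
    for k in range(n_pc):                   # central-difference expansion in PC space
        grad = 0.5 * (log_ratio(mean + eofs[k]) - log_ratio(mean - eofs[k]))
        correction += scores[:, k] * grad
    # cheap calculation at every wavelength, scaled by the PC-based correction factor
    cheap = np.array([cheap_rt(p) for p in props])
    return cheap * np.exp(correction)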
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam
2014-01-01
In this paper, a unique system level spacecraft design optimization will be presented. A Genetic Algorithm is used to design the global pattern of the reinforcing structure, while a gradient routine is used to adequately stiffen the sub-structure. The system level structural design includes determining the optimal physical location (and number) of reinforcing beams of a lunar pallet lander deck structure. Design of the substructure includes determining placement of secondary stiffeners and the number of rivets required for assembly. In this optimization, several considerations are taken into account. The primary objective was to raise the primary natural frequencies of the structure such that the Pallet Lander primary structure does not significantly couple with the launch vehicle. A secondary objective is to determine how to properly stiffen the reinforcing beams so that the beam web resists the shear buckling load imparted by the spacecraft components mounted to the pallet lander deck during launch and landing. A third objective is that the calculated stress does not exceed the allowable strength of the material. These design requirements must be met while minimizing the overall mass of the spacecraft. The final paper will discuss how the optimization was implemented as well as the results. While driven by optimization algorithms, the primary purpose of this effort was to demonstrate the capability of genetic algorithms to enable design automation in the preliminary design cycle. By developing a routine that can automatically generate designs through the use of Finite Element Analysis, considerable design efficiencies, both in time and overall product, can be obtained over more traditional brute force design methods.
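As a schematic of the genetic-algorithm layer described above, the Python sketch below evolves a 0/1 vector of candidate beam locations. The finite-element evaluation is replaced by a placeholder callable, and the target frequency, penalty weight and GA settings are illustrative assumptions.

import random

N_LOCATIONS, POP, GENS, TARGET_HZ = 20, 40, 100, 50.0

def ga(evaluate):
    """evaluate(design) is a placeholder for the finite-element analysis; it should
    return (first_natural_frequency_hz, mass_kg) for a 0/1 vector of beam locations."""
    def fitness(design):
        freq, mass = evaluate(design)
        penalty = max(0.0, TARGET_HZ - freq) * 1e3   # penalize coupling with the launch vehicle
        return -(mass + penalty)                     # prefer light designs that meet the frequency target

    pop = [[random.randint(0, 1) for _ in range(N_LOCATIONS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                     # elitist selection of the better half
        children = []
        while len(parents) + len(children) < POP:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_LOCATIONS)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_LOCATIONS)] ^= 1  # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)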
Detecting and Cataloging Global Explosive Volcanism Using the IMS Infrasound Network
NASA Astrophysics Data System (ADS)
Matoza, R. S.; Green, D. N.; LE Pichon, A.; Fee, D.; Shearer, P. M.; Mialle, P.; Ceranna, L.
2015-12-01
Explosive volcanic eruptions are among the most powerful sources of infrasound observed on earth, with recordings routinely made at ranges of hundreds to thousands of kilometers. These eruptions can also inject large volumes of ash into heavily travelled aviation corridors, thus posing a significant societal and economic hazard. Detecting and counting the global occurrence of explosive volcanism helps with progress toward several goals in earth sciences and has direct applications in volcanic hazard mitigation. This project aims to build a quantitative catalog of global explosive volcanic activity using the International Monitoring System (IMS) infrasound network. We are developing methodologies to search systematically through IMS infrasound array detection bulletins to identify signals of volcanic origin. We combine infrasound signal association and source location using a brute-force, grid-search, cross-bearings approach. The algorithm corrects for a background prior rate of coherent infrasound signals in a global grid. When volcanic signals are identified, we extract metrics such as location, origin time, acoustic intensity, signal duration, and frequency content, compiling the results into a catalog. We are testing and validating our method on several well-known case studies, including the 2009 eruption of Sarychev Peak, Kuriles, the 2010 eruption of Eyjafjallajökull, Iceland, and the 2015 eruption of Calbuco, Chile. This work represents a step toward the goal of integrating IMS data products into global volcanic eruption early warning and notification systems. Additionally, a better characterization of volcanic signal detection helps improve understanding of operational event detection, discrimination, and association capabilities of the IMS network.
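A minimal Python sketch of a brute-force, grid-search, cross-bearings locator is given below: every cell of a latitude/longitude grid is scored by the misfit between the observed back-azimuths at the arrays and the bearings from the arrays to that cell. Flat-earth geometry is assumed for brevity, and the background-rate correction used in the actual algorithm is omitted.

import numpy as np

def locate(arrays, observed_az, lat_grid, lon_grid):
    """arrays: (n, 2) station lat/lon in degrees; observed_az: back-azimuths in degrees from north."""
    arrays = np.asarray(arrays, float)
    best, best_misfit = None, np.inf
    for lat in lat_grid:
        for lon in lon_grid:
            dlat = lat - arrays[:, 0]
            dlon = (lon - arrays[:, 1]) * np.cos(np.radians(arrays[:, 0]))
            bearing = np.degrees(np.arctan2(dlon, dlat)) % 360.0   # array -> candidate cell
            diff = np.abs(bearing - observed_az)
            diff = np.minimum(diff, 360.0 - diff)                  # wrap azimuth differences
            misfit = np.sum(diff ** 2)
            if misfit < best_misfit:
                best, best_misfit = (lat, lon), misfit
    return best, best_misfit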
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, A; Farrell, T; Diamond, K
2014-08-15
Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. Five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSC values obtained with the proposed selection method were slightly lower than the maxima established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and femoral diameter for the femoral heads provides reasonable segmentation accuracy.
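The selection rule and the overlap metric described above are simple enough to state directly in Python; the sketch below ranks atlas subjects by how close the length ratio is to one and keeps the five closest, and computes the Dice similarity coefficient for two binary masks. Variable names are illustrative.

import numpy as np

def select_atlases(target_length, atlas_lengths, k=5):
    ratios = np.asarray(atlas_lengths, dtype=float) / target_length
    order = np.argsort(np.abs(ratios - 1.0))       # ratios nearest to one first
    return order[:k]

def dice(mask_a, mask_b):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())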
Confronting the Neo-Liberal Brute: Reflections of a Higher Education Middle-Level Manager
ERIC Educational Resources Information Center
Maistry, S. M.
2012-01-01
The higher education scenario in South Africa is fraught with tensions and contradictions. Publicly funded Higher Education Institutions (HEIs) face a particular dilemma. They are expected to fulfill a social mandate which requires a considered response to the needs of the communities in which they are located while simultaneously aspiring for…
Dual-band beacon experiment over Southeast Asia for ionospheric irregularity analysis
NASA Astrophysics Data System (ADS)
Watthanasangmechai, K.; Yamamoto, M.; Saito, A.; Saito, S.; Maruyama, T.; Tsugawa, T.; Nishioka, M.
2013-12-01
A dual-band beacon experiment over Southeast Asia was started in March 2012 in order to capture and analyze ionospheric irregularities in the equatorial region. Five GNU Radio Beacon Receivers (GRBRs) were aligned along 100 degrees geographic longitude. The distances between the stations reach more than 500 km. The field of view of this observational network covers +/- 20 degrees geomagnetic latitude, including the geomagnetic equator. To capture ionospheric irregularities, an absolute TEC estimation technique was developed. The two-station method (Leitinger et al., 1975) is generally accepted as a suitable method to estimate the TEC offsets of a dual-band beacon experiment. However, the distances between the stations directly affect the robustness of the technique. In Southeast Asia, the observational network is too sparse to benefit from the classic two-station method. Moreover, the least-squares approach used in the two-station method over-adjusts to small-scale features of the TEC distribution, which correspond to local minima. We thus propose a new technique to estimate the TEC offsets with supporting data from absolute GPS-TEC from local GPS receivers and the ionospheric height from local ionosondes. The key of the proposed technique is to use a brute-force search with a weighting function to find the TEC offset set that yields a global minimum of RMSE in the whole parameter space. The weight is not necessary when the TEC distribution is smooth, while it significantly improves the TEC estimation during ESF events. As a result, the latitudinal TEC shows a double-hump distribution because of the Equatorial Ionization Anomaly (EIA). In addition, 100 km-scale fluctuations from an Equatorial Spread F (ESF) are captured at night time in equinox seasons. The plausible linkage of the meridional wind with the triggering of ESF is under investigation and will be presented. The proposed method successfully estimates the latitudinal TEC distribution from dual-band beacon data for the sparse observational network in Southeast Asia, and may be useful for other equatorial sectors, such as the African region, as well.
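The offset search itself is a one-dimensional exhaustive scan; a minimal Python sketch is shown below, where the candidate range, the weighting and the reference data are illustrative placeholders.

import numpy as np

def find_offset(rel_tec, ref_tec, weights, candidates=np.arange(-50.0, 50.0, 0.1)):
    """rel_tec: relative TEC from the beacon pass; ref_tec: reference absolute TEC; weights: per-sample weights."""
    best, best_rmse = None, np.inf
    for off in candidates:                          # exhaustive scan of the offset parameter space
        resid = (rel_tec + off) - ref_tec
        rmse = np.sqrt(np.average(resid ** 2, weights=weights))
        if rmse < best_rmse:
            best, best_rmse = off, rmse
    return best, best_rmse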
Planetary geomorphology: Some historical/analytical perspectives
NASA Astrophysics Data System (ADS)
Baker, V. R.
2015-07-01
Three broad themes from the history of planetary geomorphology provide lessons in regard to the logic (valid reasoning processes) for the doing of that science. The long controversy over the origin of lunar craters, which was dominated for three centuries by the volcanic hypothesis, provides examples of reasoning on the basis of authority and a priori presumptions. Percival Lowell's controversy with geologists over the nature of linear markings on the surface of Mars illustrates the role of tenacity in regard to the beliefs of some individual scientists. Finally, modern controversies over the role of water in shaping the surface of Mars illustrate how the a priori method, i.e., belief produced according to reason, can seductively cloud the scientific openness to the importance of brute facts that deviate from a prevailing paradigm.
PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems
NASA Technical Reports Server (NTRS)
Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.
1995-01-01
PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.
NASA Astrophysics Data System (ADS)
Shaw, A. D.; Champneys, A. R.; Friswell, M. I.
2016-08-01
Sudden onset of violent chattering or whirling rotor-stator contact motion in rotational machines can cause significant damage in many industrial applications. It is shown that internal resonance can lead to the onset of bouncing-type partial contact motion away from primary resonances. These partial contact limit cycles can involve any two modes of an arbitrarily high degree-of-freedom system, and can be seen as an extension of a synchronization condition previously reported for a single disc system. The synchronization formula predicts multiple drivespeeds, corresponding to different forms of mode-locked bouncing orbits. These results are backed up by a brute-force bifurcation analysis which reveals the numerical existence of the corresponding family of bouncing orbits at supercritical drivespeeds, provided the damping is sufficiently low. The numerics reveal many overlapping families of solutions, which leads to significant multi-stability of the response at given drive speeds. Further, secondary bifurcations can also occur within each family, altering the nature of the response and ultimately leading to chaos. It is shown how the stiffness and damping of the stator have a large effect on the number and nature of the partial contact solutions, illustrating the extreme sensitivity that would be observed in practice.
Automatic Generation of Data Types for Classification of Deep Web Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngu, A H; Buttler, D J; Critchlow, T J
2005-02-14
A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.
Large-scale detection of repetitions
Smyth, W. F.
2014-01-01
Combinatorics on words began more than a century ago with a demonstration that an infinitely long string with no repetitions could be constructed on an alphabet of only three letters. Computing all the repetitions (such as ⋯TTT⋯ or ⋯CGACGA⋯) in a given string x of length n is one of the oldest and most important problems of computational stringology, requiring Θ(n log n) time in the worst case. About a dozen years ago, it was discovered that repetitions can be computed as a by-product of the Θ(n)-time computation of all the maximal periodicities or runs in x. However, even though the computation is linear, it is also brute force: global data structures, such as the suffix array, the longest common prefix array and the Lempel–Ziv factorization, need to be computed in a preprocessing phase. Furthermore, all of this effort is required despite the fact that the expected number of runs in a string is generally a small fraction of the string length. In this paper, I explore the possibility that repetitions (perhaps also other regularities in strings) can be computed in a manner commensurate with the size of the output. PMID:24751872
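For contrast with the linear-time, runs-based computation discussed above, here is a deliberately brute-force Python sketch that reports every tandem repeat (square) by direct comparison; it needs no global data structures but runs in quadratic time or worse, so it is only usable on short strings.

def squares(x):
    """Return (start, period) for every square x[start : start + 2*period] in x."""
    n = len(x)
    out = []
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):        # p = period of the candidate square
            if x[i:i + p] == x[i + p:i + 2 * p]:
                out.append((i, p))
    return out

print(squares("CGACGATTT"))   # [(0, 3), (6, 1), (7, 1)]: CGACGA plus two overlapping TT squares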
Security and matching of partial fingerprint recognition systems
NASA Astrophysics Data System (ADS)
Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.
2004-08-01
Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutiae-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on one of the FVC2002 databases (DB1). The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
1967-07-28
This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.
1965-05-01
This photograph depicts a view of the test firing of all five F-1 engines for the Saturn V S-IC test stage at the Marshall Space Flight Center. The S-IC stage is the first stage, or booster, of a 364-foot long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. The S-IC Static Test Stand was designed and constructed with the strength of hundreds of tons of steel and cement, planted down to bedrock 40 feet below ground level, and was required to hold down the brute force of the 7,500,000-pound thrust. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward on a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.
Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding
NASA Astrophysics Data System (ADS)
Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool
2017-12-01
In this paper, an encryption scheme for phase-images based on the 3D-Lorenz chaotic system in the Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure than intensity images because of their non-linearity. The proposed scheme further derives its strength from the use of the 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations in MATLAB. It has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attack analysis shows that the key-space is large enough to resist brute-force attacks, and the scheme is also resistant to noise and occlusion attacks. Statistical analysis and the analysis based on the correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results have indicated that the proposed encryption scheme possesses a high level of security.
Mapping PDB chains to UniProtKB entries.
Martin, Andrew C R
2005-12-01
UniProtKB/SwissProt is the main resource for detailed annotations of protein sequences. This database provides a jumping-off point to many other resources through the links it provides. Among others, these include other primary databases, secondary databases, the Gene Ontology and OMIM. While a large number of links are provided to Protein Data Bank (PDB) files, obtaining a regularly updated mapping between UniProtKB entries and PDB entries at the chain or residue level is not straightforward. In particular, there is no regularly updated resource which allows a UniProtKB/SwissProt entry to be identified for a given residue of a PDB file. We have created a completely automatically maintained database which maps PDB residues to residues in UniProtKB/SwissProt and UniProtKB/trEMBL entries. The protocol uses links from PDB to UniProtKB, from UniProtKB to PDB and a brute-force sequence scan to resolve PDB chains for which no annotated link is available. Finally the sequences from PDB and UniProtKB are aligned to obtain a residue-level mapping. The resource may be queried interactively or downloaded from http://www.bioinf.org.uk/pdbsws/.
Proteinortho: Detection of (Co-)orthologs in large-scale analysis
2011-01-01
Background: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. Results: The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Conclusions: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware. PMID:21526987
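The core of the reciprocal best alignment heuristic can be illustrated with a small Python sketch: two proteins are paired when each is the other's best-scoring hit. The score dictionaries stand in for precomputed BLAST-like alignment results and are placeholders, not Proteinortho's actual data structures.

def reciprocal_best_hits(scores_ab, scores_ba):
    """scores_ab[a] -> {b: score} for proteins a in genome A hit against genome B; likewise scores_ba."""
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items() if hits}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items() if hits}
    # keep (a, b) only when the best hits agree in both directions
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]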
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate incertitude through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
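The multilevel idea can be sketched in a few lines of Python for the two-level case: many cheap low-fidelity samples plus a small number of coupled high-minus-low corrections evaluated on identical random inputs. The model callables and sample sizes are placeholders, not the PSAAP II solver chain.

import numpy as np

def mlmc_two_level(coarse_model, fine_model, sample_input, n_coarse=1000, n_fine=50):
    rng = np.random.default_rng(0)
    coarse = np.array([coarse_model(sample_input(rng)) for _ in range(n_coarse)])
    corrections = []
    for _ in range(n_fine):
        xi = sample_input(rng)                     # identical random input fed to both fidelities
        corrections.append(fine_model(xi) - coarse_model(xi))
    # E[fine] = E[coarse] + E[fine - coarse]; the correction term has small variance,
    # so only a few expensive samples are needed.
    return coarse.mean() + np.mean(corrections)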
Expert system for on-board satellite scheduling and control
NASA Technical Reports Server (NTRS)
Barry, John M.; Sary, Charisse
1988-01-01
An Expert System is described which Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities. This expert system is the Satellite Controller. The resources to be scheduled include power, propellant and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor intensive and time consuming ground operations task. Developing a schedule requires extensive knowledge of the system and subsystems operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts and takes anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, by examining cryptic mnemonic data off line to interpret the health and status of the satellite. Then schedules are formulated either as the result of practical operator experience or heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life cycle costs and decrease dependence on ground control. This reduction is required to increase autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.
Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders
2014-10-01
The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
Full counting statistics of conductance for disordered systems
NASA Astrophysics Data System (ADS)
Fu, Bin; Zhang, Lei; Wei, Yadong; Wang, Jian
2017-09-01
Quantum transport is by nature a stochastic process. As a result, the conductance is fully characterized by its average value and fluctuations, i.e., characterized by full counting statistics (FCS). Since disorder is inevitable in nanoelectronic devices, it is important to understand how FCS behaves in disordered systems. The traditional approach dealing with fluctuations or cumulants of conductance uses diagrammatic perturbation expansion of the Green's function within the coherent potential approximation (CPA), which is extremely complicated, especially for high-order cumulants. In this paper, we develop a theoretical formalism based on the nonequilibrium Green's function by directly taking the disorder average of the generating function of the FCS of conductance within CPA. This is done by mapping the problem into higher dimensions so that the functional dependence of the generating function on the Green's function becomes linear and the diagrammatic perturbation expansion is not needed anymore. Our theory is very simple and allows us to calculate cumulants of conductance at any desired order efficiently. As an application of our theory, we calculate the cumulants of conductance up to fifth order for disordered systems in the presence of Anderson and binary disorders. Our numerical results for the cumulants of conductance show remarkable agreement with those obtained by brute force calculation.
A Site Density Functional Theory for Water: Application to Solvation of Amino Acid Side Chains.
Liu, Yu; Zhao, Shuangliang; Wu, Jianzhong
2013-04-09
We report a site density functional theory (SDFT) based on the conventional atomistic models of water and the universality ansatz of the bridge functional. The excess Helmholtz energy functional is formulated in terms of a quadratic expansion with respect to the local density deviation from that of a uniform system and a universal functional for all higher-order terms approximated by that of a reference hard-sphere system. With the atomistic pair direct correlation functions of the uniform system calculated from MD simulation and an analytical expression for the bridge functional from the modified fundamental measure theory, the SDFT can be used to predict the structure and thermodynamic properties of water under inhomogeneous conditions with a computational cost negligible in comparison to that of brute-force simulations. The numerical performance of the SDFT has been demonstrated with the predictions of the solvation free energies of 15 molecular analogs of amino acid side chains in water represented by SPC/E, SPC, and TIP3P models. For the TIP3P model, a comparison of the theoretical predictions with MD simulation and experimental data shows agreement within 0.64 and 1.09 kcal/mol on average, respectively.
Multibeam Gpu Transient Pipeline for the Medicina BEST-2 Array
NASA Astrophysics Data System (ADS)
Magro, A.; Hickish, J.; Adami, K. Z.
2013-09-01
Radio transient discovery using next generation radio telescopes will pose several digital signal processing and data transfer challenges, requiring specialized high-performance backends. Several accelerator technologies are being considered as prototyping platforms, including Graphics Processing Units (GPUs). In this paper we present a real-time pipeline prototype capable of processing multiple beams concurrently, performing Radio Frequency Interference (RFI) rejection through thresholding, correcting for the delay in signal arrival times across the frequency band using brute-force dedispersion, event detection and clustering, and finally candidate filtering, with the capability of persisting data buffers containing interesting signals to disk. This setup was deployed at the BEST-2 SKA pathfinder in Medicina, Italy, where several benchmarks and test observations of astrophysical transients were conducted. These tests show that on the deployed hardware eight 20 MHz beams can be processed simultaneously for 640 Dispersion Measure (DM) values. Furthermore, the clustering and candidate filtering algorithms employed prove to be good candidates for online event detection techniques. The number of beams which can be processed increases proportionally to the number of servers deployed and number of GPUs, making it a viable architecture for current and future radio telescopes.
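Brute-force incoherent dedispersion, the delay-correction step mentioned above, can be sketched in Python as follows: for each trial dispersion measure every frequency channel is shifted by the cold-plasma delay and the channels are summed. The filterbank block, channel frequencies and DM grid are placeholders; a GPU implementation would parallelise the same loops.

import numpy as np

def dedisperse(data, freqs_ghz, dt_s, dm_trials):
    """data: (n_chan, n_samp) filterbank block; freqs_ghz: channel centre frequencies (GHz)."""
    f_ref = freqs_ghz.max()
    series = np.empty((len(dm_trials), data.shape[1]))
    for i, dm in enumerate(dm_trials):
        # cold-plasma delay relative to the highest channel: ~4.15 ms * DM * (f^-2 - f_ref^-2), f in GHz
        delays = 4.15e-3 * dm * (freqs_ghz ** -2 - f_ref ** -2)   # seconds
        shifts = np.round(delays / dt_s).astype(int)
        series[i] = sum(np.roll(data[c], -shifts[c]) for c in range(data.shape[0]))
    return series        # one dedispersed time series per trial DM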
NASA Astrophysics Data System (ADS)
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme is proposed, based on a Substitution-box (S-box) constructed from a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Then, we used dynamic keys instead of the fixed keys used in other approaches, to control the encryption process and make any attack impossible. The new S-box was constructed by mixing a chaotic map and an LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid combination of the S-box and chaotic systems strengthened the overall encryption performance and enlarged the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem showed high performance and great potential for widespread use in cryptographic applications.
Geomagnetic Cutoff Rigidity Computer Program: Theory, Software Description and Example
NASA Technical Reports Server (NTRS)
Smart, D. F.; Shea, M. A.
2001-01-01
The access of charged particles to the earth from space through the geomagnetic field has been of interest since the discovery of the cosmic radiation. The early cosmic ray measurements found that cosmic ray intensity was ordered by the magnetic latitude and the concept of cutoff rigidity was developed. The pioneering work of Stoermer resulted in the theory of particle motion in the geomagnetic field, but the fundamental mathematical equations developed have 'no solution in closed form'. This difficulty has forced researchers to use the 'brute force' technique of numerical integration of individual trajectories to ascertain the behavior of trajectory families or groups. This requires that many of the trajectories must be traced in order to determine what energy (or rigidity) a charged particle must have to penetrate the magnetic field and arrive at a specified position. It turned out the cutoff rigidity was not a simple quantity but had many unanticipated complexities that required many hundreds if not thousands of individual trajectory calculations to solve. The accurate calculation of particle trajectories in the earth's magnetic field is a fundamental problem that limited the efficient utilization of cosmic ray measurements during the early years of cosmic ray research. As the power of computers has improved over the decades, the numerical integration procedure has grown more tractable, and magnetic field models of increasing accuracy and complexity have been utilized. This report is documentation of a general FORTRAN computer program to trace the trajectory of a charged particle of a specified rigidity from a specified position and direction through a model of the geomagnetic field.
Huang, Yeqi; Deng, Tao; Li, Zhenning; Wang, Nan; Yin, Chanqin; Wang, Shiqiang; Fan, Shaojia
2018-09-01
This article uses the WRF-CMAQ model to systematically study the source apportionment of PM2.5 under typical meteorological conditions in the dry season (November 2010) in the Pearl River Delta (PRD). According to geographical location and the relative magnitude of pollutant emissions, Guangdong Province is divided into eight subdomains for the source apportionment study. The Brute-Force Method (BFM) was implemented to simulate the contributions from different regions to PM2.5 pollution in the PRD. Results show that industrial sources accounted for the largest proportion. Among the emitted species, the total amounts of NOx and VOC in Guangdong Province, and of NH3 and VOC in Hunan Province, are relatively large. Within Guangdong Province, the emissions of SO2, NOx and VOC are larger in the PRD, while NH3 emissions are higher outside the PRD. In northerly-controlled episodes, model simulations demonstrate that local emissions are important for PM2.5 pollution in Guangzhou and Foshan. Meanwhile, emissions from Dongguan and Huizhou (DH), and from outside Guangdong Province (SW), are important contributors to PM2.5 pollution in Guangzhou. For PM2.5 pollution in Foshan, emissions in Guangzhou and DH are the major contributors. In addition, a high contribution ratio from DH only occurs in severe pollution periods. In southerly-controlled episodes, the contribution from the southern PRD increases, and local emissions and emissions from Shenzhen, DH, and Zhuhai-Jiangmen-Zhongshan (ZJZ) are the major contributors. The regional contributions to the chemical composition of PM2.5 indicate that the sources of the chemical components are similar to those of PM2.5. In particular, SO4^2- is mainly sourced from emissions outside Guangdong Province, while NO3^- and NH4^+ are more linked to agricultural emissions.
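The Brute-Force (zero-out) Method reduces to a simple difference between a base run and a perturbed run; the Python sketch below expresses that bookkeeping, with the model outputs themselves treated as given arrays.

def bfm_contributions(base_conc, zero_out_conc_by_region):
    """base_conc: modeled PM2.5 field from the base run;
    zero_out_conc_by_region: {region: field from the run with that region's emissions removed}."""
    return {region: base_conc - conc for region, conc in zero_out_conc_by_region.items()}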
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Timothy D.; Spiegel, David S.; McElwain, Michael W.
2014-10-20
We conduct a statistical analysis of a combined sample of direct imaging data, totalling nearly 250 stars. The stars cover a wide range of ages and spectral types, and include five detections (κ And b, two ∼60 M_J brown dwarf companions in the Pleiades, PZ Tel B, and CD–35 2722B). For some analyses we add a currently unpublished set of SEEDS observations, including the detections GJ 504b and GJ 758B. We conduct a uniform, Bayesian analysis of all stellar ages using both membership in a kinematic moving group and activity/rotation age indicators. We then present a new statistical method for computing the likelihood of a substellar distribution function. By performing most of the integrals analytically, we achieve an enormous speedup over brute-force Monte Carlo. We use this method to place upper limits on the maximum semimajor axis of the distribution function derived from radial-velocity planets, finding model-dependent values of ∼30-100 AU. Finally, we model the entire substellar sample, from massive brown dwarfs to a theoretically motivated cutoff at ∼5 M_J, with a single power-law distribution. We find that p(M, a) ∝ M^(-0.65±0.60) a^(-0.85±0.39) (1σ errors) provides an adequate fit to our data, with 1.0%-3.1% (68% confidence) of stars hosting 5-70 M_J companions between 10 and 100 AU. This suggests that many of the directly imaged exoplanets known, including most (if not all) of the low-mass companions in our sample, formed by fragmentation in a cloud or disk, and represent the low-mass tail of the brown dwarfs.
NASA Astrophysics Data System (ADS)
Marhadi, Kun Saptohartyadi
Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to specified damage. A design optimized under one damage specification can be sensitive to other damage cases not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
g_contacts: Fast contact search in bio-molecular ensemble data
NASA Astrophysics Data System (ADS)
Blau, Christian; Grubmüller, Helmut
2013-12-01
Short-range interatomic interactions govern many bio-molecular processes. Therefore, identifying close interaction partners in ensemble data is an essential task in structural biology and computational biophysics. A contact search can be cast as a typical range search problem for which efficient algorithms have been developed. However, none of those has yet been adapted to the context of macromolecular ensembles, particularly in a molecular dynamics (MD) framework. Here a set-decomposition algorithm is implemented which detects all contacting atoms or residues in at most O(N log N) run time, in contrast to the O(N^2) complexity of a brute-force approach.
Catalogue identifier: AEQA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 8945
No. of bytes in distributed program, including test data, etc.: 981604
Distribution format: tar.gz
Programming language: C99.
Computer: PC.
Operating system: Linux.
RAM: ≈ size of input frame
Classification: 3, 4.14.
External routines: Gromacs 4.6 [1]
Nature of problem: Finding atoms or residues that are closer to one another than a given cut-off.
Solution method: Excluding distant atoms from distance calculations by decomposing the given set of atoms into disjoint subsets.
Running time: ≤ O(N log N)
References: [1] S. Pronk, S. Pall, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess and E. Lindahl, GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit, Bioinformatics 29 (7) (2013).
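In the same spirit as the set-decomposition approach (though not the paper's algorithm), the Python sketch below avoids the all-pairs loop by binning atoms into cells of edge length equal to the cut-off and testing only atoms in neighbouring cells; periodic boundaries are omitted for brevity.

import numpy as np
from collections import defaultdict
from itertools import product

def contacts(coords, cutoff):
    """coords: (N, 3) array of atom positions; returns index pairs closer than cutoff."""
    coords = np.asarray(coords, float)
    cells = defaultdict(list)
    for i, xyz in enumerate(coords):
        cells[tuple((xyz // cutoff).astype(int))].append(i)   # bin atoms into cubic cells
    pairs, cut2 = [], cutoff * cutoff
    for cell, members in cells.items():
        for offset in product((-1, 0, 1), repeat=3):          # the 27 neighbouring cells
            neigh = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in cells.get(neigh, ()):
                    if j > i and np.sum((coords[i] - coords[j]) ** 2) < cut2:
                        pairs.append((i, j))
    return pairs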
A theoretical formulation of the electrophysiological inverse problem on the sphere
NASA Astrophysics Data System (ADS)
Riera, Jorge J.; Valdés, Pedro A.; Tanabe, Kunio; Kawashima, Ryuta
2006-04-01
The construction of three-dimensional images of the primary current density (PCD) produced by neuronal activity is a problem of great current interest in the neuroimaging community, though it was initially formulated in the 1970s. There exist even now enthusiastic debates about the authenticity of most of the inverse solutions proposed in the literature, in which low resolution electrical tomography (LORETA) is a focus of attention. However, in our opinion, the capabilities and limitations of the electro- and magneto-encephalographic techniques to determine PCD configurations have not been extensively explored from a theoretical framework, even for simple volume conductor models of the head. In this paper, the electrophysiological inverse problem for the spherical head model is cast in terms of the reproducing kernel Hilbert space (RKHS) formalism, which allows us to identify the null spaces of the implicated linear integral operators and also to define their representers. The PCD is described in terms of a continuous basis for the RKHS, which explicitly separates the harmonic and non-harmonic components. The RKHS concept permits us to bring LORETA into the scope of the general smoothing splines theory. A particular way of calculating the general smoothing splines is illustrated, avoiding a premature brute-force discretization. The Bayes information criterion is used to handle dissimilarities in the signal/noise ratios and physical dimensions of the measurement modalities, which could affect the estimation of the amount of smoothness required for that class of inverse solution to be well specified. In order to validate the proposed method, we have estimated the 3D spherical smoothing splines from two data sets: electric potentials obtained from a skull phantom and magnetic fields recorded from subjects performing a human face recognition experiment.
Chen, Chun-Teh; Martin-Martinez, Francisco J.; Jung, Gang Seob
2017-01-01
A set of computational methods that contains a brute-force algorithmic generation of chemical isomers, molecular dynamics (MD) simulations, and density functional theory (DFT) calculations is reported and applied to investigate nearly 3000 probable molecular structures of polydopamine (PDA) and eumelanin. All probable early-polymerized 5,6-dihydroxyindole (DHI) oligomers, ranging from dimers to tetramers, have been systematically analyzed to find the most stable geometry connections as well as to propose a set of molecular models that represents the chemically diverse nature of PDA and eumelanin. Our results indicate that more planar oligomers have a tendency to be more stable. This finding is in good agreement with recent experimental observations, which suggested that PDA and eumelanin are composed of nearly planar oligomers that appear to be stacked together via π–π interactions to form graphite-like layered aggregates. We also show that there is a group of tetramers notably more stable than the others, implying that even though there is an inherent chemical diversity in PDA and eumelanin, the molecular structures of the majority of the species are quite repetitive. Our results also suggest that larger oligomers are less likely to form. This observation is also consistent with experimental measurements, supporting the existence of small oligomers instead of large polymers as main components of PDA and eumelanin. In summary, this work brings an insight into the controversial structure of PDA and eumelanin, explaining some of the most important structural features, and providing a set of molecular models for more accurate modeling of eumelanin-like materials. PMID:28451292
NASA Astrophysics Data System (ADS)
Tian, H.; Liu, S.; Zhu, C.; Liu, H.; Wu, B.
2017-12-01
Anthropogenic atmospheric emissions of air pollutants have caused worldwide concern due to their adverse effects on human health and the ecosystem. By determining the best available emission factors for various source categories, we established comprehensive atmospheric emission inventories of hazardous air pollutants, including 12 typical toxic heavy metals (Hg, As, Se, Pb, Cd, Cr, Ni, Sb, Mn, Co, Cu, and Zn), from primary anthropogenic activities in the Beijing-Tianjin-Hebei (BTH) region of China for the year 2012 for the first time. The annual emissions of these pollutants were allocated to a high-resolution 9 km × 9 km grid with an ArcGIS methodology and surrogate indexes, such as regional population and gross domestic product (GDP). Notably, the total heavy metal emissions from this region represented about 10.9% of the Chinese national total. The areas with high emissions of heavy metals were mainly concentrated in Tangshan, Shijiazhuang, Handan and Tianjin. Further, the WRF-CMAQ modeling system was applied to simulate the regional concentrations of heavy metals and explore their spatial-temporal variations, and the source apportionment of these heavy metals in the BTH region was performed using the Brute-Force method. Finally, integrated countermeasures were proposed to minimize the final discharge of air pollutants in light of the current and future demands for energy saving and pollution reduction in China.
Keywords: heavy metals; particulate matter; emission inventory; CMAQ model; source apportionment
Acknowledgment: This work was funded by the National Natural Science Foundation of China (21377012 and 21177012) and the Trail Special Program of Research on the Cause and Control Technology of Air Pollution under the National Key Research and Development Plan of China (2016YFC0201501).
Multifractal model of magnetic susceptibility distributions in some igneous rocks
Gettings, Mark E.
2012-01-01
Measurements of in-situ magnetic susceptibility were compiled from mainly Precambrian crystalline basement rocks beneath the Colorado Plateau and ranges in Arizona, Colorado, and New Mexico. The susceptibility meter used samples about 30 cm3 of rock and measures variations in the modal distribution of magnetic minerals, which form a minor volumetric component of these coarsely crystalline granitic to granodioritic rocks. Recent surveys include 50–150 measurements on each outcrop and show that the distribution of magnetic susceptibilities is highly variable, multimodal and strongly non-Gaussian. Although the distribution of magnetic susceptibility is well known to be multifractal, the small number of data points at an outcrop precludes calculation of the multifractal spectrum by conventional methods. Instead, a brute force approach was adopted using multiplicative cascade models to fit the outcrop-scale variability of magnetic minerals. Varying the model segment proportion and length parameters resulted in 26 676 models spanning parameter space. Distributions at each outcrop were normalized to unit magnetic susceptibility and summed so that all data for a rock body could be compared while accounting for variations in petrology and alteration. Once the best-fitting model was found, the equation relating the segment proportion and length parameters was solved numerically to yield the multifractal spectrum estimate. For the best fits, the relative density (the proportion divided by the segment length) of one segment tends to be dominant and the other two densities are smaller and nearly equal. No other consistent relationships between the best-fit parameters were identified. The multifractal spectrum estimates appear to distinguish between metamorphic gneiss sites and sites on plutons, even if the plutons have been metamorphosed. In particular, rocks that have undergone multiple tectonic events tend to have a larger range of scaling exponents.
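The following is a minimal sketch, not the study's actual procedure, of what a brute-force sweep over three-segment multiplicative cascade parameters could look like. The number of cascade levels, the parameter grid, the crude quantile-based misfit, and the synthetic "outcrop data" are all assumptions introduced for illustration; the actual work used 26 676 specific parameter combinations and real susceptibility measurements.

```python
# Hypothetical sketch: a 3-segment multiplicative cascade and a brute-force
# parameter sweep over segment proportions and lengths. Grid, misfit measure
# and synthetic data are illustrative assumptions only.
import itertools
import numpy as np

def cascade_densities(props, lens, levels=6):
    """Cell densities of a 3-segment multiplicative cascade after `levels` splits."""
    mass = np.array([1.0])
    width = np.array([1.0])
    for _ in range(levels):
        mass = np.concatenate([mass * p for p in props])
        width = np.concatenate([width * l for l in lens])
    return mass / width  # density = proportion divided by segment length, per cell

def misfit(model, data):
    """Crude quantile distance between two normalized samples (assumption)."""
    q = np.linspace(0.01, 0.99, 50)
    return np.abs(np.quantile(model / model.sum(), q)
                  - np.quantile(data / data.sum(), q)).sum()

# synthetic stand-in for ~100 normalized outcrop susceptibility readings
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=1.0, size=120)

grid = np.arange(0.1, 0.9, 0.1)
best = None
for p1, p2 in itertools.product(grid, grid):
    p3 = 1.0 - p1 - p2
    if p3 < 0.05:
        continue
    for l1, l2 in itertools.product(grid, grid):
        l3 = 1.0 - l1 - l2
        if l3 < 0.05:
            continue
        score = misfit(cascade_densities((p1, p2, p3), (l1, l2, l3)), data)
        if best is None or score < best[0]:
            best = (score, (p1, p2, p3), (l1, l2, l3))
print("best-fit proportions and lengths:", best[1], best[2])
```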
Contact-assisted protein structure modeling by global optimization in CASP11.
Joo, Keehyoung; Joung, InSuk; Cheng, Qianyi; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
We have applied the conformational space annealing method to the contact-assisted protein structure modeling in CASP11. For Tp targets, where predicted residue-residue contact information was provided, the contact energy term in the form of the Lorentzian function was implemented together with the physical energy terms used in our template-free modeling of proteins. Although we observed some structural improvement of Tp models over the models predicted without the Tp information, the improvement was not substantial on average. This is partly due to the inaccuracy of the provided contact information, where only about 18% of it was correct. For Ts targets, where the information of ambiguous NOE (Nuclear Overhauser Effect) restraints was provided, we formulated the modeling in terms of the two-tier optimization problem, which covers: (1) the assignment of NOE peaks and (2) the three-dimensional (3D) model generation based on the assigned NOEs. Although solving the problem in a direct manner appears to be intractable at first glance, we demonstrate through CASP11 that remarkably accurate protein 3D modeling is possible by brute force optimization of a relevant energy function. For 19 Ts targets of the average size of 224 residues, generated protein models were of about 3.6 Å Cα atom accuracy. Even greater structural improvement was observed when additional Tc contact information was provided. For 20 out of the total 24 Tc targets, we were able to generate protein structures which were better than the best model from the rest of the CASP11 groups in terms of GDT-TS. Proteins 2016; 84(Suppl 1):189-199. © 2015 Wiley Periodicals, Inc.
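The abstract states that the contact restraint was implemented as a Lorentzian-shaped energy term; the exact functional form and parameters are not given, so the sketch below is only a plausible illustration of such a restraint. The well depth, width and contact cutoff are assumptions.

```python
# Minimal sketch of a Lorentzian-shaped contact restraint between two Cα atoms.
# Functional form, width, depth and cutoff values are assumptions, not the
# parameters actually used in the CASP11 work.
import numpy as np

def lorentzian_contact_energy(r, r0=8.0, width=2.0, depth=1.0):
    """Smooth well rewarding distances r (Å) near the contact cutoff r0."""
    return -depth * width**2 / ((r - r0) ** 2 + width**2)

def total_contact_energy(coords, contacts):
    """Sum the restraint over predicted residue-residue contact pairs."""
    e = 0.0
    for i, j in contacts:
        r = np.linalg.norm(coords[i] - coords[j])
        e += lorentzian_contact_energy(r)
    return e

# toy example: three Cα positions and one predicted contact between residues 0 and 2
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]])
print(total_contact_energy(coords, [(0, 2)]))
```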
The emerging science of precision medicine and pharmacogenomics for Parkinson's disease.
Payami, Haydeh
2017-08-01
Current therapies for Parkinson's disease are problematic because they are symptomatic and have adverse effects. New drugs have failed in clinical trials because of inadequate efficacy. At the core of the problem is trying to make one drug work for all Parkinson's disease patients, when we know this premise is wrong because (1) Parkinson's disease is not a single disease, and (2) no two individuals have the same biological makeup. Precision medicine is the goal to strive for, but we are only at the beginning stages of building the infrastructure for one of the most complex projects in the history of science, and it will be a long time before Parkinson's disease reaps the benefits. Pharmacogenomics, a cornerstone of precision medicine, has already proven successful for many conditions and could also propel drug discovery and improve treatment for Parkinson's disease. To make progress in the pharmacogenomics of Parkinson's disease, we need to change course from small inconclusive candidate gene studies to large-scale rigorously planned genome-wide studies that capture the nuclear genome and the microbiome. Pharmacogenomic studies must use homogenous subtypes of Parkinson's disease or apply the brute force of statistical power to overcome heterogeneity, which will require large sample sizes achievable only via internet-based methods and electronic databases. Large-scale pharmacogenomic studies, together with biomarker discovery efforts, will yield the knowledge necessary to design clinical trials with precision to alleviate confounding by disease heterogeneity and interindividual variability in drug response, two of the major impediments to successful drug discovery and effective treatment. © 2017 International Parkinson and Movement Disorder Society.
NASA Astrophysics Data System (ADS)
Yang, Y.; Zhao, Y.
2017-12-01
To understand the differences, and their origins, among emission inventories built with different methods, emissions of PM10, PM2.5, OC, BC, CH4, VOCs, CO, CO2, NOX, SO2 and NH3 from open biomass burning (OBB) in the Yangtze River Delta (YRD) are calculated for 2005-2012 using three approaches (bottom-up, FRP-based and constraining). The inter-annual trends in emissions from the FRP-based and constraining methods track the fire counts in 2005-2012, whereas the bottom-up trend does not. For most years, emissions of all species estimated with the constraining method are smaller than those from the bottom-up method (except for VOCs) and larger than those from the FRP-based method (except for EC, CH4 and NH3). Such discrepancies result mainly from the different masses of crop residues burned in the field (CRBF) estimated by the three methods. Among the three methods, the concentrations simulated by chemistry transport modeling with the constrained emissions are the closest to available observations, implying that the constraining method provides the best estimate of OBB emissions. CO emissions from the three methods are compared with other studies. Similar temporal variations are found for the constrained emissions, the FRP-based emissions, GFASv1.0 and GFEDv4.1s, with the largest and lowest emissions estimated for 2012 and 2006, respectively. The constrained CO emissions in this study are smaller than those in other studies based on the bottom-up method and larger than those based on burned area and FRP derived from satellite. The contributions of OBB to two particulate pollution events in 2010 and 2012 are analyzed with the brute-force method. The average contribution of OBB to PM10 mass concentrations during 8-14 June 2012 is estimated at 38.9% (74.8 μg m-3), larger than that during 17-24 June 2010 at 23.6% (38.5 μg m-3). The influences of diurnal emission profiles and meteorology on air pollution caused by OBB are also evaluated; the results suggest that pollution caused by OBB becomes heavier when meteorological conditions are unfavorable, and that more attention should be paid to supervision at night. Quantified with Monte Carlo simulation, the uncertainties of OBB emissions with the constraining method are significantly lower than those of the bottom-up or FRP-based methods.
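The "brute-force method" used here for source contributions is commonly the zero-out (perturbation) approach: run the chemistry transport model with and without the source and difference the outputs. The schematic below is only a sketch of that idea; `run_ctm` is a hypothetical stand-in (a trivial linear toy), not the actual model configuration used in the study.

```python
# Schematic sketch of the brute-force (zero-out) source contribution estimate:
# the OBB contribution is the difference between a baseline run and a run with
# OBB emissions removed. `run_ctm` is a hypothetical placeholder model.
import numpy as np

def run_ctm(emissions):
    """Placeholder: returns gridded PM10 concentrations for given emissions."""
    # a trivial linear toy model, only to make the example executable
    return 0.5 * emissions["OBB"] + 1.0 * emissions["other"]

emissions = {"OBB": np.array([80.0, 120.0]), "other": np.array([60.0, 90.0])}

base = run_ctm(emissions)
no_obb = run_ctm({**emissions, "OBB": np.zeros_like(emissions["OBB"])})

contribution = base - no_obb                  # concentration attributable to OBB
share = 100.0 * contribution / base           # percentage contribution
print(contribution, share)
```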
NASA Astrophysics Data System (ADS)
Uranishi, Katsushige; Ikemori, Fumikazu; Nakatsubo, Ryohei; Shimadera, Hikari; Kondo, Akira; Kikutani, Yuki; Asano, Katsuyoshi; Sugata, Seiji
2017-10-01
This study presented a comparison approach with multiple source apportionment methods to identify which sectors of emission data have large biases. The source apportionment methods for the comparison approach included both receptor and chemical transport models, which are widely used to quantify the impacts of emission sources on fine particulate matter of less than 2.5 μm in diameter (PM2.5). We used daily chemical component concentration data in the year 2013, including data for water-soluble ions, elements, and carbonaceous species of PM2.5 at 11 sites in the Kinki-Tokai district in Japan in order to apply the Positive Matrix Factorization (PMF) model for the source apportionment. Seven PMF factors of PM2.5 were identified with the temporal and spatial variation patterns and also retained features of the sites. These factors comprised two types of secondary sulfate, road transportation, heavy oil combustion by ships, biomass burning, secondary nitrate, and soil and industrial dust, accounting for 46%, 17%, 7%, 14%, 13%, and 3% of the PM2.5, respectively. The multiple-site data enabled a comprehensive identification of the PM2.5 sources. For the same period, source contributions were estimated by air quality simulations using the Community Multiscale Air Quality model (CMAQ) with the brute-force method (BFM) for four source categories. Both models provided consistent results for the following three of the four source categories: secondary sulfates, road transportation, and heavy oil combustion sources. For these three target categories, the models' agreement was supported by the small differences and high correlations between the CMAQ/BFM- and PMF-estimated source contributions to the concentrations of PM2.5, SO42-, and EC. In contrast, contributions of the biomass burning sources apportioned by CMAQ/BFM were much lower than and little correlated with those captured by the PMF model, indicating large uncertainties in the biomass burning emissions used in the CMAQ simulations. Thus, this comparison approach using the two antithetical models enables us to identify which sectors of emission data have large biases for improvement of future air quality simulations.
Management of a stage-structured insect pest: an application of approximate optimization.
Hackett, Sean C; Bonsall, Michael B
2018-06-01
Ecological decision problems frequently require the optimization of a sequence of actions over time where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly, Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously while management decisions are made at discrete, weekly intervals. For each week, the model chooses between inaction, insecticide application, or one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted to those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always outperformed a myopic policy and that decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs. However, longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy-based approach developed here render questions that are intractable to dynamic programming amenable to inference, but they should be applied carefully because their flexibility comes at the expense of guaranteed optimality. However, given the complexity of many ecological management problems, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model. © 2018 by the Ecological Society of America.
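A generic look-ahead (rolling-horizon) policy can be sketched as follows: at each weekly decision point, enumerate every action sequence over a short planning horizon, score each with a simulation of the system, and commit only to the first action of the cheapest sequence. The pest dynamics, costs and action set below are illustrative assumptions, not the authors' stage-structured medfly model.

```python
# Generic look-ahead (rolling-horizon) policy sketch: brute-force enumeration
# of action sequences over a short horizon, scored with a toy pest model.
# Dynamics and cost values are assumptions made for illustration only.
import itertools

ACTIONS = {"none": 0.0, "insecticide": 50.0, "sterile_release": 30.0}

def step(pest, action):
    """Toy weekly dynamics: population growth reduced by management action."""
    growth = {"none": 1.6, "insecticide": 0.5, "sterile_release": 0.9}[action]
    return pest * growth

def cost(pest, action):
    """Action cost plus pest-induced crop loss (assumed 2 units per pest)."""
    return ACTIONS[action] + 2.0 * pest

def lookahead_action(pest, horizon=3):
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=horizon):
        p, c = pest, 0.0
        for a in seq:
            c += cost(p, a)
            p = step(p, a)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq[0]          # apply only the first action, then re-plan

pest = 10.0
for week in range(16):
    a = lookahead_action(pest)
    pest = step(pest, a)
    print(week, a, round(pest, 1))
```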
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
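To give a flavor of the sensitivity-based adjustment described above, the sketch below applies a first-order ozone sensitivity, assumed to depend linearly on the hourly ozone value, to observed concentrations in small increments so that reductions beyond roughly 50% do not rest on a single linear extrapolation. The slope, intercept, step size and sign convention are all assumptions; the actual CMAQ-HDDM sensitivities are monitor- and hour-specific.

```python
# Sketch (with assumed coefficients) of adjusting observed ozone using a
# first-order sensitivity applied stepwise for large emissions reductions.
import numpy as np

def first_order_sensitivity(o3):
    """Assumed linear relation between hourly ozone and its NOx sensitivity."""
    return 0.4 * o3 - 8.0       # ppb of ozone response per 100% NOx reduction

def adjust_ozone(o3_obs, total_reduction, step=0.1):
    """Apply the reduction in increments, re-evaluating sensitivity each step."""
    o3 = np.asarray(o3_obs, dtype=float)
    applied = 0.0
    while applied < total_reduction - 1e-9:
        dr = min(step, total_reduction - applied)
        o3 = o3 - first_order_sensitivity(o3) * dr
        applied += dr
    return o3

hourly_obs = [65.0, 72.0, 80.0, 58.0]      # ppb, illustrative monitor values
print(adjust_ozone(hourly_obs, total_reduction=0.7))
```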
A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere
NASA Astrophysics Data System (ADS)
Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael
2017-05-01
We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
The evolution of parental care in insects: A test of current hypotheses
Gilbert, James D J; Manica, Andrea
2015-01-01
Which sex should care for offspring is a fundamental question in evolution. Invertebrates, and insects in particular, show some of the most diverse kinds of parental care of all animals, but to date there has been no broad comparative study of the evolution of parental care in this group. Here, we test existing hypotheses of insect parental care evolution using a literature-compiled phylogeny of over 2000 species. To address substantial uncertainty in the insect phylogeny, we use a brute force approach based on multiple random resolutions of uncertain nodes. The main transitions were between no care (the probable ancestral state) and female care. Male care evolved exclusively from no care, supporting models where mating opportunity costs for caring males are reduced—for example, by caring for multiple broods—but rejecting the “enhanced fecundity” hypothesis that male care is favored because it allows females to avoid care costs. Biparental care largely arose by males joining caring females, and was more labile in Holometabola than in Hemimetabola. Insect care evolution most closely resembled amphibian care in general trajectory. Integrating these findings with the wealth of life history and ecological data in insects will allow testing of a rich vein of existing hypotheses. PMID:25825047
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
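The validation strategy described above, checking a particle swarm optimizer against a brute-force search, can be illustrated on a toy objective. The sketch below is not the FAC equilibrium-flow performance model; the two-variable surrogate function, swarm parameters and grid spacing are assumptions chosen only to show the cross-check.

```python
# Minimal particle swarm optimizer cross-checked against a brute-force grid
# search on a toy 2-D objective (a stand-in for the rocket performance model).
import numpy as np

def objective(x, y):
    # toy surrogate for "specific impulse"; the goal is to maximize it
    return -(x - 2.1) ** 2 - 0.5 * (y - 7.3) ** 2

rng = np.random.default_rng(1)
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 15.0])

# --- particle swarm optimization ---
n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos[:, 0], pos[:, 1])
gbest = pbest[np.argmax(pbest_val)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = objective(pos[:, 0], pos[:, 1])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

# --- brute-force grid search for comparison ---
gx, gy = np.meshgrid(np.linspace(0, 5, 501), np.linspace(0, 15, 1501))
gv = objective(gx, gy)
i = np.unravel_index(np.argmax(gv), gv.shape)
print("PSO optimum:        ", gbest)
print("Brute-force optimum:", gx[i], gy[i])
```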
In search of robust flood risk management alternatives for the Netherlands
NASA Astrophysics Data System (ADS)
Klijn, F.; Knoop, J. M.; Ligtvoet, W.; Mens, M. J. P.
2012-05-01
The Netherlands' policy for flood risk management is being revised in view of a sustainable development against a background of climate change, sea level rise and increasing socio-economic vulnerability to floods. This calls for a thorough policy analysis, which can only be adequate when there is agreement about the "framing" of the problem and about the strategic alternatives that should be taken into account. In support of this framing, we performed an exploratory policy analysis, applying future climate and socio-economic scenarios to account for the autonomous development of flood risks, and defined a number of different strategic alternatives for flood risk management at the national level. These alternatives, ranging from flood protection by brute force to reduction of the vulnerability by spatial planning only, were compared with continuation of the current policy on a number of criteria, comprising costs, the reduction of fatality risk and economic risk, and their robustness in relation to uncertainties. We found that a change of policy away from conventional embankments towards gaining control over the flooding process by making the embankments unbreachable is attractive. By thus influencing exposure to flooding, the fatality risk can be effectively reduced at even lower net societal costs than by continuation of the present policy or by raising the protection standards where cost-effective.
Challenges in the development of very high resolution Earth System Models for climate science
NASA Astrophysics Data System (ADS)
Rasch, Philip J.; Xie, Shaocheng; Ma, Po-Lun; Lin, Wuyin; Wan, Hui; Qian, Yun
2017-04-01
The authors represent the 20+ members of the ACME atmosphere development team. The US Department of Energy (DOE) has, like many other organizations around the world, identified the need for an Earth System Model capable of rapid completion of decade- to century-length simulations at very high (vertical and horizontal) resolution with good climate fidelity. Two years ago DOE initiated a multi-institution effort called ACME (Accelerated Climate Modeling for Energy) to meet this extraordinary challenge, targeting a model eventually capable of running at 10-25 km horizontal and 20-400 m vertical resolution through the troposphere on exascale computational platforms at speeds sufficient to complete 5+ simulated years per day. I will outline the challenges our team has encountered in development of the atmosphere component of this model, and the strategies we have been using for tuning and debugging a model that we can barely afford to run on today's computational platforms. These strategies include: 1) evaluation at lower resolutions; 2) ensembles of short simulations to explore parameter space, and perform rough tuning and evaluation; 3) use of regionally refined versions of the model for probing high resolution model behavior at less expense; 4) use of "auto-tuning" methodologies for model tuning; and 5) brute force long climate simulations.
High Performance Analytics with the R3-Cache
NASA Astrophysics Data System (ADS)
Eavis, Todd; Sayeed, Ruhan
Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. The R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.
Axicons, prisms and integrators: searching for simple laser beam shaping solutions
NASA Astrophysics Data System (ADS)
Lizotte, Todd
2010-08-01
Over the last thirty-five years there have been many papers presented at numerous conferences and published within a host of optical journals. What is presented in many cases is either too exotic or too technically challenging in practical application terms, and it could be said that both are testaments to the imagination of engineers and researchers. For many brute force laser processing applications such as paint stripping, large area ablation or general skiving of flex circuits, the opportunity to use an inexpensive beam shaper is a welcome one. Shaping the laser beam for less demanding applications provides a more uniform removal rate and increases the overall quality of the part being processed. It is a well-known fact that customers like their parts to look good. Many times, complex optical beam shaping techniques are considered because no one is aware of the historical solutions that have been lost to the ages. These complex solutions can range in price from $10,000 to $60,000 and require many months to design and fabricate. This paper will provide an overview of various beam shaping techniques that are both elegant and simple in concept and design. Optical techniques using axicons, prisms and reflective integrators will be discussed in an overview format.
NASA Astrophysics Data System (ADS)
Gad-El-Hak, M.
1996-11-01
Considering the extreme complexity of the turbulence problem in general and the unattainability of first-principles analytical solutions in particular, it is not surprising that controlling a turbulent flow remains a challenging task, mired in empiricism and unfulfilled promises and aspirations. Brute force suppression, or taming, of turbulence via active control strategies is always possible, but the penalty for doing so often exceeds any potential savings. The artifice is to achieve a desired effect with minimum energy expenditure. Spurred by the recent developments in chaos control, microfabrication and neural networks, efficient reactive control of turbulent flows, where the control input is optimally adjusted based on feedforward or feedback measurements, is now in the realm of the possible for future practical devices. But regardless of how the problem is approached, combating turbulence is always as arduous as the taming of the shrew. The former task will be emphasized during the oral presentation, but for this abstract we reflect on a short verse from the latter. From William Shakespeare's The Taming of the Shrew. Curtis (Petruchio's servant, in charge of his country house): Is she so hot a shrew as she's reported? Grumio (Petruchio's personal lackey): She was, good Curtis, before this frost. But thou know'st winter tames man, woman, and beast; for it hath tamed my old master, and my new mistress, and myself, fellow Curtis.
Scalable and Accurate SMT-based Model Checking of Data Flow Systems
2013-10-30
guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute... believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as... project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have
Operational Risk Preparedness: General George H. Thomas and the Franklin-Nashville Campaign
2014-05-22
monograph analyzes and compares thoughts on risk from multiple disciplines and viewpoints to develop a suitable definition and corresponding principles... sounds similar to Sun Tzu: "from the enemy’s character, from his institutions, the state of his affairs and his general situation, each side, using... changes through brute strength, but do not gain from change, they merely continue to exist. He therefore introduced the term antifragile—a system that
From brute luck to option luck? On genetics, justice, and moral responsibility in reproduction.
Denier, Yvonne
2010-04-01
The structure of our ethical experience depends, crucially, on a fundamental distinction between what we are responsible for doing or deciding and what is given to us. As such, the boundary between chance and choice is the spine of our conventional morality, and any serious shift in that boundary is thoroughly dislocating. Against this background, I analyze the way in which techniques of prenatal genetic diagnosis (PGD) pose such a fundamental challenge to our conventional ideas of justice and moral responsibility. After a short description of the situation, I first examine the influential luck egalitarian theory of justice, which is based on the distinction between choice and luck or, more specifically, between option luck and brute luck, and the way in which it would approach PGD (section II), followed by an analysis of the conceptual incoherencies (in section III) and moral problems (in section IV) that come with such an approach. Put briefly, the case of PGD shows that the luck egalitarian approach fails to express equal respect for the individual choices of people. The paradox of the matter is that by overemphasizing the fact of choice as such, without regard for the social framework in which choices are made, or for the fundamental and existential nature of particular choices (such as choosing to have children and not to undergo PGD, or not to abort a handicapped fetus), such choices actually become impossible.
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Walko, R. L.
2006-03-01
This paper reports preliminary results for a Limited-area model Ensemble Prediction System (LEPS), based on RAMS (Regional Atmospheric Modelling System), for eight case studies of moderate-to-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor in triggering convection. To accomplish this task and to limit computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36-km horizontal resolution and is generated by 51 members, one nested in each ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of the case studies, whereas objective analysis uses deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected. This strongly suggests that, for these cases over Calabria, enhanced horizontal resolution matters more than ensemble population. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. Due to local and mesoscale forcing, the high-resolution forecast (Hi-Res) performs better than the ensemble mean for rainfall thresholds larger than 10 mm, but it tends to overestimate precipitation for lower amounts. This yields more false alarms, which have a detrimental effect on objective scores at lower thresholds. To exploit the advantages of a probabilistic forecast over a deterministic one, the relation between the ECMWF-EPS 700 hPa geopotential height spread and LEPS performance is analyzed. Results are promising, even if additional studies are required.
Recognizing human actions by learning and matching shape-motion prototype trees.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2012-03-01
A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
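The look-up-table idea described above can be sketched in a few lines: quantize frames to prototypes, precompute the prototype-to-prototype distance table once, and compare sequences by table indexing instead of frame-to-frame distance computation. The sketch uses random placeholder features and a flat set of representative frames rather than the paper's hierarchically clustered prototype tree.

```python
# Sketch of prototype quantization plus a precomputed distance look-up table
# (LUT). Features are random placeholders; a flat prototype set stands in for
# the learned hierarchical prototype tree.
import numpy as np

rng = np.random.default_rng(0)
train_frames = rng.normal(size=(500, 32))      # placeholder shape-motion features
k = 16
# stand-in for the learned prototypes: k representative training frames
prototypes = train_frames[rng.choice(len(train_frames), size=k, replace=False)]

def nearest_prototype(frames):
    d = np.linalg.norm(frames[:, None, :] - prototypes[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

# precompute the k x k prototype-to-prototype distance table once
lut = np.linalg.norm(prototypes[:, None, :] - prototypes[None, :, :], axis=-1)

def sequence_distance(labels_a, labels_b):
    """Distance between equally long prototype-label sequences via table look-up."""
    return lut[labels_a, labels_b].sum()

train_labels = nearest_prototype(train_frames)
test_labels = nearest_prototype(rng.normal(size=(20, 32)))
print(sequence_distance(test_labels, train_labels[:20]))
```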
Time Series Discord Detection in Medical Data using a Parallel Relational Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.
Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since high-frequency medical sensors produce a huge amount of data, storing and processing continuous medical data is an emerging big-data area. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord is the subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates the distance to its nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The results showed efficient data loading, decoding and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
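The brute-force discord search described above can be stated compactly: for every subsequence, find the distance to its nearest non-overlapping ("non-self") match, and return the subsequence whose nearest-match distance is largest. The sketch below is a single-machine toy version with an assumed window length and a synthetic signal; the study ran parallel variants of this inside a DBMS on real waveform data.

```python
# Minimal brute-force time series discord search: the discord is the
# subsequence whose nearest non-self match is farthest away. Window length and
# the synthetic signal are illustrative assumptions.
import numpy as np

def brute_force_discord(series, window):
    n = len(series) - window + 1
    subs = np.array([series[i:i + window] for i in range(n)])
    best_idx, best_dist = -1, -np.inf
    for i in range(n):
        nearest = np.inf
        for j in range(n):
            if abs(i - j) < window:          # skip trivial (overlapping) self matches
                continue
            nearest = min(nearest, np.linalg.norm(subs[i] - subs[j]))
        if nearest > best_dist:
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 600)) + 0.05 * rng.normal(size=600)
signal[350:375] += 1.5                       # injected anomaly
print(brute_force_discord(signal, window=25))
```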
A Survey of Image Encryption Algorithms
NASA Astrophysics Data System (ADS)
Kumari, Manju; Gupta, Shailender; Sardana, Pranshul
2017-12-01
Security of data/images is one of the crucial aspects in the gigantic and still expanding domain of digital transfer. Encryption of images is one of the well-known mechanisms to preserve the confidentiality of images over a reliable unrestricted public medium. This medium is vulnerable to attacks, and hence efficient encryption algorithms are a necessity for secure data transfer. Various techniques have been proposed in the literature to date, each having an edge over the others, to catch up with the ever-growing need for security. This paper is an effort to compare the most popular techniques available on the basis of various performance metrics, such as differential, statistical and quantitative attack analysis. To measure their efficacy, all of the modern, mature techniques were implemented in MATLAB 2015. The results show that the chaotic schemes used in the study produce highly scrambled encrypted images with uniform histogram distributions. In addition, the encrypted images exhibit very low correlation coefficients in the horizontal, vertical and diagonal directions, proving their resistance against statistical attacks. These schemes are also able to resist differential attacks, as they show a high sensitivity to initial conditions, i.e. pixel and key values. Finally, the schemes provide a large key space, and hence can resist brute-force attacks, while requiring very little computational time for image encryption/decryption in comparison to other schemes available in the literature.
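One of the statistical metrics mentioned above, the correlation coefficient between adjacent pixels, can be computed as sketched below: for a well-scrambled cipher image the horizontal, vertical and diagonal values should be close to zero, while a natural (or here, synthetic gradient) image gives values near one. The test "images" are arrays generated only for illustration.

```python
# Sketch of the adjacent-pixel correlation metric used to assess resistance to
# statistical attacks. The plain/cipher images here are synthetic stand-ins.
import numpy as np

def adjacent_correlations(img):
    img = img.astype(float)
    pairs = {
        "horizontal": (img[:, :-1], img[:, 1:]),
        "vertical":   (img[:-1, :], img[1:, :]),
        "diagonal":   (img[:-1, :-1], img[1:, 1:]),
    }
    return {k: np.corrcoef(a.ravel(), b.ravel())[0, 1] for k, (a, b) in pairs.items()}

rng = np.random.default_rng(0)
plain = np.tile(np.arange(256, dtype=np.uint8), (256, 1))   # highly correlated gradient
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print("plain: ", adjacent_correlations(plain))
print("cipher:", adjacent_correlations(cipher))
```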
1965-04-16
This photograph depicts a dramatic view of the first test firing of all five F-1 engines for the Saturn V S-IC stage at the Marshall Space Flight Center. The test lasted a full duration of 6.5 seconds. It also marked the first test performed in the new S-IC static test stand and the first test using the new control blockhouse. The S-IC stage is the first stage, or booster, of a 364-foot-long rocket that ultimately took astronauts to the Moon. Operating at maximum power, all five of the engines produced 7,500,000 pounds of thrust. Required to hold down the brute force of a 7,500,000-pound thrust, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and cement, anchored to bedrock 40 feet below ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the up position, the stand had an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. When the Saturn V S-IC first stage was placed upright in the stand, the five F-1 engine nozzles pointed downward onto a 1,900-ton, water-cooled deflector. To prevent melting damage, water was sprayed through small holes in the deflector at a rate of 320,000 gallons per minute.
Heterozygote PCR product melting curve prediction.
Dwight, Zachary L; Palais, Robert; Kent, Jana; Wittwer, Carl T
2014-03-01
Melting curve prediction of PCR products is limited to perfectly complementary strands. Multiple domains are calculated by recursive nearest neighbor thermodynamics. However, the melting curve of an amplicon containing a heterozygous single-nucleotide variant (SNV) after PCR is the composite of four duplexes: two matched homoduplexes and two mismatched heteroduplexes. To better predict the shape of composite heterozygote melting curves, 52 experimental curves were compared with brute force in silico predictions varying two parameters simultaneously: the relative contribution of heteroduplex products and an ionic scaling factor for mismatched tetrads. Heteroduplex products contributed 25.7 ± 6.7% to the composite melting curve, varying from 23%-28% for different SNV classes. The effect of ions on mismatch tetrads scaled to 76%-96% of normal (depending on SNV class) and averaged 88 ± 16.4%. Based on uMelt (www.dna.utah.edu/umelt/umelt.html) with an expanded nearest neighbor thermodynamic set that includes mismatched base pairs, uMelt HETS calculates helicity as a function of temperature for homoduplex and heteroduplex products, as well as the composite curve expected from heterozygotes. It is an interactive Web tool for efficient genotyping design, heterozygote melting curve prediction, and quality control of melting curve experiments. The application was developed in Actionscript and can be found online at http://www.dna.utah.edu/hets/. © 2013 WILEY PERIODICALS, INC.
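The composite heterozygote curve described above is, in essence, a weighted sum of four duplex melting curves (two homoduplexes and two heteroduplexes), with the heteroduplex pair contributing roughly 26% in the reported fits. The sketch below only illustrates that weighting; the individual helicity curves are simple logistic stand-ins with assumed Tm values, not nearest-neighbor thermodynamic predictions.

```python
# Sketch of assembling a composite heterozygote melting curve as a weighted sum
# of four duplex curves, using the ~25.7% heteroduplex contribution reported
# above. Individual curves and Tm values are illustrative assumptions.
import numpy as np

def helicity(temps, tm, width=1.2):
    """Toy helicity-vs-temperature curve: 1 = fully helical, 0 = melted."""
    return 1.0 / (1.0 + np.exp((temps - tm) / width))

temps = np.linspace(70.0, 95.0, 251)
homoduplex_tms = (84.0, 84.6)        # matched duplexes (assumed Tm values)
heteroduplex_tms = (81.5, 82.0)      # mismatched duplexes melt earlier (assumed)

hetero_fraction = 0.257              # average heteroduplex contribution reported
weights = [(1 - hetero_fraction) / 2] * 2 + [hetero_fraction / 2] * 2
curves = [helicity(temps, tm) for tm in homoduplex_tms + heteroduplex_tms]
composite = sum(w * c for w, c in zip(weights, curves))

# negative derivative, as typically plotted for melting analysis
dF = -np.gradient(composite, temps)
print(temps[np.argmax(dF)])          # apparent melting temperature of the composite
```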
Strategies for resolving conflict: their functional and dysfunctional sides.
Stimac, M
1982-01-01
Conflict in the workplace can have a beneficial effect. That is, if appropriately resolved, it plays an important part in effective problem solving, according to author Michele Stimac, associate dean, curriculum and instruction, and professor at Pepperdine University Graduate School of Education and Psychology. She advocates confrontation--by way of negotiation rather than brute force--as the best way to resolve conflict, heal wounds, reconcile the parties involved, and give the resolution long life. But she adds that if a person who has thought through when, where, and how to confront someone foresees only disaster, avoidance is the best path to take. The emphasis here is on strategy. Avoiding confrontation, for example, is not a strategic move unless it is backed by considered judgment. Stimac lays out these basic tenets for engaging in sound negotiation: (1) The confrontation should take place in neutral territory. (2) The parties should actively listen to each other. (3) Each should assert his or her right to fair treatment. (4) Each must allow the other to retain his or her dignity. (5) The parties should seek a consensus on the issues in conflict, their resolution, and the means of reducing any tension that results from the resolution. (6) The parties should exhibit a spirit of give and take--that is, of compromise. (7) They should seek satisfaction for all involved.
Testing the mutual information expansion of entropy with multivariate Gaussian distributions.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2017-12-14
The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
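For multivariate Gaussians the comparison described above can be made explicit, since the entropy of any subset of variables follows directly from the corresponding covariance block. The sketch below contrasts the exact Gaussian entropy with its second-order MIE approximation; the random positive-definite covariance is a stand-in, not one derived from an anisotropic network model of a protein.

```python
# Sketch comparing the exact Gaussian configurational entropy with its
# second-order mutual information expansion (MIE). Covariance is a random
# positive-definite stand-in used only for illustration.
import itertools
import numpy as np

def gaussian_entropy(cov):
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def mie_second_order(cov):
    m = cov.shape[0]
    s1 = [gaussian_entropy(cov[[i]][:, [i]]) for i in range(m)]
    total = sum(s1)
    for i, j in itertools.combinations(range(m), 2):
        sij = gaussian_entropy(cov[np.ix_([i, j], [i, j])])
        total -= s1[i] + s1[j] - sij       # subtract pairwise mutual information
    return total

rng = np.random.default_rng(0)
a = rng.normal(size=(30, 30))
cov = a @ a.T / 30 + 0.5 * np.eye(30)      # random positive-definite covariance

print("exact entropy:      ", gaussian_entropy(cov))
print("2nd-order MIE value:", mie_second_order(cov))
```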
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
A patch-based pseudo-CT approach for MRI-only radiotherapy in the pelvis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreasen, Daniel, E-mail: dana@dtu.dk
Purpose: In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to the so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T1-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of the co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy where all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level. Methods: The data consisted of CT and T1-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAEvox) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose–volume histogram (DVH) point deviations and γ-index analysis. Results: The patch-based approach had an average MAEvox of 54 HU; median deviations of less than 0.4% in relevant DVH points and a γ-index pass rate of 0.97 using a 1%/1 mm criterion. The patch-based approach showed a significantly better performance than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70x faster than a brute-force search, with an average prediction time of 20.8 min. Conclusions: The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and body outline.
Zerbini, Francesca; Zanella, Ilaria; Fraccascia, Davide; König, Enrico; Irene, Carmela; Frattini, Luca F; Tomasi, Michele; Fantappiè, Laura; Ganfini, Luisa; Caproni, Elena; Parri, Matteo; Grandi, Alberto; Grandi, Guido
2017-04-24
The exploitation of the CRISPR/Cas9 machinery coupled to lambda (λ) recombinase-mediated homologous recombination (recombineering) is becoming the method of choice for genome editing in E. coli. First proposed by Jiang and co-workers, the strategy has been subsequently fine-tuned by several authors who demonstrated, by using few selected loci, that the efficiency of mutagenesis (number of mutant colonies over total number of colonies analyzed) can be extremely high (up to 100%). However, from published data it is difficult to appreciate the robustness of the technology, defined as the number of successfully mutated loci over the total number of targeted loci. This information is particularly relevant in high-throughput genome editing, where repetition of experiments to rescue missing mutants would be impractical. This work describes a "brute force" validation activity, which culminated in the definition of a robust, simple and rapid protocol for single or multiple gene deletions. We first set up our own version of the CRISPR/Cas9 protocol and then we evaluated the mutagenesis efficiency by changing different parameters including sequence of guide RNAs, length and concentration of donor DNAs, and use of single stranded and double stranded donor DNAs. We then validated the optimized conditions targeting 78 "dispensable" genes. This work led to the definition of a protocol, featuring the use of double stranded synthetic donor DNAs, which guarantees mutagenesis efficiencies consistently higher than 10% and a robustness of 100%. The procedure can be applied also for simultaneous gene deletions. This work defines for the first time the robustness of a CRISPR/Cas9-based protocol based on a large sample size. Since the technical solutions here proposed can be applied to other similar procedures, the data could be of general interest for the scientific community working on bacterial genome editing and, in particular, for those involved in synthetic biology projects requiring high throughput procedures.
NASA Astrophysics Data System (ADS)
Weiss, C. J.; Beskardes, G. D.; Everett, M. E.
2016-12-01
In this presentation we review the observational evidence for anomalous electromagnetic diffusion in near-surface geophysical exploration and how such evidence is consistent with a detailed, spatially-correlated geologic medium. To date, the inference of multi-scale geologic correlation is drawn from two independent methods of data analysis. The first of which is analogous to seismic move-out, where the arrival time of an electromagnetic pulse is plotted as a function of transmitter/receiver separation. The "anomalous" diffusion is evident by the fractional-order power law behavior of these arrival times, with an exponent value between unity (pure diffusion) and 2 (lossless wave propagation). The second line of evidence comes from spectral analysis of small-scale fluctuations in electromagnetic profile data which cannot be explained in terms of instrument, user or random error. Rather, the power-law behavior of the spectral content of these signals (i.e., power versus wavenumber) and their increments reveals them to lie in a class of signals with correlations over multiple length scales, a class of signals known formally as fractional Brownian motion. Numerical results over simulated geology with correlated electrical texture - representative of, for example, fractures, sedimentary bedding or metamorphic lineation - are consistent with the (albeit limited, but growing) observational data, suggesting a possible mechanism and modeling approach for a more realistic geology. Furthermore, we show how similar simulated results can arise from a modeling approach where geologic texture is economically captured by a modified diffusion equation containing exotic, but manageable, fractional derivatives. These derivatives arise physically from the generalized convolutional form for the electromagnetic constitutive laws and thus have merit beyond mere mathematical convenience. In short, we are zeroing in on the anomalous, fractional diffusion limit from two converging directions: a zooming down of the macroscopic (fractional derivative) view; and, a heuristic homogenization of the atomistic (brute force discretization) view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Viswanathan, A; Cormack, R
2015-06-15
Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of the N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45×10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT, requiring only user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
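The following is a minimal sketch, not the authors' code, of the brute-force registration (BFR) idea described above: every assignment of the N user-identified tips to the EMT-measured tips is tried, a rigid transform is fitted for each permutation (here via the standard Kabsch algorithm), and the assignment with the lowest RMSD is kept. All names and the toy phantom are illustrative assumptions.

```python
# Hypothetical brute-force tip registration: try all permutations, keep the best rigid fit.
import itertools
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (same ordering)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def brute_force_register(ct_tips, emt_tips):
    """Evaluate every permutation of EMT tips against the N identified CT tips."""
    best = (np.inf, None)
    for perm in itertools.permutations(range(len(emt_tips)), len(ct_tips)):
        Q = emt_tips[list(perm)]
        R, t = kabsch(ct_tips, Q)
        rmsd = np.sqrt(np.mean(np.sum((ct_tips @ R.T + t - Q) ** 2, axis=1)))
        if rmsd < best[0]:
            best = (rmsd, (R, t, perm))
    return best

# Toy demo with a small phantom so the permutation count stays modest.
rng = np.random.default_rng(0)
emt = rng.normal(scale=20.0, size=(8, 3))              # 8 catheter tips in the EMT frame
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))               # make it a proper rotation
ct = emt @ true_R.T + np.array([50.0, -20.0, 10.0])    # same tips in the CT frame
rmsd, (R, t, perm) = brute_force_register(ct[:4], emt) # N = 4 user-identified tips
print(round(rmsd, 6), perm)                            # near-zero RMSD for the correct assignment
```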
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
NASA Astrophysics Data System (ADS)
Flores, José L.; Karam, Hugo A.; Marques Filho, Edson P.; Pereira Filho, Augusto J.
2016-02-01
The main goal of this paper is to estimate a set of optimal seasonal, daily, and hourly values of atmospheric turbidity and surface radiative parameters: Ångström's turbidity coefficient (β), Ångström's wavelength exponent (α), aerosol single scattering albedo (ω_o), forward scatterance (F_c) and average surface albedo (ρ_g), using the Brute Force multidimensional minimization method to minimize the difference between measured and simulated solar irradiance components, expressed as cost functions. In order to simulate the components of short-wave solar irradiance (direct, diffuse and global) for clear-sky conditions, incident on a horizontal surface in the Metropolitan Area of Rio de Janeiro (MARJ), Brazil (22° 51' 27″ S, 43° 13' 58″ W), we use two parameterized broadband solar irradiance models, called CPCR2 and Iqbal C, based on synoptic information. The meteorological variables such as precipitable water (u_w) and ozone concentration (u_o) required by the broadband solar models were obtained from the moderate-resolution imaging spectroradiometer (MODIS) sensor on the Terra and Aqua NASA platforms. For the implementation and validation processes, we use global and diffuse solar irradiance data measured by the radiometric platform of LabMiM, located in the north area of the MARJ. The data were measured between the years 2010 and 2012 at 1-min intervals. The performance of the solar irradiance models using optimal parameters was evaluated with several quantitative statistical indicators and a subset of measured solar irradiance data. Some daily results for Ångström's wavelength exponent α were compared with Ångström's parameter (440-870 nm) values obtained by the aerosol robotic network (AERONET) for 11 days, showing an acceptable level of agreement. Results for Ångström's turbidity coefficient β, associated with the amount of aerosols in the atmosphere, show a seasonal pattern consistent with the increased precipitation during the summer months (December-February) in the MARJ.
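As an illustration of the brute-force minimization idea (and not the authors' code), the sketch below grid-searches a few parameters to minimize an RMSE cost between measured and simulated irradiance. The function simulate_irradiance is a toy stand-in for a broadband model such as CPCR2 or Iqbal C, and the grids and parameter ranges are assumptions.

```python
# Brute-force grid minimization of a measured-vs-simulated irradiance cost function (sketch).
import itertools
import numpy as np

def simulate_irradiance(beta, alpha, albedo, zenith_deg):
    # Placeholder toy model; a real study would call CPCR2 / Iqbal C here.
    mu = np.cos(np.radians(zenith_deg))
    return 1000.0 * mu * np.exp(-beta * (0.5 ** (-alpha))) * (1.0 + 0.1 * albedo)

def brute_force_fit(measured, zenith_deg):
    grids = {
        "beta":   np.linspace(0.0, 0.5, 26),     # turbidity coefficient
        "alpha":  np.linspace(0.5, 2.5, 21),     # wavelength exponent
        "albedo": np.linspace(0.05, 0.4, 8),     # surface albedo
    }
    best = (np.inf, None)
    for beta, alpha, albedo in itertools.product(*grids.values()):
        sim = simulate_irradiance(beta, alpha, albedo, zenith_deg)
        cost = np.sqrt(np.mean((measured - sim) ** 2))   # RMSE cost function
        if cost < best[0]:
            best = (cost, {"beta": beta, "alpha": alpha, "albedo": albedo})
    return best

zenith = np.array([30.0, 45.0, 60.0])
measured = simulate_irradiance(0.12, 1.3, 0.2, zenith)   # synthetic "measurements"
print(brute_force_fit(measured, zenith))
```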
PP and PS interferometric images of near-seafloor sediments
Haines, S.S.
2011-01-01
I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow. © 2011 Society of Exploration Geophysicists.
Resolved granular debris-flow simulations with a coupled SPH-DCDEM model
NASA Astrophysics Data System (ADS)
Birjukovs Canelas, Ricardo; Domínguez, José M.; Crespo, Alejandro J. C.; Gómez-Gesteira, Moncho; Ferreira, Rui M. L.
2016-04-01
Debris flows represent some of the most relevant phenomena in geomorphological events. Due to the potential destructiveness of such flows, they are the target of a vast amount of research (Takahashi, 2007 and references therein). A complete description of the internal processes of a debris flow is, however, still an elusive achievement, explained by the difficulty of accurately measuring important quantities in these flows and of developing a comprehensive, generalized theoretical framework capable of describing them. This work addresses the need for a numerical model applicable to granular-fluid mixtures featuring high spatial and temporal resolution, thus capable of resolving the motion of individual particles, including all interparticle contacts. This corresponds to a brute-force approach: by applying simple interaction laws at local scales, the macro-scale properties of the flow should be recovered by upscaling. This methodology effectively bypasses the complexity of modelling the intermediate scales by resolving them directly. The only caveat is the need for high performance computing, a demanding but engaging research challenge. The DualSPHysics meshless numerical implementation, based on Smoothed Particle Hydrodynamics (SPH), is expanded with a Distributed Contact Discrete Element Method (DCDEM) in order to explicitly solve the fluid and the solid phase. The model numerically solves the Navier-Stokes and continuity equations for the liquid phase and Newton's equations of motion for solid bodies. The interactions between solids are modelled with classical DEM approaches (Kruggel-Emden et al., 2007). Among other validation tests, an experimental set-up for stony debris flows in a slit check dam is reproduced numerically, where solid material is introduced through a hopper, ensuring a constant solid discharge for the considered time interval. With each sediment particle undergoing tens of possible contacts, several thousand time-evolving contacts are efficiently treated. Fully periodic boundary conditions allow for the recirculation of the material. The results, consisting mainly of retention curves, are in good agreement with the measurements, correctly reproducing the changes in efficiency with slit spacing and effective density. Acknowledgements: Project RECI/ECM-HID/0371/2012, funded by the Portuguese Foundation for Science and Technology (FCT), has partially supported this work. It was also partially funded by Xunta de Galicia under the project Programa de Consolidacion e Estructuracion de Unidades de Investigacion Competitivas (Grupos de Referencia Competitiva), financed by the European Regional Development Fund (FEDER), and by Ministerio de Economia y Competitividad under Project BIA2012-38676-C03-03. References: Takahashi, T. Debris Flow: Mechanics, Prediction and Countermeasures. Taylor and Francis, 2007. Kruggel-Emden, H.; Simsek, E.; Rickelt, S.; Wirtz, S. & Scherer, V. Review and extension of normal force models for the Discrete Element Method. Powder Technology, 2007, 171, 157-173.
Toward an Integration of Deep Learning and Neuroscience
Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P.
2016-01-01
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. PMID:27683554
Wang, Yong; Tang, Chun; Wang, Erkang; Wang, Jin
2012-01-01
An increasing number of biological machines have been revealed to have more than two macroscopic states. Quantifying the underlying multiple-basin functional landscape is essential for understanding their functions. However, the present models seem to be insufficient to describe such multiple-state systems. To meet this challenge, we have developed a coarse grained triple-basin structure-based model with implicit ligand. Based on our model, the constructed functional landscape is sufficiently sampled by the brute-force molecular dynamics simulation. We explored maltose-binding protein (MBP) which undergoes large-scale domain motion between open, apo-closed (partially closed) and holo-closed (fully closed) states responding to ligand binding. We revealed an underlying mechanism whereby major induced fit and minor population shift pathways co-exist by quantitative flux analysis. We found that the hinge regions play an important role in the functional dynamics as well as that increases in its flexibility promote population shifts. This finding provides a theoretical explanation of the mechanistic discrepancies in PBP protein family. We also found a functional “backtracking” behavior that favors conformational change. We further explored the underlying folding landscape in response to ligand binding. Consistent with earlier experimental findings, the presence of ligand increases the cooperativity and stability of MBP. This work provides the first study to explore the folding dynamics and functional dynamics under the same theoretical framework using our triple-basin functional model. PMID:22532792
Symmetric encryption algorithms using chaotic and non-chaotic generators: A review
Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.
2015-01-01
This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold's cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, either using substitution only (using fractals) and permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented where the permutation phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, sensitivities of those different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research with respect to the used generators, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
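A hedged sketch of two of the evaluation metrics named above, the correlation between horizontally adjacent pixels and the Shannon entropy of an 8-bit cipher image, is given below. The random array stands in for an actual cipher image; this is not code from the reviewed algorithms.

```python
# Adjacent-pixel correlation and Shannon entropy for an 8-bit image (illustrative sketch).
import numpy as np

def horizontal_correlation(img):
    # Correlation coefficient between each pixel and its right-hand neighbour.
    x = img[:, :-1].astype(float).ravel()
    y = img[:, 1:].astype(float).ravel()
    return np.corrcoef(x, y)[0, 1]

def shannon_entropy(img):
    # Entropy in bits per pixel from the grey-level histogram.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in cipher image
print(horizontal_correlation(cipher), shannon_entropy(cipher))  # near 0 and near 8 for a good cipher
```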
GEMINI: a computationally-efficient search engine for large gene expression datasets.
DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick
2016-02-24
Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an [Formula: see text] expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 s compared to a brute-force search time of 0.6 s. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
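To illustrate the data structure named in the abstract, here is a minimal vantage-point tree sketch (not GEMINI's implementation) that answers a nearest-neighbour query over synthetic "expression profiles" and checks the result against a brute-force scan. Sizes and data are assumptions chosen only so the example runs quickly.

```python
# Minimal vantage-point tree for exact nearest-neighbour search (illustrative sketch).
import numpy as np

class VPTree:
    def __init__(self, points):
        self.points = points
        self.root = self._build(list(range(len(points))))

    def _build(self, idx):
        if len(idx) == 0:
            return None
        vp, rest = idx[0], np.asarray(idx[1:])
        if rest.size == 0:
            return (vp, 0.0, None, None)
        d = np.linalg.norm(self.points[rest] - self.points[vp], axis=1)
        mu = float(np.median(d))                      # split radius at the median distance
        return (vp, mu,
                self._build(rest[d < mu].tolist()),   # inner ball
                self._build(rest[d >= mu].tolist()))  # outer shell

    def query(self, q):
        best = [np.inf, None]
        def search(node):
            if node is None:
                return
            vp, mu, inner, outer = node
            d = np.linalg.norm(q - self.points[vp])
            if d < best[0]:
                best[0], best[1] = d, vp
            near, far = (inner, outer) if d < mu else (outer, inner)
            search(near)
            if abs(d - mu) < best[0]:                 # prune the far side when possible
                search(far)
        search(self.root)
        return best[1], best[0]

rng = np.random.default_rng(1)
profiles = rng.normal(size=(2000, 50))                # stand-in for expression profiles
query = rng.normal(size=50)
tree = VPTree(profiles)
idx_tree, _ = tree.query(query)
idx_brute = int(np.argmin(np.linalg.norm(profiles - query, axis=1)))
assert idx_tree == idx_brute                          # exact agreement with brute force
```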
Community-aware task allocation for social networked multiagent systems.
Wang, Wanyuan; Jiang, Yichuan
2014-09-01
In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize the system overall profit. To solve this problem effectively, we present a heuristic algorithm that is composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap agent-first and breadth-first resource negotiation mechanism. Through theoretical analyses and experiments, the advantages of our presented heuristic algorithm and community-aware task allocation model are validated. 1) Our presented heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network flow-based greedy algorithm in terms of system overall profit in small-scale applications. Moreover, in large-scale applications, the presented heuristic algorithm achieves approximately the same overall system profit, but significantly reduces the computational load compared with the greedy algorithm. 2) Our presented community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and greatly improves the system overall profit compared with the previous local neighbor-aware task allocation model.
Reference ability neural networks and behavioral performance across the adult life span.
Habeck, Christian; Eich, Teal; Razlighi, Ray; Gazes, Yunglin; Stern, Yaakov
2018-05-15
To better understand the impact of aging, along with other demographic and brain health variables, on the neural networks that support different aspects of cognitive performance, we applied a brute-force search technique based on Principal Components Analysis to derive 4 corresponding spatial covariance patterns (termed Reference Ability Neural Networks, RANNs) from a large sample of participants across the age range. 255 clinically healthy, community-dwelling adults, aged 20-77, underwent fMRI while performing 12 tasks, 3 tasks for each of the following cognitive reference abilities: Episodic Memory, Reasoning, Perceptual Speed, and Vocabulary. The derived RANNs (1) showed selective activation to their specific cognitive domain and (2) correlated with behavioral performance. Quasi out-of-sample replication with Monte Carlo 5-fold cross-validation was built into our approach, and all patterns indicated their corresponding reference ability and predicted performance in held-out data to a degree significantly greater than chance level. RANN-pattern expression for Episodic Memory, Reasoning and Vocabulary was associated selectively with age, while the pattern for Perceptual Speed showed no such age-related influences. For each participant we also looked at residual activity unaccounted for by the RANN pattern derived for the cognitive reference ability. Higher residual activity was associated with poorer brain-structural health and older age, but, apart from Vocabulary, not with cognitive performance, indicating that older participants with worse brain-structural health might recruit alternative neural resources to maintain performance levels. Copyright © 2018 Elsevier Inc. All rights reserved.
Saleem, Kashif; Derhab, Abdelouahid; Orgun, Mehmet A; Al-Muhtadi, Jalal; Rodrigues, Joel J P C; Khalil, Mohammed Sayim; Ali Ahmed, Adel
2016-03-31
The deployment of intelligent remote surveillance systems depends on wireless sensor networks (WSNs) composed of various miniature resource-constrained wireless sensor nodes. The development of routing protocols for WSNs is a major challenge because of their severe resource constraints, ad hoc topology and dynamic nature. Among those proposed routing protocols, the biology-inspired self-organized secure autonomous routing protocol (BIOSARP) involves an artificial immune system (AIS) that requires a certain amount of time to build up knowledge of neighboring nodes. The AIS algorithm uses this knowledge to distinguish between self and non-self neighboring nodes. The knowledge-building phase is a critical period in the WSN lifespan and requires active security measures. This paper proposes an enhanced BIOSARP (E-BIOSARP) that incorporates a random key encryption mechanism in a cost-effective manner to provide active security measures in WSNs. A detailed description of E-BIOSARP is presented, followed by an extensive security and performance analysis to demonstrate its efficiency. A scenario with E-BIOSARP is implemented in network simulator 2 (ns-2) and is populated with malicious nodes for analysis. Furthermore, E-BIOSARP is compared with state-of-the-art secure routing protocols in terms of processing time, delivery ratio, energy consumption, and packet overhead. The findings show that the proposed mechanism can efficiently protect WSNs from selective forwarding, brute-force or exhaustive key search, spoofing, eavesdropping, replaying or altering of routing information, cloning, acknowledgment spoofing, HELLO flood attacks, and Sybil attacks.
Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E
2016-08-12
Performance of tile-based Fisher ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail, but using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed much more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while consistently excluding all but one of the nineteen benchmarked false-positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
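The sketch below illustrates, in simplified form, the two statistical ingredients mentioned above: a per-variable F-ratio between two classes and a permutation-based null distribution used to set a "not class-distinguishing" threshold. It is a generic stand-in for the tile-based GC×GC workflow, not the published software, and the synthetic data are an assumption.

```python
# Per-variable F-ratio with a permutation-derived null threshold (illustrative sketch).
import numpy as np

def f_ratio(a, b):
    """Between-class variance over within-class variance, computed per variable."""
    grand = np.vstack([a, b]).mean(axis=0)
    between = len(a) * (a.mean(0) - grand) ** 2 + len(b) * (b.mean(0) - grand) ** 2
    within = ((a - a.mean(0)) ** 2).sum(0) + ((b - b.mean(0)) ** 2).sum(0)
    dof_between, dof_within = 1, len(a) + len(b) - 2
    return (between / dof_between) / (within / dof_within)

def null_threshold(a, b, n_perm=200, quantile=0.99, seed=0):
    """Permute class labels to estimate an F-ratio threshold from the null distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([a, b])
    null_max = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        pa, pb = pooled[perm[:len(a)]], pooled[perm[len(a):]]
        null_max.append(f_ratio(pa, pb).max())
    return np.quantile(null_max, quantile)

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 300))
b = rng.normal(size=(8, 300))
b[:, :5] += 4.0                                    # five class-distinguishing variables
print(np.where(f_ratio(a, b) > null_threshold(a, b))[0])
```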
Predictive ecology: systems approaches
Evans, Matthew R.; Norris, Ken J.; Benton, Tim G.
2012-01-01
The world is experiencing significant, largely anthropogenically induced, environmental change. This will impact on the biological world and we need to be able to forecast its effects. In order to produce such forecasts, ecology needs to become more predictive—to develop the ability to understand how ecological systems will behave in future, changed, conditions. Further development of process-based models is required to allow such predictions to be made. Critical to the development of such models will be achieving a balance between the brute-force approach that naively attempts to include everything, and oversimplification that throws out important heterogeneities at various levels. Central to this will be the recognition that individuals are the elementary particles of all ecological systems. As such it will be necessary to understand the effect of evolution on ecological systems, particularly when exposed to environmental change. However, insights from evolutionary biology will help the development of models even when data may be sparse. Process-based models are more common, and are used for forecasting, in other disciplines, e.g. climatology and molecular systems biology. Tools and techniques developed in these endeavours can be appropriated into ecological modelling, but it will also be necessary to develop the science of ecoinformatics along with approaches specific to ecological problems. The impetus for this effort should come from the demand from society to understand the effects of environmental change on the world and what might be done to mitigate or adapt to them. PMID:22144379
Through thick and thin: tuning the threshold voltage in organic field-effect transistors.
Martínez Hardigree, Josué F; Katz, Howard E
2014-04-15
Organic semiconductors (OSCs) constitute a class of organic materials containing densely packed, overlapping conjugated molecular moieties that enable charge carrier transport. Their unique optical, electrical, and magnetic properties have been investigated for use in next-generation electronic devices, from roll-up displays and radiofrequency identification (RFID) to biological sensors. The organic field-effect transistor (OFET) is the key active element for many of these applications, but the high values, poor definition, and long-term instability of the threshold voltage (V_T) in OFETs remain barriers to realization of their full potential because the power and control circuitry necessary to compensate for overvoltages and drifting set points decrease OFET practicality. The drifting phenomenon has been widely observed and generally termed "bias stress." Research on the mechanisms responsible for this poor V_T control has revealed a strong dependence on the physical order and chemical makeup of the interfaces between OSCs and adjacent materials in the OFET architecture. In this Account, we review the state of the art for tuning OFET performance via chemical designs and physical processes that manipulate V_T. This parameter gets to the heart of OFET operation, as it determines the voltage regimes where OFETs are either ON or OFF, the basis for the logical function of the devices. One obvious way to decrease the magnitude and variability of V_T is to work with thinner and higher permittivity gate dielectrics. From the perspective of interfacial engineering, we evaluate various methods that we and others have developed, from electrostatic poling of gate dielectrics to molecular design of substituted alkyl chains. Corona charging of dielectric surfaces, a method for charging the surface of an insulating material using a constant high-voltage field, is a brute force means of shifting the effective gate voltage applied to a gate dielectric. A gentler and more direct method is to apply surface voltage to dielectric interfaces by direct contact or postprocess biasing; these methods could also be adapted for high throughput printing sequences. Dielectric hydrophobicity is an important chemical property determining the stability of the surface charges. Functional organic monolayers applied to dielectrics, using the surface attachment chemistry made available from "self-assembled" monolayer chemistry, provide local electric fields without any biasing process at all. To the extent that the monolayer molecules can be printed, these are also suitable for high throughput processes. Finally, we briefly consider V_T control in the context of device integration and reliability, such as the role of contact resistance in affecting this parameter.
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate the dependence on uncertain parameters (porosity, permeability, etc.) and design parameters (injection rate, depth, etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was already reached with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
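As a loose illustration of the non-intrusive response-surface idea (not the authors' polynomial chaos implementation), the sketch below fits a second-order polynomial surrogate to a handful of runs of an expensive model over one uncertain and one design parameter, then samples the cheap surrogate in a Monte Carlo loop. The model, parameter ranges and basis are assumptions.

```python
# Second-order polynomial response surface fitted by least squares (illustrative sketch).
import numpy as np

def expensive_model(perm, rate):
    # Stand-in for a multiphase CO2 injection simulator.
    return np.exp(-perm) * rate + 0.1 * perm * rate ** 2

def design_matrix(x1, x2):
    # Basis: 1, x1, x2, x1^2, x1*x2, x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

rng = np.random.default_rng(0)
perm = rng.uniform(0.5, 2.0, 30)          # uncertain permeability (sample points)
rate = rng.uniform(0.1, 1.0, 30)          # injection rate (design variable)
y = expensive_model(perm, rate)           # 30 "expensive" model runs
coef, *_ = np.linalg.lstsq(design_matrix(perm, rate), y, rcond=None)

# Cheap surrogate evaluation for Monte Carlo risk statistics:
p_mc = rng.uniform(0.5, 2.0, 100_000)
r_mc = rng.uniform(0.1, 1.0, 100_000)
leakage_surrogate = design_matrix(p_mc, r_mc) @ coef
print(leakage_surrogate.mean(), leakage_surrogate.std())
```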
Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City
NASA Astrophysics Data System (ADS)
Zavala, M.; Lei, W.; Molina, M. J.; Molina, L. T.
2009-01-01
The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the base case. We further performed emissions perturbations from the gasoline fleet, diesel fleet, all mobile (gasoline plus diesel) and all emission sources (anthropogenic plus biogenic). The results suggest that although large ozone reductions obtained in the past were from changes in emissions from gasoline vehicles, currently significant benefits could be achieved with additional emission control policies directed to regulation of VOC emissions from diesel and area sources that are high emitters of alkenes, aromatics and aldehydes.
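To make the Brute Force Method (BFM) referred to above concrete, the sketch below computes a finite-difference ozone sensitivity by re-running a model with perturbed emission inputs. The ozone_model function is a toy nonlinear stand-in, not the CAMx chemical transport model, and the perturbation size is an assumption.

```python
# Brute Force Method sensitivity: finite difference of model output to an emission change (sketch).
def ozone_model(nox_scale, voc_scale):
    # Toy nonlinear response standing in for the full 3-D photochemical model.
    return 120.0 * (voc_scale ** 0.8) / (0.3 + 0.7 * nox_scale ** 0.5)

def bfm_sensitivity(param, delta=0.1):
    """Change in peak O3 per unit fractional reduction of the chosen precursor."""
    base = ozone_model(1.0, 1.0)
    if param == "NOx":
        perturbed = ozone_model(1.0 - delta, 1.0)
    else:
        perturbed = ozone_model(1.0, 1.0 - delta)
    return (perturbed - base) / (-delta)

print("NOx sensitivity:", bfm_sensitivity("NOx"))
print("VOC sensitivity:", bfm_sensitivity("VOC"))
```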
A statistical approach to nuclear fuel design and performance
NASA Astrophysics Data System (ADS)
Cunning, Travis Andrew
As CANDU fuel failures can have significant economic and operational consequences on the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10^5 independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance, with an average sensitivity index of 48.93% on key output quantities. Pellet grain size and dish depth are also significant contributors, at 31.53% and 13.46%, respectively. A traditional limit of operating envelope case is also evaluated. This case produces output values that exceed the maximum values observed during the 10^5 Monte Carlo trials for all output quantities of interest. In many cases the difference between the predictions of the two methods is very prominent, and the highly conservative nature of the deterministic approach is demonstrated. A reliability analysis of CANDU fuel manufacturing parametric data, specifically pertaining to the quantification of fuel performance margins, has not been conducted previously. Key Words: CANDU, nuclear fuel, Cameco, fuel manufacturing, fuel modelling, fuel performance, fuel reliability, ELESTRES, ELOCA, dimensional reduction methods, global sensitivity analysis, deterministic safety analysis, probabilistic safety analysis.
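The short sketch below illustrates the general Monte Carlo workflow described above: fit a distribution to manufacturing data, sample inputs, propagate them through a performance model, and compare the resulting distribution with a single all-parameters-at-limits case. The surrogate model and all numbers are toy assumptions; the study itself used ELESTRES/ELOCA.

```python
# Monte Carlo propagation of fitted manufacturing variability through a toy performance model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
measured_density = rng.normal(10.6, 0.05, 500)      # stand-in for manufacturing QA data
mu, sigma = stats.norm.fit(measured_density)        # fit a probability distribution

def fuel_performance(density, grain_size):
    # Toy surrogate for a fuel performance code output quantity.
    return 0.8 * (density - 10.4) + 0.02 * grain_size

n = 100_000
density = rng.normal(mu, sigma, n)
grain = rng.normal(12.0, 1.5, n)
output = fuel_performance(density, grain)

envelope = fuel_performance(density.max(), grain.max())   # "limit of operating envelope" style case
print("99.9th percentile:", np.percentile(output, 99.9), "envelope case:", envelope)
```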
NASA Astrophysics Data System (ADS)
Rodrigo Rodríguez Cardozo, Félix; Hjörleifsdóttir, Vala
2015-04-01
One important ingredient in the study of the complex active tectonics of Mexico is the analysis of earthquake focal mechanisms, or the seismic moment tensor. They can be determined through the calculation of Green's functions and subsequent inversion for moment-tensor parameters. However, this calculation gets progressively more difficult as the magnitude of the earthquakes decreases. Large earthquakes excite waves of longer periods that interact weakly with lateral heterogeneities in the crust. For these earthquakes, using 1D velocity models to compute the Green's functions works well. The opposite occurs for smaller and intermediate-sized events, where the relatively shorter periods excited interact strongly with lateral heterogeneities in the crust and upper mantle, requiring more specific or regional 3D models. In this study, we calculate Green's functions for earthquakes in Mexico using a laterally heterogeneous seismic wave speed model, comprised of the mantle model S362ANI (Kustowski et al. 2008) and the crustal model CRUST 2.0 (Bassin et al. 1990). Subsequently, we invert the observed seismograms for the seismic moment tensor using a method developed by Liu et al. (2004) and implemented by Óscar de La Vega (2014) for earthquakes in Mexico. By following a brute-force approach, in which we include all observed Rayleigh and Love waves of the Mexican National Seismic Network (Servicio Sismológico Nacional, SSN), we obtain reliable focal mechanisms for events that excite a considerable amount of low-frequency waves (Mw > 4.8). However, we are not able to consistently estimate focal mechanisms for smaller events using this method, due to high noise levels in many of the records. Excluding the noisy records, or noisy parts of the records, manually requires interactive editing of the data with an efficient tool. Therefore, we developed a graphical user interface (GUI), based on Python and the Python library ObsPy, that allows editing of observed and synthetic seismogram data, including signal filtering, selecting and discarding traces, and manual adjustment of time windows, so as to include only segments where noise is excluded as much as possible. Subsequently, we invert for the seismic moment tensor of events of variable magnitude in the Mexican territory and compare the results to those obtained by other methods. In this presentation we introduce the software and present the results from the moment-tensor inversions.
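A hedged sketch of the kind of pre-processing such a GUI supports is shown below: demeaning, band-pass filtering, windowing and a crude signal-to-noise rejection with ObsPy. It runs on ObsPy's bundled example waveforms as stand-in data; for real SSN records one would read miniSEED files and use longer-period pass bands, and the window lengths and rejection rule are illustrative assumptions.

```python
# Filtering, windowing and crude trace rejection with ObsPy (illustrative sketch).
from obspy import read

st = read()                                        # ObsPy example stream (stand-in data)
st.detrend("demean")
st.filter("bandpass", freqmin=0.5, freqmax=2.0)    # real moment-tensor work uses longer periods

kept = []
for tr in st:
    t0 = tr.stats.starttime + 4.0
    tr.trim(t0, t0 + 20.0)                         # manual-style time window
    noise = abs(tr.data[:100]).mean()              # early samples as a noise proxy
    if abs(tr.data).max() > 3.0 * noise:           # crude signal-to-noise rejection
        kept.append(tr)
print(f"kept {len(kept)} of {len(st)} traces")
```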
Plasmon hybridization in complex metallic nanostructures
NASA Astrophysics Data System (ADS)
Hao, Feng
Using the Plasmon Hybridization (PH) and Finite-Difference Time-Domain (FDTD) methods, we theoretically investigated the optical properties of several complex metallic nanostructures (coupled nanoparticle/wire systems, nanostars, nanorings and combined ring/disk nanocavities). We applied the analytical formalism of PH to study the plasmonic coupling of a spherical metallic nanoparticle and an infinitely long cylindrical nanowire. The plasmon resonance of the coupled system is shown to shift in frequency, depending strongly on the polarization of the incident light relative to the geometry of the structure. We also showed that the nanoparticle serves as an efficient antenna coupling the electromagnetic radiation into the low-energy propagating wire plasmons. We performed an experimental and theoretical analysis of the optical properties of gold nanorings with different sizes and cross sections. For light polarized parallel to the ring, the optical spectrum depends sensitively on the incident angle. When the light incidence is normal to the ring, two dipolar resonances are observed. As the incident light is tilted, some previously dark multipolar plasmon resonances are excited as a consequence of retardation. The concept of plasmon hybridization is combined with the power of brute-force numerical methods to understand the plasmonic properties of some very complicated nanostructures. We showed that the plasmons of a gold nanostar are a result of hybridization of the plasmons of the core and the tips of the particle. The core serves as a nanoantenna, dramatically enhancing the optical spectrum and the field enhancement of the nanostar. We also applied this method to analyze the plasmonic modes of a nanocavity structure composed of a nanodisk with a surrounding nanoring. For the concentric combination, we showed that the nature of the plasmon modes can be understood as the plasmon hybridization of an individual ring and disk. The interaction results in a blueshifted and broadened superradiant antibonding resonance and a redshifted and narrowed subradiant bonding plasmon. The electric field enhancement of the subradiant mode is significantly larger compared with its parent plasmon modes. For the nonconcentric ring/disk nanocavity, we showed that the symmetry breaking causes coupling between different multipolar plasmons, which results in a tunable Fano resonance. We also show that the subradiant and Fano resonances could be particularly useful in LSPR and SERS sensing applications. In the thesis, we also present an efficient dielectric function for gold and silver that is suitable for FDTD simulations of the optical properties of various nanoparticles. The new dielectric function is able to account for the interband transitions in gold and silver, and provides more precise calculations of the optical spectra compared to the Drude dielectric function that has normally been used previously.
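For reference, the sketch below evaluates the plain Drude dielectric function that the thesis improves upon; the interband terms discussed above are omitted here, and the parameter values are illustrative rather than fitted to gold or silver.

```python
# Plain Drude dielectric function epsilon(omega) = eps_inf - omega_p^2 / (omega^2 + i*gamma*omega).
import numpy as np

def drude_epsilon(omega, eps_inf=9.0, omega_p=1.37e16, gamma=1.0e14):
    """Drude model; interband contributions (important for Au/Ag) are not included."""
    return eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)

wavelengths = np.linspace(400e-9, 1000e-9, 5)     # visible to near-IR
omega = 2 * np.pi * 3.0e8 / wavelengths           # angular frequency, rad/s
for wl, eps in zip(wavelengths, drude_epsilon(omega)):
    print(f"{wl * 1e9:.0f} nm: eps = {eps:.2f}")
```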
Allosteric effects of gold nanoparticles on human serum albumin.
Shao, Qing; Hall, Carol K
2017-01-07
The ability of nanoparticles to alter protein structure and dynamics plays an important role in their medical and biological applications. We investigate allosteric effects of gold nanoparticles on human serum albumin protein using molecular simulations. The extent to which bound nanoparticles influence the structure and dynamics of residues distant from the binding site is analyzed. The root mean square deviation, root mean square fluctuation and variation in the secondary structure of individual residues on a human serum albumin protein are calculated for four protein-gold nanoparticle binding complexes. The complexes are identified in a brute-force search process using an implicit-solvent coarse-grained model for proteins and nanoparticles. They are then converted to atomic resolution and their structural and dynamic properties are investigated using explicit-solvent atomistic molecular dynamics simulations. The results show that even though the albumin protein remains in a folded structure, the presence of a gold nanoparticle can cause more than 50% of the residues to decrease their flexibility significantly, and approximately 10% of the residues to change their secondary structure. These affected residues are distributed on the whole protein, even on regions that are distant from the nanoparticle. We analyze the changes in structure and flexibility of amino acid residues on a variety of binding sites on albumin and confirm that nanoparticles could allosterically affect the ability of albumin to bind fatty acids, thyroxin and metals. Our simulations suggest that allosteric effects must be considered when designing and deploying nanoparticles in medical and biological applications that depend on protein-nanoparticle interactions.
Penn, Alexandra S
2016-01-01
Understanding and manipulating bacterial biofilms is crucial in medicine, ecology and agriculture and has potential applications in bioproduction, bioremediation and bioenergy. Biofilms often resist standard therapies and the need to develop new means of intervention provides an opportunity to fundamentally rethink our strategies. Conventional approaches to working with biological systems are, for the most part, "brute force", attempting to effect control in an input- and effort-intensive manner, and are often insufficient when dealing with the inherent non-linearity and complexity of living systems. Biological systems, by their very nature, are dynamic, adaptive and resilient and require management tools that interact with dynamic processes rather than inert artefacts. I present an overview of a novel engineering philosophy which aims to exploit rather than fight those properties, and hence provide a more efficient and robust alternative. Based on a combination of evolutionary theory and whole-systems design, its essence is what I will call systems aikido; the basic principle of aikido being to interact with the momentum of an attacker and redirect it with minimal energy expenditure, using the opponent's energy rather than one's own. In more conventional terms, this translates to a philosophy of equilibrium engineering, manipulating systems' own self-organisation and evolution so that the evolutionarily or dynamically stable state corresponds to a function which we require. I illustrate these ideas with a description of a proposed manipulation of environmental conditions to alter the stability of co-operation in the context of Pseudomonas aeruginosa biofilm infection of the cystic fibrosis lung.
Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jung Heon; Wu, Kesheng; Simon, Horst D.
2014-03-01
VPIN (Volume synchronized Probability of Informed Trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal more than an hour before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, which culminated in roughly $1 trillion of market value disappearing, only to recover those losses twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set up a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of parameter combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTK (Uncertainty Quantification Toolkit). The results show that two parameters dominate the computation of the FPR. Using the outputs from the NOMAD optimization and the sensitivity analysis, we recommend a range of values for each of the free parameters that perform well on a large set of futures trading records.
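The sketch below illustrates the earlier brute-force stage in miniature: scan a small grid of free parameters and keep the combination with the lowest false positive rate. The parameter names, grid values and evaluate_fpr function are placeholders; in the study each evaluation requires running VPIN over a large set of futures trading records.

```python
# Brute-force parameter grid search minimizing a false positive rate (illustrative sketch).
import itertools

def evaluate_fpr(bucket_size, window, threshold):
    # Placeholder cost; the real evaluation backtests VPIN on trading records.
    return abs(bucket_size - 200) / 1000 + abs(window - 50) / 500 + abs(threshold - 0.99)

grid = {
    "bucket_size": [50, 100, 200, 400],
    "window":      [25, 50, 100],
    "threshold":   [0.95, 0.99, 0.999],
}
best = min(itertools.product(*grid.values()),
           key=lambda combo: evaluate_fpr(*combo))
print("best parameters:", dict(zip(grid, best)), "FPR:", evaluate_fpr(*best))
```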
Known-plaintext attack on the double phase encoding and its implementation with parallel hardware
NASA Astrophysics Data System (ADS)
Wei, Hengzheng; Peng, Xiang; Liu, Haitao; Feng, Songlin; Gao, Bruce Z.
2008-03-01
A known-plaintext attack on the double random phase encryption scheme, implemented with parallel hardware, is presented. Double random phase encoding (DRPE) is one of the most representative optical cryptosystems, developed in the mid-1990s, and has spawned quite a few variants since then. Although the DRPE encryption system is strongly resistant to a brute-force attack, the inherent architecture of DRPE leaves a hidden weakness due to its linear nature. Recently the real security strength of this opto-cryptosystem has been doubted and analyzed from the cryptanalysis point of view. In this presentation, we demonstrate that optical cryptosystems based on the DRPE architecture are vulnerable to a known-plaintext attack. With this attack the two encryption keys in the DRPE can be recovered with the help of a phase retrieval technique. In our approach, we adopt the hybrid input-output algorithm (HIO) to recover the random phase key in the object domain and then infer the key in the frequency domain. A single plaintext-ciphertext pair is sufficient to create the vulnerability. Moreover, this attack does not require the selection of a particular plaintext. The phase retrieval technique based on HIO is an iterative process performing Fourier transforms, so it is well suited to hardware implementation on a digital signal processor (DSP). We make use of a high-performance DSP to accomplish the known-plaintext attack. Compared with a software implementation, the hardware implementation is much faster. The performance of this DSP-based cryptanalysis system is also evaluated.
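For orientation, here is a generic hybrid input-output (HIO) phase-retrieval sketch of the kind of iterative Fourier-transform loop the attack builds on. It uses a simple support constraint on a synthetic object; the actual attack applies different object- and frequency-domain constraints derived from the plaintext-ciphertext pair, so this is only a structural illustration.

```python
# Generic hybrid input-output (HIO) phase retrieval with a support constraint (sketch).
import numpy as np

def hio(fourier_magnitude, support, n_iter=200, beta=0.9, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.random(fourier_magnitude.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_magnitude * np.exp(1j * np.angle(G))   # enforce measured magnitude
        g_prime = np.real(np.fft.ifft2(G))
        violate = (~support) | (g_prime < 0)                # object-domain constraint
        g = np.where(violate, g - beta * g_prime, g_prime)  # HIO feedback update
    return g

rng = np.random.default_rng(1)
obj = np.zeros((64, 64))
obj[20:40, 20:40] = rng.random((20, 20))       # synthetic non-negative object
support = np.zeros((64, 64), dtype=bool)
support[20:40, 20:40] = True
recovered = hio(np.abs(np.fft.fft2(obj)), support)
```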
Quaternion normalization in spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.
1993-01-01
Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium, we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
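The "brute force" normalization referred to above amounts to dividing the updated quaternion estimate by its Euclidean norm after each measurement update. A minimal sketch, with an illustrative post-update quaternion, is shown below; the numbers are assumptions, not values from the paper.

```python
# Brute-force quaternion normalization after an EKF measurement update (sketch).
import numpy as np

def normalize_quaternion(q, eps=1e-12):
    n = np.linalg.norm(q)
    if n < eps:
        raise ValueError("degenerate quaternion estimate")
    return q / n

q_update = np.array([0.71, 0.02, -0.01, 0.73])   # illustrative a-posteriori estimate
q_hat = normalize_quaternion(q_update)
print(q_hat, np.linalg.norm(q_hat))              # unit norm restored
```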
Automated design of genomic Southern blot probes
2010-01-01
Background: Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration and successful homologous recombination in gene targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction digest and gel electrophoresis. Critically, for the technique to succeed the probe must be unique to the target locus, so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design. A genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process, often requiring several attempts to find a suitable probe for laboratory testing. Results: We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all of them against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well as, or better than, our previous manual designs, while considerably reducing design time. We went on to experimentally validate a number of these automated designs by Southern blotting. The majority of probes we tested performed well, confirming our in silico prediction methodology and the general usefulness of the software for automated genomic Southern probe design. Conclusions: Software and supplementary information are freely available at: http://www.genes2cognition.org/software/southern_blot PMID:20113467
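The brute-force candidate-generation-and-ranking idea can be sketched in a few lines of Python. The probe length, step size, and the exact-match count_genome_hits stand-in (a real pipeline would call BLAST or a similar aligner and tolerate mismatches) are illustrative assumptions, not the published pipeline.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    start: int
    seq: str
    hits: int           # genome-wide matches (exactly 1 is ideal)
    repeat_frac: float  # fraction of repeat-masked (lowercase) bases

def count_genome_hits(seq, genome):
    """Toy stand-in for a BLAST search: count exact occurrences of the
    candidate in a genome string."""
    count, i = 0, genome.find(seq)
    while i != -1:
        count += 1
        i = genome.find(seq, i + 1)
    return count

def repeat_fraction(seq):
    # Repeat-masked genomic sequence is conventionally lowercase.
    return sum(c.islower() for c in seq) / len(seq)

def design_probes(window_seq, genome, probe_len=600, step=50):
    """Brute-force enumeration: slide a fixed-length window across the
    design region, score every candidate, and rank by uniqueness first
    and repeat content second."""
    candidates = []
    for start in range(0, len(window_seq) - probe_len + 1, step):
        seq = window_seq[start:start + probe_len]
        candidates.append(Candidate(start, seq,
                                    count_genome_hits(seq, genome),
                                    repeat_fraction(seq)))
    return sorted(candidates, key=lambda c: (abs(c.hits - 1), c.repeat_frac))

# Usage sketch: probes = design_probes(locus_sequence, genome_sequence)
```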
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
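For readers unfamiliar with the sampling baseline the authors compare against, here is a hedged Monte Carlo sketch on a toy stand-in for a G&R simulation. The response function, nominal parameter values, and correlation-based sensitivity measure are illustrative assumptions only; they are not the adaptive sparse grid collocation scheme or the actual solver.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_gr_response(elastin_prestretch, collagen_prestretch, stiffness):
    """Stand-in for an expensive G&R simulation: returns a scalar
    'time to homeostasis' that depends nonlinearly on the inputs."""
    return 10.0 / (elastin_prestretch * collagen_prestretch) + 0.5 * np.log(stiffness)

# Uncertain inputs, each perturbed a few percent around an assumed nominal value.
n_samples = 5000
elastin = rng.normal(1.4, 0.03, n_samples)
collagen = rng.normal(1.08, 0.02, n_samples)
stiffness = rng.lognormal(mean=np.log(100.0), sigma=0.1, size=n_samples)

outputs = toy_gr_response(elastin, collagen, stiffness)
print(f"mean = {outputs.mean():.3f}, std = {outputs.std():.3f}")

# Crude sensitivity measure: correlation between each input and the output.
for name, x in [("elastin", elastin), ("collagen", collagen), ("stiffness", stiffness)]:
    r = np.corrcoef(x, outputs)[0, 1]
    print(f"{name:>9}: corr = {r:+.2f}")
```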
Rethinking one of criminology’s ‘brute facts’: The age–crime curve and the crime drop in Scotland
Matthews, Ben; Minton, Jon
2017-01-01
Examining annual variation in the age–crime curve as a way to better understand the recent crime drop, this paper explores how the age distribution of convicted offending changed for men and women in Scotland between 1989 and 2011. This analysis employs shaded contour plots as a method of visualizing annual change in the age–crime curve. Similar to recent findings from the USA, we observed falling rates of convicted offending for young people, primarily owing to lower rates of convicted offending for young men. In contrast to the US literature we also find increases in the rate of convicted offending for those in their mid-twenties to mid-forties, which are relatively greater for women than men. Analysis of annual change shows different phases in the progression of these trends, with falls in prevalence during the 1990s reflecting lower rates of convictions for acquisitive crime, but falls between 2007 and 2011 being spread across multiple crime types. Explanations of the crime drop in Scotland and elsewhere must be able to account for different patterns of change across age, sex, crime type and time. PMID:29805319
Prospective Optimization with Limited Resources
Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei
2015-01-01
The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309
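A minimal sketch of the ideal-actor computation described above: exhaustively enumerate every path through a triangular grid of rewards up to a fixed depth and keep the best one. The grid geometry and reward values are toy assumptions; the real task adds scrolling, timing, and motor constraints.

```python
def best_path_value(grid, row=0, col=0, depth=3):
    """Exhaustively explore every path up to `depth` further steps.
    grid[r] holds the rewards of row r; from (r, c) the two reachable
    disks in the next row are (r+1, c) and (r+1, c+1), as in a
    triangular lattice that widens by one disk per row."""
    reward = grid[row][col]
    if depth == 0 or row + 1 >= len(grid):
        return reward
    return reward + max(
        best_path_value(grid, row + 1, col,     depth - 1),
        best_path_value(grid, row + 1, col + 1, depth - 1),
    )

# Toy grid: larger numbers stand for larger disks / rewards.
grid = [
    [3],
    [1, 4],
    [2, 8, 1],
    [5, 1, 9, 2],
    [1, 7, 3, 6, 4],
]
print(best_path_value(grid, depth=2))   # looks only two rows ahead
print(best_path_value(grid, depth=4))   # full-depth, brute-force optimum
```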
Assessing Complex Learning Objectives through Analytics
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Mead, C.; Buxner, S.; Semken, S. C.; Anbar, A. D.
2016-12-01
A significant obstacle to improving the quality of education is the lack of easy-to-use assessments of higher-order thinking. Most existing assessments focus on recall and understanding questions, which demonstrate lower-order thinking. Traditionally, higher-order thinking is assessed with practical tests and written responses, which are time-consuming to analyze and are not easily scalable. Computer-based learning environments offer the possibility of assessing such learning outcomes based on analysis of students' actions within an adaptive learning environment. Our fully online introductory science course, Habitable Worlds, uses an intelligent tutoring system that collects and responds to a range of behavioral data, including actions within the keystone project. This central project is a summative, game-like experience in which students synthesize and apply what they have learned throughout the course to identify and characterize a habitable planet from among hundreds of stars. Student performance is graded based on completion and accuracy, but two additional properties can be utilized to gauge higher-order thinking: (1) how efficient a student is with the virtual currency within the project and (2) how many of the optional milestones a student reached. In the project, students can use the currency to check their work and "unlock" convenience features. High-achieving students spend close to the minimum amount required to reach these goals, indicating a high level of concept mastery and efficient methodology. Average students spend more, indicating effort but lower mastery. Low-achieving students were more likely to spend very little, which indicates low effort. Differences on these metrics were statistically significant between all three of these populations. We interpret this as evidence that high-achieving students develop and apply efficient problem-solving skills, as compared to lower-achieving students who use more brute-force approaches.
1961-08-14
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the construction progress of the test stand as of August 14, 1961.
1961-08-18
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the construction progress of the test stand as of August 18, 1961.
1963-01-14
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo, depicts the progress of the stand as of January 14, 1963, with its four towers prominently rising.
1963-04-17
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photograph taken April 17, 1963, gives a look at the four tower legs of the S-IC test stand at their completed height.
1961-07-21
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this photo, taken July 21, 1961, a worker can be seen inside the test stand work area with a jack hammer.
1963-11-20
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the progress of the S-IC test stand as of November 20, 1963.
1963-06-24
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this photo, taken June 24, 1963, the four tower legs of the test stand can be seen at their maximum height.
1961-07-31
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this photo, taken July 31, 1961, work is continued in the clearing of the test stand site.
1963-02-25
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photograph taken February 25, 1963, gives a close up look at two of the ever-growing four towers of the S-IC Test Stand.
1961-08-11
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the construction progress of the test stand as of August 11, 1961.
1963-05-07
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photograph, taken from ground level on May 7, 1963, gives a close look at one of the four towers legs of the S-IC test stand nearing its completed height.
1963-05-07
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photograph, taken May 7, 1963, gives a close look at the four concrete tower legs of the S-IC test stand at their completed height.
1961-07-21
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this photo, taken July 21, 1961, workers can be seen inside the test stand work area clearing the site.
1963-09-18
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. This photograph taken September 18, 1963 shows a spherical hydrogen tank being constructed next to the S-IC test stand.
A Machine Learning Approach to Predicted Bathymetry
NASA Astrophysics Data System (ADS)
Wood, W. T.; Elmore, P. A.; Petry, F.
2017-12-01
Recent and on-going efforts have shown how machine learning (ML) techniques, incorporating more, and more disparate, data than can be interpreted manually, can predict seafloor properties, with uncertainty, where they have not been measured directly. We examine here an ML approach to predicted bathymetry. Our approach employs a paradigm of global bathymetry as an integral component of global geology. From a marine geology and geophysics perspective, the bathymetry is the thickness of one layer in an ensemble of layers that inter-relate to varying extents vertically and geospatially. The nature of the multidimensional relationships in these layers between bathymetry, gravity, magnetic field, age, and many other global measures is typically geospatially dependent and non-linear. The advantage of using ML is that these relationships need not be stated explicitly, nor do they need to be approximated with a transfer function; the machine learns them from the data. Fundamentally, ML operates by brute-force searching for multidimensional correlations between desired, but sparsely known, data values (in this case water depth) and a multitude of (geologic) predictors. Predictors include quantities known extensively, such as remotely sensed measurements (e.g., gravity and magnetics) and distance from spreading ridges, trenches, etc., as well as spatial statistics based on these quantities. Estimating bathymetry from an approximate transfer function is inherently model-limited as well as data-limited; complex relationships are explicitly ruled out. ML is a purely data-driven approach, so only the extent and quality of the available observations limit prediction accuracy. This allows for a system in which new data, of a wide variety of types, can be quickly and easily assimilated into updated bathymetry predictions with quantitative posterior uncertainties.
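The abstract does not commit to a particular ML algorithm, so the following sketch uses a random forest regression on synthetic gridded predictors as one plausible realization of the described workflow, with per-tree spread as a rough stand-in for a posterior-style uncertainty. All predictor relationships and numbers here are synthetic and for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Toy "global" grid: each row is a location with geologic/geophysical
# predictors (gravity anomaly, distance to spreading ridge, crustal age).
n = 20000
gravity = rng.normal(0, 30, n)            # mGal, synthetic
ridge_km = rng.uniform(0, 3000, n)        # km to nearest ridge, synthetic
age_myr = ridge_km / 35.0 + rng.normal(0, 5, n)
X = np.column_stack([gravity, ridge_km, age_myr])

# Synthetic "true" depth loosely following age-depth subsidence plus a
# gravity-correlated term; real training depths would come from soundings.
depth = 2600 + 350 * np.sqrt(np.clip(age_myr, 0, None)) - 8 * gravity + rng.normal(0, 100, n)

# Depth is only "measured" along sparse ship tracks.
measured = rng.random(n) < 0.05
model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X[measured], depth[measured])

pred = model.predict(X[~measured])
# Per-tree spread as a rough uncertainty on each prediction.
per_tree = np.stack([t.predict(X[~measured]) for t in model.estimators_])
print("mean predicted depth:", pred.mean(), "mean spread:", per_tree.std(axis=0).mean())
```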
NASA Astrophysics Data System (ADS)
Fernandez, C. A.; Jung, H. B.; Shao, H.; Bonneville, A.; Heldebrant, D.; Hoyt, D.; Zhong, L.; Holladay, J.
2014-12-01
Cost-effective yet safe creation of high-permeability reservoirs inside deep crystalline bedrock is the primary challenge for the viability of enhanced geothermal systems and unconventional oil/gas recovery. Current reservoir stimulation processes utilize brute force (hydraulic pressures on the order of hundreds of bar) to create and propagate fractures in the bedrock. Such stimulation processes entail substantial economic costs ($3.3 million per reservoir as of 2011). Furthermore, the environmental impacts of reservoir stimulation are only recently being determined. Widespread concerns about environmental contamination have resulted in a number of regulations for fracturing fluids advocating for greener fracturing processes. To reduce the costs and environmental impact of reservoir stimulation, we developed an environmentally friendly and recyclable hydraulic fracturing fluid that undergoes a controlled and large volume expansion, with a simultaneous increase in viscosity, triggered by CO2 at temperatures relevant for reservoir stimulation in enhanced geothermal systems (EGS). The volume expansion, which occurs specifically at EGS depths of interest, generates an exceptionally large mechanical stress in fracture networks of highly impermeable rock, propagating fractures at effective stresses an order of magnitude lower than current technology. This paper concentrates on this CO2-triggered expanding hydrogel formed from diluted aqueous solutions of polyallylamine (PAA). Aqueous PAA-CO2 mixtures also show significantly higher viscosities than conventional rheology modifiers at similar pressures and temperatures, due to the cross-linking reaction of PAA with CO2, which was demonstrated by chemical speciation studies using in situ HP-HT 13C MAS-NMR. In addition, PAA shows shear-thinning behavior, a critical advantage for the use of this fluid system in EGS reservoir stimulation. The high pressure/temperature experiments and their results, as well as the CFD modeling, are presented in a companion paper.
Approximating frustration scores in complex networks via perturbed Laplacian spectra
NASA Astrophysics Data System (ADS)
Savol, Andrej J.; Chennubhotla, Chakra S.
2015-12-01
Systems of many interacting components, as found in physics, biology, infrastructure, and the social sciences, are often modeled by simple networks of nodes and edges. These real-world systems frequently confront outside intervention or internal damage whose impact must be predicted or minimized, and such perturbations are then mimicked in the models by altering nodes or edges. This leads to the broad issue of how best to quantify changes in a model network after some type of perturbation. In the case of node removal there are many centrality metrics which associate a scalar quantity with the removed node, but it can be difficult to associate the quantities with some intuitive aspect of physical behavior in the network. This presents a serious hurdle to the application of network theory: real-world utility networks are rarely altered according to theoretic principles unless the kinetic impact on the network's users is fully appreciated beforehand. In pursuit of a kinetically interpretable centrality score, we discuss the f-score, or frustration score. Each f-score quantifies whether a selected node accelerates or inhibits global mean first passage times to a second, independently selected target node. We show that this is a natural way of revealing the dynamical importance of a node in some networks. After discussing merits of the f-score metric, we combine spectral and Laplacian matrix theory in order to quickly approximate the exact f-score values, which can otherwise be expensive to compute. Following tests on both synthetic and real medium-sized networks, we report f-score runtime improvements over exact brute-force approaches in the range of 0 to 400% with low error (<3%).
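The precise f-score definition is given in the paper; the sketch below only illustrates the underlying ingredient, mean first passage times of an unbiased random walk to a chosen target, computed exactly by a linear solve, together with a node-removal comparison in the spirit of the brute-force evaluation that the spectral approximation is designed to avoid. The small example graph and the sign/averaging conventions are assumptions for illustration.

```python
import numpy as np

def mean_first_passage_times(A, target):
    """Exact MFPTs of an unbiased random walk on an undirected graph with
    adjacency matrix A, to a single absorbing target node: solve
    (I - Q) m = 1, where Q is the transition matrix with the target removed."""
    A = np.asarray(A, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transitions
    keep = [i for i in range(len(A)) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(len(A))
    out[keep] = m
    return out                                    # MFPT from every node to target

# Small ring-with-a-pendant network.
A = np.array([
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
])
base = mean_first_passage_times(A, target=3).mean()

# A frustration-style score for node 5: how much does removing it change the
# network-averaged passage time to the target? (Sign convention is illustrative.)
A_removed = np.delete(np.delete(A, 5, axis=0), 5, axis=1)
perturbed = mean_first_passage_times(A_removed, target=3).mean()
print(base, perturbed, perturbed - base)
```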
1963-10-10
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the progress of the S-IC test stand as of October 10, 1963. Kerosene storage tanks can be seen to the left.
Hierarchical Material Properties in Finite Element Analysis: The Oilfield Infrastructure Problem.
NASA Astrophysics Data System (ADS)
Weiss, C. J.; Wilson, G. A.
2017-12-01
Geophysical simulation of low-frequency electromagnetic signals within built environments such as urban centers and industrial facilities is a challenging computational problem because strong conductors (e.g., pipes, fences, rail lines, rebar) are not only highly conductive and/or magnetic relative to the surrounding geology, but are also very small in one or more of their physical dimensions. Realistic modeling of such structures as idealized conductors has long been the standard approach; however, this strategy carries with it computational burdens such as cumbersome implementation of internal boundary conditions and limited flexibility for accommodating realistic geometries. Another standard approach is "brute force" discretization (often coupled with an equivalent medium model), whereby hundreds of millions of voxels are used to represent these strong conductors, but at the cost of extreme mesh-design effort and computation times, when a simulation result is obtainable at all. To minimize these burdens, a new finite element scheme (Weiss, Geophysics, 2017) has been developed in which the material properties reside on a hierarchy of geometric simplices (i.e., edges, facets, and volumes) within an unstructured tetrahedral mesh. This allows thin sheet-like structures, such as subsurface fractures, to be economically represented by a connected set of triangular facets that freely conform to arbitrary "real world" geometries. The same holds for thin pipe- or wire-like structures, such as casings or pipelines. The hierarchical finite element scheme has been applied to problems in electro- and magnetostatics for oilfield problems, where the elevated but finite conductivity and permeability of steel-cased oil wells must be properly accounted for, yielding results that are otherwise unobtainable, with run times as low as a few tens of seconds. Extension of the hierarchical finite element concept to broadband electromagnetics is presently underway, as are its implications for geophysical inversion.
Rigali, Sébastien; Anderssen, Sinaeda; Naômé, Aymeric; van Wezel, Gilles P
2018-01-05
The World Health Organization (WHO) describes antibiotic resistance as "one of the biggest threats to global health, food security, and development today", as the number of multi- and pan-resistant bacteria is rising dangerously. Acquired resistance phenomena also impair antifungal, antiviral, and anti-cancer drug therapies, while herbicide resistance in weeds threatens the crop industry. On the positive side, it is likely that the chemical space of natural products goes far beyond what has currently been discovered. This idea is fueled by genome sequencing of microorganisms, which has unveiled numerous so-called cryptic biosynthetic gene clusters (BGCs), many of which are transcriptionally silent under laboratory culture conditions, and by the fact that most bacteria cannot yet be cultivated in the laboratory. However, brute-force antibiotic discovery does not yield the same results as it did in the past, and researchers have had to develop creative strategies in order to unravel the hidden potential of Streptomyces and other antibiotic-producing microorganisms. Identifying the cis elements and their corresponding transcription factor(s) involved in the control of BGCs through bioinformatic approaches is a promising strategy. Theoretically, we are a few 'clicks' away from unveiling the culturing conditions or genetic changes needed to activate the production of cryptic metabolites, or to increase the production yield of known compounds to make them economically viable. In this opinion article, we describe and illustrate the idea behind 'cracking' the regulatory code for natural product discovery by presenting a series of proofs of concept, and we discuss what still needs to be achieved to increase the rate of success of this strategy. Copyright © 2018 Elsevier Inc. All rights reserved.
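As one small proof-of-concept of the cis-element idea, the sketch below scans toy upstream regions of hypothetical BGCs for a hypothetical operator consensus on both strands. Real analyses would use position weight matrices and genuine genome sequence; every sequence and motif here is made up for illustration.

```python
import re

# Hypothetical consensus for a regulator's operator site, written as an
# IUPAC-style pattern; real cis-element models would use a position weight
# matrix trained on known binding sites.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "W": "[AT]", "S": "[CG]",
         "R": "[AG]", "Y": "[CT]", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    return re.compile("".join(IUPAC[b] for b in consensus))

def scan_upstream_regions(regions, consensus):
    """Report every BGC whose upstream region contains the operator motif
    on either strand; such hits nominate the regulator that may keep the
    cluster silent (or could be exploited to switch it on)."""
    pattern = consensus_to_regex(consensus)
    complement = str.maketrans("ACGT", "TGCA")
    hits = {}
    for name, seq in regions.items():
        rc = seq.translate(complement)[::-1]        # reverse complement
        found = pattern.findall(seq) + pattern.findall(rc)
        if found:
            hits[name] = found
    return hits

upstream = {  # toy upstream sequences of two hypothetical silent BGCs
    "bgc_A": "GGTACGTTAACGTATGCACGTAACGTTAACG",
    "bgc_B": "ATATATATATGCGCGCGCGCATATATATATA",
}
print(scan_upstream_regions(upstream, "ACGTTAACGT"))
```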
1961-09-07
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo shows the construction progress of the S-IC test stand as of September 7, 1961.
1961-09-29
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo, taken September 29, 1961, shows the progress of the concrete walls for the stand’s foundation. Some of the walls have been poured and some of the concrete forms have been removed.
Construction Progress of the S-IC Test Stand-Steel Reinforcements
NASA Technical Reports Server (NTRS)
1961-01-01
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army's Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. This photo, taken September 15, 1961, shows the installation of the reinforcing steel prior to the pouring of the concrete foundation walls.
1961-07-10
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this photo, taken July 10, 1961, actual ground breaking has occurred for the S-IC test stand site.
1961-06-30
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and they remain so today because they were constructed with foresight to meet the future as well as on going needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the highest structures in Alabama at the time. In this early construction photo, taken June 30, 1961, workers are involved in the survey and site preparation for the test stand.
1961-09-22
This photo, taken September 22, 1961, shows the progress of the concrete walls for the S-IC test stand’s foundation. Some of the walls have been poured and some of the concrete forms have been removed.
1961-09-07
This photo shows the construction progress of the forms for the S-IC test stand’s concrete foundation walls as of September 7, 1961.
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-01-01
Astronomical radio signals are subjected to phase dispersion while traveling through the interstellar medium. To optimally detect a short-duration signal within a frequency band, we have to precisely compensate for the unknown pulse dispersion, which is a computationally demanding task. We present the “fast dispersion measure transform” algorithm for optimal detection of such signals. Our algorithm has a low theoretical complexity of $2 N_f N_t + N_t N_{\Delta} \log_2(N_f)$, where $N_f$, $N_t$, and $N_{\Delta}$ are the numbers of frequency bins, time bins, and dispersion measure bins, respectively. Unlike previously suggested fast algorithms, our algorithm conserves the sensitivity of brute-force dedispersion. Our tests indicate that this algorithm, running on a standard desktop computer and implemented in a high-level programming language, is already faster than the state-of-the-art dedispersion codes running on graphical processing units (GPUs). We also present a variant of the algorithm that can be efficiently implemented on GPUs. The latter algorithm’s computation and data-transport requirements are similar to those of a two-dimensional fast Fourier transform, indicating that incoherent dedispersion can now be considered a nonissue while planning future surveys. We further present a fast algorithm for sensitive detection of pulses shorter than the dispersive smearing limits of incoherent dedispersion. In typical cases, this algorithm is orders of magnitude faster than enumerating dispersion measures and coherently dedispersing by convolution. We analyze the computational complexity of pulsed signal searches by radio interferometers. We conclude that, using our suggested algorithms, maximally sensitive blind searches for dispersed pulses are feasible using existing facilities. We provide an implementation of these algorithms in Python and MATLAB.
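To make the cost comparison concrete, here is a minimal sketch of conventional brute-force incoherent dedispersion, the $O(N_f N_t N_{\Delta})$ baseline that the fast dispersion measure transform improves on; it is not the FDMT itself. The function name, toy dynamic spectrum, and trial-DM grid are illustrative, assuming the usual cold-plasma delay with frequencies in MHz.

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, MHz^2 s per (pc cm^-3)

def brute_force_dedisperse(dyn_spec, freqs_mhz, dt_s, dm_trials):
    """Sum a dynamic spectrum (Nf x Nt) along the dispersion track of every
    trial DM.  Cost is O(Nf * Nt * N_DM); the FDMT reaches the same result
    in roughly 2*Nf*Nt + Nt*N_DM*log2(Nf) operations."""
    nf, nt = dyn_spec.shape
    f_ref = freqs_mhz.max()
    out = np.zeros((len(dm_trials), nt))
    for i, dm in enumerate(dm_trials):
        delays = K_DM * dm * (freqs_mhz ** -2 - f_ref ** -2)   # seconds
        shifts = np.round(delays / dt_s).astype(int)
        for ch in range(nf):
            out[i] += np.roll(dyn_spec[ch], -shifts[ch])       # undo the delay
    return out

# Toy example: one dispersed pulse hidden in white noise.
rng = np.random.default_rng(0)
freqs = np.linspace(1200.0, 1600.0, 256)                       # MHz
dyn = rng.normal(size=(256, 4096))
dm_true, dt = 100.0, 1e-3
inject = np.round(K_DM * dm_true * (freqs ** -2 - freqs.max() ** -2) / dt).astype(int)
dyn[np.arange(256), 1000 + inject] += 8.0                      # injected dispersed pulse

dm_grid = np.arange(0.0, 200.0, 2.0)
detections = brute_force_dedisperse(dyn, freqs, dt, dm_grid)
print("best trial DM:", dm_grid[detections.max(axis=1).argmax()], "pc cm^-3")
```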
Multi-pass Monte Carlo simulation method in nuclear transmutations.
Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M
2016-12-01
Monte Carlo methods, in their direct brute-force simulation incarnation, bring realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are the activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and by the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually $10^{25}$ or $10^{26}$ members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of the order of $1/10^{25}$. Such a change cannot be accounted for within a simulation, because the simulation would then need at least $10^{28}$ steps in order to have some significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities, leading to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the highly branched subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
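As a rough illustration of the batching idea (not the authors' algorithm), the sketch below freezes the probabilities during a pass of affordable size, estimates the transmuted fraction from that pass, applies it analytically to the full ~10^25-atom inventory, and only then updates the probabilities for the next pass. The two-nuclide chain, the self-shielding factor, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-nuclide problem: A captures a neutron and transmutes to B.
# A crude self-shielding factor makes the per-atom capture probability depend
# on how much A is left, so the probabilities evolve as the system evolves.
N_A0 = 1.0e25                    # initial inventory of A (order of Avogadro's number)
N_A, N_B = N_A0, 0.0
sigma = 1.0e-22                  # capture cross-section, cm^2 (illustrative)
phi0 = 1.0e15                    # unshielded neutron flux, n / cm^2 / s
tau0 = 1.5                       # optical depth of the fresh sample (illustrative)
dt_pass = 1.0e4                  # irradiation time represented by one pass, s
n_passes, n_hist = 40, 100_000   # histories per pass are all we can afford

for p in range(n_passes):
    # probabilities are FROZEN during a pass, exactly as in a plain MC run
    phi_eff = phi0 * np.exp(-tau0 * N_A / N_A0)        # crude self-shielding
    p_capture = 1.0 - np.exp(-sigma * phi_eff * dt_pass)
    hits = rng.random(n_hist) < p_capture              # sample n_hist histories
    frac = hits.mean()                                 # MC estimate of capture fraction
    # Between passes, apply the estimated fraction to the full ~1e25-atom
    # inventory, then recompute the probabilities for the next pass.
    dN = frac * N_A
    N_A, N_B = N_A - dN, N_B + dN

print(f"after {n_passes} passes: N_A = {N_A:.3e}, N_B = {N_B:.3e}")
```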
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe both at very early and recent times. Modern data have allowed us to begin testing inflationary models of the early Universe, and the near future will bring higher-precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to premature conclusions about current cosmological theories. It can also be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide considerable speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute-force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there is no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next-generation CMB space mission.
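The general idea, shifting the expensive computation to a one-off training step and then evaluating a learned surrogate inside the parameter estimation loop, can be sketched as follows. The sketch uses scikit-learn's MLPRegressor as a stand-in for the thesis's machine learning code, and `expensive_theory_code` is a hypothetical placeholder for a slow Boltzmann-type solver.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_theory_code(params):
    """Stand-in for a slow cosmology code that maps parameters to an
    observable; here just a cheap analytic toy."""
    a, b = params
    ell = np.arange(2, 200)
    return a * np.exp(-0.5 * ((ell - 60.0 * b) / 30.0) ** 2)

# 1) parallelisable, one-off training step: sample parameters, run the slow code
rng = np.random.default_rng(0)
X_train = rng.uniform([0.5, 0.5], [2.0, 2.0], size=(400, 2))
Y_train = np.array([expensive_theory_code(p) for p in X_train])

# 2) learn the parameter -> observable mapping once
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
emulator.fit(X_train, Y_train)

# 3) inside a likelihood / MCMC loop, the emulator replaces the slow code
theta = np.array([1.3, 1.1])
spectrum_fast = emulator.predict(theta.reshape(1, -1))[0]
spectrum_slow = expensive_theory_code(theta)
print("max absolute emulation error:", np.max(np.abs(spectrum_fast - spectrum_slow)))
```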
Et tu, Brute? Not Even Intracellular Mutualistic Symbionts Escape Horizontal Gene Transfer
2017-01-01
Many insect species maintain mutualistic relationships with endosymbiotic bacteria. In contrast to their free-living relatives, horizontal gene transfer (HGT) has traditionally been considered rare in long-term endosymbionts. Nevertheless, meta-omics exploration of certain symbiotic models has unveiled an increasing number of bacteria-bacteria and bacteria-host genetic transfers. The abundance and function of transferred loci suggest that HGT might play a major role in the evolution of the corresponding consortia, enhancing their adaptive value or buffering detrimental effects derived from the reductive evolution of endosymbionts’ genomes. Here, we comprehensively review the HGT cases recorded to date in insect-bacteria mutualistic consortia, and discuss their impact on the evolutionary success of these associations. PMID:28961177
Techniques of Force and Pressure Measurement in the Small Joints of the Wrist.
Schreck, Michael J; Kelly, Meghan; Canham, Colin D; Elfar, John C
2018-01-01
The alteration of forces across joints can result in instability and subsequent disability. Previous methods of force measurements such as pressure-sensitive films, load cells, and pressure-sensing transducers have been utilized to estimate biomechanical forces across joints and more recent studies have utilized a nondestructive method that allows for assessment of joint forces under ligamentous restraints. A comprehensive review of the literature was performed to explore the numerous biomechanical methods utilized to estimate intra-articular forces. Methods of biomechanical force measurements in joints are reviewed. Methods such as pressure-sensitive films, load cells, and pressure-sensing transducers require significant intra-articular disruption and thus may result in inaccurate measurements, especially in small joints such as those within the wrist and hand. Non-destructive methods of joint force measurements either utilizing distraction-based joint reaction force methods or finite element analysis may offer a more accurate assessment; however, given their recent inception, further studies are needed to improve and validate their use.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
25 CFR 170.605 - When may BIA use force account methods in the IRR Program?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false When may BIA use force account methods in the IRR Program... § 170.605 When may BIA use force account methods in the IRR Program? BIA may use force account methods... account project activities. ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-02-01
Trucks used for the delivery of coal have a relatively limited life because they must be specified for somewhat less than brute strength in order to achieve maximum payload within existing weight and length limitations. The major drive train components such as engine, transmission, and drive axles can last longer if rebuilt or remanufactured periodically. For that reason glider kits and other truck components ready for the installation of the drive train are becoming increasingly popular. These kits have been available for many years, but were regarded only as a means of salvaging late model wrecks or burned trucks. Their recent acceptance as a means of periodic fleet updating is built around the low cost and immediate availability of remanufactured major components. The cost advantages in using these glider kits are discussed.
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Ba, Kaixian; Yu, Bin; Cao, Yuan; Zhu, Qixin; Zhao, Hualong
2016-05-01
Each joint of hydraulic drive quadruped robot is driven by the hydraulic drive unit (HDU), and the contacting between the robot foot end and the ground is complex and variable, which increases the difficulty of force control inevitably. In the recent years, although many scholars researched some control methods such as disturbance rejection control, parameter self-adaptive control, impedance control and so on, to improve the force control performance of HDU, the robustness of the force control still needs improving. Therefore, how to simulate the complex and variable load characteristics of the environment structure and how to ensure HDU having excellent force control performance with the complex and variable load characteristics are key issues to be solved in this paper. The force control system mathematic model of HDU is established by the mechanism modeling method, and the theoretical models of a novel force control compensation method and a load characteristics simulation method under different environment structures are derived, considering the dynamic characteristics of the load stiffness and the load damping under different environment structures. Then, simulation effects of the variable load stiffness and load damping under the step and sinusoidal load force are analyzed experimentally on the HDU force control performance test platform, which provides the foundation for the force control compensation experiment research. In addition, the optimized PID control parameters are designed to make the HDU have better force control performance with suitable load stiffness and load damping, under which the force control compensation method is introduced, and the robustness of the force control system with several constant load characteristics and the variable load characteristics respectively are comparatively analyzed by experiment. The research results indicate that if the load characteristics are known, the force control compensation method presented in this paper has positive compensation effects on the load characteristics variation, i.e., this method decreases the effects of the load characteristics variation on the force control performance and enhances the force control system robustness with the constant PID parameters, thereby, the online PID parameters tuning control method which is complex needs not be adopted. All the above research provides theoretical and experimental foundation for the force control method of the quadruped robot joints with high robustness.
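For readers unfamiliar with how load stiffness and damping enter a force loop, the toy below simulates a generic discrete-time PID force controller pressing on a spring-damper load; it is not the paper's HDU model or its compensation method, and all gains and load parameters are invented.

```python
import numpy as np

def simulate_force_control(k_load, c_load, kp, ki, kd, f_ref=1000.0,
                           dt=1e-4, t_end=0.5):
    """Generic discrete-time PID force loop: a velocity-commanded actuator
    presses on a spring-damper environment with stiffness k_load [N/m] and
    damping c_load [N s/m]; the controller regulates the contact force."""
    n = int(t_end / dt)
    x = 0.0                          # actuator position, m
    err_int, err_prev = 0.0, 0.0
    force = np.zeros(n)
    for i in range(n):
        v = 0.0 if i == 0 else (x - x_prev) / dt
        f = k_load * x + c_load * v              # contact force of the load
        err = f_ref - f
        err_int += err * dt
        d_err = (err - err_prev) / dt
        err_prev = err
        u = kp * err + ki * err_int + kd * d_err # commanded velocity, m/s
        x_prev, x = x, x + u * dt
        force[i] = f
    return force

# the same gains behave differently when the load characteristics change
for k, c in [(2.0e5, 500.0), (8.0e5, 500.0), (2.0e5, 4000.0)]:
    f = simulate_force_control(k, c, kp=2e-4, ki=5e-3, kd=0.0)
    print(f"k={k:.0e} N/m, c={c:.0f} N s/m -> steady-state force "
          f"{f[-1]:7.1f} N, overshoot {f.max() - 1000.0:6.1f} N")
```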
Systems and methods of detecting force and stress using tetrapod nanocrystal
Choi, Charina L.; Koski, Kristie J.; Sivasankar, Sanjeevi; Alivisatos, A. Paul
2013-08-20
Systems and methods of detecting force on the nanoscale including methods for detecting force using a tetrapod nanocrystal by exposing the tetrapod nanocrystal to light, which produces a luminescent response by the tetrapod nanocrystal. The method continues with detecting a difference in the luminescent response by the tetrapod nanocrystal relative to a base luminescent response that indicates a force between a first and second medium or stresses or strains experienced within a material. Such systems and methods find use with biological systems to measure forces in biological events or interactions.
Geometrical force constraint method for vessel and x-ray angiogram simulation.
Song, Shuang; Yang, Jian; Fan, Jingfan; Cong, Weijian; Ai, Danni; Zhao, Yitian; Wang, Yongtian
2016-01-01
This study proposes a novel geometrical force constraint method for 3-D vasculature modeling and angiographic image simulation. For this method, space filling force, gravitational force, and topological preserving force are proposed and combined for the optimization of the topology of the vascular structure. The surface covering force and surface adhesion force are constructed to drive the growth of the vasculature on any surface. According to the combination effects of the topological and surface adhering forces, a realistic vasculature can be effectively simulated on any surface. The image projection of the generated 3-D vascular structures is simulated according to the perspective projection and energy attenuation principles of X-rays. Finally, the simulated projection vasculature is fused with a predefined angiographic mask image to generate a realistic angiogram. The proposed method is evaluated on a CT image and three generally utilized surfaces. The results fully demonstrate the effectiveness and robustness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zhigang; Chun, Jaehun; Chatterjee, Sayandev
Detailed knowledge of the forces between nanocrystals is crucial for understanding many generic (e.g., random aggregation/assembly and rheology) and specific (e.g., oriented attachment) phenomena at macroscopic length scales, especially considering the additional complexities involved in nanocrystals such as crystal orientation and corresponding orientation-dependent physicochemical properties. Because there are a limited number of methods to directly measure the forces, little is known about the forces that drive the various emergent phenomena. Here we report on two methods of preparing crystals as force measurement tips used in an atomic force microscope (AFM): the focused ion beam method and the microlithography method. The desired crystals are fabricated using these two methods and are fixed to the AFM probe using platinum deposition, ultraviolet epoxy, or resin, which allows for orientation-dependent force measurements. These two methods can be used to attach virtually any solid particles (from sizes of a few hundred nanometers to millimeters). We demonstrate the force measurements in aqueous media under different conditions such as pH.
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Sweetman, Adam; Stannard, Andrew
2014-01-01
In principle, non-contact atomic force microscopy (NC-AFM) now readily allows for the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, the extraction of the often desired 'short-range' force from the experimental observable (frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip-sample force due to non-site-specific van der Waals and electrostatic forces. Typically, the contribution from these forces must be removed before the results of the experiment can be successfully interpreted, often by comparison to density functional theory calculations. In this paper we compare the 'on-minus-off' method for extracting site-specific forces to a commonly used extrapolation method modelling the long-range forces using a simple power law. By examining the behaviour of the fitting method in the case of two radically different interaction potentials we show that significant uncertainties in the final extracted forces may result from use of the extrapolation method.
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure the propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are firstly analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through the connection mode of bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To solve these effects, the influence mechanisms of the pretightening force, the inertia force and other influence factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model is discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.
Do the peak and mean force methods of assessing vertical jump force asymmetry agree?
Lake, Jason P; Mundy, Peter D; Comfort, Paul; Suchomel, Timothy J
2018-05-21
The aim of this study was to assess agreement between peak and mean force methods of quantifying force asymmetry during the countermovement jump (CMJ). Forty-five men performed four CMJ with each foot on one of two force plates recording at 1,000 Hz. Peak and mean were obtained from both sides during the braking and propulsion phases. The dominant side was obtained for the braking and propulsion phase as the side with the largest peak or mean force and agreement was assessed using percentage agreement and the kappa coefficient. Braking phase peak and mean force methods demonstrated a percentage agreement of 84% and a kappa value of 0.67 (95% confidence limits: 0.45-0.90), indicating substantial agreement. Propulsion phase peak and mean force methods demonstrated a percentage agreement of 87% and a kappa value of 0.72 (95% confidence limits: 0.51-0.93), indicating substantial agreement. While agreement was substantial, side-to-side differences were not reflected equally when peak and mean force methods of assessing CMJ asymmetry were used. These methods should not be used interchangeably, but rather a combined approach should be used where practitioners consider both peak and mean force to obtain the fullest picture of athlete asymmetry.
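The two agreement statistics quoted above can be computed as follows; the dominant-side labels here are invented for illustration.

```python
import numpy as np

def agreement_and_kappa(side_peak, side_mean):
    """Percentage agreement and Cohen's kappa between two binary labellings
    of the dominant side (e.g. 'L'/'R' from peak force vs. mean force)."""
    side_peak, side_mean = np.asarray(side_peak), np.asarray(side_mean)
    p_o = np.mean(side_peak == side_mean)             # observed agreement
    labels = np.union1d(side_peak, side_mean)
    p_e = sum(np.mean(side_peak == lab) * np.mean(side_mean == lab)
              for lab in labels)                       # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    return p_o, kappa

# invented example: dominant side for 45 athletes by the two methods
rng = np.random.default_rng(3)
peak = rng.choice(["L", "R"], size=45)
mean = peak.copy()
flip = rng.choice(45, size=6, replace=False)           # six disagreements
mean[flip] = np.where(mean[flip] == "L", "R", "L")
p_o, kappa = agreement_and_kappa(peak, mean)
print(f"agreement = {p_o:.0%}, kappa = {kappa:.2f}")
```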
Probing fibronectin–antibody interactions using AFM force spectroscopy and lateral force microscopy
Kulik, Andrzej J; Lee, Kyumin; Pyka-Fościak, Grazyna; Nowak, Wieslaw
2015-01-01
Summary The first experiment showing the effects of specific interaction forces using lateral force microscopy (LFM) was demonstrated for lectin–carbohydrate interactions some years ago. Such measurements are possible under the assumption that specific forces strongly dominate over the non-specific ones. However, obtaining quantitative results requires the complex and tedious calibration of a torsional force. Here, a new and relatively simple method for the calibration of the torsional force is presented. The proposed calibration method is validated through the measurement of the interaction forces between human fibronectin and its monoclonal antibody. The results obtained using LFM and AFM-based classical force spectroscopies showed similar unbinding forces recorded at similar loading rates. Our studies verify that the proposed lateral force calibration method can be applied to study single molecule interactions. PMID:26114080
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of top of the atmosphere radiative imbalance and global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression based methods using model outputs and shown to produce consistent forcing estimates giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.
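A minimal discrete sketch of the kernel idea, assuming the temperature anomaly is the convolution of the forcing with a known impulse-response kernel, so that the forcing can be recovered by regularized deconvolution; the kernel, the synthetic record, and the ridge parameter are all invented and are not the paper's.

```python
import numpy as np

# Invented impulse-response kernel: temperature response (K per unit forcing,
# W m^-2) decaying over a ~10-year timescale on an annual grid.
n_years = 120
t = np.arange(n_years)
kernel = 0.08 * np.exp(-t / 10.0)             # K per (W m^-2) per year of forcing

# Forward model T = C @ F, with C a lower-triangular convolution matrix.
C = np.zeros((n_years, n_years))
for i in range(n_years):
    C[i, :i + 1] = kernel[:i + 1][::-1]

# Make a synthetic "observed" temperature record from a known forcing ...
forcing_true = 0.025 * t + 0.4 * np.sin(2 * np.pi * t / 11.0)
rng = np.random.default_rng(7)
temperature = C @ forcing_true + rng.normal(0.0, 0.02, n_years)

# ... and invert it for the forcing.  Ridge regularisation tames the noise
# amplification that a plain least-squares deconvolution would suffer.
lam = 1e-2
forcing_est = np.linalg.solve(C.T @ C + lam * np.eye(n_years), C.T @ temperature)
print("rms forcing error:", np.sqrt(np.mean((forcing_est - forcing_true) ** 2)))
```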
Development of a commercially viable piezoelectric force sensor system for static force measurement
NASA Astrophysics Data System (ADS)
Liu, Jun; Luo, Xinwei; Liu, Jingcheng; Li, Min; Qin, Lan
2017-09-01
A compensation method for measuring static force with a commercial piezoelectric force sensor is proposed to disprove the theory that piezoelectric sensors and generators can only operate under dynamic force. After studying the model of the piezoelectric force sensor measurement system, the principle of static force measurement using a piezoelectric material or piezoelectric force sensor is analyzed. Then, the distribution law of the decay time constant of the measurement system and the variation law of the measurement system’s output are studied, and a compensation method based on the time interval threshold Δ t and attenuation threshold Δ {{u}th} is proposed. By calibrating the system and considering the influences of the environment and the hardware, a suitable Δ {{u}th} value is determined, and the system’s output attenuation is compensated based on the Δ {{u}th} value to realize the measurement. Finally, a static force measurement system with a piezoelectric force sensor is developed based on the compensation method. The experimental results confirm the successful development of a simple compensation method for static force measurement with a commercial piezoelectric force sensor. In addition, it is established that, contrary to the current perception, a piezoelectric force sensor system can be used to measure static force through further calibration.
Calculation of forces on magnetized bodies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, John
1987-01-01
The methods described may be used with a high degree of confidence for calculations of magnetic traction forces normal to a surface. In this circumstance all models agree, and test cases have resulted in theoretically correct results. It is shown that the tangential forces are in practice negligible. The surface pole method is preferable to the virtual work method because of the necessity for more than one NASTRAN run in the latter case, and because distributed forces are obtained. The derivation of local forces from the Maxwell stress method involves an undesirable degree of manipulation of the problem and produces a result in contradiction of the surface pole method.
A New Method of Comparing Forcing Agents in Climate Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.
We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.
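The external feedback loop can be illustrated with a toy zero-dimensional energy-balance model in which an integral controller adjusts a solar forcing to cancel a ramped CO2 forcing while holding the global mean temperature anomaly at zero; the heat capacity, feedback parameter, and gain are invented, and this is not either of the paper's climate models.

```python
import numpy as np

# Toy zero-dimensional energy balance: C dT/dt = F_co2 + F_solar - lam * T
heat_cap = 8.0     # W yr m^-2 K^-1 (illustrative mixed-layer heat capacity)
lam = 1.2          # climate feedback parameter, W m^-2 K^-1
dt = 0.1           # years
n_steps = 2000

T = 0.0            # global-mean temperature anomaly, K
F_solar = 0.0      # controlled solar forcing, W m^-2
ki = 0.5           # integral gain of the external feedback loop
int_err = 0.0

for step in range(n_steps):
    years = step * dt
    F_co2 = 3.7 * min(years / 100.0, 1.0)   # ramp to ~2xCO2 forcing over a century
    # external feedback loop: adjust the solar agent to keep T at zero
    int_err += (0.0 - T) * dt
    F_solar = ki * int_err
    # step the climate model
    T += dt / heat_cap * (F_co2 + F_solar - lam * T)

print(f"final T anomaly: {T:+.3f} K")
print(f"solar forcing balancing {F_co2:.2f} W/m^2 of CO2 forcing: {F_solar:+.2f} W/m^2")
```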
Force measuring valve assemblies, systems including such valve assemblies and related methods
DeWall, Kevin George [Pocatello, ID; Garcia, Humberto Enrique [Idaho Falls, ID; McKellar, Michael George [Idaho Falls, ID
2012-04-17
Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.
Precision Mechanical Measurement Using the Levitation Mass Method (LMM)
NASA Astrophysics Data System (ADS)
Fujii, Yusaku; Jin, Tao; Maru, Koichi
2010-12-01
The present status and the future prospects of a method for precision mass and force measurement, the levitation mass method (LMM), are reviewed. The LMM has been proposed and improved by the authors. In the LMM, the inertial force of a mass levitated using a pneumatic linear bearing is used as the reference force applied to the objects under test, such as force transducers, materials or structures. The inertial force of the levitated mass is measured using an optical interferometer. The three typical applications of the LMM, i.e. the dynamic force calibration, the micro force material tester and the space scale, are reviewed in this paper.
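A synthetic sketch of the core relation of the levitation mass method, F = -M dv/dt, with the velocity derived from the interferometer's Doppler beat frequency; the mass, velocity profile, and counter noise are invented, not real interferometer data.

```python
import numpy as np

# Synthetic LMM-style experiment: a levitated mass M decelerates while acting
# on the object under test; the interferometer gives velocity via the Doppler
# beat frequency, and the reference force is F = -M * dv/dt.
M = 2.653            # kg, levitated mass (illustrative)
wavelength = 633e-9  # m, He-Ne laser
fs = 100_000         # Hz, sampling rate of the frequency counter
t = np.arange(0.0, 0.02, 1.0 / fs)

v_true = 0.05 * np.exp(-((t - 0.01) / 0.003) ** 2)   # m/s, collision-like pulse
f_beat = 2.0 * v_true / wavelength                   # Doppler beat frequency, Hz
f_meas = f_beat + np.random.default_rng(2).normal(0.0, 50.0, t.size)  # counter noise

v = f_meas * wavelength / 2.0                        # back to velocity
v = np.convolve(v, np.ones(21) / 21, mode="same")    # light smoothing
a = np.gradient(v, 1.0 / fs)                         # numerical differentiation
F_ref = -M * a                                       # reference force

print(f"peak reference force: {F_ref.max():.1f} N "
      f"(analytic: {M * 0.05 * np.sqrt(2.0 / np.e) / 0.003:.1f} N)")
```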
NASA Astrophysics Data System (ADS)
Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.
2011-12-01
The "Sabana de Bogota" (Bogota Savannah) is the most important social and economical center of Colombia. Almost the third of population is concentrated in this region and generates about the 40% of Colombia's Internal Brute Product (IBP). According to this, the zone presents an elevated vulnerability in case that a high destructive seismic event occurs. Historical evidences show that high magnitude events took place in the past with a huge damage caused to the city and indicate that is probable that such events can occur in the next years. This is the reason why we are working in an early warning generation system, using the first few seconds of a seismic signal registered by three components and wide band seismometers. Such system can be implemented using Computational Intelligence tools, designed and calibrated to the particular Geological, Structural and environmental conditions present in the region. The methods developed are expected to work on real time, thus suitable software and electronic tools need to be developed. We used Support Vector Machines Regression (SVMR) methods trained and tested with historic seismic events registered by "EL ROSAL" Station, located near Bogotá, calculating descriptors or attributes as the input of the model, from the first 6 seconds of signal. With this algorithm, we obtained less than 10% of mean absolute error and correlation coefficients greater than 85% in hypocentral distance and Magnitude estimation. With this results we consider that we can improve the method trying to have better accuracy with less signal time and that this can be a very useful model to be implemented directly in the seismological stations to generate a fast characterization of the event, broadcasting not only raw signal but pre-processed information that can be very useful for accurate Early Warning Generation.
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis primarily based on the equilibrium concept is well understood. Solutions of indeterminate problems required additional compatibility conditions, and its comprehension was not exclusive. A solution to indeterminate problem is generated by manipulating the equilibrium concept, either by rewriting in the displacement variables or through the cutting and closing gap technique of the redundant force method. Compatibility improvisation has made analysis cumbersome. The authors have researched and understood the compatibility theory. Solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM. Displacements are back-calculated from forces. IFM equations are manipulated to obtain the Dual Integrated Force Method (IFMD). Displacement is the primary variable of IFMD and force is back-calculated. The subject is introduced through response variables: force, deformation, displacement; and underlying concepts: equilibrium equation, force deformation relation, deformation displacement relation, and compatibility condition. Mechanical load, temperature variation, and support settling are equally emphasized. The basic theory is discussed. A set of examples illustrate the new concepts. IFM and IFMD based finite element methods are introduced for simple problems.
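For orientation, the governing relations of the IFM are commonly written as below, coupling the m equilibrium equations with the r compatibility conditions into a square system in the n = m + r force unknowns; the notation is the one usually used in the IFM literature and may differ in detail from this report.

```latex
% Integrated Force Method (common notation): n = m + r internal forces {F}
% m equilibrium equations:      [B]{F} = {P}
% r compatibility conditions:   [C][G]{F} = {\delta R}
% assembled square governing system:
\begin{equation}
  [S]\{F\} \;=\;
  \begin{bmatrix} B \\ C\,G \end{bmatrix}\{F\} \;=\;
  \begin{Bmatrix} P \\ \delta R \end{Bmatrix} \;=\; \{P^{*}\},
\end{equation}
% after which deformations follow from the force-deformation relation
% {beta} = [G]{F}, and displacements are back-calculated from the forces.
```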
Cutting Force Predication Based on Integration of Symmetric Fuzzy Number and Finite Element Method
Wang, Zhanli; Hu, Yanjuan; Wang, Yao; Dong, Chao; Pang, Zaixiang
2014-01-01
In the process of turning, the cutting force exhibits uncertainty caused by the disturbance of random factors. To determine the uncertainty interval of the cutting force, the symmetric fuzzy number and the finite element method (FEM) are integrated for cutting force prediction. The method uses symmetric fuzzy numbers to establish a fuzzy function between the cutting force and three factors, and obtains the uncertainty interval of the cutting force by linear programming. At the same time, the change of the cutting force with time is directly simulated by a thermal-mechanical coupling FEM; the nonuniform stress field and the temperature distribution of the workpiece, tool, and chip under thermal-mechanical coupling are also simulated. The experimental results show that the method is effective for predicting the uncertainty of the cutting force. PMID:24790556
Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.
Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang
2018-02-24
This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method is better than that obtained by the former methods.
Construction of Intelligent Massage System Based on Human Skin-Muscle Elasticity
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
Present massage chairs realize the massage motion and force designed by a professional masseur. However, with such a method the massage chair cannot provide a massage force appropriate to the individual user. On the other hand, a professional masseur can apply an appropriate massage force to more than one patient, because the masseur considers the physical condition of each patient. This paper proposes a method of applying the masseur's procedure to the massage chair. The proposed method is composed of estimation of the physical condition of the user, determination of the massage force based on that condition, and realization of the massage force by force control. The realizability of the proposed method is verified by experimental work using the massage chair.
On the Privacy Protection of Biometric Traits: Palmprint, Face, and Signature
NASA Astrophysics Data System (ADS)
Panigrahy, Saroj Kumar; Jena, Debasish; Korra, Sathya Babu; Jena, Sanjay Kumar
Biometrics are expected to add a new level of security to applications, as a person attempting access must prove who he or she really is by presenting a biometric to the system. The recent developments in the biometrics area have led to smaller, faster and cheaper systems, which in turn has increased the number of possible application areas for biometric identity verification. Biometric data, being derived from human bodies (and especially when used to identify or verify those bodies), is considered personally identifiable information (PII). The collection, use and disclosure of biometric data (image or template) invokes rights on the part of an individual and obligations on the part of an organization. As biometric uses and databases grow, so do concerns that the personal data collected will not be used in reasonable and accountable ways. Privacy concerns arise when biometric data are used for secondary purposes, invoking function creep, data matching, aggregation, surveillance and profiling. Biometric data transmitted across networks and stored in various databases by others can also be stolen, copied, or otherwise misused in ways that can materially affect the individual involved. As biometric systems are vulnerable to replay, database and brute-force attacks, such potential attacks must be analysed before these systems are massively deployed in security applications. Along with security, the privacy of users is also an important factor, as the line structures of palmprints contain personal characteristics, a person can be recognised from face images, and fake signatures can be practised by carefully studying the signature images available in the database. We propose a cryptographic approach that encrypts the images of palmprints, faces, and signatures with an advanced Hill cipher technique, hiding the information contained in the images. It also protects these images from the above-mentioned attacks. During feature extraction, the encrypted images are first decrypted, the features are then extracted, and these are used for identification or verification.
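A minimal classical Hill-cipher sketch over pixel blocks modulo 256 makes the encryption step concrete; the key matrix and block size are illustrative, and the paper's "advanced Hill cipher technique" is not reproduced here.

```python
import numpy as np

MOD = 256  # pixel values are bytes

def hill_encrypt(image, key):
    """Encrypt a grayscale image (H x W uint8) block-wise with a Hill cipher:
    each column block of len(key) pixels is multiplied by `key` mod 256."""
    n = key.shape[0]
    h, w = image.shape
    assert h % n == 0, "image height must be a multiple of the block size"
    blocks = image.reshape(h // n, n, w).astype(np.int64)
    return ((key @ blocks) % MOD).reshape(h, w).astype(np.uint8)

def hill_decrypt(cipher_img, key):
    """Invert the cipher using the modular inverse of the key matrix
    (requires gcd(det(key), 256) == 1, i.e. an odd determinant)."""
    det = int(round(np.linalg.det(key)))
    adj = np.round(det * np.linalg.inv(key)).astype(np.int64) % MOD  # adjugate
    det_inv = pow(det % MOD, -1, MOD)          # Python 3.8+ modular inverse
    key_inv = (det_inv * adj) % MOD
    return hill_encrypt(cipher_img, key_inv)

# 2x2 key with odd determinant, hence invertible mod 256
key = np.array([[3, 3],
                [2, 5]], dtype=np.int64)       # det = 9 -> invertible

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in palmprint
enc = hill_encrypt(img, key)
dec = hill_decrypt(enc, key)
print("round-trip exact:", np.array_equal(img, dec))
```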
Optical Linear Algebra for Computational Light Transport
NASA Astrophysics Data System (ADS)
O'Toole, Matthew
Active illumination refers to optical techniques that use controllable lights and cameras to analyze the way light propagates through the world. These techniques confer many unique imaging capabilities (e.g. high-precision 3D scanning, image-based relighting, imaging through scattering media), but at a significant cost; they often require long acquisition and processing times, rely on predictive models for light transport, and cease to function when exposed to bright ambient sunlight. We develop a mathematical framework for describing and analyzing such imaging techniques. This framework is deeply rooted in numerical linear algebra, and models the transfer of radiant energy through an unknown environment with the so-called light transport matrix. Performing active illumination on a scene equates to applying a numerical operator on this unknown matrix. The brute-force approach to active illumination follows a two-step procedure: (1) optically measure the light transport matrix and (2) evaluate the matrix operator numerically. This approach is infeasible in general, because the light transport matrix is often much too large to measure, store, and analyze directly. Using principles from optical linear algebra, we evaluate these matrix operators in the optical domain, without ever measuring the light transport matrix in the first place. Specifically, we explore numerical algorithms that can be implemented partially or fully with programmable optics. These optical algorithms provide solutions to many longstanding problems in computer vision and graphics, including the ability to (1) photo-realistically change the illumination conditions of a given photo with only a handful of measurements, (2) accurately capture the 3D shape of objects in the presence of complex transport properties and strong ambient illumination, and (3) overcome the multipath interference problem associated with time-of-flight cameras. Most importantly, we introduce an all-new imaging regime---optical probing---that provides unprecedented control over which light paths contribute to a photo.
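The brute-force baseline described above can be written in a few lines: measure the light transport matrix one illumination pixel at a time, after which relighting is a matrix-vector product. The scene here is a random sparse matrix standing in for a real capture; the optical-domain algorithms of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels_cam, n_pixels_proj = 1000, 256

# Unknown scene response: photo = T @ illumination (linearity of light transport).
T_true = (rng.random((n_pixels_cam, n_pixels_proj))
          * (rng.random((n_pixels_cam, n_pixels_proj)) < 0.05))  # sparse-ish transport

def capture(illumination):
    """Stand-in for photographing the scene under a projector pattern."""
    return T_true @ illumination

# Brute-force acquisition: one photo per projector pixel -> n_pixels_proj photos.
identity = np.eye(n_pixels_proj)
T_measured = np.column_stack([capture(identity[:, j])
                              for j in range(n_pixels_proj)])

# Image-based relighting is then just a matrix-vector product.
novel_lighting = rng.random(n_pixels_proj)
relit_photo = T_measured @ novel_lighting
print("relighting error:", np.abs(relit_photo - capture(novel_lighting)).max())
```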
Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT
Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster
2016-01-01
Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
A Modern Picture of Barred Galaxy Dynamics
NASA Astrophysics Data System (ADS)
Petersen, Michael; Weinberg, Martin; Katz, Neal
2018-01-01
Observations of disk galaxies suggest that bars are responsible for altering global galaxy parameters (e.g. structures, gas fraction, star formation rate). The canonical understanding of the mechanisms underpinning bar-driven secular dynamics in disk galaxies has been largely built upon the analysis of linear theory, despite galactic bars being clearly demonstrated to be nonlinear phenomena in n-body simulations. We present simulations of barred Milky Way-like galaxy models designed to elucidate nonlinear barred galaxy dynamics. We have developed two new methodologies for analyzing n-body simulations that give the best of both powerful analytic linear theory and brute force simulation analysis: orbit family identification and multicomponent torque analysis. The software will be offered publicly to the community for their own simulation analysis.The orbit classifier reveals that the details of kinematic components in galactic disks (e.g. the bar, bulge, thin disk, and thick disk components) are powerful discriminators of evolutionary paradigms (i.e. violent instabilities and secular evolution) as well as the basic parameters of the dark matter halo (mass distribution, angular momentum distribution). Multicomponent torque analysis provides a thorough accounting of the transfer of angular momentum between orbits, global patterns, and distinct components in order to better explain the underlying physics which govern the secular evolution of barred disk galaxies.Using these methodologies, we are able to identify the successes and failures of linear theory and traditional n-body simulations en route to a detailed understanding of the control bars exhibit over secular evolution in galaxies. We present explanations for observed physical and velocity structures in observations of barred galaxies alongside predictions for how structures will vary with dynamical properties from galaxy to galaxy as well as over the lifetime of a galaxy, finding that the transfer of angular momentum through previously unidentified channels can more fully explain the observed dynamics.
1962-10-26
In addition to the S-IC static test stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This construction photo, taken October 26, 1962, depicts a view of the Block House tunnel opening.
1962-08-17
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This construction photo, taken August 17, 1962, depicts a back side view of the Block House.
1963-10-22
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. This photo shows the progress of the S-IC test stand as of October 22, 1963. Spherical liquid hydrogen tanks can be seen to the left. Just to the lower front of those are the cylindrical liquid oxygen (LOX) tanks.
1962-11-15
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This construction photo, taken November 15, 1962, depicts a view of the Block House.
1962-10-08
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This construction photo, taken October 8, 1962, depicts a front view of the Block House nearing completion.
Bumping into the Butterfly, When I Was But a Bud
NASA Astrophysics Data System (ADS)
Hofstadter, Douglas
I will recount the main events that led me to discover the so-called "Hofstadter butterfly" when I was a physics student, over 40 years ago. A key moment in the tale was when, after years of futile struggle, I finally abandoned particle physics, and chose, with much trepidation, to try solid-state physics instead, a field of which I knew nothing at all. I was instinctively drawn to a long-standing classic unsolved problem in the field - What is the nature of the energy spectrum of Bloch electrons in a magnetic field? - when Professor Gregory Wannier told me that it involved a weird distinction between "rational" and "irrational" magnetic fields, which neither he nor anyone else understood. This mystery allured me, as I was sure that the rational/irrational distinction could not possibly play a role in physical phenomena. I tried manipulating equations for a long time but was unable to make any headway, and so, as a last resort, I wound up using brute-force calculation instead. I programmed a small desktop computer to give me numbers that I then plotted by hand on paper, and one fine day, to my shock, my eyes suddenly recognized a remarkable type of pattern that I had discovered twelve years earlier, when I was an undergraduate math major exploring number theory. All at once, I realized that the theoretical energy spectrum I'd plotted by hand consisted of infinitely many copies of itself, nested infinitely deeply, and it looked a little like a butterfly, whence its name. This unanticipated discovery eventually led to many new insights into the behavior of quantum systems featuring two competing periodicities. I will briefly describe some of the consequences I found back then of the infinitely nested spectrum, and in particular how the baffling rational/irrational distinction melted away, once the butterfly's nature had been deeply understood.
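For readers who want to reproduce the brute-force calculation described here, the sketch below diagonalizes the q x q Harper matrix for each rational flux p/q and scatter-plots the eigenvalues against the flux, which reveals the butterfly. The coarse sampling of the two Bloch phases is an illustrative shortcut, not the original desktop-computer program.

```python
import numpy as np
import matplotlib.pyplot as plt
from math import gcd

def harper_eigenvalues(p, q, k, nu):
    """Eigenvalues of the q x q Harper/Hofstadter matrix for flux p/q per
    plaquette, Bloch phase k along the chain and transverse phase nu."""
    n = np.arange(q)
    H = np.diag(2.0 * np.cos(2.0 * np.pi * p * n / q + nu)).astype(complex)
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    H[0, q - 1] += np.exp(-1j * q * k)   # periodic (Bloch) boundary terms
    H[q - 1, 0] += np.exp(+1j * q * k)
    return np.linalg.eigvalsh(H)

fluxes, energies = [], []
for q in range(2, 40):
    for p in range(1, q):
        if gcd(p, q) != 1:
            continue
        for k, nu in [(0.0, 0.0), (np.pi / q, np.pi / q)]:  # coarse band sampling
            E = harper_eigenvalues(p, q, k, nu)
            fluxes.extend([p / q] * q)
            energies.extend(E)

plt.plot(fluxes, energies, ',k')
plt.xlabel('magnetic flux per plaquette  p/q')
plt.ylabel('energy  E')
plt.show()
```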
1963-09-05
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. In the center portion of this photograph, taken September 5, 1963, the spherical hydrogen storage tanks are being constructed. One of the massive tower legs of the S-IC test stand is visible to the far right.
Bourgois, Philippe; Hart, Laurie Kain
2016-01-01
RÉSUMÉ/ABSTRACT Based on five years of fieldwork in the Puerto Rican ghetto of Philadelphia, this article explores the logics of violence and peace at work in this neighborhood located at the retail end of the global narcotics industry. While resorting to armed violence to eliminate rivals and defend their territory, local drug bosses must at the same time project the image of generous figures so that residents do not denounce them to the police, showing themselves to be responsible, ready to redistribute resources, to offer well-paid jobs, to discipline their employees, and to contain excesses of violence. Drug bosses are therefore compelled to convert their brute force into virtuous power in order to prosper. Together with neighbors and employees, they thus take part in a moral economy of patrimonial and clientelist relations within which they establish themselves as charismatic leaders. Revisiting the concept of "primitive accumulation," this article addresses: 1) the colonial relationship that pushes the ghettoized Puerto Rican diaspora into the economic niche of retail drug dealing; 2) the symbolic violence at work at every level of this trade; 3) the artificially high profits generated by this state-criminalized industry; and 4) the opportunistic proliferation of specialized sectors of the legal economy and the public bureaucracy charged with managing the collateral effects of state coercion and violence. PMID:28090135
Jaeger, Carsten; Méret, Michaël; Schmitt, Clemens A; Lisec, Jan
2017-08-15
A bottleneck in metabolic profiling of complex biological extracts is confident, non-supervised annotation of ideally all contained, chemically highly diverse small molecules. Recent computational strategies combining sum formula prediction with in silico fragmentation achieve confident de novo annotation, once the correct neutral mass of a compound is known. Current software solutions for automated adduct ion assignment, however, are either publicly unavailable or have been validated against only few experimental electrospray ionization (ESI) mass spectra. We here present findMAIN (find Main Adduct IoN), a new heuristic approach for interpreting ESI mass spectra. findMAIN scores MS1 spectra based on explained intensity, mass accuracy and isotope charge agreement of adducts and related ionization products and annotates peaks of the (de)protonated molecule and adduct ions. The approach was validated against 1141 ESI positive mode spectra of chemically diverse standard compounds acquired on different high-resolution mass spectrometric instruments (Orbitrap and time-of-flight). Robustness against impure spectra was evaluated. Correct adduct ion assignment was achieved for up to 83% of the spectra. Performance was independent of compound class and mass spectrometric platform. The algorithm proved highly tolerant against spectral contamination as demonstrated exemplarily for co-eluting compounds as well as systematically by pairwise mixing of spectra. When used in conjunction with MS-FINDER, a state-of-the-art sum formula tool, correct sum formulas were obtained for 77% of spectra. It outperformed both 'brute force' approaches and current state-of-the-art annotation packages tested as potential alternatives. Limitations of the heuristic pertained to poorly ionizing compounds and cationic compounds forming [M]+ ions. A new, validated approach for interpreting ESI mass spectra is presented, filling a gap in the nontargeted metabolomics workflow. It is freely available in the latest version of R package InterpretMSSpectrum. Copyright © 2017 John Wiley & Sons, Ltd.
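A minimal sketch of the kind of adduct-hypothesis scoring such a heuristic performs (not findMAIN's actual scoring function, which also weighs mass accuracy and isotope/charge agreement): each peak is tried as the protonated molecule, and the hypothesis that explains the largest fraction of total ion intensity through common positive-mode adducts wins. The adduct mass shifts and ppm tolerance are standard values, not taken from the paper.

```python
import numpy as np

# Common positive-mode adducts: name -> mass shift added to the neutral mass M
ADDUCTS = {'[M+H]+': 1.007276, '[M+NH4]+': 18.033823,
           '[M+Na]+': 22.989218, '[M+K]+': 38.963158}

def score_neutral_mass(M, peaks, ppm_tol=10.0):
    """Fraction of total ion intensity explained by adducts of a candidate
    neutral mass M.  peaks is an array of (mz, intensity) rows."""
    mz, inten = peaks[:, 0], peaks[:, 1]
    explained = np.zeros(len(peaks), dtype=bool)
    for shift in ADDUCTS.values():
        expected = M + shift
        explained |= np.abs(mz - expected) <= expected * ppm_tol * 1e-6
    return inten[explained].sum() / inten.sum()

def best_neutral_mass(peaks):
    """Try each peak as [M+H]+ and keep the hypothesis explaining the most
    intensity (a crude stand-in for the published score)."""
    candidates = [(score_neutral_mass(mz - 1.007276, peaks), mz - 1.007276)
                  for mz in peaks[:, 0]]
    return max(candidates)
```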
1962-01-23
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This photo, taken January 23, 1962, shows the excavation of the Block House site.
1962-04-16
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. After a six-month delay in construction due to size reconfiguration of the Saturn booster, the site was revisited for modifications. The original foundation walls built in the prior year had to be torn down and re-poured to accommodate the larger booster. The demolition can be seen in this photograph, taken on April 16, 1962.
1962-06-13
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. After a six-month delay in construction due to size reconfiguration of the Saturn booster, the site was revisited for modifications in March 1962. The original foundation walls built in the prior year were torn down and re-poured to accommodate the larger boosters. This photo depicts that modification progress as of June 13, 1962.
1962-06-13
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. Construction of the tunnel is depicted in this photo, taken June 13, 1962.
1962-02-02
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In addition to the stand itself, related facilities were constructed during this time. Built directly east of the test stand was the Block House, which served as the control center for the test stand. The two were connected by a narrow access tunnel which housed the cables for the controls. This photo, taken February 2, 1962, shows the excavation of the Block House site.
1962-05-21
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. After a six-month delay in construction due to size reconfiguration of the Saturn booster, the site was revisited for modifications. The original foundation walls built in the prior year had to be torn down and re-poured to accommodate the larger booster. The demolition can be seen in this photograph, taken on May 21, 1962.
The traveling salesman problem in surgery: economy of motion for the FLS Peg Transfer task.
Falcone, John L; Chen, Xiaotian; Hamad, Giselle G
2013-05-01
In the Peg Transfer task in the Fundamentals of Laparoscopic Surgery (FLS) curriculum, six peg objects are sequentially transferred in a bimanual fashion using laparoscopic instruments across a pegboard and back. There are over 268 trillion ways of completing this task. In the setting of many possibilities, the traveling salesman problem is one where the objective is to solve for the shortest distance traveled through a fixed number of points. The goal of this study is to apply the traveling salesman problem to find the shortest two-dimensional path length for this task. A database platform was used with permutation application output to generate all of the single-direction solutions of the FLS Peg Transfer task. A brute-force search was performed using nested Boolean operators and database equations to calculate the overall two-dimensional distances for the efficient and inefficient solutions. The solutions were found by evaluating peg object transfer distances and distances between transfers for the nondominant and dominant hands. For the 518,400 unique single-direction permutations, the mean total two-dimensional peg object travel distance was 33.3 ± 1.4 cm. The range in distances was from 30.3 to 36.5 cm. There were 1,440 (0.28 %) of 518,400 efficient solutions with the minimized peg object travel distance of 30.3 cm. There were 8 (0.0015 %) of 518,400 solutions in the final solution set that minimized the distance of peg object transfer and minimized the distance traveled between peg transfers. Peg objects moved 12.7 cm (17.4 %) less in the efficient solutions compared to the inefficient solutions. The traveling salesman problem can be applied to find efficient solutions for surgical tasks. The eight solutions to the FLS Peg Transfer task are important for any examinee taking the FLS curriculum and for certification by the American Board of Surgery.
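A brute-force search of this kind is easy to reproduce. The sketch below enumerates every pick-up order and target assignment for six pegs (6! x 6! = 518,400 single-direction permutations, consistent with the count reported above) and keeps the shortest two-dimensional path; the peg coordinates are hypothetical placeholders, not the real FLS board layout or the authors' database implementation.

```python
import itertools, math

# Hypothetical 2-D peg coordinates (cm); the real FLS board layout differs.
START = [(0, 0), (0, 2), (0, 4), (2, 0), (2, 2), (2, 4)]
TARGET = [(6, 0), (6, 2), (6, 4), (8, 0), (8, 2), (8, 4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

best = None
for pick_order in itertools.permutations(range(6)):        # order pegs are picked up
    for drop_assign in itertools.permutations(range(6)):   # which target each peg goes to
        d, prev_drop = 0.0, None
        for i, s in enumerate(pick_order):
            if prev_drop is not None:                      # move from last drop to next pick
                d += dist(TARGET[prev_drop], START[s])
            d += dist(START[s], TARGET[drop_assign[i]])    # carry the peg object
            prev_drop = drop_assign[i]
        if best is None or d < best[0]:
            best = (d, pick_order, drop_assign)

print('shortest path (cm): %.1f' % best[0], best[1], best[2])
```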
1961-06-01
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In this photo, taken July 13, 1961, progress is being made with the excavation of the S-IC test stand site. During the digging, a natural spring was disturbed, which caused a constant flooding problem. Pumps were used to remove the water throughout the construction process, and the site is still pumped today.
1963-03-29
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In the early stages of excavation, a natural spring was disturbed, causing a water problem that required constant pumping from the site; the site is still pumped to this day. Behind this reservoir of pumped water is the S-IC test stand with its four ever-growing towers, as of March 29, 1963.
1961-08-05
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. In this photograph, taken on August 5, 1961, a backhoe is nearly submerged in water at the test stand site. During the initial digging, the disturbance of a natural spring contributed to constant water problems during the construction process. It was necessary to pump the water from the site on a daily basis, and the site is still pumped today.
1961-08-14
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. This photo shows the construction progress of the test stand as of August 14, 1961. Water gushing in from the disturbance of a natural spring contributed to constant water problems during the construction process. It was necessary to pump water from the site on a daily basis, and the site is still pumped today. The equipment is partially submerged in the water emerging from the spring.
1961-09-05
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. This photo, taken September 5, 1961, shows pumps used for extracting water emerging from a natural spring disturbed during the excavation of the site. The pumping became a daily ritual, and the site is still pumped today.
1961-09-05
At its founding, the Marshall Space Flight Center (MSFC) inherited the Army’s Jupiter and Redstone test stands, but much larger facilities were needed for the giant stages of the Saturn V. From 1960 to 1964, the existing stands were remodeled and a sizable new test area was developed. The new comprehensive test complex for propulsion and structural dynamics was unique within the nation and the free world, and it remains so today because it was constructed with foresight to meet future as well as ongoing needs. Construction of the S-IC Static test stand complex began in 1961 in the west test area of MSFC, and was completed in 1964. The S-IC static test stand was designed to develop and test the 138-ft long and 33-ft diameter Saturn V S-IC first stage, or booster stage, weighing in at 280,000 pounds. Required to hold down the brute force of a 7,500,000-pound thrust produced by 5 F-1 engines, the S-IC static test stand was designed and constructed with the strength of hundreds of tons of steel and 12,000,000 pounds of cement, planted down to bedrock 40 feet below ground level. The foundation walls, constructed with concrete and steel, are 4 feet thick. The base structure consists of four towers with 40-foot-thick walls extending upward 144 feet above ground level. The structure was topped by a crane with a 135-foot boom. With the boom in the upright position, the stand was given an overall height of 405 feet, placing it among the tallest structures in Alabama at the time. This photo, taken September 5, 1961, shows the construction of forms that became the concrete foundation for the massive stand. The lower right-hand corner reveals a pump used for extracting water emerging from a natural spring disturbed during excavation of the site. The pumping became a daily ritual, and the site is still pumped today.
Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment
NASA Astrophysics Data System (ADS)
Bhagawati, Dwipen
2001-07-01
Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for the management of spatial data inventory. VR technology can be suitably used for the management of such inventory. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of a roadside feature inventory involves positioning and visualization of the features. This research developed a methodology to demonstrate how photogrammetric principles can be used to position the features using the video-logging images and GPS camera positioning, and how image analysis can help produce appropriate textures for building the VR scene, which can then be visualized in a Cave Augmented Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and digital image processing techniques were used for texturing the roadway features in this scene.
Force analysis of magnetic bearings with power-saving controls
NASA Technical Reports Server (NTRS)
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1992-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. For most operating conditions, the existence of the bias current requires more power than alternative methods that do not use conventional bias. Two such methods are examined which diminish or eliminate bias current. In the typical bias control scheme it is found that for a harmonic control force command into a voltage limited transconductance amplifier, the desired force output is obtained only up to certain combinations of force amplitude and frequency. Above these values, the force amplitude is reduced and a phase lag occurs. The power saving alternative control schemes typically exhibit such deficiencies at even lower command frequencies and amplitudes. To assess the severity of these effects, a time history analysis of the force output is performed for the bias method and the alternative methods. Results of the analysis show that the alternative approaches may be viable. The various control methods examined were mathematically modeled using nondimensionalized variables to facilitate comparison of the various methods.
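For context, the linearization this abstract refers to can be written in a common textbook form for an opposed electromagnet pair with bias current $i_b$, control current $i_c$, nominal gap $g$, rotor displacement $x$ and actuator constant $k$ (illustrative notation, not necessarily the paper's):

```latex
F \;=\; k\!\left[\frac{(i_b+i_c)^2}{(g-x)^2}-\frac{(i_b-i_c)^2}{(g+x)^2}\right],
\qquad
F\big|_{x=0} \;=\; \frac{4\,k\,i_b}{g^{2}}\; i_c .
```

With the bias removed, the force scales roughly with $i_c^{2}$ and loses this sign-preserving linearity, which is one way to see why the power-saving schemes give up force authority at lower command amplitudes and frequencies.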
Features calibration of the dynamic force transducers
NASA Astrophysics Data System (ADS)
Prilepko, M. Yu., D. Sc.; Lysenko, V. G.
2018-04-01
The article discusses calibration methods for dynamic force measuring instruments. The relevance of the work stems from the need to validly determine the metrological characteristics of dynamic force transducers with their intended application taken into account. The aim of this work is to justify the choice of a calibration method that defines the metrological characteristics of dynamic force transducers under simulated operating conditions, in order to determine their suitability for use in accordance with their purpose. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are constructed, and the main components of the calibration uncertainty budget are defined. A new method for calibrating dynamic force transducers is proposed that uses a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometer measurements of the calibrated elastic element's deformations makes it possible to exclude or considerably reduce the uncertainty budget components inherent in the load-weight method.
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
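As a rough illustration of what a force-decomposition method distributes (not the distributed-diagonal algorithm itself, which uses a smarter block mapping and dynamic load balancing, as described above), the sketch below splits the pair list of an N-body force evaluation across processors and lets each one accumulate partial forces that a later reduction would sum; the pair potential is a stand-in, not a real force field.

```python
import numpy as np
from itertools import combinations

def pair_assignment(n_atoms, n_proc):
    """Toy force-decomposition bookkeeping: split the upper triangle of the
    pairwise interaction matrix into n_proc roughly equal lists of (i, j) pairs."""
    pairs = list(combinations(range(n_atoms), 2))
    return [pairs[k::n_proc] for k in range(n_proc)]

def partial_forces(positions, pairs):
    """Each processor evaluates only its own pair list (unit 1/r^2 repulsion
    as a placeholder interaction)."""
    f = np.zeros_like(positions)
    for i, j in pairs:
        d = positions[i] - positions[j]
        r2 = d @ d
        fij = d / r2**1.5
        f[i] += fij
        f[j] -= fij
    return f   # partial forces; summing over processors gives the totals
```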
Method of Calibrating a Force Balance
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)
2015-01-01
A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
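A minimal sketch of the calibration idea, under the assumption that the balance output is linear in the applied force: the expected force m*a at each acceleration level is regressed against the raw readings to obtain a sensitivity and offset. Variable names are illustrative, not taken from the patent.

```python
import numpy as np

def calibrate(readings, mass, accelerations):
    """Fit reading = s * F_expected + b, where F_expected = mass * a for each
    acceleration level; returns the calibration (sensitivity s, offset b)."""
    F_expected = mass * np.asarray(accelerations)
    s, b = np.polyfit(F_expected, np.asarray(readings), 1)
    return s, b

def to_force(reading, s, b):
    """Convert a raw balance reading to force using the fitted calibration."""
    return (reading - b) / s
```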
Dynamic Loads Generation for Multi-Point Vibration Excitation Problems
NASA Technical Reports Server (NTRS)
Shen, Lawrence
2011-01-01
A random-force method has been developed to predict dynamic loads produced by rocket-engine random vibrations for new rocket-engine designs. The method develops random forces at multiple excitation points based on random vibration environments scaled from accelerometer data obtained during hot-fire tests of existing rocket engines. This random-force method applies random forces to the model and creates expected dynamic response in a manner that simulates the way the operating engine applies self-generated random vibration forces (random pressure acting on an area) with the resulting responses that we measure with accelerometers. This innovation includes the methodology (implementation sequence), the computer code, two methods to generate the random-force vibration spectra, and two methods to reduce some of the inherent conservatism in the dynamic loads. This methodology would be implemented to generate the random-force spectra at excitation nodes without requiring the use of artificial boundary conditions in a finite element model. More accurate random dynamic loads than those predicted by current industry methods can then be generated using the random force spectra. The scaling method used to develop the initial power spectral density (PSD) environments for deriving the random forces for the rocket engine case is based on the Barrett Criteria developed at Marshall Space Flight Center in 1963. This invention approach can be applied in the aerospace, automotive, and other industries to obtain reliable dynamic loads and responses from a finite element model for any structure subject to multipoint random vibration excitations.
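The sketch below shows one common way to synthesize a random force time history whose one-sided PSD matches a target spectrum (a sum of sinusoids with random phases). It illustrates the general idea only; it is not the NASA methodology itself, which scales hot-fire accelerometer PSDs per the Barrett criteria and applies the resulting forces at multiple excitation nodes of a finite element model.

```python
import numpy as np

def random_force_from_psd(freqs, psd, fs, n, rng=np.random.default_rng()):
    """Synthesize a zero-mean force history whose one-sided PSD approximates
    `psd` [N^2/Hz] defined at `freqs` [Hz]; fs = sample rate, n = samples.
    (Dense matrix synthesis: fine for modest n, wasteful for very long records.)"""
    df = fs / n
    fk = np.arange(1, n // 2) * df                   # synthesis frequencies
    Sk = np.interp(fk, freqs, psd, left=0.0, right=0.0)
    amp = np.sqrt(2.0 * Sk * df)                     # sinusoid amplitude per bin
    phase = rng.uniform(0.0, 2.0 * np.pi, fk.size)   # independent random phases
    t = np.arange(n) / fs
    return np.sum(amp[:, None] * np.cos(2 * np.pi * fk[:, None] * t + phase[:, None]),
                  axis=0)
```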
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
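As an illustration of the regularization idea (not the authors' implementation, which uses FISTA and BIC-based parameter selection), the sketch below builds a redundant dictionary of slowly-varying trigonometric atoms and short rectangular pulses and fits a measured response with a weighted l1 penalty using plain ISTA; atom counts and the penalty weight are arbitrary placeholders.

```python
import numpy as np

def build_dictionary(n, n_harmonics=20, n_pulses=40):
    """Concatenated dictionary: cosines/sines for the slowly-varying part plus
    short rectangular pulses for the impact-like part of the force."""
    t = np.linspace(0.0, 1.0, n)
    atoms = [np.cos(2 * np.pi * k * t) for k in range(1, n_harmonics + 1)]
    atoms += [np.sin(2 * np.pi * k * t) for k in range(1, n_harmonics + 1)]
    width = n // n_pulses
    for i in range(n_pulses):
        a = np.zeros(n)
        a[i * width:(i + 1) * width] = 1.0
        atoms.append(a)
    D = np.column_stack(atoms)
    return D / np.linalg.norm(D, axis=0)

def weighted_l1_fit(y, D, lam=0.1, weights=None, n_iter=500):
    """Plain ISTA for min_x 0.5*||y - D x||^2 + lam*||w*x||_1
    (FISTA adds a momentum step on top of this iteration)."""
    w = np.ones(D.shape[1]) if weights is None else weights
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x + D.T @ (y - D @ x) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)  # soft threshold
    return x                                       # reconstructed force is D @ x
```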
Method for lateral force calibration in atomic force microscope using MEMS microforce sensor.
Dziekoński, Cezary; Dera, Wojciech; Jarząbek, Dariusz M
2017-11-01
In this paper we present a simple and direct method for determining the lateral force calibration constant. Our procedure does not require any knowledge of the material or geometrical parameters of the investigated cantilever. We apply a commercially available microforce sensor with advanced electronics to directly measure the friction force applied by the cantilever's tip to a flat surface of the microforce sensor's measuring beam. By Newton's third law, a friction force of equal magnitude tilts the AFM cantilever. Therefore, the torsional (lateral force) signal is compared with the signal from the microforce sensor and the lateral force calibration constant is determined. The method is easy to perform and could be widely used for determining the lateral force calibration constant in many types of atomic force microscopes. Copyright © 2017 Elsevier B.V. All rights reserved.
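The data reduction implied by this procedure is essentially a one-line fit: the reference force measured by the MEMS sensor is regressed against the AFM's torsional signal, and the slope is the lateral force calibration constant. A minimal sketch, with hypothetical units:

```python
import numpy as np

def lateral_calibration_constant(lateral_signal_V, reference_force_nN):
    """Least-squares slope of reference force vs. torsional (lateral) signal;
    the slope is the calibration constant alpha [nN/V], so that
    F_lateral = alpha * V_lateral after subtracting the offset."""
    alpha, offset = np.polyfit(lateral_signal_V, reference_force_nN, 1)
    return alpha, offset
```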
Remotely adjustable fishing jar and method for using same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyatt, W.B.
1992-10-20
This patent describes a method for providing a jarring force to dislodge objects stuck in well bores, the method comprising: connecting a jarring tool between an operating string and an object in a well bore; selecting a jarring force to be applied to the object; setting the selected reference jarring force into a mechanical memory mechanism by progressively engaging a first latch body and a second latch body; retaining the reference jarring force in the mechanical memory mechanism during diminution of tensional force applied by the operating string; and initiating an upwardly directed impact force within the jarring tool by increasing tensional force on the operating string to a value greater than the tensional force corresponding with the selected jarring force. This patent also describes a remotely adjustable downhole fishing jar apparatus comprising: an operating mandrel; an impact release spring; a mechanical memory mechanism; and releasable latching means.
Human grasp assist device and method of use
NASA Technical Reports Server (NTRS)
Linn, Douglas Martin (Inventor); Ihrke, Chris A. (Inventor); Diftler, Myron A. (Inventor)
2012-01-01
A grasp assist device includes a glove portion having phalange rings, contact sensors for measuring a grasping force applied by an operator wearing the glove portion, and a tendon drive system (TDS). The device has flexible tendons connected to the phalange rings for moving the rings in response to feedback signals from the sensors. The TDS is connected to each of the tendons, and applies an augmenting tensile force thereto via a microcontroller adapted for determining the augmenting tensile force as a function of the grasping force. A method of augmenting a grasping force of an operator includes measuring the grasping force using the sensors, encoding the grasping force as the feedback signals, and calculating the augmenting tensile force as a function of the feedback signals using the microcontroller. The method includes energizing at least one actuator of a tendon drive system (TDS) to thereby apply the augmenting tensile force.
Influences of rolling method on deformation force in cold roll-beating forming process
NASA Astrophysics Data System (ADS)
Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan
2018-03-01
The gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation forces under different rolling methods was analysed. The variation of the deformation force for the completely formed racks and for a single roll during the steady state under different rolling modes was also analyzed. The results show that, for up-beating and down-beating, the radial single-point average forces are similar, while the gap between the tangential single-point average forces is relatively large. Additionally, the tangential force during up-beating is large, and its direction is opposite to that in down-beating. With up-beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads quickly.
The added mass forces in insect flapping wings.
Liu, Longgui; Sun, Mao
2018-01-21
The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often-used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both the 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as follows. The added mass force makes a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the smaller the stroke amplitude, the larger the added mass force becomes. Thus the added mass force cannot be neglected when using simple models to estimate the aerodynamic force, especially for insects with relatively small stroke amplitudes. The accuracy of the often-used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, the error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%. Copyright © 2017 Elsevier Ltd. All rights reserved.
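For reference, the 2D (strip-theory) estimate evaluated here is usually built from the flat-plate added mass per unit span, integrated along the wing; in common notation (not necessarily the paper's),

```latex
m_a(r) \;=\; \frac{\pi}{4}\,\rho\, c(r)^{2},
\qquad
F_{a}(t)\;\approx\;\int_{0}^{R} m_a(r)\,\dot v_{n}(r,t)\,\mathrm{d}r ,
```

where $\rho$ is the air density, $c(r)$ the local chord, $R$ the wing length, and $v_n(r,t)$ the velocity component normal to the strip at spanwise station $r$; the paper's 3D calculation quantifies how much this strip-wise estimate is in error for realistic aspect ratios.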
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Berke, L.; Gallagher, R. H.
1991-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EEs) are integrated with the global compatibility conditions (CCs) to form the governing set of equations. In IFM the CCs are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
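In the notation typically used in the IFM literature, the m equilibrium equations and the r = n - m compatibility conditions are stacked into a single square system for the n independent internal forces {F} (a schematic form, not a derivation from this report):

```latex
\begin{bmatrix} B \\ C\,G \end{bmatrix}\{F\}
\;=\;
\begin{Bmatrix} P \\ \delta R \end{Bmatrix}
\quad\Longleftrightarrow\quad
[S]\{F\}=\{P^{*}\},
```

where [B] is the equilibrium matrix, [C] the compatibility matrix, [G] the block-diagonal element flexibility matrix, {P} the applied loads, and {δR} the effective initial deformations; nodal displacements are recovered from the forces afterwards.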
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.
1990-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
Methodes de calcul des forces aerodynamiques pour les etudes des interactions aeroservoelastiques
NASA Astrophysics Data System (ADS)
Biskri, Djallel Eddine
Aeroservoelasticity is a field in which an aircraft's flexible structure, aerodynamics and flight control interact. Flight control, for its part, treats the aircraft as a rigid structure and studies the influence of the control system on the flight dynamics. In this thesis, we implemented three new methods for approximating the aerodynamic forces: Corrected Least Squares, Corrected Minimum State, and Combined States. In the first two methods, the approximation errors between the aerodynamic forces approximated by the classical methods and those obtained by the new methods have the same analytical forms as the aerodynamic forces computed by LS or MS. The third method combines the formulations of the approximated forces with the standard LS and MS methods. The flutter speeds and frequencies and the execution times computed by the new methods were analysed against those computed by the classical methods.
What is the best method for assessing lower limb force-velocity relationship?
Giroux, C; Rabita, G; Chollet, D; Guilhem, G
2015-02-01
This study determined the concurrent validity and reliability of force, velocity and power measurements provided by accelerometry, a linear position transducer and Samozino's method during loaded squat jumps. 17 subjects performed squat jumps on 2 separate occasions in 7 loading conditions (0-60% of the maximal concentric load). Force, velocity and power patterns were averaged over the push-off phase using accelerometry, the linear position transducer and a method based on key position measurements during the squat jump, and compared to force plate measurements. Concurrent validity analyses indicated very good agreement with the reference method (CV=6.4-14.5%). Comparison of the force, velocity and power patterns confirmed the agreement, with slight differences for high-velocity movements. The validity of measurements was equivalent for all tested methods (r=0.87-0.98). Bland-Altman plots showed a lower agreement for velocity and power compared to force. Mean force, velocity and power were reliable for all methods (ICC=0.84-0.99), especially for Samozino's method (CV=2.7-8.6%). Our findings showed that the present methods are valid and reliable in different loading conditions and permit between-session comparisons and characterization of training-induced effects. While the linear position transducer and accelerometer allow for examining the whole time-course of kinetic patterns, Samozino's method benefits from better reliability and ease of processing. © Georg Thieme Verlag KG Stuttgart · New York.
Research on the comparison of performance-based concept and force-based concept
NASA Astrophysics Data System (ADS)
Wu, Zeyu; Wang, Dongwei
2011-03-01
There are two design philosophies for structures: the force-based concept and the performance-based concept. Generally, if the structure remains in the elastic stage, the two philosophies yield essentially the same results. Beyond that stage, however, the shortcomings of the force-based method are exposed and the merits of the performance-based method become apparent. The pros and cons of each strategy are listed herein, and the types of structures best suited to each method are analysed. Finally, a real structure is evaluated by the adaptive pushover method to verify that the performance-based method outperforms the force-based method.
Determination of thermodynamics and kinetics of RNA reactions by force
Tinoco, Ignacio; Li, Pan T. X.; Bustamante, Carlos
2008-01-01
Single-molecule methods have made it possible to apply force to an individual RNA molecule. Two beads are attached to the RNA; one is on a micropipette, the other is in a laser trap. The force on the RNA and the distance between the beads are measured. Force can change the equilibrium and the rate of any reaction in which the product has a different extension from the reactant. This review describes use of laser tweezers to measure thermodynamics and kinetics of unfolding/refolding RNA. For a reversible reaction the work directly provides the free energy; for irreversible reactions the free energy is obtained from the distribution of work values. The rate constants for the folding and unfolding reactions can be measured by several methods. The effect of pulling rate on the distribution of force-unfolding values leads to rate constants for unfolding. Hopping of the RNA between folded and unfolded states at constant force provides both unfolding and folding rates. Force-jumps and force-drops, similar to the temperature jump method, provide direct measurement of reaction rates over a wide range of forces. The advantages of applying force and using single-molecule methods are discussed. These methods, for example, allow reactions to be studied in non-denaturing solvents at physiological temperatures; they also simplify analysis of kinetic mechanisms because only one intermediate at a time is present. Unfolding of RNA in biological cells by helicases, or ribosomes, has similarities to unfolding by force. PMID:17040613
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small-scale and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
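As an illustration of the sparse-deconvolution idea only (not the PDIPM solver used in the paper), the l1-regularized least-squares problem min_F (1/2)||H F - y||_2^2 + lam*||F||_1 can be solved with a few lines of iterative shrinkage-thresholding (ISTA); the transfer matrix H, response y and regularization weight below are placeholders.

import numpy as np

def ista_deconvolution(H, y, lam=1e-2, n_iter=500):
    """Sketch of l1-regularized impact-force deconvolution via ISTA.

    H   : (m, n) transfer/convolution matrix (assumed known)
    y   : (m,)   measured response
    lam : l1 regularization weight (placeholder value)
    """
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the smooth part
    f = np.zeros(H.shape[1])               # reconstructed force history
    for _ in range(n_iter):
        grad = H.T @ (H @ f - y)           # gradient of (1/2)||Hf - y||^2
        z = f - grad / L                   # gradient step
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

ISTA is shown only because it captures the same l1 objective compactly; a primal-dual interior point solver, as in the paper, scales better for large problems.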
NASA Astrophysics Data System (ADS)
Abdel-Jaber, H.; Glisic, B.
2014-07-01
Structural health monitoring (SHM) consists of the continuous or periodic measurement of structural parameters and their analysis with the aim of deducing information about the performance and health condition of a structure. The significant increase in the construction of prestressed concrete bridges motivated this research on an SHM method for the on-site determination of the distribution of prestressing forces along prestressed concrete beam structures. The estimation of the distribution of forces is important as it can give information regarding the overall performance and structural integrity of the bridge. An inadequate transfer of the designed prestressing forces to the concrete cross-section can lead to a reduced capacity of the bridge and consequently malfunction or failure at lower loads than predicted by design. This paper researches a universal method for the determination of the distribution of prestressing forces along concrete beam structures at the time of transfer of the prestressing force (e.g., at the time of prestressing or post-tensioning). The method is based on the use of long-gauge fiber optic sensors, and the sensor network is similar (practically identical) to the one used for damage identification. The method encompasses the determination of prestressing forces at both healthy and cracked cross-sections, and for the latter it can yield information about the condition of the cracks. The method is validated on-site by comparison to design forces through the application to two structures: (1) a deck-stiffened arch and (2) a curved continuous girder. The uncertainty in the determination of prestressing forces was calculated and the comparison with the design forces has shown very good agreement in most of the structures’ cross-sections, but also helped identify some unusual behaviors. The method and its validation are presented in this paper.
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than that of a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object, where the shape of the object is changed by an external force. The force applied to the target can be estimated by means of the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
Opitz, Donald L
2013-03-01
The founding of Britain's first horticultural college in 1889 advanced a scientific and coeducational response to three troubling national concerns: a major agricultural depression; the economic distress of single, unemployed women; and imperatives to develop the colonies. Buoyed by the technical instruction and women's movements, the Horticultural College and Produce Company, Limited, at Swanley, Kent, crystallized a transformation in the horticultural profession in which new science-based, formalized study threatened an earlier emphasis on practical apprenticeship training, with the effect of opening male-dominated trades to women practitioners. By 1903, the college closed its doors to male students, and new pathways were forged for women students interested in pursuing further scientific study. Resistance to the Horticultural College's model of science-based women's horticultural education positioned science and women as contested subjects throughout this period of horticulture's expansion in the academy.
Measurement of Vehicle-Bridge-Interaction force using dynamic tire pressure monitoring
NASA Astrophysics Data System (ADS)
Chen, Zhao; Xie, Zhipeng; Zhang, Jian
2018-05-01
The Vehicle-Bridge-Interaction (VBI) force, i.e., the normal contact force of a tire, is a key component in the VBI mechanism. The VBI force measurement can facilitate experimental studies of the VBI as well as input-output bridge structural identification. This paper introduces an innovative method for calculating the interaction force by using dynamic tire pressure monitoring. The core idea of the proposed method combines the ideal gas law and a basic force model to build a relationship between the tire pressure and the VBI force. Then, unknown model parameters are identified by the Extended Kalman Filter using calibration data. A wavelet-based signal filter is applied to remove the effect of tire rotation from the pressure data. Two laboratory tests were conducted to check the proposed method's validity. The effects of different road irregularities, loads and forward velocities were studied. Under the current experiment setting, the proposed method was robust to different road irregularities, and the increase in load and velocity benefited the performance of the proposed method. A high-speed test further supported the use of this method in rapid bridge tests. Limitations of the derived theories and experiment were also discussed.
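A minimal sketch of how an ideal-gas argument can be linked to contact force; the paper's actual force model and calibration are not reproduced here, and $k_V$, $k_F$ are hypothetical constants of the kind the Extended Kalman Filter would identify. Assuming an isothermal tire whose gas volume shrinks roughly linearly with the vertical deflection $\delta$, and a linear tire stiffness:
\[
pV = nRT \;\Rightarrow\; p(\delta)\approx\frac{p_0 V_0}{V_0-k_V\,\delta},
\qquad F\approx k_F\,\delta
\;\;\Rightarrow\;\;
F\approx\frac{k_F V_0}{k_V}\Bigl(1-\frac{p_0}{p}\Bigr),
\]
so a measured pressure rise above the static value $p_0$ maps monotonically to the contact force once the lumped constants are calibrated.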
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests performed to validate the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
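For orientation, the generic single-relaxation-time LB update with a source term can be sketched as follows (schematic only; the paper's specific axisymmetric source terms $S_i$ are not reproduced here):
\[
f_i(\mathbf{x}+\mathbf{e}_i\delta t,\;t+\delta t)-f_i(\mathbf{x},t)
= -\frac{1}{\tau}\bigl[f_i(\mathbf{x},t)-f_i^{\mathrm{eq}}(\mathbf{x},t)\bigr]+\delta t\,S_i(\mathbf{x},t)
\quad\text{(direct forcing)},
\]
whereas the trapezium-rule based scheme replaces $\delta t\,S_i(\mathbf{x},t)$ by the average of the source at the two ends of the streaming step, $\tfrac{\delta t}{2}\bigl[S_i(\mathbf{x},t)+S_i(\mathbf{x}+\mathbf{e}_i\delta t,\,t+\delta t)\bigr]$, typically made explicit through a variable change such as $\bar{f}_i=f_i-\tfrac{\delta t}{2}S_i$, which is how the discrete lattice effects mentioned above are avoided.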
A method of assigning socio-economic status classification to British Armed Forces personnel.
Yoong, S Y; Miles, D; McKinney, P A; Smith, I J; Spencer, N J
1999-10-01
The objective of this paper was to develop and evaluate a socio-economic status classification method for British Armed Forces personnel. Two study groups comprising civilian and Armed Forces families were identified from live births delivered between 1 January and 30 June 1996 within the Northallerton Health District, which includes Catterick Garrison and RAF Leeming. The participants were the parents of babies delivered at a District General Hospital, comprising 436 civilian and 162 Armed Forces families. A new classification method was successfully used to assign the Registrar General's social classification to Armed Forces personnel. Comparison of the two study groups showed a significant difference in social class distribution (p = 0.0001). This study has devised a new method for classifying occupations within the Armed Forces into categories of social class, thus permitting comparison with the Registrar General's classification.
Enhancement of force patterns classification based on Gaussian distributions.
Ertelt, Thomas; Solomonovs, Ilja; Gronwald, Thomas
2018-01-23
Description of the patterns of ground reaction force is a standard method in areas such as medicine, biomechanics and robotics. The fundamental parameter is the time course of the force, which is classified visually, in particular in the field of clinical diagnostics. Here, the knowledge and experience of the diagnostician is relevant for its assessment. For an objective and valid discrimination of ground reaction force patterns, a generic method, especially in the medical field, is absolutely necessary to describe the qualities of the time course. The aim of the presented method was to combine the approaches of two existing procedures from the fields of machine learning and Gaussian approximation in order to take advantage of both methods for the classification of ground reaction force patterns. The current limitations of both methods could be eliminated by an overarching method. Twenty-nine male athletes from different sports were examined. Each participant was given the task of performing a one-legged stopping maneuver on a force plate from the maximum possible starting speed. The individual time course of the ground reaction force of each subject was registered and approximated on the basis of eight Gaussian distributions. The descriptive coefficients were then classified using Bayesian regularized neural networks. The different sports served as the distinguishing feature. Although the athletes were all given the same task, each sport exhibited a different quality in the time course of the ground reaction force, while within each sport the athletes were homogeneous. With an overall prediction (R = 0.938), all subjects/sports were classified correctly with 94.29% accuracy. The combination of the two methods, the mathematical description of the time course of ground reaction forces on the basis of Gaussian distributions and their classification by means of Bayesian regularized neural networks, seems an adequate and promising method to discriminate ground reaction forces without any loss of information. Copyright © 2017 Elsevier Ltd. All rights reserved.
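A minimal sketch of the Gaussian-approximation step described above (the eight components follow the abstract; the force trace, the initial guesses and the subsequent Bayesian-regularized network are placeholders and not the authors' implementation):

import numpy as np
from scipy.optimize import curve_fit

def gauss_sum(t, *p):
    """Sum of Gaussians: p = [a1, mu1, s1, a2, mu2, s2, ...]."""
    y = np.zeros_like(t, dtype=float)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

def fit_grf(t, force, n_gauss=8):
    """Fit a ground-reaction-force trace with n_gauss Gaussians and
    return the 3*n_gauss coefficients used as classification features."""
    # crude initial guess: equally spaced centers, equal widths, scaled amplitudes
    p0 = []
    for k in range(n_gauss):
        p0 += [force.max() / n_gauss,
               t[0] + (k + 0.5) * (t[-1] - t[0]) / n_gauss,
               (t[-1] - t[0]) / (2 * n_gauss)]
    popt, _ = curve_fit(gauss_sum, t, force, p0=p0, maxfev=20000)
    return popt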
A dynamic load estimation method for nonlinear structures with unscented Kalman filter
NASA Astrophysics Data System (ADS)
Guo, L. N.; Ding, Y.; Wang, Z.; Xu, G. S.; Wu, B.
2018-02-01
A force estimation method is proposed for hysteretic nonlinear structures. The equation of motion of the nonlinear structure is represented in state space, and the state vector is augmented by the unknown time history of the external force. The unscented Kalman filter (UKF) is improved for force identification in state space, considering the ill-conditioning that can arise in the computation of square roots of the covariance matrix. The proposed method is first validated by a numerical simulation study of a 3-storey nonlinear hysteretic frame excited by a periodic force. Each storey is assumed to follow a nonlinear hysteretic model. The external force is identified, and measurement noise is considered in this case. Then the case of a seismically isolated building subjected to earthquake excitation and impact force is studied. The isolation layer behaves nonlinearly during the earthquake excitation. The impact force between the seismically isolated structure and the retaining wall is estimated with the proposed method. Uncertainties such as measurement noise, model error in storey stiffness and unexpected environmental disturbances are considered. A real-time substructure test of an isolated structure is conducted to verify the proposed method. In the experimental study, the linear main structure is taken as the numerical substructure, while one of the isolation bearings with additional mass is taken as the nonlinear physical substructure. The force applied by the actuator on the physical substructure is identified and compared with the value measured by the force transducer. The method proposed in this paper is also validated by a shaking table test of a seismically isolated steel frame, in which the ground motion acceleration, treated as the unknown input, is identified by the proposed method. Results from both the numerical simulation and the experimental studies indicate that the UKF-based force identification method can identify external excitations for nonlinear structures effectively and accurately, even in the presence of measurement noise, model error and environmental disturbances.
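The augmentation step can be sketched compactly as follows (a generic template; the specific hysteretic restoring-force model and the square-root safeguards of the paper are not reproduced):
\[
\mathbf{x}=\begin{bmatrix}\mathbf{u}\\ \dot{\mathbf{u}}\\ \mathbf{z}\\ \mathbf{F}\end{bmatrix},\qquad
\dot{\mathbf{x}}=
\begin{bmatrix}
\dot{\mathbf{u}}\\
\mathbf{M}^{-1}\bigl[\mathbf{B}\,\mathbf{F}-\mathbf{C}\dot{\mathbf{u}}-\mathbf{r}(\mathbf{u},\mathbf{z})\bigr]\\
\mathbf{g}(\mathbf{u},\dot{\mathbf{u}},\mathbf{z})\\
\mathbf{0}
\end{bmatrix}+\mathbf{w}(t),
\]
where $\mathbf{u}$ are displacements, $\mathbf{z}$ the hysteretic variables evolving by $\mathbf{g}(\cdot)$, $\mathbf{r}$ the nonlinear restoring force, and $\mathbf{B}$ the force influence matrix; the unknown force $\mathbf{F}$ is modelled as a random walk ($\dot{\mathbf{F}}=\mathbf{0}$ plus process noise), so the UKF estimates it together with the structural states from the measured responses.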
Desert Talons: Historical Perspectives and Implications of Air Policing in the Middle East
2009-04-01
predominant role in support of a smaller ground force has historical precedent. During the 1920s, the Royal Air Force's (RAF) air control method adhered to the concepts of the inverted blockade, minimum force, precision targeting, and force... owing to its process of rapid communications, Air Methods are, in short, the reverse of the old punitive column. Our policy is one of prevention
An Optimization Method for the Reduction of Propeller Unsteady Forces.
1988-02-01
unsteady forces and the determination of skew distributions has been developed. The current method provides an efficient propeller design tool capable... of determining a variety of cubic or quadratic skew distributions, subject to constraints, which minimize the unsteady forces produced by the various
A New Approach in Force-Limited Vibration Testing of Flight Hardware
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Kern, Dennis L.
2012-01-01
The force-limited vibration test approaches discussed in NASA-7004C were developed to reduce overtesting associated with base shake vibration tests of aerospace hardware where the interface responses are excited coherently. This handbook outlines several different methods of specifying the force limits. The rationale for force limiting is based on the disparity between the impedances of typical aerospace mounting structures and the large impedances of vibration test shakers when the interfaces in general are coherently excited. Among these approaches, the semi-empirical method is presently the most widely used method to derive the force limits. The inclusion of the incoherent excitation of the aerospace structures at mounting interfaces has not been accounted for in the past and provides the basis for more realistic force limits for qualifying the hardware using shaker testing. In this paper current methods for defining the force limiting specifications discussed in the NASA handbook are reviewed using data from a series of acoustic and vibration tests. A new approach based on considering the incoherent excitation of the structural mounting interfaces using acoustic test data is also discussed. It is believed that the new approach provides much more realistic force limits that may further remove conservatism inherent in shaker vibration testing not accounted for by methods discussed in the NASA handbook. A discussion on using FEM/BEM analysis to obtain realistic force limits for flight hardware is provided.
A hierarchical Bayesian method for vibration-based time domain force reconstruction problems
NASA Astrophysics Data System (ADS)
Li, Qiaofeng; Lu, Qiuhai
2018-05-01
Traditional force reconstruction techniques require prior knowledge on the force nature to determine the regularization term. When such information is unavailable, the inappropriate term is easily chosen and the reconstruction result becomes unsatisfactory. In this paper, we propose a novel method to automatically determine the appropriate q as in ℓq regularization and reconstruct the force history. The method incorporates all to-be-determined variables such as the force history, precision parameters and q into a hierarchical Bayesian formulation. The posterior distributions of variables are evaluated by a Metropolis-within-Gibbs sampler. The point estimates of variables and their uncertainties are given. Simulations of a cantilever beam and a space truss under various loading conditions validate the proposed method in providing adaptive determination of q and better reconstruction performance than existing Bayesian methods.
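In schematic form, the hierarchical model treats the force, the precision parameters and the norm order jointly (normalizing constants that depend on the hyperparameters are omitted; this is a sketch of the general structure, not the paper's exact priors):
\[
p(\mathbf{F},\alpha,\beta,q\mid\mathbf{y})\;\propto\;
\exp\!\Bigl(-\tfrac{\beta}{2}\,\lVert\mathbf{y}-\mathbf{H}\mathbf{F}\rVert_2^2\Bigr)\,
\exp\!\bigl(-\alpha\,\lVert\mathbf{F}\rVert_q^q\bigr)\,
p(\alpha)\,p(\beta)\,p(q).
\]
For fixed hyperparameters, the maximum a posteriori force coincides with the familiar $\ell_q$-regularized estimate $\min_{\mathbf{F}}\tfrac{\beta}{2}\lVert\mathbf{y}-\mathbf{H}\mathbf{F}\rVert_2^2+\alpha\lVert\mathbf{F}\rVert_q^q$; Metropolis-within-Gibbs sampling instead explores all variables, including $q$, rather than fixing them in advance.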
Estimating Tool–Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool
Zhao, Baoliang; Nelson, Carl A.
2016-01-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool–tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool–tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool–tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool–tissue interaction forces in real time, thereby increasing surgical efficiency and safety. PMID:27303591
Estimating Tool-Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool.
Zhao, Baoliang; Nelson, Carl A
2016-10-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool-tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool-tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool-tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool-tissue interaction forces in real time, thereby increasing surgical efficiency and safety.
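A hypothetical sketch of the current-based idea described above: a motor torque balance is solved for the load torque, which is then mapped to a tool-tissue force through the transmission. All parameter values and the simple friction model are placeholders, not the paper's calibration.

def estimate_grasp_force(current_A, omega, domega_dt,
                         kt=0.05, J=2e-6, b=1e-5, ratio=0.02):
    """Motor-current-based force estimate (illustrative only).

    torque balance:  kt*i = J*domega/dt + b*omega + tau_load
    tau_load is converted to grasp force via an assumed transmission ratio [m].
    """
    tau_load = kt * current_A - J * domega_dt - b * omega
    return tau_load / ratio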
Stabilization of computational procedures for constrained dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1988-01-01
A new stabilization method of treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces; hence, the constraint forces are obtained by integrating the companion differential equation for the constraint forces in time. A principal feature of the method is that the errors committed in each constraint condition decay with its corresponding characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
Lateral-deflection-controlled friction force microscopy
NASA Astrophysics Data System (ADS)
Fukuzawa, Kenji; Hamaoka, Satoshi; Shikida, Mitsuhiro; Itoh, Shintaro; Zhang, Hedong
2014-08-01
Lateral-deflection-controlled dual-axis friction force microscopy (FFM) is presented. In this method, an electrostatic force generated with a probe-incorporated micro-actuator compensates for the friction force in real time during probe scanning using feedback control. The resulting, equivalently large rigidity can eliminate the apparent boundary width and lateral snap-in, which are caused by lateral probe deflection. By overcoming these problems inherent to dual-axis FFM, the method can evolve FFM into a technique for quantifying local frictional properties on the micro/nanometer scale.
Updates on Force Limiting Improvements
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Scharton, Terry
2013-01-01
The following conventional force limiting methods, currently practiced in deriving force limiting specifications, assume one-dimensional translational source and load apparent masses: the simple TDOF model; semi-empirical force limits; the apparent mass method, etc.; and the impedance method. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels are discussed, which lead to more realistic force limiting specifications.
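For reference, the semi-empirical specification mentioned above is commonly written in the following form (notation as generally used in force-limited vibration practice; C and n are empirical constants chosen from heritage data, M0 is the total mass of the test item, f0 its fundamental resonance in the test direction, and S_AA the acceleration specification):
\[
S_{FF}(f)=C^2 M_0^2\,S_{AA}(f),\quad f\le f_0;
\qquad
S_{FF}(f)=\frac{C^2 M_0^2\,S_{AA}(f)}{(f/f_0)^{2n}},\quad f>f_0 .
\]
The roll-off above f0 is what limits the interface force near component resonances, where shaker testing would otherwise overtest.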
NASA Astrophysics Data System (ADS)
Kumar, Harish
The present paper discusses the procedure for evaluating the best measurement capability of a force calibration machine. The best measurement capability of the force calibration machine is evaluated by comparison, through precision force transfer standards, with force standard machines. The force transfer standards are calibrated by the force standard machine and then by the force calibration machine following a similar procedure. The results are reported and discussed in the paper, with a suitable discussion for a force calibration machine of 200 kN capacity. Different force transfer standards of nominal capacity 20 kN, 50 kN and 200 kN are used. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the earlier method adopted.
MEMS piezoresistive cantilever for the direct measurement of cardiomyocyte contractile force
NASA Astrophysics Data System (ADS)
Matsudaira, Kenei; Nguyen, Thanh-Vinh; Hirayama Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao
2017-10-01
This paper reports on a method to directly measure the contractile forces of cardiomyocytes using MEMS (micro electro mechanical systems)-based force sensors. The fabricated sensor chip consists of piezoresistive cantilevers that can measure contractile forces with high frequency (several tens of kHz) and high sensing resolution (less than 0.1 nN). Moreover, the proposed method does not require a complex observation system or image processing, which are necessary in conventional optical-based methods. This paper describes the design, fabrication, and evaluation of the proposed device and demonstrates the direct measurements of contractile forces of cardiomyocytes using the fabricated device.
Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review
NASA Astrophysics Data System (ADS)
Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng
2017-11-01
The force analysis of overconstrained PMs is relatively complex and difficult, and the methods for it have long been a research hotspot. However, few publications analyze the characteristics and application scopes of the various methods, which makes it inconvenient for researchers and engineers to master and adopt them properly. A review of the methods for force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated from such aspects as the calculation amount, the comprehensiveness with which the limbs' deformation is considered, and the existence of explicit expressions for the solutions, which provides an important reference for researchers and engineers to quickly find a suitable method. The similarities and differences between the statically indeterminate problem of passive overconstrained PMs and that of active overconstrained PMs are discussed, and a universal method for these two kinds of overconstrained PMs is pointed out. The existing deficiencies and development directions of force analysis methods for overconstrained systems are indicated based on the overview.
One-Channel Surface Electromyography Decomposition for Muscle Force Estimation.
Sun, Wentao; Zhu, Jinying; Jiang, Yinlai; Yokoi, Hiroshi; Huang, Qiang
2018-01-01
Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Uccellini, L. W.
1983-01-01
In connection with the employment of the sigma coordinates introduced by Phillips (1957), problems can arise regarding an accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign, which results in large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The objective of the present investigation is to provide another method of computing the sigma-coordinate pressure gradient force. Phillips' method for the elimination of a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
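For reference, the cancellation problem mentioned above comes from the standard two-term form of the horizontal pressure gradient force in sigma coordinates ($\sigma=p/p_s$, $\Phi$ geopotential, $p_s$ surface pressure):
\[
-\frac{1}{\rho}\nabla_z p \;=\; -\nabla_{\sigma}\Phi \;-\; R\,T\,\nabla_{\sigma}\ln p_s ,
\]
where over steep terrain the two terms on the right are individually large and of opposite sign, so their small residual is easily contaminated by finite-difference truncation error.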
Force estimation from OCT volumes using 3D CNNs.
Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander
2018-07-01
Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as an input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
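A minimal sketch of a Siamese 3D CNN of the kind described: two weight-sharing 3D-convolutional branches encode the undeformed reference volume and the deformed sample volume, and a fully connected head regresses the three force components. Layer sizes and plain convolutions below are illustrative; the residual bottleneck blocks of the paper are not reproduced.

import torch
import torch.nn as nn

class SiameseForceNet(nn.Module):
    """Two weight-sharing 3D-CNN branches -> concatenated features -> force vector."""
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(              # shared encoder applied to both volumes
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global pooling -> (B, 32, 1, 1, 1)
            nn.Flatten(),
        )
        self.head = nn.Sequential(                # regression head on the joint features
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),                     # (Fx, Fy, Fz)
        )

    def forward(self, ref_vol, sample_vol):
        f_ref = self.branch(ref_vol)              # same weights for both inputs
        f_smp = self.branch(sample_vol)
        return self.head(torch.cat([f_ref, f_smp], dim=1))

# usage sketch with dummy OCT-like volumes (batch of 2, 32^3 voxels)
net = SiameseForceNet()
ref = torch.randn(2, 1, 32, 32, 32)
smp = torch.randn(2, 1, 32, 32, 32)
force = net(ref, smp)                             # tensor of shape (2, 3)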
Localization and force analysis at the single virus particle level using atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chih-Hao; Horng, Jim-Tong; Chang, Jeng-Shian
2012-01-06
Highlights: localization of single virus particles; force measurements; force mapping. Abstract: Atomic force microscopy (AFM) is a vital instrument in nanobiotechnology. In this study, we developed a method that enables AFM to simultaneously measure the specific unbinding force and map the viral glycoproteins at the single virus particle level. The average diameter of virus particles from AFM images and the specificity between the viral surface antigen and the antibody probe were integrated to design a three-stage method that sets the measuring area to a single virus particle before obtaining the force measurements; the influenza virus was used as the object of measurement. Based on the proposed method and the performed analysis, several findings can be derived from the results. The mean unbinding force of a single virus particle can be quantified, and no significant difference exists in this value among virus particles. Furthermore, the repeatability of the proposed method is demonstrated. The force mapping images reveal that the distributions of surface viral antigens recognized by the antibody probe were dispersed over the whole surface of individual virus particles under the proposed method and experimental criteria; meanwhile, the binding probabilities are similar among particles. This approach can be easily applied to most AFM systems without specific components or configurations. These results help in understanding force-based analysis at the single virus particle level and can therefore reinforce the capability of AFM to investigate a specific type of viral surface protein and its distribution.
Advances in Quantum Mechanochemistry: Electronic Structure Methods and Force Analysis.
Stauch, Tim; Dreuw, Andreas
2016-11-23
In quantum mechanochemistry, quantum chemical methods are used to describe molecules under the influence of an external force. The calculation of geometries, energies, transition states, reaction rates, and spectroscopic properties of molecules on the force-modified potential energy surfaces is the key to gain an in-depth understanding of mechanochemical processes at the molecular level. In this review, we present recent advances in the field of quantum mechanochemistry and introduce the quantum chemical methods used to calculate the properties of molecules under an external force. We place special emphasis on quantum chemical force analysis tools, which can be used to identify the mechanochemically relevant degrees of freedom in a deformed molecule, and spotlight selected applications of quantum mechanochemical methods to point out their synergistic relationship with experiments.
A Novel Two-Velocity Method for Elaborate Isokinetic Testing of Knee Extensors.
Grbic, Vladimir; Djuric, Sasa; Knezevic, Olivera M; Mirkov, Dragan M; Nedeljkovic, Aleksandar; Jaric, Slobodan
2017-09-01
Single outcomes of standard isokinetic dynamometry tests do not discern between various muscle mechanical capacities. In this study, we aimed to (1) evaluate the shape and strength of the force-velocity relationship of knee extensors, as observed in isokinetic tests conducted at a wide range of angular velocities, and (2) explore the concurrent validity of a simple 2-velocity method. Thirteen physically active females were tested for both the peak and averaged knee extensor concentric force exerted at the angular velocities of 30°-240°/s recorded in the 90°-170° range of knee extension. The results revealed strong (0.960
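As background to the 2-velocity idea: if the concentric force-velocity relationship is approximately linear, forces measured at just two angular velocities define the whole line and its intercepts. The sketch below is generic; the study's velocities, outcomes and units are not reproduced, and the example numbers are placeholders.

def force_velocity_line(v1, F1, v2, F2):
    """Linear F(v) = F0 - a*v from two isokinetic test points.

    Returns F0 (force intercept), v0 (velocity intercept) and
    F0*v0/4, the apex of the corresponding force-velocity product.
    """
    a = (F1 - F2) / (v2 - v1)       # force drop per unit velocity
    F0 = F1 + a * v1                # extrapolated zero-velocity force
    v0 = F0 / a                     # extrapolated zero-force velocity
    return F0, v0, F0 * v0 / 4.0

# e.g. 600 N at 60 deg/s and 450 N at 180 deg/s (placeholder numbers)
F0, v0, Pmax = force_velocity_line(60.0, 600.0, 180.0, 450.0)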
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1987-01-01
Low-speed experimental force data on a series of thin swept wings with sharp leading edges and leading- and trailing-edge flaps are compared with predictions made using a linearized-theory method which includes estimates of vortex forces. These comparisons were made to assess the effectiveness of linearized-theory methods for use in the design and analysis of flap systems in subsonic flow. Results demonstrate that linearized-theory, attached-flow methods (with approximate representation of vortex forces) can form the basis of a rational system for flap design and analysis. Even attached-flow methods that do not take vortex forces into account can be used for the selection of optimized flap-system geometry, but design-point performance levels tend to be underestimated unless vortex forces are included. Illustrative examples of the use of these methods in the design of efficient low-speed flap systems are included.
La Delfa, Nicholas J; Potvin, Jim R
2017-03-01
This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. This method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm, when compared to current approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
Moving Force Identification: a Time Domain Method
NASA Astrophysics Data System (ADS)
Law, S. S.; Chan, T. H. T.; Zeng, Q. H.
1997-03-01
The solution for the vertical dynamic interaction forces between a moving vehicle and the bridge deck is analytically derived and experimentally verified. The deck is modelled as a simply supported beam with viscous damping, and the vehicle/bridge interaction force is modelled as one-point or two-point loads with fixed axle spacing, moving at constant speed. The method is based on modal superposition and is developed to identify the forces in the time domain. Both cases of one-point and two-point forces moving on a simply supported beam are simulated. Results of laboratory tests on the identification of the vehicle/bridge interaction forces are presented. Computational simulations and laboratory tests show that the method is effective, and acceptable results can be obtained by combining the use of bending moment and acceleration measurements.
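The modal-superposition model underlying such an identification can be sketched as follows (simply supported Euler-Bernoulli beam, single point load $P(t)$ moving at speed $v$, distributed damping $c$; standard notation, not the paper's exact damping treatment):
\[
\rho A\,\ddot{w}(x,t)+c\,\dot{w}(x,t)+EI\,\frac{\partial^4 w}{\partial x^4}
=\delta\bigl(x-vt\bigr)\,P(t),\qquad
w(x,t)=\sum_{n}\sin\frac{n\pi x}{L}\,q_n(t),
\]
\[
\ddot{q}_n(t)+2\xi_n\omega_n\dot{q}_n(t)+\omega_n^2 q_n(t)
=\frac{2}{\rho A L}\,\sin\frac{n\pi v t}{L}\,P(t).
\]
Measured bending moments or accelerations, written in the same modal coordinates, then give a linear system in the sampled load history, which is solved in the time domain for $P(t)$ (or $P_1(t)$, $P_2(t)$ for two axles).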
Real-time cartesian force feedback control of a teleoperated robot
NASA Technical Reports Server (NTRS)
Campbell, Perry
1989-01-01
Active cartesian force control of a teleoperated robot is investigated. An economical microcomputer based control method was tested. Limitations are discussed and methods of performance improvement suggested. To demonstrate the performance of this technique, a preliminary test was performed with success. A general purpose bilateral force reflecting hand controller is currently being constructed based on this control method.
Multiple-mode nonlinear free and forced vibrations of beams using finite element method
NASA Technical Reports Server (NTRS)
Mei, Chuh; Decha-Umphai, Kamolphan
1987-01-01
Multiple-mode nonlinear free and forced vibration of a beam is analyzed by the finite element method. The geometric nonlinearity is investigated. In-plane displacement and inertia (IDI) are also considered in the formulation. The harmonic force matrix is derived and explained. Nonlinear free vibration can be treated simply as a special case of the general forced vibration by setting the harmonic force matrix equal to zero. The effect of the higher modes is more pronounced for the clamped-supported beam than for the simply supported one. Beams without IDI show a larger effect of the higher modes than those with IDI. The effect of IDI is to reduce the nonlinearity. For beams with end supports restrained from axial movement (immovable cases), only the hardening type of nonlinearity is observed. However, for beams of small slenderness ratio (L/R = 20) with movable end supports, the softening type of nonlinearity is found. The concentrated force case yields a more severe response than the uniformly distributed force case. Finite element results are in good agreement with the solution of the simple elliptic response, the harmonic balance method, the Runge-Kutta method and experiment.
Zhang, Suoxin; Qian, Jianqiang; Li, Yingzi; Zhang, Yingxu; Wang, Zhenyu
2018-06-04
The atomic force microscope (AFM) is an ideal tool for measuring the physical and chemical properties of sample surfaces by reconstructing the force curve, which is of great significance to materials science, biology, and medical science. The frequency modulation atomic force microscope (FM-AFM) uses the frequency shift as the feedback signal and thus has high force sensitivity; it also achieves a true noncontact mode, which implies great potential for biological sample detection. However, it is a challenge to establish the relationship between the cantilever properties observed in practice and the tip-sample interaction theoretically. Moreover, there is no existing method to reconstruct the force curve in FM-AFM that combines the higher harmonics and the higher flexural modes. This paper proposes a novel method by which a full force curve can be reconstructed from any order of higher harmonics of the first two flexural modes, under any vibration amplitude, in FM-AFM. Moreover, in the small-amplitude regime, short-range forces are reconstructed more accurately by higher-harmonic analysis than by the fundamental harmonic using the Sader-Jarvis formula.
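For reference, the Sader-Jarvis inversion mentioned above recovers the tip-sample force from the normalized frequency shift $\Omega(z)=\Delta f(z)/f_0$ ($k$: cantilever stiffness, $a$: oscillation amplitude); the paper's own reconstruction from higher harmonics of the first two flexural modes is not reproduced here:
\[
F(z)=2k\int_{z}^{\infty}\left[\left(1+\frac{a^{1/2}}{8\sqrt{\pi\,(t-z)}}\right)\Omega(t)
-\frac{a^{3/2}}{\sqrt{2\,(t-z)}}\,\frac{\mathrm{d}\Omega(t)}{\mathrm{d}t}\right]\mathrm{d}t .
\]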
The analysis of cable forces based on natural frequency
NASA Astrophysics Data System (ADS)
Suangga, Made; Hidayat, Irpan; Juliastuti; Bontan, Darwin Julius
2017-12-01
A cable is a flexible structural member that is effective at resisting tensile forces. Cables are used in a variety of structures that exploit their unique characteristics to create efficient tension members. The state of the cable forces in a cable-supported structure is an important indicator of whether the structure is in good condition. Several methods have been developed to measure cable forces on site. The vibration technique, which uses the correlation between natural frequency and cable force, is a simple method to determine in situ cable forces; however, it needs accurate information on the boundary conditions, cable mass, and cable length. The natural frequency of the cable is determined by applying the FFT (Fast Fourier Transform) technique to the acceleration record of the cable. Based on the natural frequency obtained, the cable forces can then be determined analytically or by a finite element program. This research focuses on vibration techniques for determining cable forces, on understanding the effect of the cable's physical parameters, and on modelling techniques relating natural frequency to cable force.
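As a minimal illustration of the frequency-to-force relation used in such vibration techniques, the taut-string approximation gives f_n = (n/2L)*sqrt(T/m), i.e. T = 4*m*L^2*(f_n/n)^2; real stay cables additionally need corrections for bending stiffness and end conditions, which is why accurate boundary, mass and length information is stressed above. A short sketch with placeholder values:

import numpy as np

def cable_tension(freq_hz, mode_n, length_m, mass_per_m):
    """Taut-string estimate of cable force from the n-th natural frequency."""
    return 4.0 * mass_per_m * length_m**2 * (freq_hz / mode_n)**2

# example: 1st-mode frequency of 1.2 Hz on a 100 m cable weighing 60 kg/m
T = cable_tension(freq_hz=1.2, mode_n=1, length_m=100.0, mass_per_m=60.0)
print(f"estimated tension: {T/1e3:.0f} kN")   # about 3456 kN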
Yan, Yifei; Zhang, Lisong; Yan, Xiangzhen
2016-01-01
In this paper, a single-slope tunnel pipeline was analysed considering the effects of vertical earth pressure, horizontal soil pressure, inner pressure, thermal expansion force and pipeline-soil friction. The concept of a stagnation point for the pipeline was proposed. Considering the deformation compatibility condition of the pipeline elbow, the push force on the anchor blocks of a single-slope tunnel pipeline was derived based on an energy method, and a theoretical formula for this force was thus obtained. Using the analytical equation, the push force on the anchor block of an X80 large-diameter pipeline from the West-East Gas Transmission Project was determined. Meanwhile, to verify the results of the analytical method against the finite element method, four codes were used to calculate the push force: CAESAR II, ANSYS, AutoPIPE and ALGOR. The results show that the analytical results agree well with the numerical results, and the maximum relative error is only 4.1%. Therefore, the results obtained with the analytical method can satisfy engineering requirements. PMID:26963097
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
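A minimal sketch of how a damping loss factor can be extracted from drive-point mobility data of the kind described. Note the substitution: the patent uses the resonance dwell method, while this illustration uses the more common half-power (-3 dB) bandwidth estimate on the mobility magnitude; the force and velocity arrays are placeholders for the coil-current-derived force and the laser vibrometer signal.

import numpy as np

def loss_factor_half_power(force, velocity, fs):
    """Estimate a damping loss factor from drive-point mobility |v/F|(f)
    using the half-power bandwidth of the dominant resonance (bin-resolution only)."""
    F = np.fft.rfft(force)
    V = np.fft.rfft(velocity)
    freq = np.fft.rfftfreq(len(force), d=1.0 / fs)
    mobility = np.abs(V / F)
    k = np.argmax(mobility[1:]) + 1            # dominant peak, skipping the DC bin
    fn, peak = freq[k], mobility[k]
    above = mobility >= peak / np.sqrt(2.0)    # points inside the -3 dB band
    lo = freq[:k][~above[:k]][-1] if (~above[:k]).any() else freq[1]
    hi = freq[k:][~above[k:]][0] if (~above[k:]).any() else freq[-1]
    return (hi - lo) / fn                      # eta ~ delta_f / f_n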
A Method for Implementing Force-Limited Vibration Control
NASA Technical Reports Server (NTRS)
Worth, Daniel B.
1997-01-01
NASA/GSFC has implemented force-limited vibration control on a controller which can only accept one profile. The method uses a personal computer based digital signal processing board to convert force and/or moment signals into what appears to be an acceleration signal to the controller. This technique allows test centers with older controllers to use the latest force-limited control techniques for random vibration testing. The paper describes the method, hardware, and test procedures used. An example from a test performed at NASA/GSFC is used as a guide.
A parametric symmetry breaking transducer
NASA Astrophysics Data System (ADS)
Eichler, Alexander; Heugel, Toni L.; Leuch, Anina; Degen, Christian L.; Chitra, R.; Zilberberg, Oded
2018-06-01
Force detectors rely on resonators to transduce forces into a readable signal. Usually, these resonators operate in the linear regime and their signal appears amidst a competing background comprising thermal or quantum fluctuations as well as readout noise. Here, we demonstrate a parametric symmetry breaking transduction method that leads to a robust nonlinear force detection in the presence of noise. The force signal is encoded in the frequency at which the system jumps between two phase states which are inherently protected against phase noise. Consequently, the transduction effectively decouples from readout noise channels. For a controlled demonstration of the method, we experiment with a macroscopic doubly clamped string. Our method provides a promising paradigm for high-precision force detection.
Experimental estimation of energy absorption during heel strike in human barefoot walking.
Baines, Patricia M; Schwab, A L; van Soest, A J
2018-01-01
Metabolic energy expenditure during human gait is poorly understood. Mechanical energy loss during heel strike contributes to this energy expenditure. Previous work has estimated the energy absorption during heel strike as 0.8 J using an effective foot mass model. The aim of our study is to investigate the possibility of determining the energy absorption by more directly estimating the work done by the ground reaction force, the force-integral method. A concurrent aim is to compare this direct determination of work with the effective foot mass model. Participants in our experimental study were asked to walk barefoot at preferred speed. Ground reaction force and lower leg kinematics were collected at high sampling frequencies (3000 Hz; 1295 Hz), with tight synchronization. The work done by the ground reaction force, estimated by integrating this force over the foot-ankle deformation, is 3.8 J. The effective mass model is improved by dropping the assumption that foot-ankle deformation is maximal at the instant of the impact force peak; on theoretical grounds it is clear that, in the presence of substantial damping, peak force and peak deformation do not occur simultaneously. The energy absorption due to the vertical force only, obtained with the force-integral method, is similar to the result of the improved application of the effective mass model (2.7 J; 2.5 J). However, the total work done by the ground reaction force calculated by the force-integral method is significantly higher than that of the vertical component alone. We conclude that direct estimation of the work done by the ground reaction force is possible and preferable over the use of the effective foot mass model. Assuming that the energy absorbed is lost, the mechanical energy loss of heel strike is around 3.8 J at preferred walking speeds (≈ 1.3 m/s), which contributes about 15-20% of the overall metabolic cost of transport.
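A minimal sketch of the force-integral estimate described above: the work absorbed during heel strike is the integral of the ground reaction force over the foot-ankle deformation, or equivalently of F·v over time. Array names and the deformation signal are placeholders for the synchronized force-plate and kinematic records.

import numpy as np

def heel_strike_work(grf, deformation, t):
    """Work done by the ground reaction force over the foot-ankle deformation.

    grf         : (N, 3) ground reaction force components [N]
    deformation : (N, 3) foot-ankle deformation components [m]
    t           : (N,)   common time vector [s]
    """
    velocity = np.gradient(deformation, t, axis=0)       # deformation rate
    power = np.sum(grf * velocity, axis=1)                # F . v at each sample
    return np.trapz(power, t)                             # joules (negative if absorbing)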
Method and apparatus for shaping and enhancing acoustical levitation forces
NASA Technical Reports Server (NTRS)
Oran, W. A.; Berge, L. H.; Reiss, D. A.; Johnson, J. L. (Inventor)
1980-01-01
A method and apparatus for enhancing and shaping acoustical levitation forces in a single-axis acoustic resonance system, wherein specially shaped drivers and reflectors are utilized to enhance the levitation force and to better contain a fluid substance by means of field shaping, is described.
Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into the expressions for the stress components. Elimination of the dependent coefficients then leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components so derived identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved using the developments presented herein, and the results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method; in most cases, the Integrated Force Method performs better.
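As a hedged illustration of this construction (a two-dimensional Airy-function analogue chosen for brevity; the paper's formulation is three-dimensional and uses different stress functions), the snippet below shows symbolically that stress components generated from a polynomial stress function identically satisfy the homogeneous equilibrium equations, with the surviving polynomial coefficients playing the role of independent generalized forces:

```python
# Two-dimensional analogue (illustrative assumption, not the paper's formulation):
# stresses derived from a polynomial Airy stress function satisfy equilibrium.
import sympy as sp

x, y = sp.symbols('x y')
a1, a2, a3, a4, a5 = sp.symbols('a1:6')                 # independent coefficients
Phi = a1*x**2 + a2*x*y + a3*y**2 + a4*x**3 + a5*y**3    # complete polynomial terms

sxx = sp.diff(Phi, y, 2)      # sigma_xx = d^2(Phi)/dy^2
syy = sp.diff(Phi, x, 2)      # sigma_yy = d^2(Phi)/dx^2
sxy = -sp.diff(Phi, x, y)     # sigma_xy = -d^2(Phi)/dxdy

# Homogeneous equilibrium (no body forces): both residuals reduce to zero
print(sp.simplify(sp.diff(sxx, x) + sp.diff(sxy, y)))   # -> 0
print(sp.simplify(sp.diff(syy, y) + sp.diff(sxy, x)))   # -> 0
```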
Force feedback in a piezoelectric linear actuator for neurosurgery.
De Lorenzo, Danilo; De Momi, Elena; Dyagilev, Ilya; Manganelli, Rudy; Formaglio, Alessandro; Prattichizzo, Domenico; Shoham, Moshe; Ferrigno, Giancarlo
2011-09-01
Force feedback in robotic minimally invasive surgery allows the human operator to manipulate tissues as if his/her hands were in contact with the patient organs. A force sensor mounted on the probe raises problems with sterilization of the overall surgical tool. Moreover, the use of off-axis gauges introduces a moment that increases the friction force on the bearing, which can easily mask the signal, given the small forces to be measured. This work aims at designing and testing two methods for estimating the resistance to advancement (force) experienced by a standard probe for brain biopsies within a brain-like material. A further goal is to provide a neurosurgeon using a master-slave tele-operated driver with direct feedback on the mechanical characteristics of the tissue. Two possible sensing methods, an in-axis strain-gauge force sensor and a position-position error (control-based) method, were implemented and tested, both aimed at device miniaturization. The analysis was aimed at fulfilling the psychophysical requirements for force detection and delay tolerance, also taking into account safety, which is directly related to these two requirements. Definition of the controller parameters is addressed, and consideration is given to development of the device with integration of a haptic interface. Results show better performance of the control-based method (RMSE < 0.1 N), which is also best in terms of reliability, sterilizability, and material dimensions for the application addressed. The control-based method developed for force estimation is compatible with the neurosurgical application and is capable of measuring tissue resistance without any additional sensors.
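The position-position error (control-based) estimate can be sketched as follows; this is an illustrative outline under assumed names and an assumed gain, not the authors' controller. The idea is that when the probe meets tissue resistance it lags its commanded position, and the tracking error scaled by a stiffness-like controller gain approximates the resisting force:

```python
# Hedged sketch of a position-position error force estimate.
# K_P is a hypothetical proportional (stiffness-like) gain; its value is assumed.
K_P = 40.0   # N/mm

def estimate_force(commanded_mm: float, measured_mm: float) -> float:
    """Return the estimated resistance to advancement, in newtons."""
    tracking_error = commanded_mm - measured_mm   # lag of the probe under load
    return K_P * tracking_error

# Example: a 0.05 mm lag maps to a ~2 N resistance estimate
print(estimate_force(commanded_mm=12.30, measured_mm=12.25))
```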
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W.; ...
2015-02-03
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc, and dimethyl-phosphate hydrolysis, using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that, with suitable parameters based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region, matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
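The central force-mixing step can be sketched as follows; this is a schematic illustration of the buffered force-mixing idea under assumed names and array shapes, not the CP2K or AMBER implementation:

```python
# Hedged sketch of buffered force mixing: QM forces are evaluated on an extended
# region (core + buffer), but only the forces on core atoms are kept; buffer-atom
# forces, which carry the largest QM/MM boundary error, are discarded in favour
# of the MM forces.
import numpy as np

def mix_forces(f_qm_extended, f_mm, core_idx, extended_idx):
    """Combine forces: QM for core atoms, MM everywhere else.

    f_qm_extended : (n_ext, 3) QM forces on the extended (core + buffer) region
    f_mm          : (n_all, 3) MM forces on the full system
    core_idx      : indices of core atoms within the full system
    extended_idx  : indices of extended-region atoms within the full system
    """
    mixed = f_mm.copy()
    ext_pos = {atom: k for k, atom in enumerate(extended_idx)}
    for atom in core_idx:
        mixed[atom] = f_qm_extended[ext_pos[atom]]   # buffer forces never used
    return mixed

# Toy usage: 5 atoms, atoms 0-2 in the extended region, atoms 0-1 in the core
print(mix_forces(np.ones((3, 3)), np.zeros((5, 3)),
                 core_idx=[0, 1], extended_idx=[0, 1, 2]))
```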